\section{Introduction} \label{sec:introduction} Distributed systems are considered an inevitable solution for storing and processing large amounts of data. However, distributing the computation raises major concerns regarding the security and privacy of data and algorithms. This is particularly crucial if we have to offload computation and storage tasks to some untrusted, but possibly cheaper or more powerful, parties. There is a rich history of study in the literature on data privacy in distributed environments. These days, however, algorithm privacy can be even more important than data privacy. Not only can the algorithms themselves be very valuable, but in some cases the parameters of an algorithm carry lifetime secrets, such as biological information of individuals. Compared to \emph{data privacy}, our understanding of the fundamental limits of \emph{algorithm privacy} is very limited. Motivated by this, we introduce the private function retrieval (PFR) problem, where a set of servers with access to a database is connected to a user. The user wishes to compute a function of the files while keeping the function private from each individual server. The goal is to characterize the fundamental limits of the communication cost (between the user and the servers) needed to privately compute the function. Recently, there has been intense interest in characterizing the fundamental performance limits of distributed computing systems from an information-theoretic perspective. Among these, we can name distributed storage systems \cite{dimakis10}, distributed cache networks \cite{maddah14}, private information retrieval (PIR) \cite{chor95,shah14,Sun17}, and distributed computing~\cite{li17,yu17}. In all of these cases, information-theoretic ideas and tools have been found useful in providing a fundamental, and often very different, understanding of how to run the system efficiently. In this work, our goal is to characterize the fundamental limits of PFR from an information-theoretic perspective. To be precise, in this paper we consider a system including one user connected to $N$ non-colluding servers, each storing a database of $K$ equal-size files, $W_1, \dots, W_K$. The user wishes to compute a linear combination of these $K$ files by downloading enough equations from the $N$ servers. While retrieving the linear combination, the user wishes to keep the coefficients hidden from each individual server. This means that, from the perspective of each server, every possible combination must be equally likely to be the one requested by the user. The goal is to minimize the download cost required to retrieve the result of the computation privately. The PFR problem can be considered an extension of the PIR problem, where the user is interested in one of $K$ files. The PIR problem was introduced in \cite{chor95}, and its capacity in the basic setup was characterized recently in~\cite{Sun17}. Several extensions of the PIR problem have been studied in the literature, including PIR with colluding servers~\cite{sun16colluding,banawan17colluding}, PIR with coded servers~\cite{banawan16coded}, and symmetric PIR~\cite{sun16symmetric,wang17symmetric}. To address the PFR problem, we first focus on the case where the coefficients are from the binary field. For this case, we find the optimal scheme for two servers ($N=2$) and an arbitrary number of files, $K$. In particular, we show that the capacity of this case is $\frac{1}{2}\Big(1-\frac{1}{2^K}\Big)^{-1}$.
Interestingly, this is equal to the capacity of PIR with two servers and an arbitrary number of files, $K$. We extend this scheme and propose an achievable solution for the general setup with $N$ servers, $K$ files, and coefficients from a general valid field. The capacity of the PFR problem has been studied in a parallel and independent work~\cite{sun17PFR}. In~\cite{sun17PFR}, the capacity of PFR has been characterized for a system with two servers ($N=2$), two messages ($K=2$), and arbitrary linear combinations. In this paper, we characterize the capacity of PFR for a system with $N=2$ servers, an arbitrary number of files, and binary coefficients. The achievable schemes proposed by the two papers are very different. The remainder of this paper is organized as follows. Section~\ref{sec:problem} formally introduces our information-theoretic formulation of the PFR problem. Section~\ref{sec:main} presents the main results. Sections~\ref{sec:scheme_binary} and \ref{sec:scheme_gen} contain the proofs. \section{Problem Setting} \label{sec:problem} We consider a system including a user connected to $N$ non-colluding servers, each storing an identical copy of a database. The database includes $K$ files $W_1, \dots, W_K$, where each file $W_k$ consists of $L$ equal-size segments (so-called layers) $W_k[1],\dots,W_k[L]$ for $k=1,\dots,K$, i.e., $W_k=\{W_k[t]\}_{t=1}^{L}$. Each segment $W_k[t]$, $t\in [1:L]$, $k\in [1:K]$, is chosen independently and uniformly at random from the finite field $GF(p^F)$, for some $F\in\N$ and prime number $p$. The database is denoted by $\left\{ \W[t] \right\}_{t=1}^L$, where $\W[t]=[W_1[t], \dots, W_K[t]]^T\in (GF(p^F))^K$. The user is interested in a specific linear function of $\W[t]$, represented as \begin{equation} \label{eq:linear} \{\vv^T\W[t]\}_{t=1}^L, \end{equation} where $\vv$ is a $K$-dimensional non-zero vector, with entries from the finite field $GF(q)$, for some integer $q$. We assume that $GF(q)$ is a subfield of $GF(p^F)$; thus $q=p^M$ for some integer $M$ with $M|F$. Therefore, the operations in \eqref{eq:linear} are well-defined over $GF(p^F)$. Excluding parallel vectors, there are $\frac{q^K-1}{q-1}$ distinct options for the vector $\vv$, denoted by $\vv(1),\dots,\vv(\frac{q^K-1}{q-1})$. We use the short-hand notation $M(i,t)=\vv^T(i)\W[t]$. Note that in the PFR problem with binary coefficients, we set $p=2$ and $M=1$, which yields $q=2$. Assume that the user chooses $\vv=\vv(\theta)$ for some $\theta \in [1:\frac{q^K-1}{q-1}]$; thus the user wishes to compute $\{{M}(\theta,t)\}_{t=1}^L=\{\vv^T(\theta)\W[t]\}_{t=1}^L$ by downloading some equations from the servers. So, the user sends $N$ queries $Q_{\vv(\theta)}^{(1)},\dots,Q_{\vv(\theta)}^{(N)}$ to servers $1$ to $N$, respectively, where $Q_{\vv(\theta)}^{(n)}$ is the query sent by the user to the $n$-th server in order to retrieve $\{M(\theta,t)\}_{t=1}^L=\{\vv^T(\theta)\W[t]\}_{t=1}^L$. Since the queries are independent of the messages, we have \begin{equation}\label{eq:query} I(\{ \W[t] \}_{t=1}^L;Q_{\vv(i)}^{(1)},\dots,Q_{\vv(i)}^{(N)})=0, \end{equation} for all $i\in\left[1:\frac{q^K-1}{q-1}\right]$. In response to $Q_{\vv(\theta)}^{(n)}$, the $n$-th server computes an answer $A_{\vv(\theta)}^{(n)}$ as a function of its database and the received query; thus \begin{equation}\label{eq:answer} H(A_{\vv(\theta)}^{(n)}|Q_{\vv(\theta)}^{(n)},\{ \W[t] \}_{t=1}^L)=0.
\end{equation} Let $\max\limits_{\theta} \sum\limits_{n=1}^N|A_{\vv(\theta)}^{(n)}|=QF$, for some integer $Q$, represent the download cost in $p$-ary units. While retrieving $\{M(\theta,t)\}_{t=1}^L=\{\vv^T(\theta)\W[t]\}_{t=1}^L$ from $A_{\vv(\theta)}^{(1)},\dots,A_{\vv(\theta)}^{(N)}$ and $Q_{\vv(\theta)}^{(1)},\dots,Q_{\vv(\theta)}^{(N)}$, the user must keep the index $\theta$ (or equivalently the vector $\vv(\theta)$) hidden from each individual server. To satisfy the privacy constraint, the $\frac{q^K-1}{q-1}$ query-answer functions $(Q_{\vv(i)}^{(n)},A_{\vv(i)}^{(n)})$, $i=1,\dots,\frac{q^K-1}{q-1}$, must be identically distributed at each server $n=1,\dots,N$. That is, \begin{equation}\label{eq:privacy} (Q_{\vv(1)}^{(n)},A_{\vv(1)}^{(n)},\{ \W[t] \}_{t=1}^L)\sim(Q_{\vv(\theta)}^{(n)},A_{\vv(\theta)}^{(n)},\{ \W[t] \}_{t=1}^L) \end{equation} for each $\theta\in\{1,\dots,\frac{q^K-1}{q-1}\}$ and $n=1,\dots,N$. An $(L,Q)$ PFR scheme consists of $N\left(\frac{q^K-1}{q-1}\right)$ query-answer functions $(Q_{\vv(i)}^{(n)},A_{\vv(i)}^{(n)})$, for $i=1,\dots,\frac{q^K-1}{q-1}$ and $n=1,\dots,N$, and $\frac{q^K-1}{q-1}$ decoding functions that map $\{Q_{\vv(i)}^{(n)},A_{\vv(i)}^{(n)}\}_{n=1}^N$ to $\{ \hat{M}(i,t) \}_{t=1}^L$, the estimate of $\{{M}(i,t)\}_{t=1}^L$, for $i=1,\dots,\frac{q^K-1}{q-1}$, with probability of error $$P_{e,L}=\max_{\theta}\Pr\left(\{ M(\theta,t) \}_{t=1}^L \neq \{\hat{M}(\theta,t) \}_{t=1}^L\right),$$ while the privacy constraint \eqref{eq:privacy} is satisfied. The rate of this code is defined as \begin{equation}\label{eq:rate} R=\frac{L}{Q}. \end{equation} \begin{definition} A rate $R$ is achievable if there exists a sequence of $(L,Q)$ PFR schemes for which $P_{e,L}\rightarrow0$ as $L\rightarrow\infty$. The capacity of the PFR problem is defined as \begin{equation*} C \defeq \sup \big\{R: R \text{ is achievable} \big\}. \end{equation*} \end{definition} Thus, from Fano's inequality, the correctness condition, i.e., $P_{e,L}\rightarrow0$, implies that \begin{equation}\label{eq:correctness} \frac{1}{L}H(\{M(i,t)\}_{t=1}^L|Q_{\vv(i)}^{(1)},A_{\vv(i)}^{(1)},\dots,Q_{\vv(i)}^{(N)},A_{\vv(i)}^{(N)})=o(1), \end{equation} where, in Landau notation, $f(n)=o(g(n))$ if $\lim\limits_{n\rightarrow \infty}\frac{f(n)}{g(n)}= 0$. \section{Main Results}\label{sec:main} The first theorem presents the capacity of the PFR problem with binary coefficients ($q=2$) when $N=2$ servers are available with an arbitrary number of messages $K$. \begin{theorem}\label{thm:binary_capacity} For the PFR problem with $K$ messages, $N=2$ servers, and binary coefficients, the capacity is \begin{equation}\label{eq:capacity_binary} C= \frac{1}{2}\left(1-\frac{1}{2^K}\right)^{-1}. \end{equation} \end{theorem} \begin{remark} Recall that the user needs the results of $\{\vv^T \W[t] \}_{t=1}^{L}$ for some integer $L$ and for some $\vv\in (GF(2))^K\setminus\{\mathbf{0}\}$. Clearly, $\vv$ has $2^K-1$ options, listed in a set $\mathcal{V}=\{\vv(1), \ldots, \vv(2^K-1)\}$. Therefore, the goal is to design an achievable scheme with two properties: (i) \emph{correctness}, meaning that the user can decode what is asked for, and (ii) \emph{privacy}, meaning that for every single server, all members of $\mathcal{V}$ are equiprobable, independent of the real $\vv$. The above theorem states that the minimum communication load, normalized to the size of a file, that guarantees both privacy and correctness is $\frac{1}{2}\left(1-\frac{1}{2^K}\right)^{-1}$.
\end{remark} \begin{remark} In the proposed achievable scheme, the set of requests to each server is symmetric with respect to all vectors in $\mathcal{V}=\{\vv(1), \ldots, \vv(2^K-1)\}$; thus privacy is guaranteed. However, the requests to the two servers are coupled to exploit two opportunities. In the first opportunity, every request from a server, except a few, has a counterpart request from the other server, such that the two together reveal $\vv^T \W[t]$ for some $t$. This justifies the factor of $\frac{1}{2}$ in \eqref{eq:capacity_binary}. In the second opportunity, in some cases, a request from one server directly reveals a value of $\vv^T \W[t]$ for some $t$. This is reflected in the factor $\left(1-\frac{1}{2^K}\right)^{-1}$ in~\eqref{eq:capacity_binary}. These two opportunities are exploited together so efficiently that not only are correctness and privacy guaranteed, but the scheme also achieves the optimal bound. \end{remark} \begin{remark} We note that for this case, the user asks for the results of $\{\vv^T \W[t] \}_{t=1}^{L}$, where $\vv$ has $2^K-1$ options, listed in a set $\mathcal{V}=\{\vv(1), \ldots, \vv(2^K-1)\}$. Therefore, the user wants to hide its requested combination $\{\vv^T \W[t] \}_{t=1}^{L}$ among $2^K-1$ (virtual) files, namely $\{\vv^T(1) \W[t] \}_{t=1}^{L}, \ldots, \{\vv^T(2^K-1) \W[t] \}_{t=1}^{L}$. Clearly, these virtual files are not linearly independent. One solution is to ignore this dependency and consider a PIR problem with $2^K-1$ virtual files. That approach achieves the rate $\frac{1}{2}\left(1-\frac{1}{2^{2^K-1}}\right)^{-1}$ (see~\cite{Sun17} for the rate of PIR). However, the surprising fact here is that the proposed scheme achieves the rate $\frac{1}{2}\left(1-\frac{1}{2^K}\right)^{-1}$, as if there were only $K$ options for $\vv$. This is done by efficiently exploiting the linear dependency of the vectors in $\{\vv(1), \ldots, \vv(2^K-1)\}$. \end{remark} \begin{remark} The PFR problem with binary coefficients reduces to the PIR problem if we restrict the possible coefficient vectors $\vv$ to those with \emph{unit} Hamming weight. Thus, the converse of the PIR problem in \cite[Theorem~1]{Sun17} with $N=2$ is valid for the PFR problem with binary coefficients. The proposed achievable scheme, detailed in Section~\ref{sec:scheme_binary}, meets this converse. \end{remark} The next lemma extends the achievable scheme of Theorem~\ref{thm:binary_capacity} to the case of an arbitrary number of servers and an arbitrary field $GF(q)$ for the coefficient vectors $\vv(i)$. \begin{lemma}\label{lemma:ach_gen} For the PFR problem with $N$ servers, $K$ messages, and coefficient vectors $\vv\in(GF(q))^K\setminus\{\mathbf{0}\}$, if $q\geq N$, the following rate is achievable: \begin{equation}\label{eq:ach_gen2} R=\Big(1-\frac{1}{N}\Big) \cdot \Big(1+\frac{\frac{1}{N-1}}{(\frac{q^K-1}{q-1})^{N-1}}\Big). \end{equation} \end{lemma} \begin{remark} In this case, the user needs the results of $\{\vv^T \W[t] \}_{t=1}^{L}$ for some integer $L$ and for some $\vv\in (GF(q))^K \setminus \{\mathbf{0}\}$. Eliminating parallel vectors in $(GF(q))^K \setminus \{\mathbf{0}\}$, there are $\frac{q^K-1}{q-1}$ options for $\vv$, listed in the set $\mathcal{V}=\{\vv(1), \ldots, \vv(\frac{q^K-1}{q-1})\}$.
If we treat each of $\{\vv^T(i)\W[t] \}_{t=1}^{L}$, for $i=1,\ldots, \frac{q^K-1}{q-1}$, as a virtual file, and apply the PIR scheme to these virtual files, we achieve the rate $$\left(1-\frac{1}{N}\right) \cdot \left( 1-N^{-\frac{q^K-1}{q-1} } \right)^{-1}.$$ One can verify that the proposed scheme strictly outperforms this PIR-based scheme. \end{remark} \begin{corollary} For the PFR problem with $K$ messages and coefficient vectors $\vv\in(GF(q))^K\setminus \{\mathbf{0}\}$, in the limit of $N\rightarrow \infty$ servers, the capacity converges to \begin{equation}\label{eq:ach_gen} C=1-\frac{1}{N} . \end{equation} \end{corollary} The above corollary follows directly from Lemma~\ref{lemma:ach_gen}. This rate meets the PIR converse. \section{PFR Scheme with Binary Coefficients (Achievability Proof of Theorem~\ref{thm:binary_capacity})}\label{sec:scheme_binary} In this section, we present the achievable scheme for the PFR problem with two servers ($N=2$) and an arbitrary number of messages $K$, where the coefficients are from the binary field. The proposed scheme guarantees privacy by keeping the requests to each server symmetric with respect to all $\vv(i) \in \mathcal{V}$. However, the requests to the two servers are coupled in a certain way. In most cases, each request to one server has a counterpart in the set of requests to the other server; these two together reveal $\vv^T(\theta)\W[t]$ for some $t$. Some other requests directly reveal $\vv^T(\theta) \W[t]$ for some $t$, without any combining with the other server's answers. Let $L=2^{K+1}$ and define $\mathcal{V}\defeq(GF(2))^K\setminus \{\mathbf{0}\}$. Also, consider $\pi$ as a random permutation of the set $\{1,2,\dots,L\}$. The user generates this permutation uniformly at random among all permutations and keeps it private from the servers. Apply this random permutation to reorder the messages. In particular, reorder the message vectors to get $\tilde{\W}[t]\defeq \W[\pi(t)]$ for $t\in\{1,\dots,L\}$. Without loss of generality, assume that the user is interested in retrieving $\{\vv^T(\theta)\W[t]\}_{t=1}^L$, for some $\theta \in \{1, \ldots, 2^K-1\}$. \textbf{Phase 1}: \begin{enumerate}[(i)] \item User asks server~1 to send back \begin{equation}\label{eq:phase1-1} M(i,i)=\vv^T(i)\tilde{\W}[i], \quad i=1,\dots,2^K-1. \end{equation} \item User asks server~2 to send back \begin{equation}\label{eq:phase1-2} M(i,2^K-1+i)=\vv^T(i)\tilde{\W}[2^K-1+i], \quad i=1,\dots,2^K-1. \end{equation} \end{enumerate} \textbf{Phase 2}: \begin{enumerate}[(i)] \item User asks server~1 to send back \begin{equation}\label{eq:phase2-11} M(\theta,2(2^K-1)+1)=\vv^T(\theta)\tilde{\W}[2(2^K-1)+1], \end{equation} and also \begin{equation}\label{eq:phase2-12} (\vv^T(\theta)-\vv^T(i))\tilde{\W}[2^K-1+i], \quad i=1,\dots,2^K-1, \ i \neq \theta. \end{equation} \item User asks server~2 to send back \begin{equation}\label{eq:phase2-21} M(\theta,2(2^K-1)+2)=\vv^T(\theta)\tilde{\W}[2(2^K-1)+2], \end{equation} and also \begin{equation}\label{eq:phase2-22} (\vv^T(\theta)-\vv^T(i))\tilde{\W}[i], \quad i=1,\dots,2^K-1, \ i \neq \theta. \end{equation} \end{enumerate} It is important to note that the above requests are sent to the servers in a random order. The requests and answers from server~1 and server~2 for $\theta=1$ are shown in Table~\ref{tbl:scheme_ser1} and Table~\ref{tbl:scheme_ser2}, respectively.
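To make the scheme concrete, the following Python sketch (ours, not part of the paper; indices are 0-based and all arithmetic is over $GF(2)$, where subtraction equals addition) simulates the two phases end-to-end for a small $K$, decodes all $L=2^{K+1}$ layers, and checks the rate against \eqref{eq:capacity_binary}:

\begin{verbatim}
import itertools, random
import numpy as np

K = 3
V = [np.array(v) for v in itertools.product([0, 1], repeat=K) if any(v)]
n = len(V)                                   # n = 2^K - 1 candidate vectors
L = 2 ** (K + 1)                             # number of layers
W = np.random.randint(0, 2, size=(L, K))     # layers of the K messages over GF(2)
Wt = W[np.random.permutation(L)]             # privately permuted layers \tilde{W}[t]
theta = random.randrange(n)                  # desired coefficient vector v(theta)

ans1 = {('P1', i): Wt[i] @ V[i] % 2 for i in range(n)}          # Phase 1, server 1
ans2 = {('P1', i): Wt[n + i] @ V[i] % 2 for i in range(n)}      # Phase 1, server 2
ans1['P2-direct'] = Wt[2 * n] @ V[theta] % 2                    # Phase 2, server 1
ans2['P2-direct'] = Wt[2 * n + 1] @ V[theta] % 2                # Phase 2, server 2
for i in range(n):
    if i != theta:  # (v(theta) - v(i)) = (v(theta) + v(i)) over GF(2)
        ans1[('P2', i)] = Wt[n + i] @ ((V[theta] + V[i]) % 2) % 2
        ans2[('P2', i)] = Wt[i] @ ((V[theta] + V[i]) % 2) % 2

dec = {theta: ans1[('P1', theta)], n + theta: ans2[('P1', theta)],
       2 * n: ans1['P2-direct'], 2 * n + 1: ans2['P2-direct']}
for i in range(n):
    if i != theta:
        dec[i] = (ans1[('P1', i)] + ans2[('P2', i)]) % 2        # pair up answers
        dec[n + i] = (ans2[('P1', i)] + ans1[('P2', i)]) % 2

assert all(dec[t] == Wt[t] @ V[theta] % 2 for t in range(L))    # correctness
Q = len(ans1) + len(ans2)                                       # 4(2^K - 1) downloads
print(L / Q, 0.5 / (1 - 2.0**-K))                               # rate equals capacity
\end{verbatim}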
\begin{table*} \renewcommand{\arraystretch}{1.3} \setlength\tabcolsep{3pt} \caption{Requests from server~1 for $\theta=1$} \label{tbl:scheme_ser1} \centering \begin{tabular}{|c||cccc||cccc||cc|} \hline & $\tilde{\W}[1]$ & $\tilde{\W}[2]$ &\dots& $\tilde{\W}[2^K-1]$& $\tilde{\W}[2^K]$ & $\tilde{\W}[2^K+1]$& \dots& $\tilde{\W}[2^{K+1}-2]$ & $\tilde{\W}[2^{K+1}-1]$ & $\tilde{\W}[2^{K+1}]$\\ \hline \hline $\vv(1)$& $\vv^T(1)$ & && & & & & &$\vv^T(1)$ & \\ $\vv(2)$& & $\vv^T(2)$ && & & $ (\vv^T(1)-\vv^T(2))$ & & & & \\ \dots& & &\dots& & & & \dots& & & \\ $\vv(2^K-1)$& & & & $\vv^T(2^K-1)$ & & & & $ (\vv^T(1)-\vv^T(2^K-1))$ & & \\ \hline \end{tabular} \end{table*} \begin{table*} \renewcommand{\arraystretch}{1.3} \setlength\tabcolsep{2pt} \caption{Requests from server~2 for $\theta=1$} \label{tbl:scheme_ser2} \centering \begin{tabular}{|c||cccc||cccc||cc|} \hline & $\tilde{\W}[1]$ & $\tilde{\W}[2]$ &\dots& $\tilde{\W}[2^K-1]$& $\tilde{\W}[2^K]$ & $\tilde{\W}[2^K+1]$& \dots& $\tilde{\W}[2^{K+1}-2]$ & $\tilde{\W}[2^{K+1}-1]$ & $\tilde{\W}[2^{K+1}]$\\ \hline \hline $\vv(1)$& & && &$\vv^T(1)$ & & & & & $\vv^T(1)$ \\ $\vv(2)$& & $ (\vv^T(1)-\vv^T(2))$ && & & $ \vv^T(2)$ & & & & \\ \dots& & &\dots& & & & \dots& & & \\ $\vv(2^K-1)$& & & & $ (\vv^T(1)-\vv^T(2^K-1))$ & & & & $ \vv^T(2^K-1)$ & & \\ \hline \end{tabular} \end{table*} \subsection{Proof of correctness}\label{subsec:correct} To prove correctness, we show that the user can recover $\{\vv^T(\theta)\W[t]\}_{t=1}^L$ from the combinations \eqref{eq:phase1-1}-\eqref{eq:phase2-22}, received from both servers, while the rate of the scheme equals \eqref{eq:capacity_binary}. Recall that $L=2^{K+1}$, and thus $2^{K+1}$ combinations must be derived from the equations available at the user. $\vv^T(\theta)\tilde{\W}[\theta]$ is given in \eqref{eq:phase1-1}. To obtain $\vv^T(\theta)\tilde{\W}[t]$ for all $t \in \{1, 2,\dots,2^K-1\}\setminus \{\theta \}$, the user combines \eqref{eq:phase1-1} and \eqref{eq:phase2-22}. Similarly, $\vv^T(\theta)\tilde{\W}[2^K-1+\theta]$ is given in \eqref{eq:phase1-2}, and $\vv^T(\theta)\tilde{\W}[t]$ for all $t \in \{ 2^K, \dots,2^{K+1}-2 \} \setminus \{ 2^K-1+\theta \}$ can be obtained by combining \eqref{eq:phase1-2} and \eqref{eq:phase2-12}. Finally, $\vv^T(\theta)\tilde{\W}[2^{K+1}-1]$ and $\vv^T(\theta)\tilde{\W}[2^{K+1}]$ are given in \eqref{eq:phase2-11} and \eqref{eq:phase2-21}, respectively. The total number of downloads is \begin{equation*} Q=2(2^{K+1}-2)=4(2^K-1) \end{equation*} and so the rate of the code is \begin{equation*} R=\frac{L}{Q}=\frac{2^{K+1} }{4(2^K-1)}=\frac{1}{2}\left(1-\frac{1}{2^K} \right)^{-1}. \end{equation*} \subsection{Proof of privacy}\label{subsec:privacy} Our privacy proof is based on the fact that we preserve an equal number of requests for every possible coefficient vector, in addition to using a random permutation over the message layers. Furthermore, we send the requests to each server in a random order. First, consider server~1 with its requests \eqref{eq:phase1-1}, \eqref{eq:phase2-11} and \eqref{eq:phase2-12}. As seen, server~1 only observes that the user requests a linear combination for $2^{K+1}-2$ layers of messages, while two layers are left out. The indices of these layers do not leak any information about the requested combination vector $\vv(\theta)$, thanks to the random permutation of the message layers. Now, let us check the requested coefficient vectors in \eqref{eq:phase1-1}, \eqref{eq:phase2-11} and \eqref{eq:phase2-12}.
We note that the set $\{\{\vv(\theta)-\vv(i)\}_{i\in [1:2^K-1]\setminus\{\theta\}},\vv(\theta)\}$ is equal to the set $\mathcal{V}=\{\vv(1),\dots,\vv(2^K-1)\}$. This means that each possible coefficient vector $\vv\in\mathcal{V}$ is requested exactly twice, in a random order, and thus no information can be obtained by server~1. In fact, it can easily be shown that for any coefficient vector $\vv(j)$, $j=1,\dots,2^K-1$, there is a permutation $\pi$ of the set $\{1,2,\dots,L\}$ that maps the requests of $\{\vv^T(j) \W[t]\}_{t=1}^L$ from one server to the requests of $\{\vv^T(\theta) \W[t]\}_{t=1}^L$ from the same server. The privacy condition at server~2 is guaranteed similarly. \section{General PFR Scheme (Proof of Lemma~\ref{lemma:ach_gen})}\label{sec:scheme_gen} In this section, we present the general achievable scheme for the PFR problem with $N$ servers, $K$ messages, and linear combinations over $GF(q)$. We first define some notation. We define \begin{align} \mathcal{V}=(GF(q))^K\setminus \{\mathbf{0}\} \end{align} as the set of all options for $\vv(\theta)$. In addition, for each $\vv(\theta)\in\mathcal{V}$, we define \begin{align} \mathcal{V}^{(\theta)}=\left\{ \beta \vv(\theta), \beta \in GF(q)\setminus\{0\} \right\}, \end{align} as the set of all non-zero vectors parallel to $\vv(\theta)$. We also define $\mathcal{V}_N\defeq\mathcal{V}^{N-1}$ as the set of all $(N-1)$-tuples of vectors $(\vv(i_1),\dots,\vv(i_{N-1}))$ with each element from $\mathcal{V}$ (all possible $\vv$ vectors): \begin{equation*} \mathcal{V}_N=\{(\vv(i_1),\dots,\vv(i_{N-1})): \vv(i_j)\in\mathcal{V},j=1,\dots,N-1\}. \end{equation*} Note that $ | \mathcal{V}_N|=(q^K-1)^{N-1}$. Moreover, we define $\mathcal{V}^{(\theta)}_N$ as the set of all $(N-1)$-tuples of vectors with each element from $\mathcal{V}^{(\theta)}$: \begin{equation*} \mathcal{V}^{(\theta)}_N=\{(\vv(i_1),\dots,\vv(i_{N-1})): \vv(i_j)\in\mathcal{V}^{(\theta)},j=1,\dots,N-1\}. \end{equation*} Clearly, $\mathcal{V}^{(\theta)}_N \subset \mathcal{V}_N$. Note that $|\mathcal{V}_N^{(\theta)}|=(q-1)^{N-1}$. Now we are ready to detail the proposed scheme in three steps. \textbf{Step 1:} Consider $L$ layers of messages. The user generates a random permutation $\pi$ of the set $\{1,2,\dots,L\}$ and keeps it private from the servers. Apply this random permutation to reorder the message vectors and define $\tilde{\W}[t]\defeq \W[\pi(t)]$ for $t\in\{1,\dots,L\}$. In addition, choose $N-1$ distinct $\alpha_1, \alpha_2, \ldots, \alpha_{N-1} \in GF(q)$ and consider an $(N-1)\times(N-1)$ Vandermonde matrix \[ V=\begin{bmatrix} 1 & \alpha_{1} & \dots & \alpha_{1}^{N-2} \\ 1 & \alpha_2 & \dots & \alpha_{2}^{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \alpha_{N-1} & \dots & \alpha_{N-1}^{N-2} \end{bmatrix} \] Also, consider $\pi'$ as a random permutation of the set $\{1,2,\dots,N-1\}$ and apply this random permutation to the columns of $V$ to get $\tilde{V}=[\tilde{\alpha}_{i,j}]_{(N-1)\times(N-1)}$. \textbf{Step 2:} For each $(\vv_m(i_1),\dots,\vv_m(i_{N-1}))\in\mathcal{V}_N\setminus\mathcal{V}_N^{(\theta)}$, $m=0,\dots,(q^K-1)^{N-1}-(q-1)^{N-1}-1$, repeat the following: \begin{enumerate}[(i)] \item User asks server~1 to send back \begin{equation}\label{eq:step3-1} \sum\limits_{j=1}^{N-1}\vv_m^T(i_j)\tilde{\W}[j+m(N-1)]. \end{equation} \item User asks server~$n$, $n=2,\dots,N$, to send back \begin{equation}\label{eq:step3-2} \sum\limits_{j=1}^{N-1}(\vv_m^T(i_j)+\tilde{\alpha}_{j,n-1}\vv^T(\theta))\tilde{\W}[j+m(N-1)],
\end{equation} where $\tilde{\alpha}_{j,n-1}$ is the element of the permuted Vandermonde matrix $\tilde{V}$ in the $j$-th row and the $(n-1)$-th column. \end{enumerate} We show in the proof of correctness that in each round of this step (i.e., for each $m$), the desired combination $\vv^T(\theta)\tilde{\W}[t]$ is retrieved over $N-1$ message layers. \textbf{Step 3:} For each $(\vv_m(i_1),\dots,\vv_m(i_{N-1}))\in\mathcal{V}_N^{(\theta)}$, $m=0,\dots,(q-1)^{N-1}-1$, repeat the following: \begin{enumerate}[(i)] \item User asks server~1 to send back \begin{equation}\label{eq:step4-1} \sum\limits_{j=1}^{N-1}\vv_m^T(i_j)\tilde{\W}[j+(q^K-1)^{N-1}(N-1)+mN]. \end{equation} \item User asks server~$n$, $n=2,\dots,N-1$, to send back \begin{equation}\label{eq:step4-2} \sum\limits_{j=1}^{N-1}\tilde{\alpha}_{j,n}\vv_m^T(i_j)\tilde{\W}[j+(q^K-1)^{N-1}(N-1)+mN], \end{equation} where $\tilde{\alpha}_{j,n}$ is the element of the permuted Vandermonde matrix $\tilde{V}$ in the $j$-th row and the $n$-th column. \item User asks server~$N$ to send back \begin{equation}\label{eq:step4-3} \vv_m^T(i_{N-1})\tilde{\W}[N+(q^K-1)^{N-1}(N-1)+mN]+\sum\limits_{j=1}^{N-2}\vv_m^T(i_j)\tilde{\W}[j+(q^K-1)^{N-1}(N-1)+mN]. \end{equation} \end{enumerate} Note that the set of requests to each server is sent in a random order. We show in the proof of correctness that in each round of this step (i.e., for each $m$), the desired combination $\vv^T(\theta)\tilde{\W}[t]$ is retrieved over $N$ message layers. \subsection{Proof of correctness}\label{subsec:correct_gen} To prove correctness, we show that the user can recover $\{\vv^T(\theta)\W[t]\}_{t=1}^L$ from the combinations \eqref{eq:step3-1}-\eqref{eq:step4-3}, received from the $N$ servers, while the rate of the scheme equals \eqref{eq:ach_gen2}. In Step~2, we have $|\mathcal{V}_N\setminus\mathcal{V}_N^{(\theta)}|=(q^K-1)^{N-1}-(q-1)^{N-1}$ rounds. In each round $m$, $m\in \{0,\dots,(q^K-1)^{N-1}-(q-1)^{N-1}-1\}$, $N-1$ layers of the desired combination are recovered. The reason is as follows. Subtracting \eqref{eq:step3-1} from \eqref{eq:step3-2}, the user has access to \begin{equation}\label{eq:correct} \sum\limits_{j=1}^{N-1}\tilde{\alpha}_{j,n-1}\vv^T(\theta)\tilde{\W}[j+m(N-1)], \end{equation} for $n=2,\dots,N$. Since the Vandermonde matrix is full rank, \eqref{eq:correct} provides $N-1$ independent linear combinations of $\vv^T(\theta)\tilde{\W}[1+m(N-1)],\dots,\vv^T(\theta)\tilde{\W}[N-1+m(N-1)]$ (that is, $N-1$ layers of the desired combination). Thus, we obtain $\vv^T(\theta)\tilde{\W}[1+m(N-1)],\dots,\vv^T(\theta)\tilde{\W}[N-1+m(N-1)]$ from \eqref{eq:correct}. In total, \begin{equation}\label{eq:L_step3} (N-1)((q^K-1)^{N-1}-(q-1)^{N-1}) \end{equation} layers of the desired combination are recovered in Step~2. The user downloads one equation from each server in each round. Thus, the total number of downloaded equations in this step is \begin{equation}\label{eq:Q_step3} N((q^K-1)^{N-1}-(q-1)^{N-1}). \end{equation} In Step~3, we have $|\mathcal{V}_N^{(\theta)}|=(q-1)^{N-1}$ rounds. In each round $m$, $m \in \{0,\dots,(q-1)^{N-1}-1\}$, $N$ layers of the desired combination are recovered. The reason is as follows.
Since the Vandermonde matrix is full rank, from \eqref{eq:step4-1} and \eqref{eq:step4-2} the user has access to $N-1$ independent linear combinations of $$\vv_m^T(i_1)\tilde{\W}[1+(q^K-1)^{N-1}(N-1)+mN],\dots,\vv_m^T(i_{N-1})\tilde{\W}[N-1+(q^K-1)^{N-1}(N-1)+mN].$$ These are $N-1$ layers of the desired combination since, in this step, the coefficient vectors satisfy $(\vv_m(i_1),\dots,\vv_m(i_{N-1}))\in\mathcal{V}_N^{(\theta)}$, and thus all are parallel to $\vv(\theta)$. Eliminating these $N-2$ layers from \eqref{eq:step4-3}, the user recovers the $N$-th layer of this round, namely $\vv_m^T(i_{N-1})\tilde{\W}[N+(q^K-1)^{N-1}(N-1)+mN]$. Therefore, $N$ layers of the desired combination are recovered for each $m$. In total, \begin{equation}\label{eq:L_step4} N(q-1)^{N-1} \end{equation} layers of the desired combination are recovered in Step~3. The user downloads one equation from each server in each round. Thus, the total number of downloaded equations in this step is \begin{equation}\label{eq:Q_step4} N(q-1)^{N-1}. \end{equation} From \eqref{eq:Q_step3} and \eqref{eq:Q_step4}, the total number of downloads is \begin{equation*} Q=N(q^K-1)^{N-1} \end{equation*} and, in total, \begin{equation}\label{eq:L} L=(N-1)(q^K-1)^{N-1}+(q-1)^{N-1} \end{equation} layers of the desired combination are recovered; hence the rate of the code is as in \eqref{eq:ach_gen2}. \subsection{Proof of privacy}\label{subsec:privacy_gen} The privacy proof is based on the fact that we preserve an equal number of requests for every possible combination vector, in addition to using a random permutation over the message layers. In addition, the requests to each server are sent in a random order. First, consider server~1 with its requests \eqref{eq:step3-1} and \eqref{eq:step4-1}. As seen, server~1 only observes that the user requests linear combinations of $N-1$ message layers with all possible coefficient vectors. The indices of the message layers do not leak any information about the requested combination vector $\vv(\theta)$, thanks to the random permutation of the message layers. In addition, due to the random order of the requests, asking for all possible coefficient vectors makes $(Q_{\vv(i)}^{(1)},A_{\vv(i)}^{(1)})$ equiprobable for $i=1,\dots,q^K-1$. Now, consider server~$n$, $n=2,\dots,N$, with its requests \eqref{eq:step3-2}, \eqref{eq:step4-2} and \eqref{eq:step4-3}. Again, server~$n$ only observes that the user requests linear combinations of $N-1$ message layers with all possible coefficient vectors (in a random order). This is because: \begin{itemize} \item The set $\mathcal{\bar{V}}_m=\{(\vv_m(i_1),\dots,\vv_m(i_{N-1}))\}$ used in the scheme covers $\mathcal{V}_N$ (i.e., all possible $(N-1)$-tuples of vectors $(\vv(i_1),\dots,\vv(i_{N-1}))$ with each element from $\mathcal{V}$). \item The set $\{(\vv_m(i_1)+\tilde{\alpha}_{1,n-1}\vv(\theta),\dots,\vv_m(i_{N-1})+\tilde{\alpha}_{N-1,n-1}\vv(\theta))\}$ is equal to $\mathcal{\bar{V}}_m$. \item When $(\vv_m(i_1),\dots,\vv_m(i_{N-1}))\in\mathcal{V}_N^{(\theta)}$ (in Step~3), the two sets $\{(\vv_m(i_1)+\tilde{\alpha}_{1,n-1}\vv(\theta),\dots,\vv_m(i_{N-1})+\tilde{\alpha}_{N-1,n-1}\vv(\theta))\}$ and $\{(\tilde{\alpha}_{1,n}\vv_m(i_1),\dots,\tilde{\alpha}_{N-1,n}\vv_m(i_{N-1}))\}$ are equal. \end{itemize} Thus, it can be shown that for any combination vector $\vv(j)$, $j=1,\dots,q^K-1$, there is a permutation $\pi$ of the set $\{1,2,\dots,L\}$ that maps the requests for $\vv(\theta)$ to the requests for $\vv(j)$.
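As a quick numerical sanity check (ours), the snippet below evaluates the rate $L/Q$ of the general scheme from the counts \eqref{eq:L} and $Q=N(q^K-1)^{N-1}$ derived above, confirms that it matches the closed form \eqref{eq:ach_gen2}, and verifies that it strictly exceeds the PIR-based virtual-file rate from the earlier remark:

\begin{verbatim}
from fractions import Fraction

def pfr_rate(N, q, K):
    # rate L/Q of the general scheme
    L = (N - 1) * (q**K - 1)**(N - 1) + (q - 1)**(N - 1)
    Q = N * (q**K - 1)**(N - 1)
    return Fraction(L, Q)

def pir_rate(N, q, K):
    # PIR applied to (q^K-1)/(q-1) virtual files: (1 - 1/N)(1 - N^{-V})^{-1}
    V = (q**K - 1) // (q - 1)
    return Fraction(N - 1, N) / (1 - Fraction(1, N**V))

for (N, q, K) in [(2, 2, 3), (3, 3, 2), (4, 5, 2)]:   # cases with q >= N
    V = (q**K - 1) // (q - 1)
    closed_form = Fraction(N - 1, N) * (1 + Fraction(1, (N - 1) * V**(N - 1)))
    assert pfr_rate(N, q, K) == closed_form           # matches the lemma's rate
    assert pfr_rate(N, q, K) > pir_rate(N, q, K)      # beats the PIR-based scheme
    print(N, q, K, float(pfr_rate(N, q, K)), float(pir_rate(N, q, K)))
\end{verbatim}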
\section{Introduction and Summary} {}In most practical quant trading applications\footnote{\, Similar issues are also present in other practical applications unrelated to trading or finance.} one faces an old problem when computing a sample covariance matrix of returns: the number $N$ of returns (e.g., the number of stocks in the trading universe) is much larger than the number $T$ of observations in the time series of returns. The sample covariance matrix $C_{ij}$ ($i,j=1,\dots,N$) in this case is badly singular: its rank is at best $T-1$. So, it cannot be inverted, which is required in, e.g., mean-variance optimization \cite{Markowitz1952}. In fact, the singularity of $C_{ij}$ is only a small part of the trouble: its off-diagonal elements (more precisely, sample correlations) are notoriously unstable out-of-sample. {}The aforesaid ``ills" of the sample covariance matrix are usually cured via multifactor risk models,\footnote{\, For a general discussion, see, e.g., \cite{GrinoldKahn}. For explicit implementations (including source code), see, e.g., \cite{HetPlus}, \cite{StatRM}.} where stock returns are (linearly) decomposed into contributions stemming from some number $K$ of common underlying factors plus idiosyncratic ``noise" pertaining to each stock individually. This is a way of dimensionally reducing the problem in that one only needs to compute a factor covariance matrix $\Phi_{AB}$ ($A,B=1,\dots,K$), which is substantially smaller than $C_{ij}$ assuming $K\ll N$.\footnote{\, This does not solve all problems, however. Thus, unless $K < T$, the sample factor covariance matrix is still singular (albeit the model covariance matrix $\Gamma_{ij}$ that replaces $C_{ij}$ need not be). Furthermore, the out-of-sample instability is still present in sample factor correlations. This can be circumvented via the heterotic risk model construction \cite{Het}; see below.} {}In statistical risk models\footnote{\, See \cite{StatRM}, which gives complete source code, and references therein.} the factors are based on the first $K$ principal components of the sample covariance matrix $C_{ij}$ (or the sample correlation matrix).\footnote{\, The (often misconstrued) ``shrinkage" method \cite{Ledoit} is nothing but a special type of statistical risk models; see \cite{S=FM}, \cite{StatRM} for details.} In this case the number of factors is limited ($K \leq T-1$), and, furthermore, the principal components beyond the first one are inherently unstable out-of-sample. In contrast, factors based on a granular fundamental industry classification\footnote{\, E.g., BICS (Bloomberg Industry Classification System), GICS (Global Industry Classification Standard), ICB (Industry Classification Benchmark), SIC (Standard Industrial Classification), etc.} are much more ubiquitous (in hundreds), and also stable, as stocks seldom jump industries. Heterotic risk models \cite{Het} based on such industry classifications sizably outperform statistical risk models.\footnote{\, In the heterotic risk model construction the sample factor covariance matrix at the most granular level in the industry classification typically would be singular. However, this is rectified by modeling the factor covariance matrix by another factor model with factors based on the next-less-granular level in the industry classification, and this process of dimensional reduction is repeated until the resultant factor covariance matrix is small enough to be nonsingular and sufficiently stable out-of-sample \cite{Het}, \cite{HetPlus}. 
Here one can also include non-industry style factors. However, their number is limited (especially for short horizons) and, contrary to an apparent common misconception, style factors generally are poor proxies for modeling correlations and add little to no value \cite{HetPlus}.} Another alternative is to replace the fundamental industry classification in the heterotic risk model construction by a statistical industry classification based on clustering the return time series data (using machine learning techniques) \cite{StatIC},\footnote{\, Such statistical industry classifications can be multilevel and granular.} without any reference to a fundamental industry classification. Risk models based on statistical industry classifications outperform statistical risk models but underperform risk models based on fundamental industry classifications \cite{StatIC}. {}In this paper we discuss a different approach to building a risk model using machine learning techniques. The idea is simple. A sample covariance matrix $C_{ij}$ is singular (assuming $T \ll N$), but it is semi-positive definite. Imagine that we could compute a large number $M$ of ``samplings" of $C_{ij}$, call them $C_{ij}^{(m)}$, $m=1,\dots,M$, where each ``sampling" is semi-positive definite. Consider their mean\footnote{\, In fact, instead of the arithmetic mean, here we can more generally consider a weighted average with some positive weights $w_m$ (see below). Also, in this paper $C_{ij}^{(m)}$ are nonsingular.} \begin{equation}\label{samplings} \Gamma_{ij} = {1\over M} \sum_{m=1}^M C_{ij}^{(m)} \end{equation} By construction $\Gamma_{ij}$ is semi-positive definite. In fact, assuming $C_{ij}^{(m)}$ are all (sizably) different from each other, $\Gamma_{ij}$ generically will be positive definite and invertible (for large enough $M$). So, the idea is sound, at least superficially, but the question is, what should these ``samplings" $C_{ij}^{(m)}$ be? Note that each element of the sample covariance matrix $C_{ij}$ ($i\neq j$) only depends on the time series of the corresponding two stock returns $R_i(t)$ and $R_j(t)$, and not on the universe of stocks, so any cross-sectional ``samplings" cannot be based on sample covariance matrices. In principle, serial ``samplings" could be considered if a long history were available. However, here we assume that our lookback is limited, be it due to a short history that is available, or, more prosaically, due to the fact that data from a while back is not pertinent to forecasting risk for short horizons as market conditions change. {}A simple way around this is to consider cross-sectional ``samplings" $C_{ij}^{(m)}$ that are not sample covariance matrices but are already dimensionally reduced, even though they do not have to be invertible. Thus, given a clustering of $N$ stocks into $K$ clusters, we can build a multifactor risk model, e.g., via an incomplete heterotic construction (see below). Different clusterings then produce different ``samplings" $C_{ij}^{(m)}$, which we average via Eq. (\ref{samplings}) to obtain a positive definite $\Gamma_{ij}$, which is {\em not} a factor model. However, as usual, the devil is in the detail, which we discuss in Section \ref{sec2}. E.g., the matrix (\ref{samplings}) can have nearly degenerate or small eigenvalues, which requires further tweaking of $\Gamma_{ij}$ to avert, e.g., undesirable effects on optimization.
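Before delving into the details, here is a toy illustration (ours; generic low-rank random matrices stand in for the dimensionally reduced ``samplings" constructed in Section \ref{sec2}) of why the averaging in Eq. (\ref{samplings}) helps: each $C_{ij}^{(m)}$ is singular, yet their equal-weighted mean is generically positive definite once $M$ is large enough.

\begin{verbatim}
import numpy as np

np.random.seed(0)
N, k, M = 100, 10, 30                 # N assets; each "sampling" has rank k << N

samplings = []
for m in range(M):
    B = np.random.randn(N, k)
    samplings.append(B @ B.T)         # semi-positive definite but singular

Gamma = sum(samplings) / M            # the mean over samplings, w_m = 1/M
print(np.linalg.matrix_rank(samplings[0]))   # k: one sampling is not invertible
print(np.linalg.eigvalsh(Gamma).min() > 0)   # True: the mean is positive definite
\end{verbatim}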
{}In Section \ref{sec3} we discuss backtests to compare the machine learning risk models of this paper to statistical risk models, and heterotic risk models based on fundamental industry classification and statistical industry classification. We briefly conclude in Section \ref{sec4}. Appendix \ref{app.A} provides R source code\footnote{\, The code in Appendix A is not written to be ``fancy" or optimized for speed or otherwise.} for machine learning risk models, and some important legalese relating to this code is relegated to Appendix \ref{app.B}. \section{Heterotic Construction and Sampling}\label{sec2} {}So, we have time series of returns (say, daily close-to-close returns) $R_{is}$ for our $N$ stocks ($i=1,\dots,N$, $s=1,\dots, T$, and $s=1$ corresponds to the most recent time in the time series). Let us assume that we have a clustering of our $N$ stocks into $K$ clusters, where $K$ is sizably smaller than $N$, and each stock belongs to one and only one cluster. Let the clusters be labeled by $A=1,\dots,K$. So, we have a map \begin{eqnarray}\label{G.map} &&G:\{1,\dots,N\}\to\{1,\dots,K\} \end{eqnarray} Following \cite{Het}, we can model the sample correlation matrix $\Psi_{ij} = C_{ij}/\sigma_i\sigma_j$ (here $\sigma_i^2 = C_{ii}$ are the sample variances) via a factor model: \begin{eqnarray}\label{model.cor} &&{\widetilde \Psi}_{ij} = \xi_i^2~\delta_{ij} + \sum_{A,B = 1}^K \Omega_{iA}~\Phi_{AB}~\Omega_{jB} = \xi_i^2~\delta_{ij} + U_i~U_j~\Phi_{G(i), G(j)}\\ &&\Omega_{iA} = U_i~\delta_{G(i), A}\\ &&\xi_i^2 = 1 - \lambda(G(i))~U_i^2\\ &&\Phi_{AB} = \sum_{i\in J(A)} \sum_{j\in J(B)} U_i~\Psi_{ij}~U_j\label{Phi} \end{eqnarray} Here the $N_A$ components of $U_i$ for $i\in J(A)$ are given by the first principal component of the $N_A\times N_A$ matrix $[\Psi(A)]_{ij} = \Psi_{ij}$, $i,j\in J(A)$, where $J(A) =\{i|G(i) = A\}$ is the set of the values of the index $i$ corresponding to the cluster labeled by $A$, and $N_A = |J(A)|$ is the number of such $i$. Also, $\lambda(A)$ is the largest eigenvalue (corresponding to the first principal component) of the matrix $[\Psi(A)]_{ij}$. The matrix $\Omega_{iA}$ is the factor loadings matrix, $\xi_i^2$ is the specific variance, and the factor covariance matrix $\Phi_{AB}$ has the property that $\Phi_{AA} = \lambda(A)$. By construction, ${\widetilde \Psi}_{ii} = 1$, and the matrix ${\widetilde \Psi}_{ij}$ is positive-definite. However, $\Phi_{AB}$ is singular unless $K \leq T - 1$. {}This is because the rank of $\Psi_{ij}$ is (at most) $T-1$. Let $V_i^{(a)}$ be the principal components of $\Psi_{ij}$ with the corresponding eigenvalues $\lambda^{(a)}$ ordered in decreasing order ($a=1,\dots,N$). More precisely, at most $T-1$ eigenvalues $\lambda^{(a)}$, $a=1,\dots,T-1$, are nonzero, and the others vanish. So, we have \begin{eqnarray}\label{fac.cov} &&\Phi_{AB} = \sum_{a=1}^{T-1}\lambda^{(a)}~{\widetilde U}_A^{(a)}~{\widetilde U}_B^{(a)}\\ &&{\widetilde U}_A^{(a)} = \sum_{i\in J(A)} U_i~V_i^{(a)} \end{eqnarray} So, the rank of $\Phi_{AB}$ is (at most) $T-1$, and the above incomplete heterotic construction provides a particular regularization of the statistical risk model construction based on principal components. In the complete heterotic construction $\Phi_{AB}$ itself is modeled via another factor model, and this nested ``Russian-doll" embedding is continued until at the final step the factor covariance matrix (which gets smaller and smaller at each step) is nonsingular (and sufficiently stable out-of-sample).
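As an aside (ours, not from the paper, and in Python rather than the R of Appendix \ref{app.A}), the incomplete heterotic construction above is straightforward to express in code. The sketch below builds one model correlation matrix ${\widetilde \Psi}_{ij}$ from stand-in random returns and a given cluster map $G$, and checks the unit diagonal; different clusterings then give the different ``samplings" used in the next subsection.

\begin{verbatim}
import numpy as np

def heterotic_sampling(R, G, K):
    # R: N x T returns; G: length-N integer cluster labels in 0..K-1
    N = R.shape[0]
    Psi = np.corrcoef(R)                   # sample correlation matrix Psi_ij
    U, lam = np.zeros(N), np.zeros(K)
    for A in range(K):
        J = np.where(G == A)[0]            # J(A): stocks in cluster A
        w, v = np.linalg.eigh(Psi[np.ix_(J, J)])
        lam[A], U[J] = w[-1], v[:, -1]     # largest eigenvalue, first PC
    P = np.zeros((N, K))
    P[np.arange(N), G] = U                 # loadings Omega_{iA} = U_i delta_{G(i),A}
    Phi = P.T @ Psi @ P                    # factor covariance Phi_{AB}
    xi2 = 1.0 - lam[G] * U**2              # specific variance xi_i^2
    return np.diag(xi2) + np.outer(U, U) * Phi[np.ix_(G, G)]

rng = np.random.default_rng(1)
N, T, K = 60, 21, 10
R = rng.standard_normal((N, T))            # stand-in returns
G = rng.integers(0, K, size=N)
G[:K] = rng.permutation(K)                 # make sure every cluster is nonempty
Psi_model = heterotic_sampling(R, G, K)
assert np.allclose(np.diag(Psi_model), 1.0)   # unit diagonal by construction
print(np.linalg.eigvalsh(Psi_model).min())    # nonnegative; > 0 if all xi_i^2 > 0
\end{verbatim}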
\subsection{Sampling via Clustering} {}However, there is another way, which is what we refer to as ``machine learning risk models" here. Suppose we have $M$ different clusterings. Let ${\widetilde \Psi}_{ij}^{(m)}$ be the model correlation matrix (\ref{model.cor}) for the $m$-th clustering ($m = 1,\dots, M$). Then we can construct a model correlation matrix as a weighted sum \begin{eqnarray} &&{\widetilde\Psi}_{ij} = \sum_{m=1}^M w_m~{\widetilde \Psi}_{ij}^{(m)}\\ &&\sum_{m=1}^M w_m = 1 \end{eqnarray} The simplest choice for the weights is equal weighting: $w_m = 1/M$. More generally, so long as the weights $w_m$ are positive, the model correlation matrix ${\widetilde\Psi}_{ij}$ is positive-definite. (Also, by construction ${\widetilde\Psi}_{ii} = 1$.) However, combining a large number $M$ of ``samplings" ${\widetilde \Psi}_{ij}^{(m)}$ accomplishes something else: each ``sampling" provides a particular regularization of the sample correlation matrix, and combining such samplings covers many more directions in the risk space than each individual ``sampling". This is because ${\widetilde U}_A^{(a)}$ in Eq. (\ref{fac.cov}) are different for different clusterings. \subsection{K-means} {}We can use k-means \cite{Forgy}, \cite{Lloyd1957}, \cite{Lloyd1982}, \cite{Hartigan}, \cite{HartWong}, \cite{MacQueen}, \cite{Steinhaus} for our clusterings. Since k-means is nondeterministic, it automatically produces a different ``sampling" with each run. The idea behind k-means is to partition $N$ observations into $K$ clusters such that each observation belongs to the cluster with the nearest mean. Each of the $N$ observations is actually a $d$-vector, so we have an $N \times d$ matrix $X_{is}$, $i=1,\dots,N$, $s=1,\dots,d$. Let $C_a$, $a=1,\dots,K$, be the $K$ clusters. Then k-means attempts to minimize \begin{equation}\label{k-means} g = \sum_{a=1}^K \sum_{i \in C_a} \sum_{s=1}^d \left(X_{is} - Y_{as}\right)^2 \end{equation} where \begin{equation}\label{centers} Y_{as} = {1\over n_a} \sum_{i\in C_a} X_{is} \end{equation} are the cluster centers (i.e., cross-sectional means),\footnote{\, Throughout this paper ``cross-sectional" refers to ``over the index $i$".} and $n_a = |C_a|$ is the number of elements in the cluster $C_a$. In Eq. (\ref{k-means}) the measure of ``closeness" is chosen to be the Euclidean distance between points in ${\bf R}^d$, albeit other measures are possible.\footnote{\, E.g., the Manhattan distance, cosine similarity, etc.} \subsection{What to Cluster?} {}Here we are not going to reinvent the wheel. We will simply use the prescription of \cite{StatIC}. Basically, we can cluster the returns, i.e., take $X_{is} = R_{is}$ (then $d = T$). However, stock volatility is highly variable, and its cross-sectional distribution is not even quasi-normal but highly skewed, with a long tail at the higher end -- it is roughly log-normal. Clustering returns does not take this skewness into account, and we might inadvertently cluster together returns that are not at all highly correlated, solely due to the skewed volatility factor. A simple ``machine learning" solution is to cluster the normalized returns ${\widetilde R}_{is} = R_{is} / \sigma_i$, where $\sigma_i^2 = \mbox{Var}(R_{is})$ is the serial variance ($\sigma_i^2 = C_{ii}$). However, as was discussed in detail in \cite{StatIC}, this choice would also be suboptimal, and this is where quant trading experience and intuition trump generic machine learning ``lore".
It is more optimal to cluster ${\widehat R}_{is} = R_{is} / \sigma_i^2$ (see \cite{StatIC} for a detailed explanation). A potential practical hiccup with this is that if some stocks have very low volatilities, we could have large ${\widehat R}_{is}$ for such stocks. To avoid any potential issues with computations, we can ``smooth" this out via ``Winsorization" of sorts (MAD = mean absolute deviation):\footnote{\, This is one possible tweak. Others produce similar results.} \begin{eqnarray}\label{tweak} &&{\widehat R}_{is} = {R_{is} \over {\sigma_i u_i}}\\ &&u_i = {\sigma_i\over v}\\ &&v = \exp(\mbox{Median}(\ln(\sigma_i)) - 3~\mbox{MAD}(\ln(\sigma_i))) \end{eqnarray} and for all $u_i < 1$ we set $u_i = 1$. This is the definition of ${\widehat R}_{is}$ that is used in the source code internally. Furthermore, Median($\cdot$) and MAD($\cdot$) above are cross-sectional. \subsection{A Tweak}\label{sub.tail} {}The number of clusters $K$ is a hyperparameter. In principle, it can be fixed by adapting the methods discussed in \cite{StatIC}. However, in the context of this paper, we will simply keep it as a hyperparameter and test what we get for its various values. As $K$ increases, in some cases it is possible to get relatively small eigenvalues in the model correlation matrix ${\widetilde\Psi}_{ij}$, or nearly degenerate eigenvalues. This can cause convergence issues in optimization with bounds (see below). To circumvent this, we can slightly deform ${\widetilde\Psi}_{ij}$ for such values of $K$. {}Here is a simple method that deals with both of the aforesaid issues at once. To understand this method, it is helpful to look at the eigenvalue graphs given in Figures \ref{Figure1}, \ref{Figure2}, \ref{Figure3}, \ref{Figure4}, which are based on a typical data set of daily returns for $N=2000$ stocks and $T=21$ trading days. These graphs plot the eigenvalues for a single ``sampling" ${\widetilde\Psi}^{(m)}_{ij}$, as well as ${\widetilde\Psi}_{ij}$ based on averaging $M=100$ ``samplings" (with equal weights), for $K=150$ and $K=40$ ($K$ is the number of clusters). Unsurprisingly, there are some small eigenvalues. However, their fraction is small. Furthermore, these small eigenvalues get even smaller for larger values of $K$, but increase when averaging over multiple ``samplings", which also smoothes out the eigenvalue graph structure. {}What we wish to do is to deform the matrix ${\widetilde\Psi}_{ij}$ by tweaking the small eigenvalues at the tail. We need to define what we mean by the ``tail", i.e., which eigenvalues to include in it. There are many ways of doing this, some are simpler, some are more convoluted. We use a method based on eRank or effective rank \cite{RV}, which can be more generally defined for any subset $S$ of the eigenvalues of a matrix, which (for our purposes here) is assumed to be symmetric and semi-positive-definite. Let \begin{eqnarray} &&\mbox{eRank}(S) = \exp(H)\\ &&H = -\sum_{a=1}^L p_a~\ln(p_a)\\ &&p_a = {\lambda^{(a)} \over \sum_{b=1}^L \lambda^{(b)}} \end{eqnarray} where $\lambda^{(a)}$ are the $L$ {\em positive} eigenvalues in the subset $S$, and $H$ has the meaning of the (Shannon a.k.a. spectral) entropy \cite{Campbell60}, \cite{YGH}. {}If we take $S$ to be the full set of $N$ eigenvalues of ${\widetilde \Psi}_{ij}$, then the meaning of $\mbox{eRank}(S)$ is that it is a measure of the effective dimensionality of the matrix ${\widetilde \Psi}_{ij}$. However, this is not what we need to do for our purposes here. 
This is because the large eigenvalues of ${\widetilde \Psi}_{ij}$ contribute heavily into $\mbox{eRank}(S)$. So, we define $S$ to include all eigenvalues ${\widetilde\lambda}^{(a)}$ ($a=1,\dots,N$) of ${\widetilde \Psi}_{ij}$ that do not exceed 1: $S = \{{\widetilde\lambda}^{(a)} | {\widetilde\lambda}^{(a)} \leq 1\}$. Then we define (here $\mbox{floor}(\cdot) = \lfloor\cdot\rfloor$ can be replaced by $\mbox{round}(\cdot)$) \begin{equation}\label{eq.eRank} n_* = |S| - \mbox{floor}(\mbox{eRank}(S)) \end{equation} So, the tail is now defined as the set $S_*$ of the $n_*$ smallest eigenvalues ${\widetilde\lambda}^{(a)}$ of ${\widetilde \Psi}_{ij}$. {}We can now deform ${\widetilde\Psi}_{ij}$ by (i) replacing the $n_*$ tail eigenvalues in $S_*$ by ${\widetilde\lambda}_* = \mbox{max}(S_*)$, and (ii) then correcting for the fact that the so-deformed matrix no longer has a unit diagonal. The resulting matrix ${\widehat\Psi}_{ij}$ is given by: \begin{eqnarray} &&{\widehat\Psi}_{ij} = \sum_{a=1}^{N - n_*} {\widetilde \lambda}^{(a)}~{\widetilde V}_i^{(a)}~{\widetilde V}_j^{(a)} + z_i~z_j \sum_{a = N - n_* + 1}^N {\widetilde \lambda}_*~{\widetilde V}_i^{(a)}~{\widetilde V}_j^{(a)}\\ &&z_i^{2} = y_i^{-2}~\sum_{a = N - n_* + 1}^N {\widetilde \lambda}^{(a)}~[{\widetilde V}_i^{(a)}]^2\\ &&y_i^2 = \sum_{a = N - n_* + 1}^N {\widetilde \lambda}_*~[{\widetilde V}_i^{(a)}]^2 \end{eqnarray} Here ${\widetilde V}_i^{(a)}$ are the principal components of ${\widetilde \Psi}_{ij}$. This method is similar to that of \cite{RJ}. The key difference is that in \cite{RJ} the ``adjustments" $z_i$ are applied to all principal components, while here they are only applied to the tail principal components (for which the eigenvalues are deformed). This results in a smaller distortion of the original matrix. The resultant deformed matrix ${\widehat\Psi}_{ij}$ has improved tail behavior (see Figure \ref{Figure5}). Another bonus is that, while superficially we only modify the tail, the eigenvectors of the deformed matrix ${\widehat\Psi}_{ij}$ are no longer ${\widetilde V}_i^{(a)}$ for all values of $a$, and the eigenvalues outside of the tail are also deformed. In particular, in some cases there can be some (typically, a few) nearly degenerate\footnote{\, They are not degenerate even within the machine precision. However, they are spaced much more closely than other eigenvalues (on average, that is).} eigenvalues ${\widetilde \lambda}^{(a)}$ in the densely populated region of ${\widetilde \lambda}^{(a)}$ (where they are of order 1), i.e., outside of the tail and the higher-end upward-sloping ``neck". The deformation splits such nearly degenerate eigenvalues, which is a welcome bonus. Indeed, the issue with nearly degenerate eigenvalues is that they can adversely affect convergence of the bounded optimization (see below) as the corresponding directions in the risk space have almost identical risk profiles. \section{Backtests}\label{sec3} {}Here we discuss some backtests. We wish to see how our machine learning risk models compare with other constructions (see below). For this comparison, we run our backtests exactly as in \cite{Het}, except that the model covariance matrix is built as above (as opposed to the full heterotic risk model construction of \cite{Het}). To facilitate the comparisons, the historical data we use in our backtests here is the same as in \cite{Het}\footnote{\, The same data is also used in \cite{StatIC}, \cite{StatRM}.} and is described in detail in Subsections 6.2 and 6.3 thereof.
The trading universe selection is described in Subsection 6.2 of \cite{Het}. We assume that i) the portfolio is established at the open with fills at the open prices; and ii) it is liquidated at the close on the same day (so this is a purely intraday strategy) with fills at the close prices (see \cite{MeanRev} for pertinent details). We include strict trading bounds \begin{equation} |H_i| \leq 0.01~A_i \end{equation} Here $H_i$ are the portfolio stock holdings ($i=1,\dots,N$), and $A_i$ are the corresponding historical average daily dollar volumes computed as in Subsection 6.2 of \cite{Het}. We further impose strict dollar-neutrality on the portfolio, so that \begin{equation} \sum_{i=1}^N H_i = 0 \end{equation} The total investment level in our backtests here is $I$ = \$20M (i.e., \$10M long and \$10M short), same as in \cite{Het}. For the Sharpe ratio optimization with bounds we use the R function {\tt{\small bopt.calc.opt()}} in Appendix C of \cite{Het}. Table \ref{table.summary} gives summaries of the eigenvalues for various values of $K$. Considering that the algorithm is nondeterministic, the results are stable against reruns. Table \ref{table.backtests} summarizes the backtest results. Here we can wonder whether the following would produce an improvement. Suppose we start from the sample correlation matrix $\Psi_{ij}$ and run the algorithm, which produces the model correlation matrix ${\widetilde\Psi}_{ij}$. Suppose now we rerun the algorithm (with the same number of ``samplings" $M$) but use ${\widetilde\Psi}_{ij}$ instead of $\Psi_{ij}$ in Eq. (\ref{Phi}) to build ``sampling" correlation matrices $\Psi^{(m)}_{ij}$. In fact, we can do this iteratively, over and over again, which we refer to as multiple iterations in Table \ref{table.iter}. The results in Table \ref{table.iter} indicate that we do get some improvement on the second iteration, but not beyond. Let us note that for $K\geq 100$ with iterations (see Table \ref{table.iter}) the method of Subsection \ref{sub.tail} was insufficient to deal with the issues with small and nearly degenerate eigenvalues, so we used the full method of \cite{RJ} instead (see Subsection \ref{sub.tail} and Table \ref{table.iter} for details), which distorts the model correlation matrix more (and this affects performance). \section{Concluding Remarks}\label{sec4} {}So, the machine learning risk models we discuss in this paper outperform statistical risk models \cite{StatRM}. Their performance is essentially similar to that of the heterotic risk models based on statistical industry classifications using multilevel clustering \cite{StatIC}. However, here we have single-level clustering, and there is no aggregation of clusterings as in \cite{StatIC}. Also, the resultant model correlation matrix ${\widetilde \Psi}_{ij}$ is {\em not} a factor model, whereas the models of \cite{StatIC} are factor models. Note that both the machine learning risk models of this paper and the models of \cite{StatIC} still underperform the heterotic risk models based on fundamental industry classifications; see \cite{Het}, \cite{HetPlus}. {}In this regard, let us tie up a few ``loose ends", so to speak. Suppose we take just a single ``sampling" $\Psi^{(m)}_{ij}$. This is an incomplete, single-level heterotic risk model. However, $\Psi^{(m)}_{ij}$ by construction is positive-definite, so we can invert it and use it in optimization.
So, does averaging over a large number $M$ of ``samplings" (as in the machine learning risk models of this paper), or implementing a multilevel ``Russian-doll" embedding \cite{RD} as in \cite{StatIC}, add value? It does. Thus, two runs based on a single ``sampling" with $K=40$ and $M=1$ produced the following results: (i) ROC = 42.434\%, SR = 15.479, CPS = 2.044; and (ii) ROC = 42.735\%, SR = 15.51, CPS = 2.054 (see Table \ref{table.backtests} for notations). Also, what if, instead of using a single k-means to compute $\Psi^{(m)}_{ij}$, we aggregate a large number $P$ of k-means clusterings as in \cite{StatIC}? This does not appear to add value. Here are the results from a typical run with $K=30$, $M=100$ and $P=100$: ROC = 42.534\%, SR = 15.764, CPS = 2.09. Apparently, and perhaps unsurprisingly, aggregating multiple clusterings and averaging over multiple ``samplings" have similar effects. This, in fact, is reassuring.
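As one final implementation note (ours, a minimal sketch under the same toy conventions as the earlier snippets), the eRank-based tail deformation of Subsection \ref{sub.tail} can be coded compactly; it replaces the $n_*$ tail eigenvalues by ${\widetilde\lambda}_*=\mbox{max}(S_*)$ and restores the unit diagonal via the adjustments $z_i$ applied to the tail principal components only.

\begin{verbatim}
import numpy as np

def deform_tail(Psi):
    # Psi: model correlation matrix with unit diagonal
    lam, V = np.linalg.eigh(Psi)               # ascending eigenvalues; columns = PCs
    S = lam[lam <= 1.0]                        # eigenvalues not exceeding 1
    p = S[S > 0] / S[S > 0].sum()
    erank = np.exp(-(p * np.log(p)).sum())     # eRank(S) = exp(H)
    n_star = len(S) - int(np.floor(erank))     # tail size n_*
    if n_star <= 0:
        return Psi
    tail, keep = V[:, :n_star], V[:, n_star:]
    lam_star = lam[:n_star].max()              # deformed tail eigenvalue
    y2 = lam_star * (tail**2).sum(axis=1)      # y_i^2 (assumed nonzero here)
    z2 = (tail**2 @ lam[:n_star]) / y2         # z_i^2
    z = np.sqrt(z2)
    return (keep @ np.diag(lam[n_star:]) @ keep.T
            + np.outer(z, z) * (lam_star * (tail @ tail.T)))
\end{verbatim}

By construction the deformed matrix retains a unit diagonal, since the $z_i$ exactly compensate for the lifted tail eigenvalues.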
\section{Introduction} Stellar nucleosynthesis via charged-particle-induced reactions terminates when the nuclear binding energy reaches its peak value near $A=56$. The reactions become endothermic in nature, and further production of elements depends on the rates of neutron capture on stable end-products and successive $\beta$-decays. The mechanism is classified into two processes, namely the slow neutron capture process (s-process) and the rapid neutron capture process (r-process), depending on the timescales of neutron capture and $\beta$-decay. There is also a minor contribution from the p-process, which occurs via the capture of protons and results in the production of so-called p-nuclei when the path moves near the proton drip line of the nucleosynthesis chart. An interesting region in the path of heavy element nucleosynthesis exists near the proton $(Z=50)$ shell-closure. In this region, certain nuclei have several long-lived, $\beta$-decaying isomers. The abundance of some elements has contributions from all three processes. Determining the exact contributions of these processes from the observed abundances requires large network calculations, the key inputs of which are the neutron capture rates. The current models for the s-process and p-process, because of their inherent discrepancies, cannot fully describe the origin of certain isotopes in this region of interest. The isotopic abundances can be determined from the $\sigma N$ systematics, for which the cross sections of all the elements in the nucleosynthesis chain have to be known with sufficient accuracy. Some elements in this region, for example $^{114,115}$Sn, have natural abundances too low to allow sufficient enrichment in the samples. Hence, experimental measurement is extremely difficult. The existing data have to be corrected for isotopic impurities. Theoretical extrapolations are therefore in great demand in this respect. Certain isotopes are produced entirely via the s-process. These s-only isotopes are of special interest, as no other nucleosynthesis mechanism takes part in their production. As a consequence, their abundance ratios are simply their isotopic ratios, and hence they are very useful in constraining certain parameters of s-process studies. In particular, nuclei in the concerned region are synthesized in the main component of the s-process, which, in general, occurs in low-mass thermally pulsing asymptotic giant branch (AGB) stars. The neutron flux in the main component is sufficient for steady flow equilibrium to be achieved, and hence the elements can be produced to their saturation abundances. In contrast to the weak component, an uncertainty in the cross section of one isotope then affects only that particular isotope and not the entire abundance distribution; hence the abundance pattern does not suffer from the so-called propagation effect. Branchings in the s-process path occur whenever the neutron capture rate becomes comparable to the corresponding $\beta$-decay rate. A competition then takes place between the two processes, and one can define the branching ratio as the ratio of the neutron capture rate to the sum of all decay and capture rates. The cross sections of long-lived fission products (LLFPs) produced in fission reactors, for example $^{129}$I, are required in nuclear transmutation technology, in which the harmful LLFPs are reduced in amount by converting them into stable or short-lived nuclei. In addition, some stable isotopes are also produced in fission reactors.
Their cross sections are also very important for the isotopic separation of LLFPs if the transmutation system is not adapted to them. In the high-temperature stellar environment, the neutrons are thermalized by collisions. The experimental neutron spectra are produced, in principle, by three reactions, namely $^{7}$Li$(p,n)^{7}$Be, $^{18}$O$(p,n)^{18}$F, and $^{13}$C$(\alpha,n)^{16}$O, at thermal energies of 25, 5, and 8 keV, respectively. However, modern network codes coupled with stellar models require cross sections at several higher energies, up to $\sim$ 100 keV, during various phases of stellar burning. In such scenarios, the cross sections at higher energies have to be extrapolated from statistical model calculations. Thus, in general, experiments cannot be performed at all astrophysical energies for all isotopes of interest, and the stellar reaction rates, involving all the complicated reaction mechanisms activated in stellar environments, cannot be measured under normal laboratory conditions; theoretical extrapolations are therefore required. On the other hand, experimental information is necessary to test the validity of theoretical models and to indicate where their further development is required. In fact, only a close collaboration between theory and experiment will help in getting the complete picture of stellar heavy element nucleosynthesis. The current work presents Hauser-Feshbach statistical model calculations of reaction cross sections and rates at astrophysically important energies for various nuclei around the $Z=50$ shell closure. The current paper is organized as follows. First, we briefly describe the theory. In the results section, the $(n,\gamma)$ cross sections for a number of elements from indium to xenon are compared with the available experimental results. After that, the Maxwellian-averaged cross sections, first at 30 keV and then at several other energies, are presented along with the MOST predictions and available experimental data. We also present the astrophysical reaction rates for some selected important isotopes around the Sn shell closure. Finally, we discuss our work. \section{Theoretical framework} The microscopic optical model potential is widely used to describe absorption and scattering phenomena. A complex microscopic optical model potential simplifies the complicated many-body description of the nucleon-nucleus interaction by an average one-body mean-field potential. It divides the incident flux into a part describing elastic scattering and another part describing the non-elastic channels. The solution of the Schr{\"o}dinger equation with this potential then yields angular distributions as well as total reaction cross sections. The basic building block in constructing an optical model potential is the nucleon-nucleon interaction. We have chosen the standard real density dependent M3Y (DDM3Y) interaction \cite{ddm3y} and folded it with the target radial matter densities, obtained from the relativistic mean field model. The integration is done in spherical coordinates \cite{n82}. A spin-orbit interaction term with energy dependent phenomenological potential depths is also included according to Scheerbaum's prescription \cite{scheerbaum}. In our previous studies, this potential has been found to describe elastic scattering as well as proton and neutron capture reaction phenomena at low energies \cite{gg_cl1,gg_cl2,gg_cl3,55-60,110-125,40-55,n82,n50}.
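To make the folding procedure concrete, a minimal numerical sketch is given below (Python). It uses a toy two-parameter Fermi density (with illustrative parameter values) in place of the RMF density, and keeps only the direct two-Yukawa part of the M3Y interaction; the density dependence, the exchange contribution, and the spin-orbit term used in the actual calculation are omitted. For a spherical density, the three-dimensional folding integral reduces to the one-dimensional form coded here.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Direct part of the M3Y (Reid) interaction (MeV, distances in fm);
# the DDM3Y density dependence is omitted in this sketch.
def v_m3y(s):
    return 7999.0*np.exp(-4.0*s)/(4.0*s) - 2134.0*np.exp(-2.5*s)/(2.5*s)

# Toy two-parameter Fermi matter density (fm^-3), standing in for
# the RMF density of the target; R0, a, rho0 are illustrative.
def rho(r, R0=5.4, a=0.55, rho0=0.16):
    return rho0/(1.0 + np.exp((r - R0)/a))

# U(R) = (2*pi/R) int_0^inf dr r rho(r) int_{|R-r|}^{R+r} ds s v(s)
def folded_potential(R, rmax=15.0):
    inner = lambda r: quad(lambda s: s*v_m3y(s), abs(R - r), R + r)[0]
    return (2.0*np.pi/R)*quad(lambda r: r*rho(r)*inner(r), 0.0, rmax)[0]

print(folded_potential(0.5), folded_potential(5.0))  # MeV
\end{verbatim}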
In the present work, this potential has been used to calculate the radiative neutron capture cross sections in and around the $Z=50$ shell-closure. The baryonic matter density is extracted in the relativistic-mean-field (RMF) approach. The standard FSU Gold Lagrangian density \cite{fsugold} with a definite set of parameters \cite{n82} is used to describe the RMF theory. Our RMF model has previously been found to reproduce the experimental binding energies and root-mean-square charge radius values in this region of interest \cite{110-125}. The cross sections are calculated in the compound nuclear Hauser-Feshbach (HF) formalism using the statistical model reaction code TALYS1.8 \cite{talys2}. Various input parameters play crucial roles in statistical model calculations. The level densities are taken from Goriely's microscopic calculations \cite{ldmodel}. In the case of radiative capture cross sections, the exit channel deals with one or more $\gamma$-rays. The strength function, which describes the transmission, depends on the $\gamma$-ray energy and also on the energy and width of giant dipole resonances. We have taken the dominant E1 $\gamma$-ray strength function from the Hartree-Fock-Bogolyubov calculation \cite{e1strength}. The transmission coefficients are obtained from the microscopic optical model potential as functions of the phase shifts. The complex phase shifts are expressed in terms of the logarithmic derivative of the wave functions obtained from the solution of the Schr{\"o}dinger wave equation. Width fluctuation corrections are also included in our calculation according to Moldauer's prescription. These are essentially correlation factors with which all partial channels for incoming and outgoing particles have to be multiplied. These renormalization factors redefine the transmission coefficients by redistributing the total width over all possible channels in order to conserve the total cross section. The major effect is to enhance the elastic and weak channels over the dominant one. \begin{figure*} \includegraphics[scale=1.05]{path.eps} \caption{(Color online) S-process path near the Sn-Sb-Te region. The shaded rectangles represent stable isotopes. The weak branchings are designated by dashed lines. Rectangles with thick borders denote s-only isotopes and those filled with patterns represent p-nuclei. \label{s-path}} \end{figure*} The peak and width of the neutron energy distribution depend on the centrifugal quantum number $(l)$, which accounts for the contributions of different partial waves. Hence, the centrifugal barrier does play a role in neutron induced reactions by slightly shifting the peak and width. We have calculated the $(n,\gamma)$ cross sections in the energy range from 1 keV to 1 MeV. Neutrons are easily thermalized in a high-temperature stellar environment. Hence, Maxwellian-averaged cross sections are obtained by averaging the total $(n,\gamma)$ cross sections over the Maxwell-Boltzmann distribution as follows. \begin{equation} \langle\sigma\rangle=\frac{2}{\sqrt{\pi}}\, \frac{\int_{0}^{\infty} \sigma(E_{n})\, E_{n}\, \exp(-E_{n}/kT)\, dE_{n}}{\int_{0}^{\infty} E_{n}\, \exp(-E_{n}/kT)\, dE_{n}} \end{equation} Here, $E_{n}$ is the neutron energy in the center-of-mass frame, $k$ is the Boltzmann constant, and $T$ is the temperature. Similarly, Maxwellian-averaged stellar reaction rates can be obtained by considering thermodynamic equilibrium between the compound nuclear cross sections of nuclei existing in the ground state as well as in different excited states.
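For illustration, the following minimal Python sketch (with an illustrative energy grid; energies in keV) evaluates this average for a given cross section function. For a pure $1/v$ cross section, $\sigma(E_n)=c/\sqrt{E_n}$, the average reduces analytically to $\langle\sigma\rangle=c/\sqrt{kT}$, which provides a simple check:
\begin{verbatim}
import numpy as np

def macs(sigma, kT, n=20000, emax=30.0):
    # <sigma> = (2/sqrt(pi)) int sigma(E) E exp(-E/kT) dE
    #                      / int E exp(-E/kT) dE
    # (the uniform grid spacing cancels in the ratio of the sums)
    E = np.linspace(1e-6, emax*kT, n)     # energies in keV
    w = E*np.exp(-E/kT)
    return (2.0/np.sqrt(np.pi))*np.sum(sigma(E)*w)/np.sum(w)

# 1/v check at kT = 30 keV: expect 1/sqrt(30) = 0.1826
print(macs(lambda E: 1.0/np.sqrt(E), kT=30.0))
\end{verbatim}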
More details on the theoretical formalism are available in Dutta {\em et al.} \cite{n82,n50}. Classical s-process calculations prefer the MACS value at an energy of 30 keV. However, in recent days, more refined network models coupled with stellar hydrodynamics, which explicitly take care of branching analysis, demand MACS values over a range of thermal energies. In most cases, especially for measurements by the activation technique, it is not always possible to obtain MACSs over the entire required range, and in those cases extrapolations from calculated values are very necessary. For this reason, we have calculated the MACSs from 5 to 100 keV for a few selected important isotopes that do not have experimental MACSs available. \begin{figure} \includegraphics[scale=0.600]{sn1.eps} \caption{Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{115}$In and $^{112,114}$Sn. Solid lines indicate theoretical results. For convenience of viewing, cross section values of $^{112}$Sn are multiplied by a factor of 2. \label{snngxs1}} \end{figure} \begin{figure} \includegraphics[scale=0.600]{sn2.eps} \caption{ Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{115,116,117,118}$Sn. Solid lines indicate the theoretical results. For convenience of viewing, cross section values of $^{115}$Sn and $^{116}$Sn are multiplied by factors of 5 and 0.1, respectively. \label{snngxs2}} \end{figure} \begin{figure} \includegraphics[scale=0.600]{sn3.eps} \caption{ Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{119,120,122,124}$Sn. Solid lines indicate the theoretical results. For convenience of viewing, cross section values of $^{122}$Sn are multiplied by a factor of 0.1. \label{snngxs3}} \end{figure} \section{Results} Fig.~\ref{s-path} shows the s-process nucleosynthesis chain near the $Z=50$ shell closure. We have plotted the theoretical $(n,\gamma)$ cross sections together with experimental measurements for a number of $(n,\gamma)$ reactions in the range of interest, from Fig. \ref{snngxs1} to Fig. \ref{xengxs}. Experimental values for various targets are available on the website of the National Nuclear Data Center \cite{nndc}. In general, the most recent measurements are taken for comparison. In some cases, two or more different data sets are plotted to accommodate a sufficient number of data points over the entire energy interval of interest. The theoretical $(n,\gamma)$ cross sections on $^{115}$In and several isotopes of tin are plotted with experimental data in Figs. \ref{snngxs1}, \ref{snngxs2}, and \ref{snngxs3}. Experimental data for $^{115}$In are from the measurement of Kononov {\em et al.} \cite{in115_1xs}. The proton magic element tin has the largest number of stable isotopes. The isotopes $^{112,114}$Sn are very rare and are produced only in the p-process. Abundances are most precisely measured among the isotopes of a single element, and hence the large number of stable tin isotopes provides an opportunity to test the accuracy of theoretical models. The isotope $^{116}$Sn is shielded against the r-process $\beta$-decay flow by the stable isobar $^{116}$Cd. It is therefore least affected by any nearby branchings and experiences the complete s-process flow; hence, it can be used to normalize the entire $\sigma N$ abundance curve. The isotopes $^{114,115}$Sn have very low enrichment. Timokhov {\em et al.
} \cite{sn112_1sn114_2sn115_2sn122av1sn124av1xs} measured the capture cross sections on the stable isotopes of tin, including $^{112,114-120,122,124}$Sn. We have taken the data for $^{118,119}$Sn from Macklin {\em et al.} \cite{sn117sn118sn119sn120_1xs}. We have also plotted the data of Nishiyama {\em et al.} \cite{sn116_2xs} for $^{116-119}$Sn and Koehler {\em et al.} \cite{sn116_1xs} for $^{116}$Sn. These older measurements do not provide data for the low energy region below 20 keV. Later, Wisshak {\em et al.} \cite{sn114_3sn115_3sn115_1xs} measured neutron capture cross sections on $^{114-118,120}$Sn from 3 to 225 keV using gold as a standard. They reported an uncertainty of $\sim$ 1 \%, which is better than the previous measurements. We have also plotted their data with our calculations. As can be seen from Figs. \ref{snngxs1}, \ref{snngxs2}, and \ref{snngxs3}, the data agree fairly well with our theoretical values. \begin{figure} \center \includegraphics[scale=0.600]{te2.eps} \caption{ Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{121,123}$Sb and $^{122,123}$Te. Solid lines indicate the theoretical results. For convenience of viewing, cross section values of $^{121}$Sb and $^{123}$Sb are multiplied by factors of 3 and 10, respectively. \label{tengxs1}} \end{figure} \begin{figure}[htp] \center \includegraphics[scale=0.600]{te1.eps} \caption{Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{124-126}$Te. Solid lines indicate the theoretical results. For convenience of viewing, cross section values of $^{127}$I, $^{129}$I, and $^{126}$Te are multiplied by factors of 2, 10, and 0.5, respectively. \label{tengxs2}} \end{figure} Antimony has two stable isotopes, $^{121}$Sb and $^{123}$Sb. This element carries the nucleosynthesis flow from tin to tellurium. Tolstikov {\em et al.} \cite{sb121sb123xs} measured the capture cross sections on the $^{121,123}$Sb isotopes via the activation technique in the energy range from 0.3 to 2.7 MeV. They also compared their results with statistical model calculations using an optical model potential. We have plotted their measured values with our theoretical calculations in Fig. \ref{tengxs1}. Tellurium, the close neighbor of the element tin, is unique as it has three s-only nuclei among its eight naturally occurring isotopes, one of which has an odd mass number. The three isotopes $^{122,123,124}$Te are shielded from the r-process by the isotopes of tin and antimony. These three neighboring s-only isotopes can be used to check the validity of the local approximation, i.e., $\sigma_{A}N_{A}$= constant, in the classical s-process scenario where flow equilibrium is believed to be achieved. The natural abundances of these isotopes are quite low and the cross sections near the shell closure are small. Hence, high sensitivity in the experimental procedure is demanded. Similar to $^{116}$Sn, the isotope $^{124}$Te is also subject to the full s-process flow and can become a useful calibration point to constrain the main s-process. Xia {\em et al.} \cite{te123xs1} measured the neutron capture cross sections for $^{122,123,124}$Te in the energy range from 1 to 60 keV using a setup of three Moxon-Rae detectors and the time-of-flight (TOF) technique. The systematic uncertainty was $\sim$ 5\% whereas the statistical uncertainty was less than 2\%. Wisshak {\em et al.
} \cite{te123_2te124_1te125_1te126_1xs} measured the neutron capture cross sections on $^{120,122-126}$Te in the energy range from 10 to 200 keV. Previously, Macklin and Gibbons \cite{te122te123_3te124_3te125_3xs} had used the total energy technique to determine the cross sections on the same isotopes of tellurium from 30 to 220 keV. We have plotted these experimental data with our calculated results in Figs. \ref {tengxs1} and \ref {tengxs2}. The $(n,\gamma)$ cross sections for $^{127,129}$I are plotted in Fig. \ref{tengxs2}. The isotope $^{129}$I is a long-lived fission product with a $\beta$-decay half-life of about 15.7 million years and is useful in nuclear transmutation technology. Its formation in the s-process is blocked by the instability of $^{128}$I. It can be formed in the r-process via the $\beta$ decay of $^{129}$Te. Noguere {\em et al.} \cite{i127_2i129_2xs} have recently measured the $(n,\gamma)$ cross sections on $^{127,129}$I. We have also plotted the data of Voignier {\em et al.} \cite{i127_1xs} for $^{127}$I and Macklin \cite{i129_1xs} for $^{129}$I. The element xenon is of particular interest as it has nine stable isotopes produced in different processes of heavy element nucleosynthesis. It has two s-only isotopes, $^{128,130}$Xe. \begin{figure} \center \includegraphics[scale=0.600]{xe.eps} \caption{ Comparison of $(n,\gamma)$ cross sections of the present calculation with experimental measurements for $^{128-130}$Xe. Solid lines indicate the theoretical results. \label{xengxs}} \end{figure} Unlike for other elements, the solar xenon abundance cannot be determined from solar spectral analysis or from primitive meteorites. Thus, it has to be determined from the systematic study of $\sigma$N statistics. Fig.~\ref{xengxs} shows the calculated $(n,\gamma)$ cross sections for $^{128,129,130}$Xe. Experimental data are taken from Reifarth {\em et al.} \cite{xe128xe129xe130xs}. They measured the $(n,\gamma)$ cross sections using gold as a standard in the neutron energy range from 3 to 225 keV using the TOF method, with an uncertainty of 2\%. The ratios of the xenon to gold experimental cross sections were converted into absolute xenon cross sections by using the gold data of R. L. Macklin (private communication), followed by a normalization by a factor of 0.989 to the absolute value of Ratynski and K{\"a}ppeler \cite{ratynski_gold_ref}. \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.2} \begin{table*}[htb] \center \caption{Maxwellian averaged cross sections at $kT=30$ keV for nuclei near the $Z=50$ shell closure. Experimental values are from Ref. \cite{kadonis1}. For unstable and radioactive nuclei, experimental data are not available. See text for other experimental values. \label{macs30kev}} \begin{tabular}{crrrcrrr}\hline &\multicolumn{3}{c}{MACS (mb)}& &\multicolumn{3}{c}{MACS (mb)}\\\cline{2-4}\cline{6-8} Nucleus&Present & Exp. & MOST & Nucleus&Present & Exp.
& MOST\\ \hline $_{49}^{113}$In&603&787$\pm$70&314& $_{49}^{115}$In&433&706$\pm$70&298\\ $_{50}^{112}$Sn&188.7 &210$\pm$12 &153 & $_{50}^{114}$Sn&101.5& 134.4$\pm$1.8 & 72.9\\ $_{50}^{115}$Sn&284& 342.4$\pm$8.7 & 245& $_{50}^{116}$Sn&69& 91.6$\pm$0.6 & 45.6\\ $_{50}^{117}$Sn&415& 318.8$\pm$4.8 & 299& $_{50}^{118}$Sn&66.3& 62.1$\pm$0.6 & 48.2\\ $_{50}^{119}$Sn&306& 180$\pm$10 & 240& $_{50}^{120}$Sn&44.3&36.2$\pm$0.3 & 30.2\\ $_{50}^{121}$Sn&217& & 341& $_{50}^{122}$Sn&44.5&21.9$\pm$1.5 & 32.9\\ $_{50}^{124}$Sn&17.6&12.0$\pm$1.8 & 13.6\\ $_{51}^{121}$Sb&678& 532$\pm$16 & 417& $_{51}^{122}$Sb&1229& & 772\\ $_{51}^{123}$Sb&709 &303$\pm$9 & 422\\ $_{52}^{122}$Te&204 &295$\pm$3 & 165& $_{52}^{123}$Te&981 &832$\pm$89 & 548\\ $_{52}^{124}$Te&164 &155$\pm$2 & 105& $_{52}^{125}$Te&552 &431$\pm$4 & 382\\ $_{52}^{126}$Te&91.3 &81.3$\pm$1.4 & 73.7& $_{52}^{127}$Te&851&&\\ $_{52}^{128}$Te&101&44.4$\pm$1.3 &74.3\\ $_{53}^{127}$I&766&635$\pm$30 & 470& $_{53}^{128}$I&1407.2& & \\ $_{53}^{129}$I&901&441$\pm$22 &497\\ $_{54}^{124}$Xe&692 &644$\pm$83&500& $_{54}^{126}$Xe&471&359$\pm$51&333\\ $_{54}^{128}$Xe&169&262.5$\pm$3.7&105& $_{54}^{129}$Xe&588&617$\pm$12&307\\ $_{54}^{130}$Xe&125&132.0$\pm$2.1&89.8& $_{54}^{131}$Xe&618 &&319\\ $_{54}^{132}$Xe&60.1&64.6$\pm$5.3&67.4& $_{54}^{133}$Xe&464&&235\\ \hline \end{tabular} \end{table*} We have also calculated the Maxwellian averaged cross sections and compared them with the values recommended by Bao {\em et al.} \cite{bao} and with the theoretical MOST2005 predictions \cite{most2005} in Table~\ref{macs30kev}. These values are available in the KADoNiS database \cite{kadonis1}. \setlength{\tabcolsep}{12pt} \renewcommand{\arraystretch}{1.25} \begin{table*} \center \caption{Maxwellian-averaged cross sections over a range of thermal energies for several branch-point nuclei in the s-path near the Sn shell closure and for stable $^{131}$Xe. \label{macs_range}} \begin{tabular}{cccccc}\hline $kT$ (MeV)&\multicolumn{5}{c}{MACS (mb)}\\\cline{2-6} &$^{121}$Sn$(n,\gamma)$ & $^{122}$Sb$(n,\gamma)$ &$^{127}$Te$(n,\gamma)$&$^{131}$Xe$(n,\gamma)$&$^{133}$Xe$(n,\gamma)$\\\hline 0.005&683&3601&2400&1778&1348\\ 0.010&437&2346&1565&1159& 873\\ 0.015&338&1855&1241& 915& 686\\ 0.020&281&1572&1060& 777& 582\\ 0.025&244&1377& 939& 685& 514\\ 0.030&217&1229& 851& 618& 464\\ 0.040&179&1012& 727& 527& 398\\ 0.050&154& 859& 643& 466& 353\\ 0.060&137& 744& 582& 421& 421\\ 0.080&113& 583& 499& 359& 277\\ 0.100& 98& 476& 443& 316& 248\\ \hline \end{tabular} \end{table*} \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.25} \begin{table*} \center \caption{Astrophysical reaction rates $N_{A}<\sigma v>$ (cm$^{3}$ mol$^{-1}$ sec$^{-1}$) over a range of stellar temperatures from the present calculations for the targets $^{112,114,115}$Sn and $^{131}$Xe. The rates are in units of $10^{7}$. For the sake of comparison, we have also listed the theoretical rates from the BRUSLIB database \cite{bruslib}. \label{rates}} \begin{tabular}{ccc|cc|cc|cc}\hline $T_{9}$(GK) & \multicolumn{8}{c}{$N_{A}<\sigma v>$ (cm$^{3}$ mol$^{-1}$ sec$^{-1}$)}\\\hline &\multicolumn{2}{c}{$^{112}$Sn}&\multicolumn{2}{c}{$^{114}$Sn} &\multicolumn{2}{c}{$^{115}$Sn}&\multicolumn{2}{c}{$^{131}$Xe}\\\cline{2-3}\cline{4-5}\cline{6-7}\cline{8-9} & Pres. & Ref.\cite{bruslib}& Pres.& Ref.\cite{bruslib}& Pres. & Ref.\cite{bruslib} & Pres.
& Ref.\cite{bruslib}\\\hline 0.1&2.754&3.308&1.594&1.405&4.273&3.239&9.831&4.473\\ 0.2&2.746&3.287&1.562&1.309&4.132&3.128&9.250&4.291\\ 0.3&2.785&3.250&1.571&1.263&4.110&3.045&8.960&4.186\\ 0.4&2.829&3.222&1.588&1.241&4.118&2.983&8.874&4.153\\ 0.5&2.877&3.206&1.608&1.232&4.138&2.939&9.003&4.206\\ 0.6&2.928&3.205&1.631&1.234&4.169&2.913&9.274&4.316\\ 0.7&2.982&3.219&1.657&1.244&4.210&2.901&9.598&4.445\\ 0.8&3.041&3.247&1.686&1.259&4.260&2.901&9.912&4.567\\ 0.9&3.104&3.286&1.718&1.279&4.319&2.913&10.827&4.671\\ 1.0&3.171&3.335&1.752&1.303&4.389&2.934&10.398&4.751\\ 2.0&4.043&4.189&2.228&1.662&5.631&3.551&10.379&4.477\\ 3.0&5.389&5.597&3.018&2.203&6.652&3.640&7.024&1.912\\ 4.0&6.954&6.647&3.878&2.367&5.304&1.849&2.446&0.522\\ 5.0&4.971&4.296&3.647&1.359&2.287&0.707&0.645&0.189\\\hline \end{tabular} \end{table*} For a number of selected isotopes, we have presented the Maxwellian-averaged cross sections and reaction rates over a range of astrophysical energies and temperatures in Tables \ref{macs_range} and \ref{rates}. The neutron capture cross sections of isotopes acting as important branch points play a crucial role in the production of a few s-only isotopes. Moreover, these unstable branch points are barely accessible to experimental measurements. The production of the three neighboring s-only isotopes $^{122,123,124}$Te is governed by the branches at $^{121}$Sn and $^{122}$Sb. The branch at $^{127}$Te contributes to the production of the s-only isotopes $^{128,130}$Xe, the MACSs of which are well determined, with uncertainties of less than 2\%. The branch at $^{133}$Xe has a marginal neutron flow towards the neutron-rich isotope $^{134}$Xe owing to its short $\beta$-decay lifetime ($t_{1/2}=5.2$ days). The two important s-only isotopes $^{134,136}$Ba are affected by branch points that are strongly sensitive to stellar temperature, electron density, and neutron density. The MACS values for the branch-point nucleus $^{133}$Xe are needed to correctly predict the small but non-negligible s-process contributions to these two s-only nuclei. These four unstable short-lived branch-point isotopes do not have experimental data. For the isotope $^{127}$Te, not even a theoretical MACS estimate has been made so far. Hence, we have presented MACS values for them from 5 to 100 keV. Experiments have not yet been performed on the stable isotope $^{131}$Xe, possibly due to the low enrichment achievable in samples. Hence, we have presented our calculated values for both cross sections and reaction rates over a range of thermal energies and stellar temperatures for this nucleus. The astrophysical origin of some rare isotopes, e.g., $^{112,114,115}$Sn, in this region of interest has been a long-standing problem. These nuclei are bypassed by the main flow of the s-process and by the post-freeze-out $\beta$-decay flow of the r-process and hence are classified as p-nuclei. Their abundances are small, and it is difficult to predict the exact s-process contributions to them. Empirical scaling laws between the solar abundances of p- and s-only nuclei with the same atomic number have been proposed on the basis of theoretical model calculations \cite{hayakawa}. These scaling laws suggest that they are basically produced in photo-disintegration reactions. The rates of such reactions can be obtained from the forward $(n,\gamma)$ rates by the principle of detailed balance. For this reason, we have presented the astrophysical radiative neutron capture rates from our calculations for the rare p-only isotopes of tin in Table \ref{rates}.
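For orientation, we recall the generic statistical-model form of this detailed-balance relation for a reaction $A(n,\gamma)B$ (spins $J$, temperature-dependent partition functions $G$, mass numbers $A_{i}$, $m_{u}$ the atomic mass unit, and $Q_{n\gamma}$ the reaction $Q$-value); up to the normalization conventions of the standard compilations, \begin{equation*} \lambda_{(\gamma,n)}(T)\simeq\frac{(2J_{A}+1)(2J_{n}+1)}{2J_{B}+1}\, \frac{G_{A}(T)}{G_{B}(T)} \left(\frac{A_{A}A_{n}}{A_{B}}\,\frac{m_{u}kT}{2\pi\hbar^{2}}\right)^{3/2} e^{-Q_{n\gamma}/kT}\,\langle\sigma v\rangle_{(n,\gamma)}, \end{equation*} so that the photodisintegration rate is exponentially sensitive to the neutron separation energy of $B$.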
The theoretical rates from the BRUSLIB data library \cite{bruslib} are also listed alongside to show the difference. The rates in the BRUSLIB database were evaluated using the reaction code TALYS. The inputs for the statistical model calculations are different from those of our study. Various global and local microscopic models were used for nuclear structure properties, deformation, optical potential, level densities, strength functions, etc., whenever experimental information was not available. The optical potential of Ref. \cite{bruslib_optmod} was used. Nuclear masses and deformations were taken from the HFB mass model based on an extended Skyrme force with a four-parameter delta-function pairing force \cite{bruslib_hfb}. The E1 $\gamma$-ray strength functions were taken from the HFB+QRPA calculations of Goriely \cite{bruslib_e1}. Nuclear level densities were taken from the same reference as in our present study. It can be seen that the rates that our calculations yield are quite different from those of the BRUSLIB database. Especially for stable $^{131}$Xe, our predicted values are larger by a factor of more than two at comparatively low temperatures and by a factor of more than three at temperatures $>$ 1 GK. \section{Discussion and Summary} From the figures and tables it can be observed that our microscopic model reproduces the experimental cross section values for the nuclei studied in the present case reasonably well, except for a very few cases. Our theory yields almost twice the experimental values for $^{122}$Sn, $^{123}$Sb, $^{129}$I, and $^{128}$Te, while for $^{115}$In it yields about half of the experimental value at 30 keV. Nevertheless, the agreement between theory and experiment for the MACS values is within satisfactory limits, with differences ranging between $\sim$ 5 and 25\%. For example, the cross section of $^{124}$Te is known experimentally with an accuracy of 1\%; we present a value that differs from it by 5\%. This demonstrates the predictive power of our theoretical model. The exceptions are $^{115}$In, $^{128}$Te, and $^{129}$I, for which our model either largely overestimates or underestimates the measurements. It is notable as well as interesting that for $^{122}$Sb and $^{131,133}$Xe the predicted MACS values at 30 keV are very different from the pre-existing MOST calculations. It is to be noted that most of the nuclei studied in the present work, lying in the neighborhood of the $Z=50$ shell-closure and participating in the main s-component, reside close to the stability valley. Hence, deformation is very small and the spherical optical model, described in terms of the nuclear radius $R=r_{0}A^{1/3}$, can be reliably applied. However, small quadrupole or hexadecapole deformations, arising due to static or dynamic collective rotational or vibrational excitations, should be treated more accurately in distorted-wave Born approximation (DWBA) calculations, or by coupled-channel calculations in the case of strong deformations. Application of the statistical model requires a large number of overlapping resonances at the compound nucleus formation energy, so that averaged transmission coefficients, which do not show resonant behavior, can be obtained. This, in turn, requires a large number of levels per energy range in the compound nucleus which can act as doorway states to its formation. The individual resonance widths can then be replaced by an average one. However, nuclei near the shell closures, in general, have low level densities and it is difficult to apply the statistical model in these regions.
However, comparatively broad s-wave neutron level spacings allow one to apply the statistical model at low level densities even near the shell-closures. The statistical model predicts somewhat overestimated cross sections for low level densities in compound nuclei \cite{ldens_low}. A lower limit on the number of levels per energy window that suffices to replace the sum over resonances by an integral over the HF cross section is set to be about 10 in the worst case, by the numerical calculation in Ref. \cite{rau1}. Rauscher {\em et al.} \cite{rau1} have also derived and presented a lower temperature limit above which the statistical model reaction rate calculations for various neutron, proton, and $\alpha$-induced reactions are valid. At the low energies of astrophysical regimes, the cross sections can have a direct capture component as well as contributions from individual narrow resonances that have to be described by Breit-Wigner terms. In extreme cases, interference terms may also appear. Within stars, nuclei exist both in the ground state and in thermally populated excited states in statistical equilibrium. Hence, cross sections for nuclei existing only in the ground state give merely an incomplete picture. To get the accurate stellar cross sections, these values have to be complemented with stellar enhancement factors (SEFs), \begin{equation} \mathrm{SEF}=\frac{\langle\sigma\rangle^{*}}{\langle\sigma\rangle^{gr}}. \end{equation} Here, $\langle\sigma\rangle^{*}$ denotes the MACS values averaged over the thermally populated levels of nuclei within stars and $\langle\sigma\rangle^{gr}$ denotes the MACS values for nuclei in the ground state. Ground state cross sections, however, can easily be compared to laboratory measurements. Our cross sections do not take these SEFs into account. However, at the low energies and temperatures relevant to neutron capture reactions for the s-process, especially for the main component in low mass AGB stars, these SEFs are below 8\%. Hence, we do not expect significant changes in our results after the inclusion of these factors. Moreover, in cases where the ground state cross section constitutes only a minor fraction of the MACS, uncertainties in theoretical MACS values may be largely underestimated. It is to be noted that we have provided the cross sections and reaction rates from a completely theoretical viewpoint. We have not used any experimental information for the nuclear inputs (except for nuclear binding energies). Even the nuclear density distributions used for folding the effective interaction have been taken from the theoretical relativistic-mean-field model. The important inputs, such as level densities and E1 $\gamma$-ray strength functions, are taken from current microscopic models, which are considered to be more reliable, and hence can better predict the observables away from the experimentally accessible region of the nuclear landscape than other local or global phenomenological models. It has been seen that the largest uncertainty in theoretical calculations results from an inappropriate description of the nuclear level density. Obviously, if one uses experimental information, better accuracy can be achieved. However, a completely theoretical formalism makes our model suitable for predicting values for nuclei away from the stability valley in the nuclear landscape, for which experimental information is scarce or does not yet exist.
Moreover, we have not locally tuned the parameters for each individual reaction, which would certainly have improved the agreement between theory and experiment. However, as our aim is to apply this model to predict values for which cross sections or rates are still unknown, a single parameterization over the entire energy region and mass range is nevertheless more convenient. Thus, it can be concluded that, although our statistical model has some limitations, it can predict the unknown values reasonably well within a certain range of astrophysical energies and temperatures. In summary, we have studied the $(n,\gamma)$ cross sections for nuclei of astrophysical importance theoretically and compared them with the available experimental measurements. The DDM3Y NN interaction, folded with target radial matter densities obtained from the relativistic mean field approach, has been used to construct the microscopic optical model potential. Finally, we have presented Maxwellian-averaged cross sections and astrophysical reaction rates from the predictions of our theoretical model. \section{Acknowledgement} The authors acknowledge the University Grants Commission (Junior Research Fellowship and Departmental Research Scheme) and the Alexander von Humboldt Foundation for providing financial assistance.
\section{Introduction} \label{sec:intro} The problem of efficient population transfer between two bound states through a laser induced continuum structure \cite{Knight90} has attracted considerable attention over the years \cite{Carroll92,Nakajima94,Caroll96,Vitanov97,Yatsenko97,Unanyan98,Paspalakis97,Paspalakis98,Buffa98,Tran99,Han07,Han08,Cederbaum15}. In these works, the population transfer is accomplished using a counterintuitive STIRAP pulse sequence \cite{Bergmann98,Kral07,Vitanov17,Bergmann19}, which in most cases consists of two delayed Gaussian pulses. The method has been experimentally demonstrated in helium atoms \cite{Peters05,Peters07}, while other interesting applications include coherence effects \cite{Thanopulos06,Thanopulos10}, like population trapping and electromagnetically induced transparency, photonics \cite{Limonov17}, specifically optical analogs for light waves propagating in waveguide-based photonic structures \cite{Longhi08,Longhi21}, and qubits coupled through a bosonic structural continuum \cite{Huang19} or the quasi-continuous spectrum of modes in a waveguide \cite{Kannan20}. There is actually a plethora of systems where the population transfer between two bound states coupled via a continuum could be useful \cite{Hsu16}. Recently, the method has been extended to the case where multilevel degenerate states interact through a common continuum structure \cite{Zlatanov21}. In the present work, we first evaluate the performance of a simple sin-cos protocol, where the mixing angle of the applied fields varies linearly with time, and find that it performs worse than the Gaussian STIRAP protocol. We then use optimal control \cite{Bryson} to find the optimal shapes of pulses which maximize the population transfer between two bound states coupled through a continuum structure. We consider bounded controls and, using an elementary theoretical analysis, we explain that the optimal pulses have the bang-interior and interior-bang form, where the bang part corresponds to the maximum allowed control value, while the interior part corresponds to control values between zero and this maximum bound. The analytic determination of the switching times as well as of the interior control values is a formidable task, thus we resort to numerical optimal control \cite{bocop}. We benchmark our method by applying it to the system used in Ref. \cite{Vitanov97} and compare our results with those obtained there using Gaussian STIRAP pulses. We find that the optimal method outperforms STIRAP pulses. The extent of improvement depends on the effective two-photon detuning and the size of incoherent losses. Specifically, in the case of effective two-photon resonance, the improvement is more dramatic for larger incoherent losses, while when the effective two-photon detuning is taken into account, the improvement is better for smaller incoherent losses. We also demonstrate the increase in transfer efficiency with the increase in the absolute value of the Fano factor. Note that numerical optimal control has been previously used to increase the population transfer between two bound states coupled via a continuum \cite{Buffa98}. The main difference of the present approach is that we use as control variables only the envelopes of the ionization pulses, while in Ref. \cite{Buffa98} the effective two-photon detuning was also used as an extra control variable. The structure of the paper is as follows. In the next section we formulate the problem and summarize the findings of Ref.
\cite{Vitanov97} with Gaussian pulses, while in section \ref{sec:linear} we evaluate the performance of the simple sin-cos protocol. In section \ref{optimal_solution} we analyze the optimal control problem, while in section \ref{sec:results} we present the results of the numerical optimization and compare them with those of Ref. \cite{Vitanov97}. Section \ref{sec:conclusion} concludes this work. \section{Population transfer through continuum states} \label{sec:formulation} The dynamics of two bound states coupled by two laser pulses through a continuum of intermediate states is governed by the following equation \cite{Vitanov97} \begin{equation} \label{LICS} i \left[ \begin{array}{c} \dot{c}_g\\ \\ \dot{c}_e \end{array} \right] = \left[ \begin{array}{cc} \Sigma_g-\frac{1}{2}i\Gamma_g & -\frac{1}{2}\sqrt{\Gamma^p_g\Gamma^{s}_e}(q+i)\\ & \\ -\frac{1}{2}\sqrt{\Gamma^p_g\Gamma^s_e}(q+i) & \Sigma_e-\frac{1}{2}i\Gamma_e+D \end{array} \right] \left[ \begin{array}{c} c_g\\ \\ c_e \end{array} \right], \end{equation} where $c_g(t), c_e(t)$ are the probability amplitudes of the initial (ground) state $|g\rangle$ and the target (excited) state $|e\rangle$, respectively. In this equation, $q$ is the constant Fano parameter, $D$ is the two-photon detuning, \begin{equation} \label{ionization_widths} \Gamma_g=\Gamma^p_g+\Gamma^s_g, \quad \Gamma_e=\Gamma^p_e+\Gamma^s_e \end{equation} are the total ionization widths, and \begin{equation} \label{stark_shifts} \Sigma_g=\Sigma^p_g+\Sigma^s_g, \quad \Sigma_e=\Sigma^p_e+\Sigma^s_e \end{equation} are the corresponding dynamic Stark shifts of states $|g\rangle$ and $|e\rangle$, respectively. Note that the individual ionization widths and Stark shifts are proportional to the intensities of the pump and Stokes pulses, \begin{equation} \Gamma^{\beta}_{\alpha}(t)=G^{\beta}_{\alpha}I_\beta(t), \quad \Sigma^{\beta}_{\alpha}(t)=S^{\beta}_{\alpha}I_\beta(t) \; (\alpha=g,e; \beta=p,s), \end{equation} where the coefficients $G^{\beta}_{\alpha}, S^{\beta}_{\alpha}$ depend on the particular atomic states and the laser frequencies. Using the modified probability amplitudes defined by the following population preserving phase transformation \cite{Vitanov97} \begin{equation} b_{\alpha}(t)=c_{\alpha}(t)\exp\left\{i\int_{-\infty}^t[\Sigma_g(t')+\frac{1}{2}q\Gamma^p_g(t')]dt'\right\}, \end{equation} where $\alpha=g,e$, we end up with the equation \begin{eqnarray} \label{system} i \left[ \begin{array}{c} \dot{b}_g\\ \\ \dot{b}_e \end{array} \right]=& \left[ \begin{array}{cc} -\frac{1}{2}\Gamma^p_g(q+i)-\frac{1}{2}i\Gamma^s_g & -\frac{1}{2}\sqrt{\Gamma^p_g\Gamma^s_e}(q+i)\\ & \\ -\frac{1}{2}\sqrt{\Gamma^p_g\Gamma^s_e}(q+i) & -\frac{1}{2}\Gamma^s_e(q+i)-\frac{1}{2}i\Gamma^p_e+\delta \end{array} \right] \nonumber\\ \times\left[ \begin{array}{c} b_g\\ \\ b_e \end{array} \right],& \end{eqnarray} where \begin{equation} \label{delta} \delta=D+\Sigma_e-\Sigma_g-\frac{1}{2}q(\Gamma^p_g-\Gamma^s_e) \end{equation} is the effective two-photon detuning, which is in general time-dependent. Note that the validity of the two-level approximation is confirmed in Refs. \cite{Kylstra98,Paspalakis00}. \begin{table}[t] \caption{\label{tab:eff} Maximum excited-state population obtained with Gaussian STIRAP pulses (second column), the simple sin-cos protocol (third column), and optimal pulses (fourth column), for different values of the parameter R (first column), expressing the strength of incoherent ionization.
The top part of the table corresponds to zero effective detuning $\delta=0$, while the bottom part to nonzero $\delta\neq0$, given from Eq. (\ref{delta}) with $D=0$.} \begin{ruledtabular} \begin{tabular}{cccc} \textrm{$R$}& \textrm{Gaussian}& \textrm{Sin-Cos}& \textrm{Optimal}\\ \colrule 0 & 1 & 1 & 1 \\ \colrule 1/16 & 0.84 & 0.8332 & 0.8702 \\ \colrule 1/4 & 0.71 & 0.6945 & 0.7528 \\ \colrule 1 & 0.53 & 0.4347 & 0.5691 \\ \end{tabular} \end{ruledtabular} \begin{ruledtabular} \begin{tabular}{cccc} \textrm{$R$}& \textrm{Gaussian}& \textrm{Sin-Cos}& \textrm{Optimal}\\ \colrule 0 & 0.53 & 0.5134 & 0.6280 \\ \colrule 1/16 & 0.51 & 0.4975 & 0.5948 \\ \colrule 1/4 & 0.48 & 0.4545 & 0.5373 \\ \colrule 1 & 0.40 & 0.3228 & 0.4207 \\ \end{tabular} \end{ruledtabular} \end{table} As explained in detail in Ref. \cite{Vitanov97}, the terms $\Gamma^p_g$ (pump pulse applied on the $|g\rangle$-continuum transition) and $\Gamma^s_e$ (Stokes pulse applied on the $|e\rangle$-continuum transition) lead to the formation of a STIRAP system, where population is transferred from the ground to the excited state through the continuum. On the other hand, the terms $\Gamma^p_e$ (pump pulse applied on the $|e\rangle$-continuum transition) and $\Gamma^s_g$ (Stokes pulse applied on the $|g\rangle$-continuum transition) lead to irreversible ionization. At least one of these incoherent channels is always present, resulting in incomplete transfer of population between the bound states. In Ref. \cite{Vitanov97} the authors use Gaussian pump and Stokes pulses of the same width $2T$, separated by a delay $2\tau$, \begin{equation} \label{gaussian} f_p(t)=e^{-\left(\frac{t-\tau}{T}\right)^2},\quad f_s(t)=e^{-\left(\frac{t+\tau}{T}\right)^2}, \end{equation} and test the performance using the following ionization widths and Stark shifts \begin{subequations} \label{widths_shifts} \begin{eqnarray} \Gamma^p_g(t)=Af_p(t),&\quad &\Gamma^s_g(t)=0,\\ \Gamma^p_e(t)=RAf_p(t),&\quad &\Gamma^s_e(t)=Af_s(t),\\ \Sigma^p_g(t)=Af_p(t),&\quad &\Sigma^s_g(t)=-Af_s(t),\\ \Sigma^p_e(t)=Af_p(t),&\quad& \Sigma^s_e(t)=3Af_s(t), \end{eqnarray} \end{subequations} while the Fano parameter is set to $q=-6$. Note that this is close to the value $q=-5.87$ corresponding to the hydrogen atom \cite{Yatsenko97}. Parameter $A$ is proportional to the intensity of the lasers, while $R$ quantifies the strength of incoherent ionization. Four values of parameter $R$ are used ($R=0, 1/16, 1/4, 1$), covering the range from weak to strong incoherent ionization, for both zero ($\delta=0$) and nonzero ($\delta\neq0$) effective two-photon detuning, given from Eq. (\ref{delta}) with $D=0$. Note that effective two-photon resonance can be achieved with additional laser pulses \cite{Yatsenko97} or frequency chirping \cite{Paspalakis97}. For each of these $4\times 2=8$ cases, the authors of Ref. \cite{Vitanov97} simulate Eq. (\ref{system}) for various widths and delays of the Gaussian pulses, and find the maximum excited-state population achieved. These results are summarized in the second column of Table \ref{tab:eff}. \section{A simple sin-cos control protocol} \label{sec:linear} Let us for a moment consider the idealized situation without incoherent ionization and with zero effective two-photon detuning, $R=0$ and $\delta=0$, and set \begin{equation} \label{Gamma_sincos} \Gamma^p_g(t)=A\sin^2{\theta(t)},\quad \Gamma^s_e(t)=A\cos^2{\theta(t)}, \end{equation} so that the mixing angle $\theta(t)$ is defined as $\tan{\theta(t)}=\sqrt{\Gamma^p_g(t)/\Gamma^s_e(t)}$.
In this case, the adiabatic eigenstates of Eq. (\ref{system}) are \begin{equation} \label{eigenstates} |\psi_0\rangle=\left[ \begin{array}{c} \cos{\theta}\\ -\sin{\theta} \end{array} \right], \quad |\psi_1\rangle=\left[ \begin{array}{c} \sin{\theta}\\ \cos{\theta} \end{array} \right], \end{equation} with corresponding eigenvalues $\omega_0=0$ and $\omega_1/A=-(q+i)/2$. It can be easily shown that the probability amplitudes in the adiabatic basis, \begin{equation} \label{transformation} \left[ \begin{array}{c} a_0\\ a_1 \end{array} \right] = \left[ \begin{array}{cc} \cos{\theta} & -\sin{\theta}\\ \sin{\theta} & \cos{\theta} \end{array} \right] \left[ \begin{array}{c} b_g\\ b_e \end{array} \right], \end{equation} obey the following equation \begin{equation} \label{adiabatic} i \left[ \begin{array}{c} \dot{a}_0\\ \dot{a}_1 \end{array} \right] = \left[ \begin{array}{cc} 0 & -i\dot{\theta}\\ i\dot{\theta} & -\frac{A}{2}(q+i) \end{array} \right] \left[ \begin{array}{c} a_0\\ a_1 \end{array} \right]. \end{equation} Observe that, if the mixing angle is slowly varied from $\theta(0)=0$ to $\theta(T)=\pi/2$ at the final time $t=T$, then the system remains in the eigenstate $|\psi_0\rangle$, which is gradually transformed from $|\psi_0(0)\rangle=|g\rangle$ to $|\psi_0(T)\rangle=|e\rangle$, and thus a perfect population transfer is accomplished. This is the reason behind the success of the Gaussian STIRAP pulses used in Ref. \cite{Vitanov97}. The adiabaticity condition is satisfied when $\dot{\theta}/A\ll |q|/2$, where observe from Eq. (\ref{adiabatic}) that $A|q|/2$ is the frequency separation between the adiabatic eigenstates; thus larger Fano factors facilitate the adiabatic process. Of course, deviations from adiabaticity and/or the ideal conditions $R=0, \delta=0$ lead to incomplete population transfer, see the second column of Table \ref{tab:eff}. \begin{figure}[t] \centering \begin{tabular}{c} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Ramp_T} \includegraphics[width=.85\linewidth]{Ramp_T}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Ramp_T_full} \includegraphics[width=.85\linewidth]{Ramp_T_full}} \end{tabular} \caption{(Color online) Excited-state population obtained with the simple sin-cos protocol as a function of the normalized pulse duration $AT$, for $q=-6$ and four different values of parameter $R$, which essentially determines the strength of incoherent ionization, $R=0$ (red solid line), $R=1/16$ (blue dashed line), $R=1/4$ (green dashed-dotted line), $R=1$ (cyan dotted line). (a) Case with effective two-photon resonance $\delta=0$. (b) Case with nonzero $\delta$ obtained from Eq. (\ref{delta}). On each curve is highlighted the maximum efficiency achieved, except for the case $\delta=0, R=0$, where complete population transfer is obtained in the limit of large $AT$. The isolated points with the same abscissas indicate the best efficiencies obtained in Ref. \cite{Vitanov97} with Gaussian STIRAP pulses for the same values of $R$.} \label{fig:Eff_vs_ramp} \end{figure} The form of Eq. (\ref{adiabatic}) motivates us to consider the situation where the mixing angle increases with constant rate \cite{Carroll92} \begin{equation} \dot{\theta}=\frac{\pi}{2T} \quad \mbox{constant}, \end{equation} thus the ionization widths of Eq.
(\ref{Gamma_sincos}) follow a simple sin-cos control protocol \begin{equation} \label{sincos} \Gamma^p_g(t)=A\sin^2{\left(\frac{\pi t}{2T}\right)},\quad \Gamma^s_e(t)=A\cos^2{\left(\frac{\pi t}{2T}\right)}. \end{equation} The usage of this protocol is also motivated by our recent work \cite{Stefanatos_Opt_Lett20} in a different context, involving the double-$\Lambda$ atom–light coupling scheme. We show there that the performance of the aforementioned protocol, when applied to the dynamical system (\ref{system}) with $R=q=\delta=0$, approaches that of the optimal protocol, in which the mixing angle is also varied linearly but which additionally includes initial and final $\delta$-kicks changing $\theta$ instantaneously at the beginning and end. For the sin-cos protocol, Eq. (\ref{adiabatic}) can be easily integrated and we obtain, in the idealized situation, the following analytic expression for the population of the excited state at the final time $t=T$ \begin{equation} \label{constant_eff} |c_e(T)|^2=\left|e^{-\eta AT/2}\left[\cosh{(\kappa AT)}+\frac{\eta}{2\kappa}\sinh{(\kappa AT)}\right]\right|^2, \end{equation} where \begin{equation} \label{eta_kappa} \eta=\frac{1}{2}(1-iq),\quad\kappa=\frac{1}{2}\sqrt{\eta^2-\left(\frac{\pi}{AT}\right)^2}. \end{equation} \begin{figure*}[t] \centering \begin{tabular}{cc} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Con_lin_d0} \includegraphics[width=.4\linewidth]{Con_lin_d0}} & \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Pop_lin_d0} \includegraphics[width=.4\linewidth]{Pop_lin_d0}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Con_lin_d} \includegraphics[width=.4\linewidth]{Con_lin_d}} & \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Pop_lin_d} \includegraphics[width=.4\linewidth]{Pop_lin_d}} \end{tabular} \caption{(Color online) (a) Ionization pulses $\Gamma^p_g(t)$ (red solid line) and $\Gamma^s_e(t)$ (blue dashed line) obtained with the simple sin-cos protocol for normalized duration $AT=1.9$, in the case of effective two-photon resonance $\delta=0$ and $R=1/4$. (b) Corresponding evolution of populations of the ground (blue dashed line) and excited states (red solid line). (c) Ionization pulses obtained with the sin-cos protocol for normalized duration $AT=1.3$, $\delta\neq 0$ and $R=1/16$. (d) Corresponding evolution of populations.} \label{fig:Ex1} \end{figure*} The transfer efficiency of the sin-cos protocol for the idealized case $R=0, \delta=0$, given in Eq. (\ref{constant_eff}), is displayed in Fig. \ref{fig:Ramp_T} (red solid line) as a function of the normalized duration $AT$, a quantity which is also proportional to the area ($AT/2$) of the pulses (\ref{sincos}). Observe that the efficiency increases with the pulse area and approaches unity in the limit $AT\rightarrow\infty$. We also display the efficiency for the nonzero values of $R$ previously used, and observe that in this case the efficiency is maximized at a finite pulse area. Both characteristics are also observed with Gaussian pulses in Ref. \cite{Vitanov97}. The maximum efficiencies corresponding to different $R$ are shown with markers in Fig. \ref{fig:Ramp_T}, and are also listed in the third column of the upper part of Table \ref{tab:eff}. For comparison, we also display with markers the maximum efficiencies obtained for the same values of $R$ in Ref.
\cite{Vitanov97} using Gaussian pulses, listed in the second column of the upper part of Table \ref{tab:eff}. From Fig. \ref{fig:Ramp_T} or Table \ref{tab:eff} we note that the two methods achieve the same maximum efficiency for $R=0$, similar maximum efficiencies for $R=1/16$ and $R=1/4$ (blue squares and green triangles respectively), while for $R=1$ (cyan diamonds) the maximum obtained with Gaussian pulses is larger. In Fig. \ref{fig:Ramp_T_full} we display similar results but for the case where $\delta\neq0$, obtained from Eq. (\ref{delta}) with $D=0$. In this case, the maximum efficiencies obtained with the sin-cos protocol are smaller than those obtained with Gaussian pulses for all values of $R$, with the difference becoming more distinct for larger $R$. The maximum efficiencies obtained with the two protocols for different $R$ are also listed in the second and third columns of the lower part of Table \ref{tab:eff}. The better performance of the Gaussian pulses can be attributed to the fact that in this case there are two parameters over which the optimization is performed, the pulse width $2T$ and the delay $2\tau$, while for the sin-cos pulses the only parameter is the pulse duration $T$. In Fig. \ref{fig:Ex1} we show the sin-cos ionization widths (\ref{sincos}) and the corresponding evolution of populations for two specific examples, $\delta=0, R=1/4, AT=1.9$ (first row) and $\delta\neq 0, R=1/16, AT=1.3$ (second row). \section{Optimal control analysis of the problem} \label{optimal_solution} In order to find pulse shapes which outperform the Gaussian pulses, we employ in this section optimal control theory \cite{Bryson}. We use as state variables $x_i$, $i=1,2,3,4$, the real and imaginary parts of the probability amplitudes $b_g, b_e$, \begin{equation} \label{state} b_g=x_1+ix_2,\quad b_e=x_3+ix_4 \end{equation} and as control variables the square roots of the ionization widths, \begin{equation} \label{controls} \sqrt{\Gamma^p_g(t)}=u_1(t),\quad \sqrt{\Gamma^s_e(t)}=u_2(t). \end{equation} With the above definitions, we find from Eq. (\ref{system}), using also Eq. (\ref{widths_shifts}), the state equations \begin{subequations} \label{state_system} \begin{eqnarray} \dot{x}_1&=&-\frac{1}{2}u_1^2x_1-\frac{q}{2}u_1^2x_2-\frac{1}{2}u_1u_2x_3-\frac{q}{2}u_1u_2x_4, \\ \dot{x}_2&=&\frac{q}{2}u_1^2x_1-\frac{1}{2}u_1^2x_2+\frac{q}{2}u_1u_2x_3-\frac{1}{2}u_1u_2x_4, \\ \dot{x}_3&=&-\frac{1}{2}u_1u_2x_1-\frac{q}{2}u_1u_2x_2-\frac{1}{2}(Ru_1^2+u_2^2)x_3\nonumber\\ &&+\left(\delta-\frac{q}{2}u_2^2\right)x_4, \\ \dot{x}_4&=&\frac{q}{2}u_1u_2x_1-\frac{1}{2}u_1u_2x_2+\left(\frac{q}{2}u_2^2-\delta\right)x_3\nonumber\\ &&-\frac{1}{2}(Ru_1^2+u_2^2)x_4, \end{eqnarray} \end{subequations} where for the effective two-photon detuning we distinguish two cases as before, one with $\delta=0$ and one with $\delta\neq 0$ obtained from Eq. (\ref{delta}) for $D=0$ which, using Eqs. (\ref{widths_shifts}) and (\ref{controls}), becomes \begin{equation} \label{d} \delta=-\frac{q}{2}u_1^2+\left(4+\frac{q}{2}\right)u_2^2. 
\end{equation} We would like to find the controls $u_1(t), u_2(t)$ which maximize the final population of the excited state \begin{equation} \label{final_exc} |c_e(T)|^2=|b_e(T)|^2=x_3(T)^2+x_4(T)^2 \end{equation} when starting from the ground state $b_g(0)=1$, corresponding to the initial conditions \begin{equation} \label{initial_condition} x_1(0)=1,\quad x_2(0)=x_3(0)=x_4(0)=0, \end{equation} while satisfying the constraints \begin{equation} \label{constraint} 0\leq u_i(t)/\sqrt{A} \leq 1,\quad i=1,2. \end{equation} An important observation can be immediately made by inspecting the system equations. If $\pi_1, \pi_2$ denote the optimal policies in the time intervals $[0,\,T_1]$ and $[0,\,T_2]$, respectively, with $T_1\leq T_2$, then $|c_e(T_2)|^2\geq |c_e(T_1)|^2$, i.e., a better transfer efficiency can be obtained in a longer duration. This can be easily shown as follows. Consider the longer time interval and suppose that for $0\leq t\leq T_1$ we apply policy $\pi_1$, while for $T_1< t\leq T_2$ we set $u_1(t)=u_2(t)=0$. With this latter control choice, the right-hand sides of the system equations (\ref{state_system}) vanish, for both $\delta=0$ and $\delta$ given from Eq. (\ref{d}); thus the system remains in the state reached at $t=T_1$ and the efficiency at the final time $t=T_2$ equals $|c_e(T_1)|^2$, the efficiency of policy $\pi_1$. Obviously, this should be lower than or equal to the efficiency obtained with policy $\pi_2$, which is by definition optimal over the whole time interval $[0,\,T_2]$. Note that there is no contradiction with the sin-cos protocol and the Gaussian pulses, where the maximum efficiency is obtained for a finite duration (pulse area), since for the optimal pulses there is no association between the pulse duration and area. In order to obtain an idea about the form of the optimal $u_i(t)$, we will use some simple elements from optimal control theory. The control Hamiltonian of the problem is defined as \cite{Bryson} \begin{equation} \mathcal{H}_c=\lambda_1\dot{x}_1+\lambda_2\dot{x}_2+\lambda_3\dot{x}_3+\lambda_4\dot{x}_4=\mathcal{H}_c(\bm{\lambda},\mathbf{x},\mathbf{u}), \end{equation} which becomes a function of the state variables $\mathbf{x}=[x_1, x_2, x_3, x_4]^T$ and the controls $\mathbf{u}=[u_1, u_2]^T$ by replacing the state derivatives in the above definition using Eq. (\ref{state_system}). The Lagrange multipliers $\bm{\lambda}=[\lambda_1, \lambda_2, \lambda_3, \lambda_4]^T$ satisfy the adjoint equations \begin{equation} \label{adjoint} \dot{\bm{\lambda}} = -\frac{\partial \mathcal{H}_c}{\partial\mathbf{x}} \end{equation} and the terminal conditions \cite{Bryson} \begin{subequations} \label{final_lambda} \begin{eqnarray} \lambda_1(T)&=&\frac{\partial |c_e(T)|^2}{\partial x_1(T)}=0,\\ \lambda_2(T)&=&\frac{\partial |c_e(T)|^2}{\partial x_2(T)}=0,\\ \lambda_3(T)&=&\frac{\partial |c_e(T)|^2}{\partial x_3(T)}=2x_3(T),\\ \lambda_4(T)&=&\frac{\partial |c_e(T)|^2}{\partial x_4(T)}=2x_4(T). \end{eqnarray} \end{subequations} According to Pontryagin's Maximum Principle \cite{Bryson}, the optimal controls are chosen to maximize $\mathcal{H}_c$. But from Eq. (\ref{state_system}) it turns out that $\mathcal{H}_c$ is a quadratic function of the control variables $u_1, u_2$, which are restricted to the square (\ref{constraint}) on the $u_1u_2$-plane.
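Before examining the structure of the optimal controls, we note that any candidate control pair can be assessed by direct numerical propagation of the state equations (\ref{state_system}). The following minimal Python sketch (an illustration with a fourth-order Runge-Kutta integrator and an arbitrary step count, not the BOCOP setup used below) evaluates $|c_e(T)|^2$ for given envelopes $u_1(t), u_2(t)$, exemplified here with the sin-cos protocol for $A=1$:
\begin{verbatim}
import numpy as np

def xdot(x, u1, u2, q, R, delta):
    # Right-hand side of the state equations for (x1, x2, x3, x4).
    x1, x2, x3, x4 = x
    g11, g12, g22 = u1*u1, u1*u2, u2*u2
    return np.array([
        -0.5*g11*x1 - 0.5*q*g11*x2 - 0.5*g12*x3 - 0.5*q*g12*x4,
         0.5*q*g11*x1 - 0.5*g11*x2 + 0.5*q*g12*x3 - 0.5*g12*x4,
        -0.5*g12*x1 - 0.5*q*g12*x2 - 0.5*(R*g11 + g22)*x3
            + (delta - 0.5*q*g22)*x4,
         0.5*q*g12*x1 - 0.5*g12*x2 + (0.5*q*g22 - delta)*x3
            - 0.5*(R*g11 + g22)*x4])

def efficiency(u1, u2, T, q=-6.0, R=0.25, delta=None, n=4000):
    # RK4 propagation from the ground state; returns
    # |c_e(T)|^2 = x3(T)^2 + x4(T)^2.  delta=None corresponds to
    # effective two-photon resonance.
    x, h = np.array([1.0, 0.0, 0.0, 0.0]), T/n
    f = lambda x, t: xdot(x, u1(t), u2(t), q, R,
                          delta(u1(t), u2(t)) if delta else 0.0)
    for k in range(n):
        t = k*h
        k1 = f(x, t)
        k2 = f(x + 0.5*h*k1, t + 0.5*h)
        k3 = f(x + 0.5*h*k2, t + 0.5*h)
        k4 = f(x + h*k3, t + h)
        x = x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return x[2]**2 + x[3]**2

# Example: sin-cos protocol with A = 1 and the nonzero effective
# detuning delta = -(q/2) u1^2 + (4 + q/2) u2^2:
q, T = -6.0, 1.9
u1 = lambda t: np.sin(0.5*np.pi*t/T)
u2 = lambda t: np.cos(0.5*np.pi*t/T)
d  = lambda a, b: -0.5*q*a**2 + (4.0 + 0.5*q)*b**2
print(efficiency(u1, u2, T, q=q, R=0.25, delta=d))
\end{verbatim}
We now return to the structure of the optimal solution.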
If for some finite time interval the optimal $u_i$ is equal to one of the bounds of the constraint (\ref{constraint}) then it is called a bang control; otherwise, it lies in the interior and is determined from the relation $\partial \mathcal{H}_c/\partial u_i=0$. From our previous experience with systems where $\mathcal{H}_c$ is quadratic in the controls \cite{Stefanatos_PRA04}, in the context of nuclear magnetic resonance spectroscopy, we know that for short durations $T$ the optimal $u_i$ have the bang form (both take the maximum value), since in this case the major limitation is the short available time and not the incoherent losses. Of course, the transfer efficiencies obtained are also limited. For the more interesting case with longer durations, where larger efficiencies can be achieved, the optimal $u_i$ assume the bang-interior and interior-bang form, in order to engineer a path along which the ionization losses are minimized (note that the bang segments correspond to maxima). The analytical determination of the switching times, from bang to interior and vice versa, is a formidable task, as is the solution of the optimal control problem, which is a two-point boundary value problem, since we are given the initial conditions (\ref{initial_condition}) for the state variables and the terminal conditions (\ref{final_lambda}) for the Lagrange multipliers. For example, we mention that even in Ref. \cite{Stefanatos_PRA04}, where we indeed solve a (simpler) problem of this type, the switching times are determined by solving a system of transcendental equations, which eventually needs to be done numerically. For these reasons, we will not pursue the analytical investigation of the problem further, but will continue with numerical optimal control in the next section. \section{Numerical results and discussion} \label{sec:results} \begin{figure}[t] \centering \begin{tabular}{c} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Eff_T} \includegraphics[width=.85\linewidth]{Eff_T}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Eff_T_full} \includegraphics[width=.85\linewidth]{Eff_T_full}} \end{tabular} \caption{(Color online) Excited-state population obtained with the optimal pulses as a function of the normalized pulse duration $AT$, with step $\delta T=0.1/A$, for $q=-6$ and four different values of parameter $R$, which essentially determines the strength of incoherent ionization. The isolated points on the right vertical axes indicate the best efficiencies obtained in Ref. \cite{Vitanov97} with Gaussian STIRAP pulses for the same values of $R$. (a) Case with effective two-photon resonance $\delta=0$. (b) Case with nonzero $\delta$ obtained from Eq. (\ref{d}).} \label{fig:Eff_vs_T} \end{figure} In order to solve the control problem defined in the previous section numerically, we use the optimal control solver BOCOP \cite{bocop}. In Fig. \ref{fig:Eff_T} we plot, for the case $\delta=0$, the final population of the excited state $|c_e(T)|^2$ obtained with optimal pulses, as a function of the normalized duration $AT$ with step $\delta T=0.1/A$, using $q=-6$ and the four values of parameter $R$ previously utilized. Observe that in all cases the efficiency increases with increasing duration, as was proved in the previous section. For $R=0$ the efficiency approaches unity for large $T$, as is the case for the Gaussian pulses and the sin-cos protocol.
For $R>0$, the efficiency saturates to a value lower than one, which decreases with increasing $R$. Note that efficiencies close to these saturation limits can be obtained at finite durations, which become smaller for increasing $R$. We list the saturation limits (maximum efficiencies) in the fourth column of the upper part of Table \ref{tab:eff}. For comparison, in Fig. \ref{fig:Eff_T} we display with isolated points on the right vertical axes the best efficiencies obtained in Ref. \cite{Vitanov97} with Gaussian pulses, for the same values of $R$. Although for $R=0$ both methods have the same maximum efficiency (unity), the optimal method performs better for increasing $R>0$. Similar results are obtained for the case $\delta\neq 0$, shown in Fig. \ref{fig:Eff_T_full}, but now the saturation efficiencies are smaller than before, for the same values of $R$, and are obtained at shorter durations. As before, these maximum efficiencies are listed in the fourth column of the lower part of Table \ref{tab:eff}. Observe that, even for $R=0$, the saturation efficiency is less than unity. The optimal method performs better for all $R$, but the improvement now diminishes with increasing $R$. \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{Eff_R} \caption{(Color online) Maximum excited-state population obtained with the optimal pulses (in the limit of large $AT$) as a function of parameter $R$, with step $\delta R=0.05$, for the case of effective two-photon resonance $\delta=0$ (red circles) and nonzero $\delta$ from Eq. (\ref{d}) (red squares). The isolated points at the specific values $R=0, 1/16, 1/4, 1$ indicate the best efficiencies obtained in Ref. \cite{Vitanov97} using Gaussian STIRAP pulses.} \label{fig:Eff_R} \end{figure} \begin{figure}[t] \centering \begin{tabular}{c} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Eff_q} \includegraphics[width=.85\linewidth]{Eff_q}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Eff_q_full} \includegraphics[width=.85\linewidth]{Eff_q_full}} \end{tabular} \caption{(Color online) Maximum excited-state population obtained with the optimal pulses (in the limit of large $AT$) as a function of Fano parameter $q$, with step $\delta q =0.5$, for four different values of $R$. The isolated points at $q=-6$ indicate the best efficiencies obtained in Ref. \cite{Vitanov97} with Gaussian STIRAP pulses for the same values of $R$. (a) Case with effective two-photon resonance $\delta=0$. (b) Case with nonzero $\delta$ obtained from Eq. (\ref{d}).} \label{fig:Eff_vs_q} \end{figure} 
\begin{figure*}[t] \centering \begin{tabular}{cc} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Con_opt_d0} \includegraphics[width=.4\linewidth]{Con_opt_d0}} & \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Pop_opt_d0} \includegraphics[width=.4\linewidth]{Pop_opt_d0}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Con_opt_d} \includegraphics[width=.4\linewidth]{Con_opt_d}} & \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:Pop_opt_d} \includegraphics[width=.4\linewidth]{Pop_opt_d}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:smooth_con} \includegraphics[width=.4\linewidth]{Smooth_con}} & \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:smooth_pop} \includegraphics[width=.4\linewidth]{Smooth_pop}} \end{tabular} \caption{(Color online) (a) Optimal ionization pulses $\Gamma^p_g(t)$ (red solid line) and $\Gamma^s_e(t)$ (blue dashed line) for normalized duration $AT=3$, in the case of effective two-photon resonance $\delta=0$ and $R=1/4$. (b) Corresponding evolution of populations of the ground (blue dashed line) and excited states (red solid line). (c) Optimal ionization pulses obtained for normalized duration $AT=0.9$, $\delta\neq 0$ and $R=1/16$. (d) Corresponding evolution of populations. (e) Smoothed version of the pulses of Fig. \ref{fig:Con_opt_d}. (f) Corresponding evolution of populations.} \label{fig:Ex2} \end{figure*} The different behavior of improvement for increasing $R$, when $\delta=0$ and $\delta\neq 0$, is better demonstrated in Fig. \ref{fig:Eff_R}. There, we display the maximum excited-state population obtained with the optimal pulses (in the limit of large $AT$) as a function of parameter $R$, with step $\delta R=0.05$, for the case of effective two-photon resonance $\delta=0$ (red circles) and nonzero $\delta$ from Eq. (\ref{d}) (red squares). The isolated points at the specific values $R=0, 1/16, 1/4, 1$ indicate again the best efficiencies obtained in Ref. \cite{Vitanov97} using Gaussian STIRAP pulses. Now it is clear that for $\delta=0$ the improvement becomes better with increasing $R$, while for $\delta\neq 0$ it becomes worse. This behavior can be explained if we recall that in general a larger parameter $R$ corresponds to a shorter effective duration available for the transfer. In the absence of two-photon detuning, the Gaussian STIRAP pulses can exploit the longer times which are available for smaller $R$ and obtain a transfer efficiency close to the optimal. But the presence of two-photon detuning degrades STIRAP, which therefore cannot exploit the longer durations available at smaller $R$ to the same extent as the optimal method does. In Fig. \ref{fig:Eff_vs_q} we plot the maximum excited-state population obtained with the optimal pulses (in the limit of large $AT$) as a function of Fano parameter $q$, with step $\delta q =0.5$, for the four different values of $R$ used throughout this paper. The isolated points at $q=-6$ indicate the best efficiencies obtained in Ref. \cite{Vitanov97} with Gaussian STIRAP pulses for the same values of $R$. 
Observe that the maximum efficiency increases with increasing $|q|$, except of course the case $\delta=0, R=0$, where it equals unity for all $q$ (obtained in the limit of large $AT$). The improvement is more dramatic for the case with $\delta\neq 0$, shown in Fig. \ref{fig:Eff_q_full}. This can be understood if we recall that in Section \ref{sec:linear} we identified $|q|/2$ as the frequency separation between the adiabatic eigenstates, thus its increase drastically reduces the detrimental influence of the effective two-photon detuning. In Fig. \ref{fig:Ex2} we display the optimal ionization widths and the corresponding evolution of populations for the two pairs of $\delta, R$ also used in Fig. \ref{fig:Ex1}. Specifically, we use the values $\delta=0, R=1/4, AT=3$ (first row) and $\delta\neq 0, R=1/16, AT=0.9$ (second row). The (finite) normalized durations for each case are selected such that the obtained efficiency closely approaches the corresponding maximum listed in Table \ref{tab:eff}. Observe that the optimal pulses have the bang-interior and interior-bang form described in Sec. \ref{optimal_solution}. Although both pulses have nonzero values at the initial and final times, they follow in general the counterintuitive pulse order of STIRAP, since $\Gamma^p_g(t)$ (red solid line) corresponds to the pump pulse and $\Gamma^s_e(t)$ (blue dashed line) to the Stokes pulse. Note that the use of optimized pulses with nonzero Rabi frequency at the boundary times is not unusual in quantum control; as a characteristic example we mention the Vitanov type optimized STIRAP pulses, see Ref. \cite{Clerk16}, which describes their so-called ``superadiabatic'' version. Another interesting observation is that in the second example, the control $\Gamma^p_g(t)$ maintains its maximum value for a longer portion of the available time interval than in the first example, compare the red solid lines in Figs. \ref{fig:Con_opt_d} and \ref{fig:Con_opt_d0}. The reason is that the incoherent terms (those involving $R$) in system Eqs. (\ref{state_system}) are proportional to $u_1^2\sim\Gamma^p_g(t)$, thus for smaller values of $R$, as in the second example, larger values of $\Gamma^p_g(t)$ can improve the overall efficiency despite the small increase in the incoherent losses. Additionally, from Eq. (\ref{d}) we see that larger values of $\Gamma^p_g(t)$, and thus of $u_1$, reduce the effective two-photon detuning, which is taken into account in the second example. One of the striking differences between the optimal pulses and the Gaussian pulses is that the former are not smooth but have a kink, at the point where they change form from interior to bang or vice versa. In order to evaluate the implications of this feature, we find the transfer efficiency using a smoothed version of the pulses of Fig. \ref{fig:Con_opt_d}, displayed in Fig. \ref{fig:smooth_con}, which might be more suitable for practical implementation. The smoothed pulses are obtained by undersampling the original pulses by a factor of 20 and then using cubic interpolation for the sampled points. In Fig. \ref{fig:smooth_pop} we display the evolution of populations for the smoothed pulses, which looks identical to Fig. \ref{fig:Pop_opt_d}, obtained with the original optimal pulses; only a very slight decrease in the transfer efficiency is observed. 
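For concreteness, the smoothing procedure just described can be sketched in a few lines of Python. This is a minimal sketch under our own naming conventions; it assumes the optimal pulses have been exported from the solver as arrays on a uniform time grid.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_pulse(t, pulse, factor=20):
    """Undersample `pulse` by `factor` and rebuild it on the original grid
    by cubic interpolation, following the procedure described in the text."""
    idx = np.arange(0, len(t), factor)
    if idx[-1] != len(t) - 1:
        idx = np.append(idx, len(t) - 1)  # keep the final time point
    return CubicSpline(t[idx], pulse[idx])(t)

# Hypothetical usage, with gamma_p and gamma_s the optimal pulses sampled
# on a uniform time grid t:
#   gamma_p_smooth = smooth_pulse(t, gamma_p)
#   gamma_s_smooth = smooth_pulse(t, gamma_s)
```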
We close by investigating the robustness of the proposed method. We consider the distorted pulses $\alpha\Gamma_{g,e}^{p,s}(t)$, where $\alpha$ is the distortion parameter. Note that, since $\alpha$ eventually multiplies the right hand sides of the system equations, see Eqs. (\ref{state_system}) and (\ref{d}), it can also be used to rescale time as $t'=\alpha t$; thus $\alpha>1$ also corresponds to pulse dilation, while $\alpha<1$ to pulse contraction. In Figs. \ref{fig:error1}, \ref{fig:error2} we display with red solid lines the transfer efficiency obtained when the distorted pulses are applied to the system, corresponding to the examples shown in the first and second row of Fig. \ref{fig:Ex2}, respectively. The horizontal blue lines indicate the best efficiency obtained with Gaussian pulses, \emph{without} taking into account any error. Observe that the advantage of our method over the undistorted Gaussian pulses is maintained for an appreciable range of the distortion parameter. The observed asymmetry, with better performance for $\alpha>1$, arises because these $\alpha$ values correspond to larger pulse areas. \begin{figure}[t] \centering \begin{tabular}{c} \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:error1} \includegraphics[width=.85\linewidth]{error2}} \\ \subfigure[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad]{ \label{fig:error2} \includegraphics[width=.85\linewidth]{error1}} \end{tabular} \caption{(Color online) Excited state population obtained with the distorted pulses $\alpha\Gamma_{g,e}^{p,s}(t)$, using as reference pulses ($\alpha=1$) (a) the pulses shown in Fig. \ref{fig:Con_opt_d0}, (b) the pulses shown in Fig. \ref{fig:Con_opt_d}.} \label{fig:robustness} \end{figure} \section{Conclusion} \label{sec:conclusion} We used optimal control theory to find pulses which maximize the population transfer between two bound states coupled via a continuum of states. We obtained better efficiencies than with the standard Gaussian STIRAP pulses, while the degree of improvement depends on whether we take into account the effective two-photon detuning, as well as on the size of incoherent ionization. The present work is expected to be useful for applications involving population transfer between bound states through a continuum, for example coherence effects like population trapping and electromagnetically induced transparency, optical analogs for light waves propagating in waveguide-based photonic structures, and qubits coupled via a continuum of bosonic or waveguide modes.
\newcommand{\acknowledgements}[1]{\section*{Acknowledgements} #1 } \newcommand{\Rep}{\mathbf{Rep}} \newcommand{\C}{\mathbb{C}} \newcommand{\E}{\mathbb{E}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\curly}{\mathcal} \newcommand{\HH}{\mathrm{H}} \newcommand{\G}{\mathbf{G}} \newcommand{\A}{\mathbb{A}} \newcommand{\Z}{\mathbb{Z}} \renewcommand{\H}{\mathbb{H}} \newcommand{\w}{\mathrm{w}} \newcommand{\union}{\cup} \newcommand{\intersect}{\cap} \newcommand{\kp}{\vdash} \renewcommand{\hat}{\widehat} \renewcommand{\check}{\widecheck} \newcommand{\Dirsum}{\bigoplus} \newcommand{\hmtpc}{\simeq} \newcommand{\isom}{\cong} \newcommand{\homeo}{\approx} \newcommand{\til}{\widetilde} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\Mut}{Mut} \newcommand{\codim}{\mathrm{codim}} \newcommand{\Tr}{\mathrm{tr}} \newcommand{\compose}{\circ} \newcommand{\bfs}[1]{\ensuremath\mathbf{#1}} \newcommand{\dprod}{\displaystyle \mathop{\prod}^{\curvearrowright}} \title[q.~dilog.~identities for $n$-cycle quivers]{Quantum dilogarithm identities for $n$-cycle quivers} \author[J.~Allman]{Justin Allman} \address{Department of Mathematics, U.S.~Naval Academy, Annapolis, MD, USA} \email{[email protected]} \keywords{% quantum dilogarithm, quiver representation, maximal green sequence } \begin{document} \begin{abstract} We prove quantum dilogarithm identities for $n$-cycle quivers. By the combinatorial approach of Keller, each side of our identity determines a maximal green sequence of quiver mutations. Thus we interpret our identities as factorizations of the refined Donaldson--Thomas invariant for the quiver with potential. Finally, we conjecture an upper bound on the possible lengths of maximal green sequences. \end{abstract} \maketitle \section{Introduction} \label{s:intro} Quantum dilogarithm identities are remarkable equalities of non-commutative power series which have several interpretations in mathematics and physics. In combinatorics they encode partition counting identities akin to the Durfee square recursion (e.g.~\cite{rraway2018}). In physics, they are quantum versions of identities in supersymmetric gauge theories (e.g.~\cite{lfrk1994}). In geometry, they represent wall-crossing formulas for stability conditions and relate Poincar\'e series in cohomological Hall algebras (e.g.~\cite{mkys2011}). Keller initiated the study of the so-called \emph{refined Donaldson--Thomas invariant} (henceforth DT-invariant), which is the common value of these equalities, via a combinatorial algorithm built on \emph{maximal green sequences} of quiver mutations \cite{bk2013.fpsac}. Such sequences were known to physicists as ``finite chambers'' for BPS spectra in supersymmetric quantum conformal field theories. Recent progress has been made in classifying maximal green sequences. For example, the ``No Gap Conjecture'' of \cite{tbgdmp2014} has been established for tame hereditary algebras, and hence for quivers which are acyclic orientations of simply-laced extended Dynkin diagrams \cite{shki2019}. Furthermore, results on the lower bounds for lengths of maximal green sequences in certain types have appeared, e.g.~\cite{agtmks2017,agtmks2018}. In this note we continue a program of studying quantum dilogarithm identities and the associated DT-invariants via topology, see \cite{rr2013,jarr2018}. Whereas maximal green sequences produce implicit quantum dilogarithm identities (via an algorithm), our identities are explicitly determined. 
On one hand, we study identities of Betti numbers in different stratifications of the quiver's representation space. On the other, we establish identities by dimension counting arguments in the associated quantum algebra. Here we take a first approach to the non-acyclic case. As such, we study quivers with \emph{potential} (a formal polynomial in cyclic paths, unique up to cyclic permutation), where the topological arguments are complicated by \emph{rapid decay cohomology}. We study the specific case of the $n$-cycle quiver. Let $\Gamma_n$ denote the quiver below, where we set $a_i$ to be the arrow whose head is the vertex $i$, and to which we assign the potential $W = -a_1a_2\cdots a_n$ throughout the rest of the paper. \begin{equation} \label{eqn:Cn.picture} \vcenter{\hbox{ \begin{tikzpicture} \node (1) at (0,0) {$1$}; \node (2) at (1.25,0) {$2$}; \node (dots) at (2.75,0) {$\cdots$}; \node (n) at (4.25,0) {$n$}; \draw[->] (n) -- (dots); \draw[->] (dots) -- (2); \draw[->] (2) -- (1); \draw[->, domain=0.25:4] plot (\x, {.2*\x*(\x-4.25)}); \end{tikzpicture} }}\nonumber \end{equation} \section{Quiver preliminaries} \label{s:quiver.preliminaries} A quiver $Q$ is a directed graph with vertices $Q_0$ and edges $Q_1$ called \emph{arrows}. Every $a\in Q_1$ has a \emph{tail}, $ta$, and \emph{head}, $ha$, in $Q_0$. Given a \emph{dimension vector} $\gamma = (\gamma(i))_{i\in Q_0}$ of non-negative integers, we associate the \emph{representation space}, $\Rep_\gamma(Q)$, and a group, $\G_\gamma$, as follows \begin{equation} \label{eqn:defn.repspace.and.G} {\textstyle \Rep_\gamma(Q) = \Dirsum_{a\in Q_1} \Hom(\C^{\gamma(ta)},\C^{\gamma(ha)}), \quad\quad \G_\gamma = \prod_{i\in Q_0} \GL(\gamma(i),\C).} \nonumber \end{equation} The group $\G_\gamma$ acts on $\Rep_\gamma(Q)$ via $(g_i)_{i\in Q_0} \cdot (X_a)_{a\in Q_1} = (g_{ha} X_a g_{ta}^{-1})_{a\in Q_1}$. Let $e_i$ denote the dimension vector with a $1$ at vertex $i\in Q_0$ and zeroes elsewhere. Then for every quiver, we define the $\Z$-bilinear anti-symmetric form $\lambda$ by \begin{equation} \label{eqn:lambda.defn} \lambda(e_i,e_j) = \#\{\text{arrows~} a:i\to j\} - \#\{\text{arrows~} a:j \to i\} \nonumber \end{equation} and extending linearly to all dimension vectors. This is the opposite antisymmetrization of the Euler form. Further, fix an indeterminate $q^{1/2}$. The \emph{quantum algebra} $\A_Q$ is the $\Q(q^{1/2})$-algebra with underlying vector space spanned by the symbols $y_\gamma$, for every dimension vector $\gamma$, subject to the relation that for any two dimension vectors $\gamma_1$ and $\gamma_2$, we have $y_{\gamma_1+\gamma_2} = -q^{-\frac{1}{2}\lambda(\gamma_1,\gamma_2)}y_{\gamma_1}y_{\gamma_2}$, which implies the relation $y_{\gamma_2}y_{\gamma_1} = q^{-\lambda(\gamma_1,\gamma_2)}y_{\gamma_1}y_{\gamma_2}$. We let $\hat{\A}_Q$ denote the completion of this algebra allowing formal power series in the variables $y_\gamma$. \subsection{Dynkin quivers} A \emph{Dynkin quiver} is an orientation of a simply-laced Dynkin diagram, i.e.~of type $A$, $D$, or $E$. In this paper, we need only so-called \emph{equioriented} $A_n$ quivers \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \node (1) at (0,0) {$1$}; \node (2) at (1.25,0) {$2$}; \node (dots) at (2.75,0) {$\cdots$}; \node (n) at (4.25,0) {$n$}; \draw[->] (n) -- (dots); \draw[->] (dots) -- (2); \draw[->] (2) -- (1); \end{tikzpicture} }}\nonumber \end{equation} and so henceforth the symbol $A_n$ denotes the quiver above. 
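As a small worked instance of the relations defining $\A_Q$, consider a chain arrow $a_i : i+1 \to i$, which is present both in $A_n$ and in $\Gamma_n$ for $1\leq i\leq n-1$. The definitions above give \begin{equation} \lambda(e_{i+1},e_i)=1, \qquad y_{e_i+e_{i+1}} = -q^{-1/2}\,y_{e_{i+1}}y_{e_i}, \qquad y_{e_{i+1}}\,y_{e_i} = q\,y_{e_i}\,y_{e_{i+1}}, \nonumber \end{equation} so the generators at the two endpoints of an arrow $q$-commute, while $y_{e_i}$ and $y_{e_j}$ commute whenever the vertices $i$ and $j$ are joined by no arrow.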
The simple roots of $A_n$ can be identified with the dimension vectors $e_i$, and we denote by $\Phi(A_n)$ the corresponding set of positive roots. Each $\beta\in\Phi(A_n)$ is realized as a dimension vector for $Q$ and is identified with the interval of positive integers $[k_1,k_2]$ since it can be written uniquely in the form $\beta = \sum_{i=k_1}^{k_2} e_i$ for some $1\leq k_1 \leq k_2 \leq n$. Hence $|\Phi(A_n)| = \frac{1}{2}n(n+1) = \binom{n+1}{2}$. We let $\beta_0 = [1,n]$ denote the longest root. For a fixed dimension vector $\gamma$, the work of Gabriel implies that $\Rep_\gamma(Q)$ admits finitely many $\G_\gamma$-orbits if and only if $Q$ is {Dynkin} \cite{pg1972}. Explicitly, Dynkin orbits are in bijection with lists $m=(m_\beta)_{\beta\in\Phi(A_n)}$ of non-negative integers such that $\gamma = \sum_{\beta\in\Phi(A_n)} m_\beta\,\beta$. We call $m$ a \emph{Kostant partition} of $\gamma$ and write $m\vdash\gamma$. Let $\curly{O}_m$ denote the corresponding $\G_\gamma$-orbit. \subsection{Counting Betti numbers of orbits} Stabilizers of points in $\curly{O}_m$ are conjugate, and hence isomorphic. Let $\G_m$ denote the isomorphism type of such a stabilizer. A standard argument in equivariant cohomology gives an isomorphism $\HH^*_{\G_\gamma}(\curly{O}_m) \isom \HH^*_{\G_m}(pt) = \HH^*(B\G_m)$ where the last equality uses the definition of the so-called \emph{Borel construction}. Here and throughout the paper, all cohomologies have coefficients in $\R$. The \emph{Poincar\'e series} of a cohomology algebra $H^*$ is $\curly{P}[H^*] = \sum_{k\geq 0} q^{k/2} h_k$ where $h_k=\dim_\R(H^k)$ is the $k$-th Betti number. We set $\curly{P}_d := \curly{P}[\HH^*(B\GL(d,\C))]$. Now $\HH^*(B\GL(d,\C))$ is a polynomial ring in the Chern classes $c_i$ of $\GL(d,\C)$, with $\deg(c_i) = 2i$, whence \begin{equation} \label{eqn:curly.P.d} \curly{P}_d = \sum_{k\geq 0} q^k \dim_\R(\HH^{2k}(B\GL(d,\C))) = \prod_{j=1}^d \frac{1}{1-q^j}. \nonumber \end{equation} Finally, up to homotopy we have that $\G_m \hmtpc \prod_{\beta\in\Phi(A_n)} \GL(m_\beta,\C)$ \cite[Proposition~3.6]{lfrr2002.duke}, and so putting everything together, the K\"unneth isomorphism further implies that \begin{equation} \label{eqn:poincare.orbit} \curly{P}(m) := \curly{P}[\HH^*_{\G_\gamma}(\curly{O}_m)] = \prod_{\beta\in\Phi(A_n)} \curly{P}_{m_\beta}. \end{equation} \subsection{$n$-cycle quivers and potentials} Henceforth assume $n\geq 3$. $\Gamma_n$ admits infinitely many quiver orbits (it is not Dynkin); however, we give two natural methods to stratify $\Rep_\gamma(\Gamma_n)$ into finitely many $\G_\gamma$-stable subvarieties. First, given a Kostant partition $m\kp\gamma$ for the $A_n$ subquiver in $\Gamma_n$, we define the subvariety \begin{equation} \label{eqn:defn.w.linear.stratum} \Sigma^1_m = \left\{ (X_{a_i})_{i=1}^n \in \Rep_\gamma(\Gamma_n) : (X_{a_1},\ldots,X_{a_{n-1}}) \in \curly{O}_m \subset \Rep_\gamma(A_n) \right\}. \end{equation} Further, for any integer $1 \leq \ell < n$, write $j = n-\ell$. 
We rename the vertices and arrows of $\Gamma_n$ according to the picture \begin{equation} \label{eqn:Cn.w.quad.picture} \vcenter{\hbox{ \begin{tikzpicture} \node (1) at (0,1) {$1$}; \node (topdots) at (2,1) {$\cdots$}; \node (ell) at (4,1) {$\ell$}; \node (1') at (3.5,0) {$1'$}; \node (botdots) at (2,0) {$\cdots$}; \node (j') at (.5,0) {$j'$}; \node at (.95,1.25) {$a_1$}; \node at (3.25,1.25) {$a_{\ell-1}$}; \node at (1.45,-.25) {$a_{j'-1}$}; \node at (2.9,-.25) {$a_{1'}$}; \node at (4.25,.5) {$b$}; \node at (-.25,.5) {$a$}; \draw[->] (ell) -- (topdots); \draw[->] (topdots) -- (1); \draw[->] (j') -- (botdots); \draw[->] (botdots) -- (1'); \draw[->] (1) -- (j'); \draw[->] (1') -- (ell); \end{tikzpicture} }} \end{equation} Notice $A_\ell$ is the subquiver along the top row and $A_{j}$ is the subquiver along the bottom row. We respectively denote the restriction of a dimension vector $\gamma$ to these subquivers by $\gamma|A_\ell$ and $\gamma|A_{j}$. Given Kostant partitions $m \kp \gamma|A_\ell$ and $m'\kp\gamma|A_{j}$ we define \begin{equation} \label{eqn:defn.w.quad.stratum} \Sigma^2_{m,m'} = \left\{ (X_{a_i})_{i=1}^n \in \Rep_\gamma(\Gamma_n) : \begin{array}{ll} (X_{a_1},\ldots,X_{a_{\ell-1}}) \in \curly{O}_m \subset \Rep_{\gamma|A_\ell}(A_\ell) \\ (X_{a_{1'}},\ldots,X_{a_{j'-1}}) \in \curly{O}_{m'} \subset \Rep_{\gamma|A_{j}}(A_j) \end{array} \right\}. \end{equation} Observe that for every $\gamma$ and every $\ell$, we have \begin{equation} \label{eqn:stratifications} { \bigsqcup_{m \kp \gamma} \Sigma^1_m = \Rep_\gamma(\Gamma_n) = \bigsqcup_{m\kp \gamma|A_\ell,\,m'\kp \gamma|A_{j}} \Sigma^2_{m,m'}} \end{equation} We remark that $\Sigma^1_m$ and $\Sigma^2_{m,m'}$ coincide for $\ell = n$, but we still choose to make a formal distinction. Corresponding to the potential $W$ we define the regular function \begin{gather*} \label{eqn:Cn.superpotential} W_\gamma:\Rep_\gamma(Q) \to \C \\ (X_{a_1} ,\ldots, X_{a_n} ) {\longmapsto} -\Tr( X_{a_1} \compose \cdots \compose X_{a_n} ) \end{gather*} which makes sense because $X_{a_1} \compose \cdots \compose X_{a_n} \in \Hom(\C^{\gamma(1)},\C^{\gamma(1)})$ and is well-defined up to cyclic permutation of the arrows. Moreover, we note that $W_\gamma$ is $\G_\gamma$-invariant. Finally, we call the strata defined by \eqref{eqn:defn.w.linear.stratum} \emph{$W$-linear strata} and those defined by \eqref{eqn:defn.w.quad.stratum} \emph{$W$-quadratic strata}. \section{Ordering roots} \label{s:ordering.roots} We now define a total order on the roots of $A_n$ via the \emph{Auslander--Reiten ($AR$) quiver}; we refer the reader to \cite{rs2014} for more details on the general theory. The quiver $AR(Q)$ has the indecomposable quiver representations as its vertices, which by Gabriel's theorem correspond to positive roots in Dynkin type. 
Below we display $AR(A_n)$ \begin{equation} \label{eqn:AR.An} \vcenter{\hbox{ \begin{tikzpicture} \node (nn) at (1,-1) {$[n,n]$}; \node (11) at (7,-1) {$[1,1]$}; \node (22) at (5,-1) {$[2,2]$}; \node (12) at (6,0) {$[1,2]$}; \node (1n) at (4,2) {$[1,n]$}; \node (2n) at (3,1) {$[2,n]$}; \node (a) at (2,0) {$\iddots$}; \node (b) at (3,-1) {$\cdots$}; \node (d) at (4,0) {$\ddots$}; \node (e) at (5,1) {$\ddots$}; \draw[->] (11) -- (12); \draw[->] (12) -- (22); \draw[->] (e) -- (1n); \draw[->] (d) -- (2n); \draw[->] (1n) -- (2n); \draw[->] (a) -- (nn); \draw[->] (2n) -- (a); \draw[->] (12) -- (e); \draw[->] (22) -- (d); \end{tikzpicture} }} \end{equation} We do not need the arrow information from $AR(Q)$ in this paper, so henceforth we neglect to draw the edges. We observe that for roots $\beta$ and $\beta'$ in the same column of \eqref{eqn:AR.An}, $\lambda(\beta,\beta') = 0$. Hence $y_\beta$ and $y_{\beta'}$ commute in $\hat{\A}_Q$. In general, when $\beta$ appears to the left of $\beta'$, we have $\lambda(\beta,\beta') \geq 0$. We seek to impose a total order on $\Phi(A_n)$ which exploits this structure in the quantum algebra. Following \cite{mr2010,rr2013,jarr2018} we impose the rule that \begin{equation} \label{eqn:Reineke.order} \beta\prec\beta' \implies \lambda(\beta,\beta') \geq 0. \end{equation} Hence one allowed ordering is given by reading from left to right in \eqref{eqn:AR.An}, with any order allowed on roots from the same column. We call this a \emph{Reineke order} on $\Phi(A_n)$. We remark that the condition $\lambda(\beta,\beta')\geq 0$ is equivalent (in an appropriate sense) to the requirement, appearing in Reineke's original work \cite[Section~6.2]{mr2010}, that $\Hom(M_\beta,M_{\beta'}) = 0 = \Ext(M_{\beta'},M_\beta)$, where $M_\alpha$ denotes the indecomposable quiver representation corresponding to the root $\alpha$. This equivalence is proved in the author's joint work with Rim\'anyi \cite[Lemma~5.1]{jarr2018}. Now, set $\Phi^1 = \Phi(A_n)\setminus\{\beta_0\}$ which we associate with the lefthand side of \eqref{eqn:stratifications}. We give $\Phi^1$ a Reineke order, only forgetting the longest root $\beta_0 = [1,n]$. For $1\leq \ell < n$, set $\Phi^2_\ell = \Phi(A_\ell) \sqcup \Phi(A_j)$ which we associate with the righthand side of \eqref{eqn:stratifications}. We order $\Phi^2_\ell$ as follows. Roots from $\Phi(A_\ell)$ (and $\Phi(A_j)$) appear in Reineke order. On the other hand, if one of $\beta$ or $\beta'$ is in $\Phi(A_\ell)$ and the other is in $\Phi(A_j)$, then \begin{equation} \label{eqn:neg.order.rule} \beta\prec\beta' \implies \lambda(\beta,\beta')\leq 0. \end{equation} We observe that an allowed ordering is achieved by aligning the $AR$ quivers of $A_\ell$ and $A_j$ along columns so that they are centered one over the other; i.e.~so that the longest roots $[1,\ell]$ and $[1',j']$ appear in the same column. One checks that again taking columns left to right yields an ordering satisfying \eqref{eqn:Reineke.order} and \eqref{eqn:neg.order.rule}. \begin{eg} \label{eg:A2.A3} Let $n=5$ and $\ell=3$ (hence $j=2$). 
Then we have the diagram of $AR$ quivers \begin{center} \begin{tikzpicture} \node (33) at (-.5,.5) {$[3,3]$}; \node (22) at (2,.5) {$[2,2]$}; \node (11) at (4.5,.5) {$[1,1]$}; \node (12) at (3.25,1) {$[1,2]$}; \node (23) at (.75,1) {$[2,3]$}; \node (13) at (2,1.5) {$[1,3]$}; \node (A3) at (-2,1) {${AR}(A_3):$}; \node (A2) at (-2,-.75) {${AR}(A_2):$}; \node (2'2') at (.75,-1.2) {$[2',2']$}; \node (1'1') at (3.25,-1.2) {$[1',1']$}; \node (1'2') at (2,-.3) {$[1',2']$}; \draw[dashed] (2.625,-1.5) -- (2.625,1.8); \draw[dashed] (1.375,-1.5) -- (1.375,1.8); \draw[dashed] (.125,-1.5) -- (.125,1.8); \draw[dashed] (3.875,-1.5) -- (3.875,1.8); \draw[dashed] (-3.5,0.1) -- (5.5,0.1); \end{tikzpicture} \end{center} This gives the following allowed total ordering on $\Phi^2_3$ \[ [3,3]\prec \underbrace{[2',2'] \prec [2,3]} \prec \underbrace{[1',2'] \prec [2,2] \prec [1,3]} \prec \underbrace{[1',1'] \prec [1,2]} \prec [1,1]. \] The braces correspond to sets which commute in $\hat{\A}_{\Gamma_5}$. Within braces, the roots can be permuted to produce another allowed ordering. \end{eg} \section{Statement of the main theorem} \label{s:main.thm} Given a variable $z$, the \emph{quantum dilogarithm series} $\E(z) \in \Q(q^{1/2})[[z]]$ is \begin{equation} \label{eqn:E.defn} \E(z) = 1 + \sum_{d\geq 1} \frac{(-z)^d\, q^{d^2/2}}{\prod_{k=1}^d (1-q^k)} = \sum_{d\geq 0} (-z)^d\,q^{d^2/2}\,\curly{P}_d. \end{equation} \begin{thm} \label{thm:main} Let $n\geq 3$, $1\leq \ell <n$, and $j=n-\ell$. In the completed quantum algebra $\hat{\A}_{\Gamma_n}$, we have the following quantum dilogarithm identity \begin{equation} \label{eqn:main} {\textstyle \dprod_{\phi \in \Phi^1} \E(y_\phi) = \dprod_{\psi \in \Phi^2_\ell} \E(y_\psi)} \end{equation} where the arrows indicate the products are taken in the orders specified in Section \ref{s:ordering.roots}. \end{thm} We interpret the common value of each side as the refined DT-invariant $\E_{Q,W}$. We remark the identity \eqref{eqn:main} still holds with $n=2$, but the $2$-cycle quiver $\Gamma_2$ is not a so-called \emph{cluster quiver} and $\E_{Q,W}$ is not defined in the sense of \cite{bk2013.fpsac}. In any event, when $n=2$ the identity says that $\E(y_{e_1})\E(y_{e_2}) = \E(y_{e_2})\E(y_{e_1})$ which reflects that the quantum algebra is commutative, a fact which holds for any symmetric quiver. \section{Quiver mutation and maximal green sequences} \label{s:mgs} We provide a brief overview of quiver mutation and green sequences; for more details we refer the reader to \cite{bk2013.fpsac,tbgdmp2014} and references therein. Let $Q$ be a quiver without loops or $2$-cycles, also known as a \emph{cluster quiver}. $(Q,F)$ is an \emph{ice quiver} if $F\subset Q_0$ (possibly $F=\emptyset$) is a subset of \emph{frozen vertices} with no arrows between them; we will not be allowed to mutate at frozen vertices. For a non-frozen vertex $i\in Q_0$, define the \emph{mutation of $Q$ at $i$} to be the ice quiver $(\mu_i(Q),F)$ where $\mu_i(Q)$ is obtained from $Q$ by the following steps: \begin{enumerate}[label=\arabic*.,leftmargin=*] \item For every $k \to i \to j$ (for any other $k,j\in Q_0$), add an arrow $k\to j$, \item reverse the orientation of any arrow incident with $i$, and \item remove a maximal collection of $2$-cycles \& any arrows between frozen vertices. \end{enumerate} We let $\Mut(Q)$ denote the \emph{mutation class} of $Q$; these are all of the (ice) quivers which can be obtained by sequences of mutations of $Q$. 
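For concreteness, the mutation steps above, together with the framed quiver and the green/red tests defined in the next paragraph, can be implemented mechanically. The following minimal Python sketch is ours (it is not taken from the references); it encodes a quiver by the standard skew-symmetric exchange matrix $b_{ij}=\#\{a:i\to j\}-\#\{a:j\to i\}$, and it can be used to test the candidate sequences of Example \ref{eg:mgs} below.

```python
import numpy as np

def cycle_quiver(n):
    """Exchange matrix of Gamma_n (0-indexed vertices): arrows
    a_i : i+1 -> i for i = 1, ..., n-1, and the closing arrow a_n : 1 -> n.
    An arrow u -> v contributes B[u, v] += 1 and B[v, u] -= 1."""
    B = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        B[i + 1, i] += 1
        B[i, i + 1] -= 1
    B[0, n - 1] += 1
    B[n - 1, 0] -= 1
    return B

def framed(B):
    """Framed quiver: a frozen copy of each vertex and an arrow i -> i~."""
    n = B.shape[0]
    I = np.eye(n, dtype=int)
    Z = np.zeros((n, n), dtype=int)
    return np.block([[B, I], [-I, Z]])

def mutate(B, k):
    """Fomin--Zelevinsky matrix mutation at vertex k."""
    Bp = B.copy()
    m = B.shape[0]
    for i in range(m):
        for j in range(m):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + np.sign(B[i, k]) * max(B[i, k] * B[k, j], 0)
    # Arrows between frozen vertices may appear, but they never feed back
    # into the mutable or mixed entries used below, so we keep them.
    return Bp

def is_green(B, n, i):
    """Green: no arrows from a frozen vertex into i, i.e. B[f, i] <= 0."""
    return all(B[f, i] <= 0 for f in range(n, 2 * n))

def is_red(B, n, i):
    """Red: no arrows from i to a frozen vertex, i.e. B[i, f] <= 0."""
    return all(B[i, f] <= 0 for f in range(n, 2 * n))

def is_maximal_green(n, seq):
    """Check that seq (1-indexed, as in the text) is maximal green for Gamma_n."""
    B = framed(cycle_quiver(n))
    for v in seq:
        if not is_green(B, n, v - 1):
            return False
        B = mutate(B, v - 1)
    return all(is_red(B, n, i) for i in range(n))

# For instance, the sequences of Example (eg:mgs) below can be tested via
#   is_maximal_green(4, (1, 2, 1, 3, 2, 1, 4, 2, 1))
#   is_maximal_green(4, (1, 3, 2, 4, 1, 3))
```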
Given a cluster quiver $Q$ (with no frozen vertices), take a copy of the set of vertices, denote it by $\til{Q}_0$ (for $i\in Q_0$, the corresponding vertex in $\til{Q}_0$ is denoted $\til{i}$), and set $F = \til{Q}_0$. We form the \emph{framed quiver} $\hat{Q}$ with vertices $Q_0\sqcup \til{Q}_0$ and arrows the same as $Q$, except we add an arrow $i \to \til{i}$ for every $i\in Q_0$. Given $R\in \Mut(\hat{Q})$, a vertex $i\in R_0$ is \emph{green} if there are no arrows with head $i$ and tail in $\til{Q}_0=F$. The vertex $i$ is \emph{red} if there are no arrows from $i$ to a frozen vertex. A green sequence for $Q$ is a list $\bfs{i} = (i_1,\ldots,i_k)\subset Q_0$, such that $i_1$ is green in $\hat{Q}$ (actually observe that every non-frozen vertex of $\hat{Q}$ is green) and for all $2 \leq j \leq k$, the vertex $i_{j}$ is green in $\mu_{i_{j-1}} \cdots \mu_{i_1}(\hat{Q})$. The sequence $\bfs{i}$ is called a \emph{maximal green sequence} if every non-frozen vertex in $\mu_{\bfs{i}}(\hat{Q}):= \mu_{i_k}\cdots\mu_{i_1}(\hat{Q})$ is red. \begin{eg} \label{eg:mgs} Let $Q = \Gamma_4$. We have the following maximal green sequences\footnote{which can be checked in Keller's quiver mutation Java applet \href{https://webusers.imj-prg.fr/~bernhard.keller/quivermutation/}{https://webusers.imj-prg.fr/\textasciitilde bernhard.keller/quivermutation/}} \[ (1,2,1,3,2,1,4,2,1),\quad (1,2,1,3,4,2,1), \quad (1,3,2,4,1,3) \] of respective lengths $9$, $7$, and $6$. These are respectively the sizes of $\Phi^1$, $\Phi^2_3$, and $\Phi^2_2$, a connection which we now explain. \end{eg} Associated to a green sequence $\bfs{i}$, Keller \cite{bk2013.fpsac} defines a product in $\hat{\A}_{Q}$ as follows \[ \E(\bfs{i}) := \E(y_{\delta_k}) \cdots \E(y_{\delta_1}) \] where $\delta_j= \sum_{i\in Q_0} b_{j,\tilde{i}} \, e_i$ and $b_{j,\tilde{i}}$ is the number of arrows $i_j \to \til{i}$ in $\mu_{i_{j-1}} \cdots \mu_{i_1}(\hat{Q})$ ($\delta_1 = e_{i_1}$). We remark that because our definition \eqref{eqn:E.defn} of $\E(z)$ differs from Keller's by the involution $q^{1/2} \mapsto -q^{-1/2}$, our convention \emph{reverses} the order of products in \emph{op.~cit.} When $\bfs{i}$ is maximal green, the \emph{refined DT-invariant} of the quiver (with potential) is $\E_{Q,W} := \E(\bfs{i})$; see \emph{op.~cit.}, \cite{tbgdmp2014}, and references therein. When $Q = \Gamma_n$, our topological/geometric proof of Theorem \ref{thm:main} gives several different factorizations of $\E_{Q,W}$ by using the $W$-linear stratification and the $W$-quadratic stratifications (one for each $\ell$). As such, we conjecture several connections to maximal green sequences. First, recall our method for ordering roots by column aligning the $AR$ quiver(s) in Section \ref{s:ordering.roots}. Now, given an $AR$ diagram for any $A_d$, label each \emph{row} by the vertex $1,\ldots,d$ reading from bottom to top. We remark that in the $W$-quadratic case, this means rows for $A_\ell$ will be labeled $1$ to $\ell$, and rows for $A_j$ will be labeled by $1' = \ell +1$ to $j' = n$. \begin{conj} \label{conj:mgs} (a) For any $\ell$, the righthand side of \eqref{eqn:main} is equivalent to $\E(\bfs{i})$ for a maximal green sequence $\bfs{i}$ obtained by recording the row numbers of nodes from right to left and bottom to top in the diagram with $AR(A_\ell)$ and $AR(A_j)$ aligned by columns as in Section \ref{s:ordering.roots}. 
(b) The lefthand side of \eqref{eqn:main} is equivalent to $\E(\bfs{i})$ for a maximal green sequence $\bfs{i}$ obtained by recording the row numbers of nodes in $AR(A_n)$ from right to left and bottom to top, except that $[1,n]$ is ignored, and the root $[2,n]$ is considered in row $n$ (not $n-1$). \end{conj} \noindent In the statements above, equivalent means ``up to permutation of commuting neighboring factors''. \begin{eg} \label{eg:AR.mgs} Below are the diagrams corresponding to the maximal green sequences in Example \ref{eg:mgs}. We shorten the notation $[k_1,k_2]$ to $k_1k_2$ and write $4$ instead of $1'$ in the $\ell =3$ case, and $3$ (resp.~$4$) instead of $1'$ (resp.~$2'$) in the $\ell =2$ case. \begin{center} \begin{tikzpicture} \node (44) at (0,1) {$44$}; \node (33) at (1,1) {$33$}; \node (22) at (2,1) {$22$}; \node (11) at (3,1) {$11$}; \node (34) at (.5,1.5) {$34$}; \node (23) at (1.5,1.5) {$23$}; \node (12) at (2.5,1.5) {$12$}; \node (24) at (1,2.5) {$24$}; \node (14) at (1.5,2.5) {$\cancel{14}$}; \node (13) at (2,2) {$13$}; \node (r1) at (4,1) {$1$}; \node (r2) at (4,1.5) {$2$}; \node (r3) at (4,2) {$3$}; \node (r4) at (4,2.5) {$4$}; \node (title) at (1.5,0) {$\Phi^1$}; \node (row) at (4,0) {row}; \draw[dashed] (-.5,1.25) -- (4.5,1.25); \draw[dashed] (-.5,1.75) -- (4.5,1.75); \draw[dashed] (-.5,2.25) -- (4.5,2.25); \draw[dashed] (.25,.5) -- (.25,2.75); \draw[dashed] (.75,.5) -- (.75,2.75); \draw[dashed] (1.25,.5) -- (1.25,2.75); \draw[dashed] (1.75,.5) -- (1.75,2.75); \draw[dashed] (2.25,.5) -- (2.25,2.75); \draw[dashed] (2.75,.5) -- (2.75,2.75); \draw (-.5,.5) -- (4.5,.5); \draw (3.5,-.5) -- (3.5,2.75); \draw[thick,->,red] (1 , 1.75) -- (1 , 2.25); \end{tikzpicture} \end{center} \begin{center} \begin{minipage}{0.4\textwidth} \begin{center} \begin{tikzpicture} \node (33) at (0,1) {$33$}; \node (22) at (1,1) {$22$}; \node (11) at (2,1) {$11$}; \node (23) at (0.5,1.5) {$23$}; \node (12) at (1.5,1.5) {$12$}; \node (13) at (1,2) {$13$}; \node (44) at (1,2.5) {$44$}; \node (r1) at (3,1) {$1$}; \node (r2) at (3,1.5) {$2$}; \node (r3) at (3,2) {$3$}; \node (r4) at (3,2.5) {$4$}; \node (title) at (1,0) {$\Phi^2_3$}; \node (row) at (3,0) {row}; \draw[dashed] (-.75,1.25) -- (3.75,1.25); \draw[dashed] (-.75,1.75) -- (3.75,1.75); \draw[dashed] (-.75,2.25) -- (3.75,2.25); \draw[dashed] (.25,.5) -- (.25,2.75); \draw[dashed] (.75,.5) -- (.75,2.75); \draw[dashed] (1.25,.5) -- (1.25,2.75); \draw[dashed] (1.75,.5) -- (1.75,2.75); \draw (-.75,.5) -- (3.75,.5); \draw (2.5,-.5) -- (2.5,2.75); \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \begin{tikzpicture} \node (22) at (0,1) {$22$}; \node (11) at (1,1) {$11$}; \node (12) at (0.5,1.5) {$12$}; \node (2'2') at (0,2) {$44$}; \node (1'1') at (1,2) {$33$}; \node (1'2') at (.5,2.5) {$34$}; \node (r1) at (2,1) {$1$}; \node (r2) at (2,1.5) {$2$}; \node (r3) at (2,2) {$3$}; \node (r4) at (2,2.5) {$4$}; \node (title) at (.5,0) {$\Phi^2_2$}; \node (row) at (2,0) {row}; \draw[dashed] (-.5,1.25) -- (2.5,1.25); \draw[dashed] (-.5,1.75) -- (2.5,1.75); \draw[dashed] (-.5,2.25) -- (2.5,2.25); \draw[dashed] (.25,.5) -- (.25,2.75); \draw[dashed] (.75,.5) -- (.75,2.75); \draw (-.75,.5) -- (2.5,.5); \draw (1.5,-.5) -- (1.5,2.75); \end{tikzpicture} \end{center} \end{minipage} \end{center} \noindent The case $n=4$ was explored in \cite{jarr2018} by thinking of $\Gamma_4$ as the so-called \emph{square product} $A_2 \Box A_2$. 
The stratifications defined in that work correspond to the $\Phi^2_2$ case above, but the $\Phi^1$ and $\Phi^2_3$ cases above are new explicit factorizations of $\E_{Q,W}$. \end{eg} \begin{eg} \label{eg:A2.A3.mgs} Consider the diagram of Example \ref{eg:A2.A3}. Using that $4=1'$ and $5=2'$, this corresponds to the maximal green sequence $(1,4,2,5,1,3,4,2,1)$ for $\Gamma_5$. \end{eg} The \emph{No Gap Conjecture} \cite[Conjecture~2.2]{tbgdmp2014} states that the set of lengths of all possible maximal green sequences forms an interval of integers. This has been proven up to tame (acyclic) types \cite{shki2019}, but is still open for non-acyclic quivers, $\Gamma_n$ for example. There are maximal green sequences of length $8$ for $\Gamma_4$, e.g.~$(1, 2, 1, 3, 2, 4, 2, 1)$. Although this sequence cannot be achieved by Theorem \ref{thm:main} and Conjecture \ref{conj:mgs}, it fills in the interval $\{6,7,8,9\}$, and one can check there are no maximal green sequences for $\Gamma_4$ of length less than $6$. Motivated by our results, we conjecture an upper bound for such an interval. \begin{conj} \label{conj:max.min.mgs} For $\Gamma_n$, the maximal length of a maximal green sequence is $\binom{n+1}{2} - 1 = |\Phi^1|$, and this is achieved by the sequence described in Conjecture \ref{conj:mgs}(b). \end{conj} The remainder of the paper is dedicated to the proof of Theorem \ref{thm:main}. \section{Rapid decay cohomology and quivers with potential} \label{s:rdc} \emph{Rapid decay cohomology} is defined for a pair $(X;f)$ where $f:X\to\C$ is a regular function on the variety (or manifold) $X$; see \cite{mkys2011,jarr2018} and references therein. Here, we describe rapid decay cohomology for the space $X = \Rep_\gamma(\Gamma_n)$ and $f = W_\gamma$. Because $W_\gamma$ is a $\G_\gamma$-invariant function, an \emph{equivariant rapid decay cohomology} $\HH^*_{\G_\gamma}(\Rep_\gamma(\Gamma_n);W_\gamma)$ can be defined as follows. Fix $t \in \R$ and let $S_t = \{z \in \C : \mathrm{Re}(z) < t\}$. Then define \[ \HH^*_{\G_\gamma}(\Rep_\gamma(\Gamma_n);W_\gamma) = \lim_{t \to - \infty} \HH^*_{\G_\gamma}\left(\Rep_\gamma(\Gamma_n),W_\gamma^{-1}(S_t)\right). \] The relative cohomologies on the righthand side stabilize at finite $t$ \cite{mkys2011}. Furthermore, by restricting $W_\gamma$, we can similarly define, for any choices of $m$ and/or $m'$, the algebras \begin{equation} \label{eqn:rdc.strata.defn} \HH^*_{\G_\gamma}\left(\Sigma^1_m;W_\gamma | \Sigma^1_m\right) \quad\text{~and~}\quad \HH^*_{\G_\gamma}\left(\Sigma^2_{m,m'};W_\gamma | \Sigma^2_{m,m'}\right). \end{equation} This section is dedicated to computing the algebras \eqref{eqn:rdc.strata.defn} (hence their Poincar\'e series). To reduce notation, we do not indicate when $W_\gamma$ is restricted to a subspace; the restriction will be clear from context. We need the following technical lemma \cite[Lemma~8.6]{jarr2018}. \begin{lem} \label{lem:reduce.normal.form} Let $G$ act on the space $X$. Suppose that $Y \subset X$ is a subspace with isotropy subgroup $G_Y$ and $Z\subset X$ is a $G$-invariant subspace. Further assume that (1) every $G$-orbit in $X$ intersects $Y$, and (2) if $g\in G$ is such that there exists $x\in Y$ with $g\cdot x\in Y$, then $g\in G_Y$. In this scenario, we have $\HH^*_G(X,Z) \isom \HH^*_{G_Y}(Y,Y\intersect Z)$. We call $Y$ a ``normal locus for $X$''. 
\qed \end{lem} \subsection{$W$-linear case} \label{ss:rdc.wlin} We give details for this case, but only sketch the methods in the $\Sigma^2_{m,m'}$ case, which follows the technique of \cite{jarr2018}. Fix a Kostant partition $m\kp\gamma$ for $A_n$ and a point $X_m \in \curly{O}_m\subset\Rep_\gamma(A_n)$. The value of $m_{\beta_0}$ is the rank of the linear map $(X_m)_{a_1} \compose \cdots \compose (X_m)_{a_{n-1}} : \C^{\gamma(n)} \to \C^{\gamma(1)}$, and we assume that the bases are chosen in each $\C^{\gamma(i)}$ so the matrix of this composition has the block form \[ \left( \begin{array}{c|c} I_{m_{\beta_0}} & 0 \\ \hline 0 & 0 \end{array} \right). \] Define the \emph{normal locus} $\nu^1_m \subset \Rep_\gamma(\Gamma_n)$ to be $\left\{ (X_{a_i})_{i=1}^n : (X_{a_i})_{i=1}^{n-1} = X_m\right\}$. We have immediately from the definition that $\nu^1_m \subset \Sigma^1_m$. The hypotheses of Lemma \ref{lem:reduce.normal.form} apply to $G=\G_\gamma$, $X=\Sigma^1_m$, $Y=\nu^1_m$ (with $G_Y \isom \G_m$ as in Section \ref{s:quiver.preliminaries}), $Z = W_\gamma^{-1}(S_t)\intersect \Sigma^1_m := Z_m$. Therefore \[ \HH^*_{\G_\gamma}\left(\Sigma^1_m,Z_m \right) \isom \HH^*_{\G_m}\left(\nu^1_m, Z_m \intersect \nu^1_m \right). \] Set $w = \gamma(1)\gamma(n)$ and observe that $\nu^1_m \homeo \Hom(\C^{\gamma(1)},\C^{\gamma(n)}) \homeo \R^{2w}$ by identifying $Y\in \nu^1_m$ with its matrix along the arrow $a_n$. In the block form we write \[ Y = \left( \begin{array}{c|c} Y_0 & A \\ \hline B & C \end{array} \right) \] where $Y_0$ is an $m_{\beta_0}\times m_{\beta_0}$ complex matrix. From this we have \[ W_\gamma(Y) = -\Tr((X_m)_{a_1} \compose \cdots \compose (X_m)_{a_{n-1}}\compose Y) = -\Tr(Y_0), \] a linear function in the entries of $Y_0$ (hence our naming convention). Taking the real part of $W_\gamma$ is also linear, whence the pair $(\nu^1_m,Z_m\intersect \nu^1_m)$ is homotopy equivalent to one of \[ \text{I.~}(\R^{2w},\H^{2w}) \text{~when $m_{\beta_0}\neq 0$}, \quad{\text{or}}\quad \text{II.~}(\R^{2w},\emptyset) \text{~when $m_{\beta_0} = 0$}. \] Fixing $t\ll 0$ so that the rapid decay cohomology stabilizes, we have \begin{enumerate}[label={\Roman*}.] \item $\HH^*_{\G_\gamma}\left(\Sigma^1_m;W_\gamma \right) \isom \HH^*_{\G_m}(\R^{2w},\H^{2w}) = 0$ since $\R^{2w} \hmtpc \H^{2w}$, or \item $\HH^*_{\G_\gamma}\left(\Sigma^1_m;W_\gamma \right) \isom \HH^*_{\G_m}(\R^{2w},\emptyset) \isom \HH^*_{\G_m}(pt) = \HH^*(B\G_m)$, \end{enumerate} the latter algebra appeared in Section \ref{s:quiver.preliminaries}. We remark that the vanishing of $\HH^*_{\G_\gamma}(\Sigma^1_m;W_\gamma)$ when $m_{\beta_0} \neq 0$ is the reason for our definition $\Phi^1 = \Phi(A_n)\setminus\{\beta_0\}$ in Section \ref{s:ordering.roots}. \subsection{$W$-quadratic case} \label{ss:rdc.wquad} Fix $m\kp \gamma|A_\ell$ and $m'\kp \gamma|A_{j}$. As in the previous subsection, we can use Lemma \ref{lem:reduce.normal.form} to reduce our computation to a normal locus. By appropriately choosing bases, an analogous argument to \cite{jarr2018} shows that restricting $W_\gamma$ to this normal locus is a \emph{quadratic} function (hence our naming convention). Furthermore, the homotopy arguments of \cite[Section~8]{jarr2018} can be adapted to yield isomorphisms for every $d$ \[ \HH^d_{\G_\gamma}(\Sigma^2_{m,m'};W_\gamma ) \stackrel{\isom}{\longrightarrow} \HH^{d-2\w(m,m')}(B\G_m \times B\G_{m'}) \] where $\w(m,m') = m_{[1,\ell]} m'_{[1',j']}$. 
Hence using \eqref{eqn:poincare.orbit}, a K\"unneth formula implies \begin{equation} \label{eqn:W.quad.Poincare} \curly{P}[\HH^*_{\G_\gamma}(\Sigma^2_{m,m'};W_\gamma )] = q^{\w(m,m')} \curly{P}(m) \curly{P}(m'). \nonumber \end{equation} \section{Kazarian spectral sequence, Poincar\'e series identities} \label{s:KSS} We now apply \cite[Theorem~9.2]{jarr2018} to the present context. Let $c(m)$ and $c({m,m'})$ respectively denote the complex codimensions of $\Sigma^1_m$ and $\Sigma^2_{m,m'}$ in $\Rep_\gamma(\Gamma_n)$. \begin{thm} \label{thm:kss} There is a spectral sequence $E^{ij}_\bullet$ in rapid decay cohomology which converges to $\HH^{i+j}_{\G_\gamma}(\Rep_\gamma(\Gamma_n);W_\gamma)$, degenerates at the $E_1$ page, and has $E_1$ page (with sums over all $\Sigma^1_m$) \begin{equation} \label{eqn:kss.E1.Wlinear} E^{ij}_1 = \Dirsum_{c(m)=2i} \HH^j_{\G_\gamma}(\Sigma^1_m;W_\gamma) = \mathop{\Dirsum_{c(m)=2i,}}_{\text{and~} m_{\beta_0} = 0} \HH^j(B\G_m). \end{equation} Moreover, for every $1\leq \ell <n$ there is another spectral sequence with the same properties as above except that the $E_1$ page is determined by taking the direct sum over all $\Sigma^2_{m,m'}$ \begin{equation} \label{eqn:kss.E1.Wquad} E^{ij}_1 = \Dirsum_{c(m,m')=2i} \HH^{j}_{\G_\gamma}(\Sigma^2_{m,m'};W_\gamma) = \mathop{\Dirsum_{c(m,m')=2i}} \HH^{j-2\w(m,m')}(B\G_m \times B\G_{m'}). \end{equation} \end{thm} \begin{proof} The proof is analogous to that of \cite[Theorem~9.1]{jarr2018} except that the second equalities in \eqref{eqn:kss.E1.Wlinear} and \eqref{eqn:kss.E1.Wquad} are determined respectively in Sections \ref{ss:rdc.wlin} and \ref{ss:rdc.wquad}. \end{proof} \begin{cor} \label{cor:P.series.ident} Theorem \ref{thm:kss} implies the following identity of Poincar\'e series (for every $1\leq \ell < n$) \begin{equation} \label{eqn:P.series.ident} \mathop{\sum_{m \kp \gamma}}_{m_{\beta_0} = 0} q^{c(m)} \curly{P}(m) = \curly{P}[\HH^*_{\G_\gamma}(\Rep_\gamma(\Gamma_n);W_\gamma)] = \mathop{\sum_{m \kp \gamma|A_\ell}}_{m'\kp \gamma|A_{j}} q^{c(m,m')+\w(m,m')} \curly{P}(m)\curly{P}(m') \nonumber \end{equation} where the sum on the left is over Kostant partitions for $A_n$ quiver orbits, and the sum on the right is over pairs of Kostant partitions for $A_\ell$ and $A_{j}$ quiver orbits. \qed \end{cor} \section{Counting in $\hat{\A}_{\Gamma_n}$ and proof of the main theorem} \label{s:count.qalg} Throughout the section, set $y_{e_i} = y_i$ for any vertex $i$ of $\Gamma_n$. \subsection{$W$-linear strata} Set $N = |\Phi^1| = \frac{1}{2}n(n+1)-1$, and write $\phi_1 \prec \cdots \prec \phi_N$ for a total order on $\Phi^1$ as described in Section \ref{s:ordering.roots}. Let $m\kp \gamma$ and let $m_u$ denote the entry of $m$ corresponding to $\phi_u \in \Phi^1$ (by design $m_{\beta_0} = 0$). Moreover, write $\phi_u = \sum_{i=1}^n d^i_u e_i$. Then \cite[Lemma~5.1]{rr2013} implies that in $\hat{\A}_{\Gamma_n}$ we have \begin{equation} \label{eqn:Wlinear.side.quantum.algebra} \begin{gathered} y_{\phi_1}^{m_1} \cdots y_{\phi_N}^{m_N} = (-1)^{s_1}\cdot q^{p_1}\cdot q^{-\gamma(1)\gamma(n)} \cdot y_{1}^{\gamma(1)} \cdots y_{n}^{\gamma(n)}, \quad \text{where} \\ s_1 = {\textstyle \sum_{u=1}^N m_u(\sum_{i=1}^n d^i_u - 1)} \quad \text{and} \quad p_1 = {\textstyle c(m) + \frac{1}{2}\sum_{i=1}^n \gamma(i)^2 - \frac{1}{2}\sum_{u=1}^N m_u^2}. 
\end{gathered} \end{equation} The extra factor of $q^{-\gamma(1)\gamma(n)}$ compared to \cite[Lemma~5.1]{rr2013} results from commuting contributions of $e_n$ in positive roots $[k_1,n]$ past those of $e_1$ in roots $[1,k_2]$ (for $k_1>1$, $k_2<n$) since $[k_1,n] \prec [1,k_2]$ in our order. \subsection{$W$-quadratic strata} Set $L = |\Phi(A_\ell)|$, $J=|\Phi(A_j)|$, and $N' = L+J = |\Phi^2_\ell|$. Write a total order $\psi_1\prec \cdots \prec\psi_{N'}$ for $\Phi^2_\ell$ satisfying the conditions in Section \ref{s:ordering.roots}. Let $m_{\psi_i}$ denote the entry, either in $m \kp \gamma|A_\ell$ or in $m'\kp \gamma|A_j$, for the root $\psi_i\in\Phi^2_\ell$. In abuse of notation, we also write $\beta_1 \prec \cdots \prec\beta_L$ as a Reineke order on $\Phi(A_\ell)\subset \Phi^2_\ell$, $\beta'_1\prec \cdots \prec\beta'_J$ as a Reineke order on $\Phi(A_j) \subset \Phi^2_\ell$, and denote the entries from $m$ and $m'$ corresponding to $\beta_u$ and $\beta'_v$ respectively by $m_u$ and $m'_v$. Then define $p'$ by \begin{equation} \label{eqn:Wquad.separate} y_{\psi_1}^{m_{\psi_1}} \cdots y_{\psi_{N'}}^{m_{\psi_{N'}}} = q^{p'} \left( y_{\beta_1}^{m_1} \cdots y_{\beta_L}^{m_L} \right) \left( y_{\beta'_1}^{m'_1} \cdots y_{\beta'_J}^{m'_J} \right) \end{equation} and calculation reveals that $p' = -\gamma(1)\gamma(n) + \w(m,m')$. Again using \cite[Lemma~5.1]{rr2013}, it follows that \eqref{eqn:Wquad.separate} is further equal to \begin{multline} \label{eqn:Wquad.side.quantum.algebra} = (-1)^{s_2}\, q^{p_2}\, q^{p'} \left( y_1^{\gamma(1)} \cdots y_\ell^{\gamma(\ell)} \right) \left( y_{1'}^{\gamma(1')} \cdots y_{j'}^{\gamma(j')} \right) \\ = (-1)^{s_2}\, q^{p_2}\, q^{p'}\, y_1^{\gamma(1)} \cdots y_n^{\gamma(n)} \end{multline} with $s_2 = \sum_{u=1}^{N'} m_{\psi_u}(d^i_{\psi_u}-1)$ and $p_2 = c(m,m') + \frac{1}{2} \sum_{i=1}^n \gamma(i)^2 - \frac{1}{2}\sum_{u=1}^{N'} m_{\psi_u}^2$. In the calculation of $p_2$, we have used that \begin{align*} c(m,m') &:= \codim_\C(\Sigma^2_{m,m'},\Rep_\gamma(\Gamma_n)) \\ & = \codim_\C(\curly{O}_m,\Rep_{\gamma|A_\ell}(A_\ell)) + \codim_\C(\curly{O}_{m'},\Rep_{\gamma|A_j}(A_j)). \end{align*} The above equality follows from the fact that, in the definition \eqref{eqn:defn.w.quad.stratum} of $\Sigma^2_{m,m'}$, no requirements are placed on the maps along the arrows $a$ and $b$ in the picture \eqref{eqn:Cn.w.quad.picture}. \begin{proof}[Proof of Theorem \ref{thm:main}] We compute the coefficients of $y_1^{\gamma(1)}\cdots y_n^{\gamma(n)}$ on each side of \eqref{eqn:main}. On the lefthand side, we need to consider \begin{multline} \label{eqn:lhs1} \mathop{\sum_{m \kp \gamma}}_{m_{[1,n]} = 0} (-1)^{\sum_{u=1}^N m_u} \, q^{\frac{1}{2}\sum_{u=1}^N m_u^2} \, y_{\phi_1}^{m_1} \cdots y_{\phi_N}^{m_N} \, \curly{P}(m) \\ = (-1)^{\sum_{i=1}^n \gamma(i)} \, q^{\frac{1}{2}\sum_{i=1}^n \gamma(i)^2 - \gamma(1)\gamma(n)} \, \mathop{\sum_{m \kp \gamma}}_{m_{[1,n]} = 0} q^{c(m)} \curly{P}(m) \, y_1^{\gamma(1)} \cdots y_n^{\gamma(n)} \end{multline} where we used \eqref{eqn:Wlinear.side.quantum.algebra} to obtain the second expression. 
On the righthand side we have \begin{multline} \label{eqn:rhs1} \mathop{\sum_{m\kp\gamma|A_\ell}}_{m'\kp \gamma|A_j} (-1)^{\sum_{u=1}^{N'} m_{\psi_u}}\, q^{\frac{1}{2}\sum_{u=1}^{N'} m_{\psi_u}^2}\, y_{\psi_1}^{m_1} \cdots y_{\psi_{N'}}^{m_{N'}} \, \curly{P}(m) \curly{P}(m') \\ = (-1)^{\sum_{i=1}^n \gamma(i)} \, q^{\frac{1}{2}\sum_{i=1}^n \gamma(i)^2 - \gamma(1)\gamma(n)} \\ \times \mathop{\sum_{m\kp\gamma|A_\ell}}_{m'\kp \gamma|A_j} q^{c(m,m') + \w(m,m')} \curly{P}(m) \curly{P}(m') \, y_1^{\gamma(1)} \cdots y_n^{\gamma(n)} \end{multline} where the second expression follows from \eqref{eqn:Wquad.side.quantum.algebra}. Finally, Corollary \ref{cor:P.series.ident} implies that \eqref{eqn:lhs1} and \eqref{eqn:rhs1} are the same. \end{proof} \acknowledgements{We acknowledge partial support from an Office of Naval Research NARC grant.} \bibliographystyle{plainurl}
\section{Introduction} Understanding the interactions of economic agents is a central concern in the economics of information \shortcite{Manski00}. Learning from others' choices is important in many circumstances: patients draw negative quality inferences from the refusal of a kidney in the kidney market \shortcite{Zhang10}; observation of a neighbor's choice in a presidential election reveals information about his/her preferences \shortcite{OrhunUrminsky13}; editors of scientific journals accept/reject papers on the basis of referee decisions; and traders in financial markets infer information about asset fundamental values from the order flow, among others. In these contexts, behavioral errors may have severe consequences and ultimately can lead to a socially inefficient outcome, e.g., poor kidney utilization despite the continual shortage in kidney supply. The goal of this paper is to identify the sources of such errors. The fundamental question in social interactions is how people ``glean'' information from others' choices. Econometric analysis of choice data often reduces the empirical inference to revelation of preferences by assuming that individuals are Bayesian and have rational expectations. These assumptions are typically necessary for identification of beliefs \shortcite{VictorJeon20}. However, people often deviate from these assumptions and err in making decisions. For example, research has shown that individuals are not Bayesian in laboratory experiments and that they do not always choose the payoff-maximizing options (e.g., \shortciteA{KT72, JindalAribarg21}). These errors exist even in non-social settings and for very simple decision tasks \shortcite{Benjamin18}. In social environments, decision-making is often more complex because information is hidden in others' choices and one needs to make inferences about others' evaluations. Hence, decision-making errors might be inevitable in social settings. Despite this seemingly clear connection, little is known about the similarities and differences between decision-making errors in the presence and in the absence of social interaction. In this paper, I employ a set of simple laboratory experiments to uncover errors in learning from others' choices and contrast them with errors that are prevalent in the absence of the social element. In the experiment, subjects need to guess an \textit{ex ante} unknown state of the world and are paid for accuracy. The state is binary and its possible realizations are represented by two boxes, each containing ten balls that are black or white. The combination of black and white balls can be different across the two boxes, and the content of each box is known to subjects. The true state is randomly realized at the beginning of the experiment, i.e., one box is selected by flipping a fair coin. Subjects do not observe the true state, but they receive a signal about it. They then guess which box is the true state. The key manipulation of the experimental design is the source of the signals: across conditions, subjects receive informationally equivalent signals that vary only in whether or not the signal arises from a social interaction. In the \textit{Individual} condition (control), a subject observes a ball randomly drawn from the true state. In the \textit{Social} condition (treatment), the subject does not directly observe a ball, but she observes the choice of another participant, called \textit{neighbor} in the experiment, who has observed a ball randomly drawn from the true state. 
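To fix ideas, consider a hypothetical parameterization of the design (the ball compositions below are purely illustrative and need not match those used in the experiment): box $A$ contains $7$ black and $3$ white balls, box $B$ contains $3$ black and $7$ white balls, and the prior over the two boxes is uniform. If a black ball is drawn from the true box, Bayes' rule gives \[ \Pr(A \mid \text{black}) = \frac{0.5 \times 0.7}{0.5 \times 0.7 + 0.5 \times 0.3} = 0.7, \] so the payoff-maximizing guess is box $A$. In the social condition, a rational neighbor chooses $A$ exactly when she observes a black ball, so her choice reveals the color of the ball and induces the same posterior, and hence the same optimal guess, for a rational observer. Note also that any posterior above $1/2$ implies the same guess, so a non-Bayesian belief need not translate into an error in the choice.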
Subjects know the precise signal-generating process and that all participants are incentivized to make a correct choice. So, the provided signal is informationally equivalent across the two conditions: under common knowledge rationality, the choices of the subjects should be identical in the two conditions. In order to identify individual errors that are associated with the social interaction (SI), I use a within-subject design. I compare subjects' choices across the individual and the social conditions and present clean evidence that, despite extensive instructions, subjects exhibit on average a higher level of irrationality in the social condition (in the presence of SI) than in the individual condition (in the absence of SI) when they receive informationally equivalent signals across the two conditions. That is, they neglect the provided information relatively more in the social condition than in the individual condition.\footnote{Throughout, I use ``irrationality'' and ``neglect'' interchangeably.} The within-subject design plays a critical role here, because it isolates the errors that are associated with SI (e.g., the violation of rational expectations) while controlling for any other error that is independent of SI, e.g., errors in statistical reasoning \shortcite{Benjamin18}.\footnote{A notable difference between my work and the belief updating literature is that I examine individuals' sub-optimal ``choice'' under uncertainty, while this literature examines how individuals form their posterior ``belief''. The papers in this literature either do not collect data on actual choices or ignore the fact that having a Bayesian (non-Bayesian) belief is not necessarily equivalent to making a correct (incorrect) choice. As a result, the notion of ``bias'' in the belief updating literature is completely different than the notion of ``error'' in my study. A \textit{biased belief} is defined as a belief that is not Bayesian. However, an \textit{error in choice} is defined as an action that fails to optimize the individual's payoff based on the available information. It can be shown that neither does a biased belief necessarily lead to an error in the choice, nor is an error in the choice necessarily a result of a bias in the belief.} A plausible explanation for the unexpected tendency to neglect information in the SI is the subject's inability to infer the relationship between what other participants choose and what they know, i.e., non-rational expectations. That is, subjects might not be able to predict how their neighbors make decisions based on their private information. If this is the reason behind the extra neglect in the SI, then providing additional information about neighbors' behavior may help subjects to better extract the information contained in their neighbors' choices. To test the role of uncertainty about the neighbor's behavior in the subject's irrationality, I exogenously manipulate subjects' knowledge about their neighbors across treatments by providing additional information about the neighbors. First, I design a treatment in which the neighbor is replaced by a ``computer bot'', whose behavior is clearly described to the subjects. The idea here is to create a social environment where there is less uncertainty in the neighbor's behavior than in the baseline experiment. In this treatment, subjects are told that their neighbor is a computer bot that chooses the box with more black balls when it observes a black ball, and chooses the box with more white balls when it observes a white ball. 
The results of this low-uncertainty treatment show that the neglect in the social condition drops significantly compared to the baseline experiment. This finding highlights that the uncertainty about the neighbor's behavior can play an important role in decision-making under observational learning. In fact, the violation of rational expectations might be a result of the ambiguity in other decision makers' behavior. Second, I devise a treatment where the subject observes both her neighbor's choice and the ball that her neighbor has seen. If the additional neglect in the SI were largely driven by the uncertainty about the neighbor's behavior, then the observed difference in irrationality between the social and the individual condition should disappear in this treatment. The results support this prediction: when subjects are provided with both their neighbors' choices and the signals behind those choices, there is no statistically significant difference between the levels of irrationality in the social and the individual condition. This suggests that the failure of rational expectations in the social condition is mainly driven by the ambiguity of other people's behavior, i.e., subjects behave as if they lack knowledge about how others make choices based on their private information. Researchers are increasingly interested in the mechanisms behind reduced-form errors in decision-making, due to the view that this may help develop new behavioral models that can explain real-world behavior. In the final part of the paper, I build upon the earlier reduced-form results and develop a model to identify the sources of irrationality in decision-making under social interactions. I argue that, in the context of the current study, the individual's decision-making process includes two stages. In the first stage, the subject updates her belief about the environment based on the available information. Then, in a second stage, she makes a choice based on her updated belief. I show that it is not possible to separately identify errors that occur in these two stages from choice data alone, and that one needs additional sources of variation to distinguish between them. I then add a survey question to each choice that subjects make during the experiment. The purpose of this survey question is to measure the "relative" direction of the subject's posterior belief.\footnote{See \shortciteA{Manski04} for a detailed discussion of how survey data on probabilistic expectations can enable experimental economists to overcome identification problems.} Specifically, it asks about the probability with which the subject believes her choice is correct. This survey question, along with the subject's actual choice, allows me to separate the first-stage errors from the second-stage errors. I show that the experimental design of this paper enables the researcher to identify three sources of error in decision-making under social interaction. Two of these sources are related to the first stage of the decision-making process and the third is associated with the second stage. I non-parametrically estimate the two-stage decision-making process and shed light on what variation in the data identifies which source of error.
The remainder of the paper proceeds as follows. Section \ref{literature} reviews the related literature. Section \ref{ex_design} describes the experimental design. Section \ref{results} presents the main results of the paper. In Section \ref{sources}, I develop a model to explain the sources of irrational choices in individual behavior and identify the channel that is influenced by SI. In Section \ref{estimation}, I non-parametrically estimate a two-stage decision-making process and highlight what variation in the data identifies which error in decision-making. Finally, Section \ref{conclusion} concludes the paper. \section{Related Literature} \label{literature} Empirical models of choice data often assume decision makers are Bayesian (e.g., \shortciteA{ErdemKeane96}).
In social environments and games of incomplete information, the assumption of rational expectations is further imposed for identification of beliefs \shortcite{VictorErhao21}. However, many studies in the literature document the opposite: individuals do not always behave as a Bayesian rational agent would. Plausible explanations of the violation of Bayesian rational behavior include three categories of errors. \textbf{A) \textit{Belief updating error}}: this error has been widely documented in isolated environments and is not necessarily related to the social environment. It can arise for several reasons, such as \textit{conservatism} \shortcite{Edwards68,HuckWeizsacker02}, \textit{overconfidence} \shortcite{NothWeber03}, \textit{base rate fallacy} \shortcite{Goereeetal07}, or, more broadly, any form of non-Bayesian updating \shortcite{Grether80,JindalAribarg21,Chingetal21}. \textbf{B) \textit{Reasoning error}}: this type of error often happens randomly when a subject makes a decision based on her updated belief, and again is not specific to the social environment. The well-known logistic choice function is a special case, in which errors are due to random shocks that follow a type-I extreme-value distribution. \textbf{C) \textit{Violation of rational expectations}}: this error is specifically associated with the social environment and can be due to wrong beliefs about others' strategies \shortcite{KublerWei04,Weizsacker10} or mistrust in others' evaluations.\footnote{Some papers suggest a combination of these errors; see for example \shortciteA{MarchZiegelmeyer18,Angrisanietal20,DeFilippisetal21}.} Despite numerous studies in the literature, it remains unclear whether and how one can non-parametrically identify these three types of errors using empirical data. Many studies either ignore some of these errors or impose additional assumptions to overcome the identification challenges. For example, the need to separate the inability to best respond to one's beliefs (type B error) from potentially wrong beliefs about others (type C error) has been recognized in prior literature \shortcite{Stahl95,CH04}. However, most of these papers approach the identification problem by imposing parametric functional forms on beliefs and/or choice probabilities \shortcite{KublerWei04,Goereeetal07,MarchZiegelmeyer18}. These approaches might suffer from misspecification and generate misleading results \shortcite{DominitzHung09}. Some papers, such as \shortciteA{Weizsacker10}, utilize a reduced-form approach and do not impose parametric assumptions, but these studies are not able to precisely identify the contribution of the three types of errors mentioned earlier to the subject's behavior. Hence, they provide a limited explanation of the nature of the suboptimality. The current paper contributes to the literature by developing a social interaction experiment that non-parametrically identifies these errors in subjects' behavior. The framework in my study is built on the theoretical work of \shortciteA{Walliser89} and sheds light on what variation in the data identifies which error type. The idea is that type A and type C errors appear when a subject forms expectations about the environment (stage 1 in decision-making), while type B errors emerge when these expectations are mapped into a chosen action (stage 2 in decision-making).
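To make the two-stage structure concrete, the following toy simulation illustrates how the three error types enter a subject's choice in the social condition. This is an expository sketch only: the function, its identifiers, and the parameter values are hypothetical, it is restricted to symmetric information structures, and it is not the estimation framework developed later in the paper.
\begin{verbatim}
import math

def choice_prob_X(theta, neighbor_guess, a=0.8, q_hat=0.9, lam=5.0):
    """Probability that the subject guesses X after observing the
    neighbor's guess, in a toy two-stage model with a symmetric
    information structure Pr(white|X) = Pr(black|Y) = theta.

    a     : type A error, likelihood exponent (a = 1 is Bayesian)
    q_hat : type C error, believed prob. that the neighbor follows
            her signal (q_hat = 1 is rational expectations)
    lam   : type B error, logistic response precision
            (lam -> infinity is an exact best response)
    """
    # Believed likelihood of the observed guess under each state:
    # the neighbor follows her signal w.p. q_hat, else guesses randomly.
    def like(guess_matches_state):
        return q_hat * (theta if guess_matches_state else 1 - theta) \
               + (1 - q_hat) * 0.5

    like_X = like(neighbor_guess == "X")  # Pr(observed guess | state X)
    like_Y = like(neighbor_guess == "Y")  # Pr(observed guess | state Y)

    # Stage 1: (possibly distorted) posterior; equal priors cancel out.
    odds = (like_X / like_Y) ** a
    belief_X = odds / (1.0 + odds)

    # Stage 2: logistic choice given the stage-1 belief.
    return 1.0 / (1.0 + math.exp(-lam * (belief_X - 0.5)))

# A Bayesian rational subject (a = 1, q_hat = 1, lam -> infinity)
# always follows the neighbor; this toy agent follows with prob. ~0.68.
print(round(choice_prob_X(0.7, "X"), 2))
\end{verbatim}
In this toy model, the choice probability confounds $a$, $\hat{q}$ (stage 1), and $\lambda$ (stage 2), which is precisely why choice data alone cannot separate the three error types.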
To distinguish between these two stages, I collect both choice data and belief data, so that I am able to separate type B errors from type A and type C errors. This highlights the fact that one cannot separately identify errors associated with the first stage of the decision-making process from those of the second stage using only the choice data or only the belief data. To further disentangle type A and type C errors, I separately ask subjects to process a private signal and the choice of a neighbor across two conditions with a within-subject design. The separation and the within-subject design are important because, without them, one may not be able to study the "absolute" effect of each source of information. In fact, as \shortciteA{Eyster19} points out, many studies in the social learning literature are only able to study the "relative" importance of private versus social information.\footnote{My study has two other advantages compared to social learning experiments \shortcite{AndersonHolt97,KublerWei04,Goereeetal07}. First, the decision task in my experiment is very simple, so there is no concern about the complexity of the decision in the social condition. The complexity of the decision problem has been shown to be an important factor in driving individual errors \shortcite{CharnessLevin09, Weizsacker10, EnkeZimmerman}. Second, by separating the people whose choices are used in the social condition from those of the main experiment, my experimental design rules out any confounding factor related to prosocial/strategic incentives among subjects and provides a lower bound on the amount of irrationality one might expect to observe in social interactions.} While the above features of the experiment help to separate different types of error, they are not sufficient for the non-parametric identification of beliefs and choices. One needs enough variation in the information structures (precision of signals) so that beliefs cover the whole range of probabilities in $[0,1]$. The wide variety of information structures in my experiment further facilitates the non-parametric estimation and provides insights about what parametric assumptions might or might not be appropriate for modeling decisions in social interactions. This variation in the data is closely related to the literature on non-parametric identification of non-equilibrium beliefs in games of incomplete information \shortcite{VictorArvind20,VictorErhao21}. The novelty is in the source of variation, which has not been sufficiently explored in prior literature. In addition, I show that there is substantial heterogeneity in decision-making across subjects. So, one naturally needs to account for individual-specific heterogeneity when modeling decision-making in social interactions. The heterogeneity in beliefs has been documented in other contexts; see for instance \shortciteA{Orhun12}, \shortciteA{Chingetal21}, \shortciteA{JindalAribarg21}, \shortciteA{BenettonCompiani21}, among others. My results are consistent with these findings and extend them to a more general framework in which there is no strategic incentive among subjects.\footnote{For a literature review on (the failure of) rational expectations in other strategic settings, see \shortciteA{BeardBeil94} and \shortciteA{Eyster19}.}
\section{Experimental Design} \label{ex_design} Subjects are randomly assigned to one of four treatments (Figure \ref{design}). The experiment in each treatment consists of two consecutive parts, and the order of these two parts is randomized. In one part, the subject performs a task in isolation --- without social interaction (\textit{individual condition}). In the other part, she performs the same task with social interaction (\textit{social condition}). That is, given a treatment, some subjects first see the individual condition and then proceed to the social condition, while others see them in the reverse order (see Figure \ref{design}). \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[xscale=1] \draw[->][draw=black, very thick] (4,-1) -- (-2,-2.5); \draw[->][draw=black, very thick] (4,-1) -- (2,-2.5); \draw[->][draw=black, very thick] (4,-1) -- (6,-2.5); \draw[->][draw=black, very thick] (4,-1) -- (10,-2.5); \node[align=left] at (-2,-3) {\mybox{1. \textit{Base}}}; \node[align=left] at (2,-3) {\mybox{2. \textit{Demographics}}}; \node[align=right] at (6,-3) {\mybox{3. \textit{Bot}}}; \node[align=right] at (10,-3) {\mybox{4.
\textit{Ball}}}; \draw[->][draw=black, very thick] (-2,-3.5) -- (-3,-4.5); \draw[->][draw=black, very thick] (-2,-3.5) -- (-1,-4.5); \draw[->][draw=black, very thick] (2,-3.5) -- (1,-4.5); \draw[->][draw=black, very thick] (2,-3.5) -- (3,-4.5); \draw[->][draw=black, very thick] (6,-3.5) -- (5,-4.5); \draw[->][draw=black, very thick] (6,-3.5) -- (7,-4.5); \draw[->][draw=black, very thick] (10,-3.5) -- (9,-4.5); \draw[->][draw=black, very thick] (10,-3.5) -- (11,-4.5); \node[align=left] at (-3,-5) {Individual}; \draw[->][draw=black, very thick] (-3,-5.5) -- (-3,-6.5); \node[align=left] at (-3,-7) {Social}; \node[align=left] at (-1,-5) {Social}; \draw[->][draw=black, very thick] (-1,-5.5) -- (-1,-6.5); \node[align=left] at (-1,-7) {Individual}; \node[align=left] at (1,-5) {Individual}; \draw[->][draw=black, very thick] (1,-5.5) -- (1,-6.5); \node[align=left] at (1,-7) {Social}; \node[align=left] at (3,-5) {Social}; \draw[->][draw=black, very thick] (3,-5.5) -- (3,-6.5); \node[align=left] at (3,-7) {Individual}; \node[align=left] at (5,-5) {Individual}; \draw[->][draw=black, very thick] (5,-5.5) -- (5,-6.5); \node[align=left] at (5,-7) {Social}; \node[align=left] at (7,-5) {Social}; \draw[->][draw=black, very thick] (7,-5.5) -- (7,-6.5); \node[align=left] at (7,-7) {Individual}; \node[align=left] at (9,-5) {Individual}; \draw[->][draw=black, very thick] (9,-5.5) -- (9,-6.5); \node[align=left] at (9,-7) {Social}; \node[align=left] at (11,-5) {Social}; \draw[->][draw=black, very thick] (11,-5.5) -- (11,-6.5); \node[align=left] at (11,-7) {Individual}; \end{tikzpicture} \end{center} \caption{The experimental design} \label{design} \end{figure} The individual condition is the same in all four treatments, but the social condition differs across treatments. The idea is to exogenously manipulate the subject's knowledge about the participants with whom she is interacting across treatments. I will elaborate on the differences between treatments as I proceed in the following. \subsection{Individual Condition} The \textit{individual condition} is a benchmark which measures the subjects' behavior in the absence of social interaction. It consists of 21 rounds. In each round, two boxes are shown to the subject. Each box contains 10 balls of white or black color (see Figure \ref{URNS} for an example). These boxes represent the possible states of the world, $\omega \in \{X,Y\}$. In the beginning of each round, a fair coin is anonymously flipped. If the coin is \textbf{Head}, the state is \textbf{X} and one ball is randomly drawn from box X. If the coin is \textbf{Tail}, the state is \textbf{Y} and one ball is randomly drawn from box Y.\footnote{This induces a prior probability of $\frac{1}{2}$ for each box. The language used in the actual experiment was slightly different: I used \textit{box }$H$ (head) and \textit{box }$T$ (tail) instead of box $X$ and box $Y$ to remind individuals about the randomization (see the appendix for experiment instructions).} The subject does not observe the coin. She observes the ball, and then is asked to guess what the state is. The combination of white and black balls randomly changes over 21 rounds. Denote the fraction of white balls in box $X$ by $\theta_X$ and the fraction of black balls in box $Y$ by $\theta_Y$. The combinations used in the experiment include a wide range of symmetric and asymmetric information structures: $\Big\{(\theta_X,\theta_Y) \ \Big| \ \theta_X,\theta_Y \in \{0.5,0.6,0.7,0.8,0.9,1\} \ , \ \theta_X \geq \theta_Y \Big\}$. 
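As a worked illustration of the inference problem the subject faces (this derivation is for exposition only and was not shown to subjects), note that with equal priors a white ball is drawn with probability $\theta_X$ under state $X$ and with probability $1-\theta_Y$ under state $Y$, so Bayes' rule gives
\begin{align*}
\Pr(X \mid \text{white}) = \frac{\theta_X}{\theta_X + (1-\theta_Y)}, \qquad \Pr(X \mid \text{black}) = \frac{1-\theta_X}{(1-\theta_X) + \theta_Y}.
\end{align*}
For instance, $(\theta_X,\theta_Y)=(0.7,0.6)$ yields $\Pr(X \mid \text{white}) = 0.7/1.1 \approx 0.64$, so a Bayesian subject chooses box $X$ after a white ball. More generally, $\Pr(X \mid \text{white})>\tfrac{1}{2}$ exactly when box $X$ contains more white balls than box $Y$, which is why following the color of the observed ball is the Bayesian rational strategy.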
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{URNS.png} \caption{These two boxes represent the possible states of the world (here, for instance, $\theta_X=0.7$ and $\theta_Y=0.6$)} \label{URNS} \end{figure} Subjects are incentivized to make a correct guess:\footnote{The idea is to randomly choose some rounds and pay the subject for each correct guess in those rounds. I explain the payment scheme in more detail later.} it is best for a subject to pick the box with more black (white) balls when she observes a black (white) ball. In addition to collecting the subjects' choices, I add a survey question at the end of each round that asks for the subject's posterior belief. Specifically, after reporting her choice in each round, the subject answers the following question: \textit{with what probability do you think your guess is correct?}\footnote{As I explain later, I am interested in knowing which state is more likely from the subject's perspective when she makes a choice. Effectively, I only need to know whether the subject chooses a state that she believes has a higher ($>$50\%) or lower ($<$50\%) chance of being correct.} Unlike the choice, the survey question is not incentivized in the experiment, for a few reasons. First, I found that the experiment lasts too long when subjects are required to go through an incentive-compatible elicitation procedure for each of the posterior beliefs that they submit during the experiment; the resulting fatigue could contaminate the choice data that are vital for the main analysis. Second, I did use monetary incentives for posteriors in a pilot study, using a revised version of the Quadratic Scoring Rule \shortcite{Brier50}.\footnote{For the subjects who are incentivized for both the choice and the posterior, I randomly select one posterior and one choice for payment (the two selections are independent).} The pilot results suggested that the incentivized posteriors are not significantly different from the posteriors that are not incentivized. The prior literature has also shown that responses to this type of survey question, even in the absence of incentives for honest revelation of expectations, possess face validity when the questions concern well-defined events; see \shortciteA{Manski04} for a detailed discussion. I kept the survey question standard and easy to understand, so it is unlikely that the subject misunderstands the question or incurs a cognitive cost to think about the answer \shortcite{Smith91}. So, one can expect the reported posterior probabilities to be close to the subjective probabilities in the subject's mind.\footnote{I also ran a robustness check at the end of my main experiment and incentivized all subjects according to the Quadratic Scoring Rule. The elicited posteriors were very similar to those collected from the survey questions during the experiment. But I do not use these incentivized posteriors in my analysis, because the incentive-compatible elicitation of posteriors always took place at the end of the experiment, after both the individual condition and the social condition had been completed. This timing might make the results inconclusive: the subjects were answering the same survey questions they had already seen during the experiment, and the concern is that they might not think carefully about questions they had answered before.
Hence, the elicitation mechanism might not have had an impact on subjects' posteriors.} \subsection{Social Condition} The \textit{social condition} is designed to study the subjects' behavior in the presence of social interaction. The structure of the task is similar to that of the individual condition. The social condition consists of 21 rounds. In each round, the subject is randomly connected to another participant, called \textit{neighbor} in the experiment, and receives information from one of the rounds in the neighbor's individual condition. The subject observes the contents of the two boxes that were shown to the neighbor. Her task is to guess what the state (selected box) is, based on the information that she obtains from the neighbor. As noted earlier, there are four treatments in the experiment, and the information transmitted in the social condition differs across treatments. I explain the treatments in the following. The first treatment is called \textit{base}. In this treatment, the information coming through the social interaction is the neighbor's guess.\footnote{By "guess" I mean the actual choice of the neighbor. The subject does not observe the neighbor's posterior belief (the answer to the survey question).} Note that a neighbor here is a random subject who has previously participated in the experiment. The subject knows that the neighbor's guess is incentivized and is based on a randomly drawn ball from the realized state (box). To summarize, in each round, the subject observes two boxes and the guess of a neighbor, but not the ball that the neighbor has observed. Then, the subject is asked to guess the realized state. The experiment is designed such that the neighbor randomly changes in each round. Hence, the subject does not interact with the same neighbor over time, and it is unlikely that the subject learns about a specific neighbor's behavior over the course of the 21 rounds in the social condition. The second treatment is called \textit{demographics}. There is a slight difference between the social condition in this treatment and in the base treatment: on top of the neighbor's guess, the subject observes the neighbor's demographic information, such as age, gender, years of education, and whether the neighbor has taken any Probability/Statistics courses. This treatment is designed to examine whether providing demographic information about the neighbor can alleviate the irrationality associated with the uncertainty about the neighbor's behavior in the social interaction. If demographics provide additional information about the behavior of the neighbor, the uncertainty might be lower in this treatment than in the base treatment. The third treatment is called \textit{bot}. Everything in this treatment is the same as in the base treatment, except that the neighbor is a computer bot that is programmed to exhibit a specific, rational behavior. This means that when the bot observes a white ball, it chooses the box with more white balls, and when it observes a black ball, it chooses the box with more black balls. The behavior of the bot is explained in detail to the subjects in this treatment. Subjects see the guess of the bot and then submit their own guesses about the realized state. The social interaction in this treatment is more transparent than in the earlier two treatments. So, one expects the irrationality associated with the uncertainty about the neighbor's behavior to be significantly lower in this treatment than in the base treatment.
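Because the bot's strategy is fully specified in the instructions, it can be written down explicitly. The sketch below restates that rule in code (the identifiers are mine and hypothetical; subjects saw only the verbal description):
\begin{verbatim}
def bot_guess(ball_color, white_in_X, white_in_Y):
    """The bot's rule as described to subjects: pick the box with
    more balls of the observed color (each box holds 10 balls).
    A tie can arise only in the fully symmetric case
    theta_X = theta_Y = 0.5 (excluded from the main analysis)."""
    if ball_color == "white":
        return "X" if white_in_X > white_in_Y else "Y"
    return "X" if (10 - white_in_X) > (10 - white_in_Y) else "Y"

# Example: theta_X = 0.7, theta_Y = 0.6, so box X holds 7 white balls
# and box Y holds 4; after a white ball the bot guesses X.
assert bot_guess("white", 7, 4) == "X"
\end{verbatim}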
The fourth treatment, which is called \textit{ball}, is an augmented version of the base treatment in which the subject observes both the (human) neighbor's guess and the ball that was shown to the neighbor. The uncertainty effect is expected to disappear completely in this treatment, because the subject is provided with all the relevant information regarding her neighbor's choice. \subsection{Payment Scheme} Each subject receives a \$6 show-up fee for participation. In addition, two rounds of the experiment are randomly selected and the subject wins \$12 for each correct guess in those two rounds. \section{Results} \label{results} In this section, I first define the criteria for recognizing individual errors in the context of my experiment. I then analyse subjects' choices in the experiment to measure the frequency of these errors and examine the relationship between errors in the individual condition and in the social condition. The comparison between errors in the individual and in the social condition isolates the errors that are independent of the social interaction (belief updating error and reasoning error) and identifies the errors that are associated with the social interaction (violation of rational expectations). In the individual condition, a Bayesian rational subject should choose the box with more black balls when she observes a black ball, and the box with more white balls when she observes a white ball. Accordingly, I define an \textit{individual irrationality} as an observation that deviates from this prediction. \begin{definition} Individual Irrationality: A choice in the individual condition where the subject observes a white (black) ball, but chooses the box with more black (white) balls. \end{definition} In social interactions, the conventional assumption in economics is that individuals have rational expectations about each other (and rationality is common knowledge). In the context of the current experiment, this implies that the subject should follow her neighbor's guess and choose the same box as the neighbor in the social condition. Accordingly, a \textit{social irrationality} is defined as follows. \begin{definition} Social Irrationality: A choice in the social condition where the subject chooses a box different from her neighbor's guess.\footnote{The definition of social irrationality in the \textit{ball} treatment is slightly different, because the subject observes both the ball and the neighbor's guess when she is connected to the neighbor. In that case, I define a social irrationality as a choice in which the subject chooses a box different from her neighbor's guess, given that the neighbor's guess is rational (i.e., does not contradict the signal).} \end{definition} In the next section, I analyse the experimental data to measure the magnitude of individual irrationality and social irrationality in subjects' choices and to elaborate on the differences. \subsection{Data} The main experiment was conducted at the Toronto Experimental Economics Laboratory (TEEL) at the University of Toronto in December 2019. The experiment was programmed in oTree \shortcite{oTree}. In total, 151 subjects were recruited from the subject pool using the Online Recruitment System for Economic Experiments \shortcite{Greiner15}. The average payment across subjects was \$25.26.\footnote{No subject participated in more than a single treatment. Subjects needed to be at least 18 years old to be eligible to participate in the experiment.
The human neighbors in the social condition were 94 subjects who had participated in the experiment a few months before the main experiment.} Table \ref{Summary_stat} provides evidence that individual characteristics are relatively balanced across the four treatments, confirming that the randomization was successful. \begin{table}[ht] \centering \caption{Summary Statistics} \label{Summary_stat} \begin{tabular}{@{\extracolsep{5pt}}lcccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{4}{c}{Treatment} \\ \cline{2-5} \\[-1.8ex] & Base & Demographics & Bot & Ball\\ \hline \\[-1.8ex] Female (\%) & 77.5 & 73.5 & 78.9 & 61.5 \\ & & & & \\ Prob/Stat course (\%) & 75.0 & 64.7 & 68.4 & 69.2 \\ & & & & \\ Years of Education & 15.0 & 14.85 & 14.73 & 14.48 \\ & (1.5) & (2.11) & (2.24) & (1.82) \\ & & & & \\ Age & 20.25 & 19.38 & 20.02 & 20.28 \\ & (1.81) & (1.39) & (2.04) & (1.88) \\ \hline \\[-1.8ex] Number of Subjects & 40 & 34 & 38 & 39 \\ \hline \hline \\[-1.8ex] \end{tabular} \\ \small \textit{Note:} Standard deviations are presented in parentheses. The second row shows \\ the percentage of subjects who have taken Probability/Statistics courses. \end{table} In the following, I exclude the cases in which both boxes have 5 black balls and 5 white balls, $\theta_X=\theta_Y=0.5$, because theory offers no prediction about the subject's behavior in those cases. Subjects are expected to behave randomly in those rounds, a result that is supported by the data.\footnote{In the individual condition, when the two boxes have the same combination of balls (5 white and 5 black balls), subjects choose the left box with probability 0.44. Here, the null $H_0:p = 0.5$ cannot be rejected at the 5\% significance level ($p\text{-}value = 0.14$). Similarly, in the social condition, when both boxes have 5 white and 5 black balls, subjects do not follow their neighbor's guess with probability 0.48 ($p\text{-}value=0.74$ for the null $H_0:p=0.5$).} \subsection{How Do Errors Differ across the Individual and the Social Conditions?} \begin{figure}[htbp] \centering \includegraphics[scale=0.6]{Ind-vs-Soc-total-within-corrected.png} \caption{Individual irrationality and social irrationality (pooled data)} \label{Ind-vs-Soc-total-within_corrected} \end{figure} My first result examines the aggregate fraction of irrational choices in the individual condition and in the social condition. Figure \ref{Ind-vs-Soc-total-within_corrected} illustrates that both the individual irrationality and the social irrationality are significantly greater than zero, even though subjects are incentivized for being correct. In the individual condition, subjects on average deviate from the theoretical prediction (Bayesian rational behavior) with a probability of 0.049 ($p\text{-}value<0.001$).\footnote{This is lower than the error rate reported in prior literature for the individuals who arrive first in a standard social learning experiment \shortcite{AndersonHolt97,KublerWei04}. In \shortciteA{AndersonHolt97}, 10\% of the subjects whose information set was a private signal did not follow their signal. In \shortciteA{KublerWei04}, this behavior was observed in about 7\% of all cases where first players saw only a private signal.} In the social condition, even though subjects know that their neighbor's guess is incentivized with money, they on average do not follow the choice of their neighbor with a probability of 0.112 ($p\text{-}value<0.001$).
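As an aside on procedure: the randomness checks in the footnote above test a choice probability against $H_0\!:p=0.5$. The paper does not name the exact procedure; a two-sided exact binomial test is one standard implementation, sketched below with placeholder counts (the per-cell sample sizes are not reported in the text).
\begin{verbatim}
from scipy.stats import binomtest

# Placeholder counts, chosen only to match a 0.44 choice share;
# the study's actual cell sizes are not reported in the text.
k_left, n = 66, 150
res = binomtest(k_left, n, p=0.5, alternative="two-sided")
print(round(res.pvalue, 3))  # large p-value: cannot reject H0: p = 0.5
\end{verbatim}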
Surprisingly, the social irrationality is significantly higher than the individual irrationality ($p\text{-}value<0.001$). This evidence suggests that subjects neglect the information more in the social condition than in the individual condition, a result that can be associated with, for example, the violation of rational expectations.\footnote{Note that the comparisons in this section are within-subject, i.e., the same subjects on average make more errors in the social condition than in the individual condition. Given my experimental design, it is also possible to conduct the analysis between-subject; the details are provided in the appendix, and the results there are qualitatively similar.} Figure \ref{irr_hist} presents the distribution of individual irrationality and social irrationality across subjects. The blue histogram shows that about 63\% of subjects have no individual irrationality over the course of the 21 rounds in the individual condition. In addition, 19.8\% of subjects have exactly one individual irrationality, and the remaining 17.2\% have more than one. So, the individual irrationality is not negligible for a considerable fraction of subjects.\footnote{This result is consistent with \shortciteA{AmbuelLi18}, who report that 17\% of their subjects made at least one irrational choice out of six trials.} On the other hand, the orange histogram indicates that 54.3\% of subjects have no social irrationality, 10.6\% have exactly one social irrationality, and the remaining 35.1\% have more than one. Comparing the two distributions, one can observe that the upper tail is thicker in the social condition than in the individual condition. So, there is a clear shift in the error rate of subjects across the two conditions. The two-sample Anderson-Darling test confirms the significant difference between the two distributions ($p\text{-}value<0.005$). \begin{figure}[htbp] \centering \includegraphics[scale=0.75]{irr_hist.png} \caption{The distribution of individual irrationality and social irrationality across subjects} \label{irr_hist} \end{figure} \subsection{The Role of Uncertainty about Others' Behavior} \begin{figure}[htbp] \centering \includegraphics[scale=0.6]{Ind-vs-Soc-treatments-corrected.png} \caption{Social irrationality in different treatments} \label{Ind-vs-Soc-treatments_corrected} \end{figure} In this section, I examine the mechanisms behind the additional neglect in the social condition. The hypothesis is that the additional irrationality in the social condition arises from the uncertainty about the neighbor's behavior. In other words, because the neighbor's decision-making process is ambiguous to the subjects, they cannot correctly extract the neighbor's private information from observing the neighbor's choices, and thus violate rational expectations. To test this idea, as noted earlier, I exogenously manipulate the subject's knowledge about the neighbor across four treatments. Figure \ref{Ind-vs-Soc-treatments_corrected} illustrates the social irrationality in each treatment along with the aggregate individual irrationality.\footnote{Recall that the individual condition is identical in all treatments. So, I do not break down the individual irrationality here and only report the aggregate individual irrationality for ease of exposition.} The "\textit{base}" treatment is a benchmark in which subjects observe only the guess of their neighbor.
The result in this treatment echoes the earlier finding about the larger magnitude of neglect (irrationality) in the social condition than in the individual condition. In the "\textit{demographics}" treatment, subjects are provided with some demographic information about their neighbor (age, gender, years of education, and whether the neighbor took Probability/Statistics courses), on top of the neighbor's guess. The result of this treatment shows that providing demographics slightly decreases the social irrationality compared to the base treatment, from $0.145$ to $0.141$. But this effect is not statistically significant ($p\text{-}value=0.83$). In the "\textit{bot}" treatment, subjects observe the guess of a computer bot. Here, the bot's behavior is known to subjects: it picks the box with more white balls when it observes a white ball, and the box with more black balls when it observes a black ball. The social irrationality in this treatment drops significantly to $0.094$ compared to the base treatment ($p\text{-}value<0.01$). This evidence is consistent with the hypothesis that the difference between the social irrationality and the individual irrationality is due to the uncertainty about the neighbor's behavior.\footnote{Note that although the social irrationality is alleviated in the bot treatment, it is still significantly higher than the individual irrationality. One natural question arises here: why is there a difference between the individual irrationality and the social irrationality in the bot treatment? Responses to an open-ended survey question collected at the end of the experiment show that some subjects mistrust bots. This might explain why the social irrationality in the bot treatment remains significantly higher than the individual irrationality.} Finally, in the "\textit{ball}" treatment, subjects are provided with both the neighbor's guess and the ball that was shown to the neighbor. Figure \ref{Ind-vs-Soc-treatments_corrected} shows that the social irrationality in this treatment is significantly lower than in all other treatments ($p\text{-}value<0.01$). Here, the difference between the social irrationality and the individual irrationality is no longer statistically significant. This result verifies that when there is no uncertainty about the neighbor's behavior in the social condition, the magnitude of social irrationality is the same as the magnitude of individual irrationality; the additional neglect disappears from subjects' behavior. \subsection{The Observed Heterogeneity in Subjects' Behavior} In this section, I estimate a set of regressions to examine the observed heterogeneity in the subjects' behavior. My data contain demographic information about all subjects and each of their neighbors. So, I can investigate how subjects' characteristics and those of their neighbors explain the observed irrationality in the experiment. First, I investigate the role of the subject's own characteristics. Specifically, I estimate the following regression separately for each condition, \begin{align} Y_{ic}\times 100 = \alpha_c+X_i\gamma_c+\epsilon_{ic} \end{align} where $Y_{ic}$ is the fraction of irrational choices by subject $i$ in condition $c$ (individual or social), and $X_i$ includes the subject's observed characteristics: gender (a dummy for female), years of education, age, and whether the subject has taken Probability/Statistics courses. The estimation results are provided in Table \ref{Regress}.
\begin{table}[ht] \centering \caption{The observed heterogeneity in the subject's irrationality} \label{Regress} \begin{tabular}{@{\extracolsep{5pt}}lcc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{2}{c}{Data} \\ \cline{2-3} \\[-1.8ex] & Individual & Social\\ \\[-1.8ex] & (1) & (2)\\ \\[-1.8ex] DV & $Y_{ic}\times 100$ & $Y_{ic}\times 100$\\ \hline \\[-1.8ex] Gender (Female) & 0.47 & $5.58^*$ \\ & (1.84) & (2.88) \\ & & \\ Education & 1.00$^{**}$ & 0.11 \\ & (0.49) & (0.78) \\ & & \\ Age & $-$ 0.51 & $-$ 1.36 \\ & (0.54) & (0.85) \\ & &\\ Prob/Stat course & $-$ 4.02$^{**}$ & $-$ 6.85$^{**}$ \\ & (1.89) & (2.95) \\ & & \\ Constant & 2.81 & 37.19$^{**}$ \\ & (9.35) & (14.64) \\ \hline \\[-1.8ex] Observations & 151 & 151 \\ R$^{2}$ & 0.063 & 0.097 \\ Adjusted R$^{2}$ & 0.037 & 0.073 \\ \hline \hline \\[-1.8ex] \textit{Note:} & \multicolumn{2}{r}{$^{*}p<0.1$; $^{**}p<0.05$; $^{***}p<0.01$} \\ \end{tabular} \end{table} Column (1) in Table \ref{Regress} indicates that in the individual condition, all else equal, subjects who have taken Probability/Statistics courses make 4.02 percentage points fewer errors than subjects who have not. This result suggests that the individual irrationality is mainly driven by a lack of knowledge of probability and statistics. Column (2) illustrates that in the social condition, all else equal, subjects who have taken Probability/Statistics courses make 6.85 percentage points fewer errors than subjects who have not.
My results are consistent with previous findings in the literature. For example, in a different context, \shortciteA{Armantieretal15} find evidence that respondents whose behavior cannot be rationalized by economic theory tend to have lower education and lower numeracy and financial literacy. In my experiment, however, the subject's observable characteristics cannot explain the additional irrationality that is associated with the uncertainty about the neighbor's behavior in the social condition (i.e., the violation of rational expectations). Next, I examine the effect of the neighbor's observable characteristics on the subject's social irrationality. Note that only the subjects who are in the treatment \textit{demographics} observe their neighbor's characteristics. So, I restrict the data in this section to the choices made in the social condition of the demographics treatment. Here, the dependent variable is binary (whether a given choice is irrational). Hence, I estimate the following logistic regression, \begin{align} \label{logistic_social} Pr(D_{ij} \text{ is irrational}) = \frac{\exp(\alpha+X_i\gamma+X_j \delta)}{1+\exp(\alpha+X_i\gamma+X_j \delta)} \end{align} where $D_{ij}$ is the choice of subject $i$ in round $j$, $X_i$ includes the subject's observable characteristics, and $X_j$ includes the neighbor's observable characteristics in round $j$ (recall that the neighbor changes randomly in each round). As before, observable characteristics include gender, years of education, age, and whether the individual has taken Probability/Statistics courses. The estimation results are shown in Table \ref{Regress neighbor}. \begin{table}[htp] \centering \caption{The observed heterogeneity in the social condition of treatment ``\textit{demographics}'' (Equation \ref{logistic_social})} \label{Regress neighbor} \begin{tabular}{@{\extracolsep{5pt}}lcccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{2}{c}{Logit Coefficients} & \multicolumn{2}{c}{Average Marginal Effects} \\ \cline{2-3} \cline{4-5} \\[-1.8ex] & (1) & (2) & (3) & (4) \\ \hline \\[-1.8ex] Subject's Gender (Female) & 0.52 & 0.48 & 0.062 & 0.056\\ & (0.59) & (0.60) & (0.071) & (0.07)\\ & & & & \\ Subject's Education & $-$ 0.139 & $-$ 0.159 & $-$ 0.016 & $-$ 0.018 \\ & (0.17) & (0.17) & (0.02) & (0.02) \\ & & & & \\ Subject's Age & $-$ 0.08 & $-$ 0.075 & $-$ 0.009 & $-$ 0.008 \\ & (0.23) & (0.23) & (0.0267) & (0.026)\\ & & & &\\ Subject's Prob/Stat course & $-$ 0.44 & $-$ 0.46 & $-$ 0.052 & $-$ 0.052\\ & (0.63) & (0.64) & (0.076) & (0.076) \\ & & & &\\ Neighbor's Gender (Female) & & $-$ 0.41$^{**}$ & & $-$ 0.047$^{*}$ \\ & & (0.20) & & (0.025) \\ & & & &\\ Neighbor's Education & & $-$ 0.04 & & $-$ 0.005 \\ & & (0.032) & & (0.003)\\ & & & &\\ Neighbor's Age & & $-$ 0.025$^{**}$ & & $-$ 0.003$^{***}$ \\ & & (0.01) & & (0.001) \\ & & & &\\ Neighbor's Prob/Stat course & & $-$ 0.51$^{***}$ & & $-$ 0.058$^{**}$ \\ & & (0.196) & & (0.025)\\ & & & &\\ Constant & 2.18 & 4.19 & &\\ & (3.2) & (3.51)& & \\ \hline \\[-1.8ex] Observations & 680 & 680 & 680 & 680\\ Pseudo R$^{2}$ & 0.035 & 0.061 & 0.035 & 0.061\\ \hline \hline \\[-1.8ex] \end{tabular} \\ \textit{Note}: Standard errors are clustered at the subject level.\\$^{*}p<0.1$; $^{**}p<0.05$; $^{***}p<0.01$ \end{table} Columns (1) and (2) in Table \ref{Regress neighbor} present the estimated coefficients for equation \eqref{logistic_social}. The coefficients are insignificant in the first column. However, the second column shows that the neighbor's observable characteristics have a statistically significant effect on the subject's behavior: \textit{ceteris paribus}, the subject is more likely to follow a neighbor whose age is higher, whose gender is female (versus male), and who has taken Probability/Statistics courses. The coefficients of a logistic regression are not directly interpretable in quantitative terms, so I report the average marginal effects in columns (3) and (4) of Table \ref{Regress neighbor}. The results in column (4) imply that, \textit{ceteris paribus}, a subject is 5.8 percentage points more likely to follow a neighbor who has taken Probability/Statistics courses than one who has not. In addition, all else equal, a subject is 4.7 percentage points less likely to make an irrational guess when interacting with a female rather than a male neighbor (i.e., 4.7 percentage points more likely to follow the neighbor). The effect of the neighbor's age is very small, though: a one-year increase in the neighbor's age increases the likelihood of being followed by the subject by 0.3 percentage points. \subsection{Heterogeneity across Information Structures} A unique feature of the experiment is that subjects make decisions under several information structures. This allows me to examine the extent to which subjects' irrationality varies with signal precision. Figure \ref{fig:info_struct} shows the total fraction of irrational choices conditional on each information structure. The general trend is that the irrationality decreases as signal precision increases. However, even in the extreme case of complete certainty ($\theta_X=\theta_Y=1$), the irrationality is not zero. The notable insight is that the numbers are significantly higher in the social condition (right figure) than in the individual condition (left figure), which is consistent with my earlier findings. \begin{figure}[htbp] \centering \includegraphics[scale=0.55]{II_heatmap.png}%
\includegraphics[scale=0.55]{SI_heatmap.png}%
\caption{The fraction of irrational choices conditional on the information structure.
Standard errors are shown in parentheses.} \label{fig:info_struct} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.55]{II_W_heatmap.png}%
\includegraphics[scale=0.55]{SI_W_heatmap.png}%
\caption{The fraction of irrational choices conditional on the information structure and the signal being $x$ (in the individual condition) or a neighbor's guess of $X$ (in the social condition). Standard errors are shown in parentheses.} \label{fig:info_struct_sig}%
\end{figure} Note that even though each figure covers half of the possible combinations of $\theta_X$ and $\theta_Y$, these results are equally applicable to the uncovered cases. This is because if one exchanges the contents of box X and box Y, the environment remains the same but the information structure changes from $(\theta_X,\theta_Y)$ to $(\theta_Y,\theta_X)$. Hence, one can naturally extend these figures so that they cover the whole space of information structures. Another way to examine subjects' behavior is to measure irrational choices conditional on both the information structure and the realized signal. This is especially important in cases where the information structure is asymmetric. As an example, consider a case in which $\theta_X=1$ and $\theta_Y=0.5$. In this case, box X has 10 white balls and box Y has 5 white and 5 black balls. Here, when the signal is a black ball, a Bayesian subject is certain that the state is box Y. However, when the signal is a white ball, the Bayesian subject assigns a relatively higher likelihood to box X than box Y, but she would not be 100\% sure about the state. So, one might expect to see a higher irrationality in the latter case than in the former case. Without loss of generality and for expositional purposes, I permute observations so that all the signals in the individual condition are a white ball, and all the signals in the social condition are a neighbor's guess of $X$. This is possible because of the nature of the experiment. For example, an observation in the individual condition where $(\theta_X,\theta_Y)=(0.5,1)$ and the signal is $y$ (black ball) can be permuted so that $(\theta_X,\theta_Y)=(1,0.5)$ and the signal is $x$ (white ball).\footnote{This way, one only needs to plot one figure for each condition, and the figure shows irrationalities conditional on all information structures and signal ``$x$''. If I did not permute observations, then I would need to plot two figures for each condition: one figure would be conditioned on signal $x$ (but cover half of the information space as in Figure \ref{fig:info_struct}) and the other figure would be conditioned on signal $y$.} The results are presented in Figure \ref{fig:info_struct_sig}. Figure \ref{fig:info_struct_sig} demonstrates that errors are generally higher in cases where the uncertainty is higher. For example, when the subject observes $x$ conditional on $(\theta_X,\theta_Y) = (1,0.5)$, the irrationality is higher than in the mirror case in which $(\theta_X,\theta_Y) = (0.5,1)$. As discussed, this is the case because when a subject observes signal $x$ in the former case, she is not certain about the state, whereas observing the same signal in the latter case, she would be almost certain about the state. Comparing the results across conditions, the numbers are generally higher in the social condition than in the individual condition. So, even though subjects react to changes in the likelihood, the additional error associated with uncertainty about the neighbor's behavior persists across information structures.
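Before turning to the model, and as a companion to Table \ref{Regress neighbor}, the following is a minimal sketch of how a logit such as equation \eqref{logistic_social} can be estimated with subject-clustered standard errors and average marginal effects. It is purely illustrative (not the code behind the reported results); all variable names are hypothetical, and synthetic placeholder data are used so that the snippet runs as-is.
\begin{verbatim}
# Illustrative logit with clustered SEs and average marginal effects;
# variable names are hypothetical, data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 680  # choices in the social condition of the demographics treatment
d = pd.DataFrame({
    "subject": rng.integers(0, 40, n),   # cluster id
    "irrational": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "education": rng.integers(10, 22, n),
    "age": rng.integers(18, 70, n),
    "prob_stat": rng.integers(0, 2, n),
    "n_female": rng.integers(0, 2, n),   # neighbor characteristics
    "n_education": rng.integers(10, 22, n),
    "n_age": rng.integers(18, 70, n),
    "n_prob_stat": rng.integers(0, 2, n),
})

logit = smf.logit(
    "irrational ~ female + education + age + prob_stat"
    " + n_female + n_education + n_age + n_prob_stat",
    data=d,
).fit(cov_type="cluster", cov_kwds={"groups": d["subject"]})
print(logit.summary())                            # coefficients
print(logit.get_margeff(at="overall").summary())  # average marginal effects
\end{verbatim}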
\section{A Model of Decision-making} \label{sources} So far, I compared irrational choices across the individual and the social conditions to disentangle errors that are independent of the social environment from errors associated with the social interaction. I showed in a reduced-form sense that rational expectations can be violated in social interaction. In this section, I build on these reduced-form results and introduce a framework to describe the individual decision-making process in the context of my experiment. The purpose of this framework is to elaborate on the fundamentals of individual behavior under social interaction and to identify sources of error in decision-making. I will estimate a non-parametric model based on this framework in the next section. In the context of my experiment, the individual's decision-making process can be modeled as a two-stage procedure. Upon observing a signal, the subject first updates her belief and then picks one of the two possible states based on her posterior. This process is shown in Figure \ref{decision process}. \\ \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[xscale=1] \draw[->][draw=black, very thick] (-1,0) node [left] {Input (signal)} -- (0.5,0); \draw[-][draw=blue, very thick] (0,0.75) -- (9,0.75); \draw[-][draw=blue, very thick] (0,-0.75) -- (9,-0.75); \draw[->][draw=black, very thick] (3.5,0) -- (5.5,0); \draw[->][draw=black, very thick] (8.5,0) -- (10,0) node [right] {Output (choice)}; \draw [draw=blue, very thick](0,-0.75) -- (0, 0.75); \draw [draw=blue, very thick](9,-0.75) -- (9, 0.75); \node[align=left, below, color=blue] at (-2,-1.1) {$x$}; \node[align=left] at (2,0) {Update Belief}; \node[align=left, below, color = blue] at (2.5,-1) {$Pr(X|x)>Pr(Y|x)$}; \node[align=center, above] at (4.5,1) {Decision-Making Process}; \node[align=right] at (7,0) {Binary Decision}; \node[align=left, below, color = blue] at (7,-1) {$X$}; \node[align=left, below, color = blue] at (11,-1) {$X$}; \end{tikzpicture} \end{center} \caption{The individual decision-making process} \label{decision process} \end{figure} This model is closely related to the deliberation process introduced in a more general decision framework by \shortciteA{Walliser89}. In the first stage, the subject uses the available data to form expectations about her environment (``cognitive rationality''). In the second stage, these expectations are used to select an action (``instrumental rationality''). As an example, suppose an individual observes signal $x$. In theory, the individual should update her belief in favor of state $X$ in the first stage, $Pr(X|x)>Pr(Y|x)$, and then choose state $X$ in the second stage. However, as I showed earlier, the individual's choice (output) does not always comply with the signal (input). Individuals frequently make errors in this simple task. Now, consider a case in which a subject obtains signal $x$, but her final guess is $Y$ (Figure \ref{decision error}). There are two explanations for this observation: \begin{enumerate} \item \textbf{Posterior Error}: It might be that the subject's posterior is mistakenly in favor of state $Y$, $Pr(Y|x)>Pr(X|x)$, and this causes the subject to make an erroneous decision. This means that the subject's posterior is in the wrong direction, but her choice is consistent with her incorrect posterior.\footnote{It is important to note that the posterior error is different from what is commonly known as ``belief updating bias'' in the literature \shortcite{KT72,Benjamin18}. A biased belief is not necessarily in a wrong direction, i.e., the biased belief and the Bayesian belief can both favor the same state while assigning different likelihoods to that state. For example, when the signal in theory implies a 70\% chance of an event, a biased belief may assign a 60\% chance to it (both are greater than 50\%). However, a posterior error is the consequence of a severe bias that switches the direction of the posterior probability, i.e., an updated belief that is in the wrong direction (e.g., a belief of less than 50\% in the earlier example).} \item \textbf{Reasoning Error}: It might be that the subject's posterior is correctly in favor of state $X$, $Pr(X|x)>Pr(Y|x)$, but she mistakenly chooses state $Y$. \end{enumerate} \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[xscale=1] \draw[->][draw=black, very thick] (-1,0) node [left] {Input (signal)} -- (0.5,0); \draw[-][draw=blue, very thick] (0,0.75) -- (9,0.75); \draw[-][draw=blue, very thick] (0,-0.75) -- (9,-0.75); \draw[->][draw=black, very thick] (3.5,0) -- (5.5,0); \draw[->][draw=black, very thick] (8.5,0) -- (10,0) node [right] {Output (choice)}; \draw [draw=blue, very thick](0,-0.75) -- (0, 0.75); \draw [draw=blue, very thick](9,-0.75) -- (9, 0.75); \node[align=left, below, color=black] at (-2,-1) {$x$}; \node[align=left] at (2,0) {Update Belief}; \node[align=left, below, color = red] at (2.5,-2) {$Pr(X|x)<Pr(Y|x)$}; \node[align=left, below, color = black] at (2.5,-3) {$Pr(X|x)>Pr(Y|x)$}; \node[align=center, above] at (4.5,1) {Decision-Making Process}; \node[align=right] at (7,0) {Binary Decision}; \node[align=left, below, color = black] at (7,-2) {$Y$}; \node[align=left, below, color = red] at (7,-3) {\large{$Y$}}; \node[align=left, below, color = black] at (11,-1) {$Y$}; \node[align=left, below, color = red] at (11,-2) {posterior error}; \node[align=left, below, color = red] at (11,-3) {reasoning error}; \end{tikzpicture} \end{center} \caption{Two explanations for an observed error in the individual's choice} \label{decision error} \end{figure} In general, it is not possible to non-parametrically distinguish these two explanations by only observing the subject's choice. This implies that, to overcome the identification challenge, one needs extra variation in the data or additional information about the subject's behavior. As stated before, my experimental design solves this identification problem by collecting data on both subjects' choices and their self-reported posteriors. So, I can distinguish between a posterior error and a reasoning error in the data.\footnote{The framework introduced here applies to both the individual condition and the social condition of my experiment. The only difference is that the signal (input) is a ball in the individual condition, while it is the guess of a neighbor (and any additional information coming along with the neighbor's choice) in the social condition.} Figure \ref{Ind-vs-Soc-breakdown-within_corrected} illustrates the breakdown of the observed individual irrationality and social irrationality into posterior error and reasoning error. The figure shows that the probability of a reasoning error is equal to 0.018 in both the individual condition and the social condition. However, there is a statistically significant difference in the magnitude of the posterior error across the two conditions; it is 0.031 in the individual condition, but 0.095 in the social condition ($p\text{-}value<0.01$).
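Once both choices and self-reported posteriors are observed, the classification is mechanical. The following is a minimal sketch of the rule, purely for illustration (the function and argument names are hypothetical, and ties at a reported posterior of exactly 0.5 are ignored for simplicity):
\begin{verbatim}
def classify(bayes_prob_X, reported_prob_X, chose_X):
    """Classify one observation given the Bayesian posterior of state X,
    the subject's self-reported posterior of X (same 0-1 scale), and her
    binary choice (True = guessed X)."""
    bayes_favors_X = bayes_prob_X > 0.5
    if chose_X == bayes_favors_X:
        return "rational"          # choice agrees with the Bayesian direction
    if (reported_prob_X > 0.5) == chose_X:
        return "posterior error"   # belief in wrong direction; choice follows it
    return "reasoning error"       # belief in right direction; choice contradicts it

print(classify(0.67, 0.40, False))  # -> posterior error
print(classify(0.67, 0.80, False))  # -> reasoning error
\end{verbatim}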
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{Ind-vs-Soc-total-within-corrected_splitted.png}%
\caption{The breakdown of individual irrationality and social irrationality into posterior error and reasoning error} \label{Ind-vs-Soc-breakdown-within_corrected}%
\end{figure} Figure \ref{Ind-vs-Soc-breakdown-within_corrected} provides an important insight about the implication of the violation of rational expectations in social interaction. It suggests that the uncertainty about the neighbor's behavior makes belief updating more difficult in the social condition than in the individual condition. That is, subjects make more errors when they update their beliefs (the first stage of the decision-making process) in the social condition than in the individual condition. However, this uncertainty does not influence the second stage of the decision-making process, i.e., once posterior beliefs are formed, subjects follow the same reasoning procedure, and thus the magnitude of the reasoning error remains unchanged across the individual condition and the social condition. Distinguishing between posterior errors and reasoning errors identifies a critical touchpoint in the decision-making process and may suggest ways to nudge individuals towards making better decisions in social environments. It suggests that the violation of rational expectations may not necessarily result from a lack of Math/Stats knowledge, but may be more about how uncertain subjects are about their neighbor's behavior. In addition, failing to control for individual errors that are independent of the social environment (e.g., reasoning errors) may lead to unintended consequences. In Appendix C, I estimate a unified model of individual behavior by borrowing techniques from the social learning literature \shortcite{Grether80,AndersonHolt97}. There, I show that not accounting for different sources of error (posterior error versus reasoning error) can bias the estimated structural parameter that is intended to describe individual behavior.\footnote{The predominant approach in the social learning literature is to derive predictions under the assumption that all other players obey a given model solution, for instance Perfect Bayesian Equilibrium or Quantal Response Equilibrium. But despite their indisputable usefulness, these solutions are often inaccurate descriptions of behavior and thus yield imperfect benchmarks \shortcite{Weizsacker10}.} \section{Non-parametric Estimation of the Decision-making Process} \label{estimation} In the previous section, I introduced a decision-making framework to explain how subjects behave in social interactions. In this section, I formally estimate a model that corresponds to this two-stage decision-making process. This model elaborates on three sources of irrationality under social interaction and sheds light on which variation in the data identifies which error. The estimation provides two non-parametric functions for each condition (individual/social): one function represents the belief updating process in stage 1, and the other function represents the choice rule in stage 2. For the first stage, I use the self-reported belief data, and for the second stage, I use the combination of the choice and the belief data. The model is estimated separately for the individual condition and the social condition.
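Throughout this section, the non-parametric curves are obtained with a Nadaraya-Watson kernel regression with bandwidth 15 (on the 0--100 probability scale). For the reader's convenience, a minimal sketch of such an estimator is given below; it is illustrative only, and the Gaussian kernel and the synthetic arrays are assumptions for the sketch, not necessarily the exact implementation behind the figures.
\begin{verbatim}
# Illustrative Nadaraya-Watson estimator (Gaussian kernel, bandwidth 15);
# array names and synthetic data are placeholders.
import numpy as np

def nadaraya_watson(grid, x, y, bandwidth=15.0):
    """Kernel-weighted local average of y at each grid point."""
    w = np.exp(-0.5 * ((x[:, None] - grid[None, :]) / bandwidth) ** 2)
    return (w * y[:, None]).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(3)
bayes = rng.uniform(0, 100, 2000)                            # Bayesian posteriors (in %)
reported = np.clip(bayes + rng.normal(0, 15, 2000), 0, 100)  # noisy self-reports
grid = np.linspace(0, 100, 101)
curve = nadaraya_watson(grid, bayes, reported)               # mean belief curve
\end{verbatim}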
The identification of the sources of irrationality follows from the comparison of irrational choices between decision stages (stage 1 vs.\ stage 2) and across conditions (individual vs.\ social). The idea is that posterior errors, which are only present in stage 1, are in general a combination of belief updating biases (type A) and violations of rational expectations (type C). In addition, we know from the earlier results (Figure \ref{Ind-vs-Soc-breakdown-within_corrected}) that the violation of rational expectations only exists in the social condition. So, the comparison of belief functions in stage 1 across the individual and the social condition separates type A errors from type C errors. Reasoning errors (type B), on the other hand, are only present in stage 2. So, the choice function in stage 2 identifies type B errors. Table \ref{tab:identification} summarizes the identification insights from this section. Note that it was shown earlier that reasoning errors remain unchanged across the individual and social conditions, so that the comparison of stage 2 across conditions does not identify any new effect. If one believes there are other errors in the second stage which might be related to the social environment, for example due to the specifics of the context, then those errors can also be identified from this comparison. \\ \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline & \textbf{Stage 1 (belief)} & \textbf{Stage 2 (choice)} \\ \hline \textbf{Individual} & type A & type B \\ \hline \textbf{Social} & type A + type C & type B \\ \hline \end{tabular} \caption{The identification of three types of errors} \label{tab:identification} \end{table} \subsection{Beliefs in Stage 1} \noindent \textbf{\textit{Individual Condition.}} Denote subject $i$'s posterior belief about state $X$ after observing signal $s$ by $\mu_i(X|s;\theta)$, where $\theta = (\theta_X,\theta_Y)$ is the information structure and is common knowledge. I only assume $\mu_i(X|s;\theta)=1-\mu_i(Y|s;\theta)$ and do not impose any other assumption on $\mu_i$, so that the belief is individual-specific and as general as possible.\footnote{The well-known model of \shortciteA{Grether80} is a special case of my model in which $\mu$ is the same across subjects (no heterogeneity) and it has a specific parametric form: $\mu = \frac{Pr(s|X;\theta)^c}{Pr(s|X;\theta)^c+Pr(s|Y;\theta)^c}$.} A unique aspect of the experiment is that it contains a wide variety of information structures, so that there is enough variation in the probability space and I am able to plot the distribution of beliefs across subjects at different points. Figure \ref{fig:belief_indiv} presents the distribution of these beliefs ($\mu_i$) across subjects against the Bayesian posterior.\footnote{Note that the shaded area shows one standard deviation above and below the mean. In the data, no subject reports probabilities below 0 or above 100. However, when one plots the mean values with one standard deviation around them, the shaded area may contain probabilities outside the $[0,100]$ range at corner points. This should not mislead the reader.} If subjects were Bayesian, one would expect all points to be exactly on the 45-degree line and the standard deviation to be zero. The first fact is that posterior beliefs are positively correlated with Bayesian beliefs. So, subjects generally understand the changes in likelihood. However, one can see over-appreciation of probabilities on the left-hand side and under-appreciation of probabilities on the right-hand side.
The average self-reported posterior follows a trend which is in essence similar to the well-known probability weighting transformation in Cumulative Prospect Theory \shortcite{CPT}. This pattern is the result of what was called type A error in belief updating. Most importantly, there is substantial heterogeneity in beliefs, so that the standard deviation is non-negligible and large especially at the boundaries (0 and 100). This is surprising because it implies that some subjects make errors even in the extreme cases of certainty. So, to study the errors in social interactions, one needs to carefully account for this type of heterogeneity. \\ \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{belief_curve_indiv.pdf}%
\caption{The belief distribution across subjects in the individual condition} \label{fig:belief_indiv}%
\end{figure} \noindent \textbf{\textit{Social Condition.}} Denote the subject's posterior belief about state $X$ after observing a neighbor's guess of $g$ by $\gamma_i(X|g;\theta)$, where $\theta = (\theta_X,\theta_Y)$ is common knowledge. I impose only one assumption on this function: $\gamma_i(X|g;\theta)=1-\gamma_i(Y|g;\theta)$. Figure \ref{fig:belief_social} presents the distribution of these beliefs ($\gamma_i$) across subjects against the Bayesian rational posterior. The overall trend here is similar to that of the individual condition. There is over-appreciation on the left side and under-appreciation on the right side. One difference between Figures \ref{fig:belief_indiv} and \ref{fig:belief_social} is that the variance is higher in the social condition than in the individual condition. This is probably due to the higher uncertainty involved in the social condition. Recall that beliefs in the social condition are impacted by both type A and type C errors. So, a higher variance is a natural implication in the social condition. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{belief_curve_social.pdf}%
\caption{The belief distribution across subjects in the social condition} \label{fig:belief_social}%
\end{figure} To further separate type A and type C errors, one needs to compare the average beliefs across conditions. One can see from Figures \ref{fig:belief_indiv} and \ref{fig:belief_social} that the average distance to the 45-degree line seems to be higher in the social condition than in the individual condition. To clarify this comparison, I separately and non-parametrically estimate a belief function for each condition. The results of the Nadaraya-Watson kernel regression (bandwidth = 15) are presented in Figure \ref{fig:mu_gamma_mean}. As expected, the belief curve for the social condition exhibits a stronger probability weighting bias compared to the individual condition. This effect is a result of what was earlier named type C error, i.e., the violation of rational expectations. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{mu_gama_mean.pdf}%
\caption{The non-parametric estimation of mean posterior belief in the individual and social conditions} \label{fig:mu_gamma_mean}%
\end{figure} \subsection{Choices in Stage 2} \noindent \textbf{\textit{Individual Condition.}} In the second stage of decision-making, conditional on the posterior belief, $\mu_i$, the subject makes a binary decision. In the most general form, the subject's decision can be modeled as a mixed strategy on $X$ and $Y$: she chooses state $X$ with probability $\alpha_i(\mu_i)$ and state $Y$ with probability $1-\alpha_i(\mu_i)$.
Note that the standard Quantal Response Equilibrium \shortcite{McKelveyPalfrey95} imposes a parametric assumption on this, so that $\alpha_i(\mu_i)$ is a logistic function. Also, Perfect Bayesian Equilibrium is a special case where $\mu_i$ is the Bayesian posterior and $\alpha_i$ is degenerate on the state with the higher posterior probability. I combine the choice data from the second stage of the experiment with the belief data from the first stage and plot the distribution of $\alpha_i$ at each point in Figure \ref{fig:choice_indiv}. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{choice_curve_indiv.pdf}%
\caption{The choice distribution across subjects in the individual condition} \label{fig:choice_indiv}%
\end{figure} If subjects were fully rational, all the points with $\mu_i<50$ would be on $Choice=0$ and all the points with $\mu_i>50$ would be on $Choice=1$, and the standard deviation would be zero at all points. The first general observation from Figure \ref{fig:choice_indiv} is that subjects respond to changes in posteriors, meaning that they are generally more likely to choose an option that has a higher probability. However, the positive standard deviations demonstrate that, as discussed earlier, subjects may sometimes make reasoning errors, i.e., they may pick an option that they believe has a lower probability. This pattern can be consistent with a logistic function, but one should carefully account for the substantial heterogeneity in choices. This analysis identifies the so-called type B errors in subjects' behavior. \noindent \textbf{\textit{Social Condition.}} In the second stage of the social condition, the subject makes a binary decision based on the neighbor's choice. The subject's decision is a mixing probability on $X$ and $Y$: conditional on her posterior belief, $\gamma_i$, the subject chooses state $X$ with probability $\beta_i(\gamma_i)$ and state $Y$ with probability $1-\beta_i(\gamma_i)$. The distribution of $\beta_i$ is shown in Figure \ref{fig:choice_social}. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{choice_curve_social.pdf}%
\caption{The choice distribution across subjects in the social condition} \label{fig:choice_social}%
\end{figure} The general pattern in Figure \ref{fig:choice_social} is similar to that of the individual condition (Figure \ref{fig:choice_indiv}). However, to further check for differences, I separately and non-parametrically estimate a choice function for each condition. The results of the Nadaraya-Watson kernel regression (bandwidth = 15) are presented in Figure \ref{fig:alpha_beta_mean}. As expected, the choice curve for the social condition exhibits a very similar pattern to that of the individual condition.\footnote{Note that, as I mentioned earlier, the experimental design is such that it can also identify errors related to the social interaction that happen in the second stage of decision-making and are separate from the other three types of error. So, one might argue that the small difference between choice probabilities on the right-hand side of Figure \ref{fig:alpha_beta_mean} is due to such errors. However, I believe this small difference is not due to a systematic error. Recall that in the previous section, I showed that the average reasoning error remains unchanged across conditions. So, I think the difference here might be just due to chance, or related to noise in the data (here, the analysis is at a more granular level than in the previous section).} \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{alpha_beta_mean.pdf}%
\caption{The non-parametric estimation of choice probabilities in the individual and social conditions} \label{fig:alpha_beta_mean}%
\end{figure} \section{Conclusions and Implications} \label{conclusion} Many economic decisions, from the most mundane ones, like choosing a diner for lunch, to the most important ones, like the adoption of a new technology by a firm, require making inferences about an underlying state of the world. In these situations, social interaction is an important source of information. Economic agents often interact with each other via observation of choices. Such social interactions affect people's beliefs and can help them to make informed decisions. The conventional assumption in economic models is that decision makers are Bayesian and have rational expectations about each other. However, individuals often deviate from these assumptions. Errors in social interactions may have severe consequences; for example, they may lead to a socially inefficient equilibrium when information is transmitted by observation. So, it is important to study and identify the sources of such errors. In this study, I conduct a series of laboratory experiments to uncover errors in learning from others' choices. I use a relatively simple and novel experimental setting to disentangle individual errors that are independent of the social environment from errors that are caused by the violation of rational expectations. In a within-subject design, I compare subjects' choices across an isolated condition and a social condition, and show that subjects make more errors in the presence than in the absence of social interaction, even when they receive informationally equivalent signals across the two conditions. That is, they neglect the provided information more when they interact with others than when they do not. To uncover the mechanism behind the additional neglect in the social condition, I design a series of treatment variations by exogenously manipulating the subject's knowledge about her neighbor. I find that the unexpected irrationality in the social condition is mainly driven by the uncertainty about other people's behavior: subjects behave as if they lack knowledge of how others make decisions based on their private signals. The implication of this result is that social interactions might not be as effective as one expects in theory. So, one should be careful when examining the effects of social interactions using field data. In econometric models of field data, researchers often impose the assumption of Bayesian rationality. While the need for these assumptions to overcome identification challenges is understandable, it has been unclear whether and how one can identify different types of errors in decision-making under observational learning. By introducing a unique experimental design, this paper sheds light on three categories of error that often emerge in social interactions: errors associated with belief updating, errors due to incorrect reasoning, and errors related to the violation of rational expectations. I show that measuring probabilistic beliefs is important because it provides extra data which can be utilized to identify new aspects of individual behavior.
However, to non-parametrically estimate the complete decision-making process, one needs extra variation in the data. While the estimation is specific to the context of this study, a similar analysis can be used to separate various errors in decision-making using field data. Identification of non-equilibrium beliefs is an important topic that has recently attracted substantial attention \shortcite{VictorArvind20,VictorErhao21}. This study extends this literature to situations where there are no strategic incentives between players and identifies the contribution of different errors to subjects' behavior. The results of this study can provide guidance to researchers and managers on how to benefit from measuring beliefs and incorporating them into their analyses. In many circumstances, marketers may observe inefficiencies in market behavior. For example, in the kidney market, kidney utilization may remain low despite the continual shortage in kidney supply. The results of this study highlight the role of decision-making errors in such situations. Understanding the sources of error can guide policy-makers in designing policies that nudge individuals towards socially optimal outcomes. \newpage \bibliographystyle{apacite}
\section{Introduction} \subsection{Crofton formulas} The classical Crofton formula computes the length of a rectifiable curve $\gamma$ in $\mathbb{R}^2$ as \begin{equation} \label{eq_crofton_2dim} \mathrm{Length}(\gamma)=\frac{\pi}{2} \int_{\overline{\Gr}_1(\mathbb{R}^2)} \#(\gamma \cap \overline L) d\overline L, \end{equation} where $\overline{\Gr}_1(\mathbb{R}^2)$ is the space of lines in $\mathbb{R}^2$ with a rigid motion invariant measure (normalized in a suitable way). A higher-dimensional version states that for $M \subset \mathbb{R}^n$ a compact submanifold with boundary, we have \begin{displaymath} \mu_k(M)=c_{n,k} \int_{\overline{\Gr}_{n-k}(\mathbb{R}^n)} \chi(M \cap \overline E)d\overline E, \end{displaymath} where $\overline{\Gr}_{n-k}(\mathbb{R}^n)$ is the Grassmann manifold of affine $(n-k)$-planes equipped with a rigid motion invariant measure, $\chi$ is the Euler characteristic, and $\mu_k(M)$ is the $k$-th intrinsic volume of $M$, which can be defined via Weyl's tube formula \cite{weyl_tubes}. The same formula also holds with the submanifold $M$ replaced by a compact convex body $K$, in which case the $k$-th intrinsic volume can be defined via Steiner's tube formula \cite{steiner}. We refer to \cite{klain_rota, schneider_book14} for more information about intrinsic volumes of convex bodies. More generally, we can take an arbitrary translation-invariant measure $m$ on $\overline{\Gr}_{n-k}(\mathbb{R}^n)$ and consider the integral \begin{displaymath} \mu(K):=\int_{\overline{\Gr}_{n-k}(\mathbb{R}^n)} \chi(K \cap \overline E) dm(\overline E). \end{displaymath} By the additivity of the Euler characteristic, we have \begin{displaymath} \mu(K \cup L)+\mu(K \cap L)=\mu(K)+\mu(L) \end{displaymath} whenever $K,L, K \cup L$ are compact convex bodies, hence $\mu$ is a valuation. Clearly $\mu$ belongs to the space $\Val$ of translation-invariant valuations which are continuous with respect to the Hausdorff metric. Additionally, $\mu$ is $k$-homogeneous and even, that is, invariant under $-\id$. We thus get a map \begin{displaymath} \Cr:\mathcal M(\overline{\Gr}_{n-k}(\mathbb{R}^n))^{tr} \to \Val_k^+, \end{displaymath} where $\mathcal M^{tr}$ denotes the space of translation-invariant measures. Alesker \cite{alesker_mcullenconj01} has shown that the image of this map is dense with respect to the natural Banach space topology on $\Val_k^+$. Therefore Crofton formulas are a central tool in the study of valuations and in integral geometry. When restricted to smooth measures and valuations (see Section \ref{sec_preliminaries} for the notion of smoothness of valuations), the map $\Cr$ is in fact a surjection \begin{displaymath} \Cr:\mathcal M^\infty(\overline{\Gr}_{n-k}(\mathbb{R}^n))^{tr} \twoheadrightarrow \Val_k^{\infty,+}, \end{displaymath} whose kernel coincides with the kernel of the cosine transform \cite{alesker_bernstein04}. Among the many applications of such formulas in integral geometry, we mention the construction of a basis of the space of unitarily invariant valuations on $\mathbb{C}^n$ by Alesker \cite{alesker03_un}, the interpretation of Alesker's product of smooth and even valuations in terms of Crofton measures \cite{bernig07}, and the Holmes-Thompson intrinsic volumes of projective metrics \cite{alvarez_fernandes}. Applications outside integral geometry include isoperimetric inequalities in Riemannian geometry \cite{croke84}, symplectic geometry \cite{oh90}, systolic geometry \cite{treibergs85}, algebraic geometry \cite{akhiezer_kazarnovskii}, and more.
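As a quick numerical illustration of \eqref{eq_crofton_2dim}: writing a line as $\{(x,y): x\cos\theta+y\sin\theta=p\}$ with $\theta\in[0,\pi)$, $p\in\mathbb{R}$, one normalization for which \eqref{eq_crofton_2dim} holds literally is $d\overline L=\frac{1}{\pi}\,dp\,d\theta$ (this choice of normalization is made only for this illustration). The following Monte-Carlo sketch then recovers the length $2\pi$ of the unit circle:
\begin{verbatim}
# Monte-Carlo check of the planar Crofton formula for the unit circle,
# with lines parametrized by (theta, p) and measure dp dtheta / pi.
import numpy as np

rng = np.random.default_rng(0)
n, R = 10**6, 2.0                       # lines with |p| <= R suffice here
theta = rng.uniform(0.0, np.pi, n)      # direction (irrelevant for a centered circle)
p = rng.uniform(-R, R, n)               # signed distance of the line to the origin
hits = np.where(np.abs(p) < 1.0, 2, 0)  # a line meets the unit circle twice iff |p| < 1
integral = (2.0 * R) * hits.mean()      # MC value of the Crofton integral
print(np.pi / 2 * integral)             # approx 6.283 = 2*pi
\end{verbatim}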
Crofton formulas are also employed outside of pure mathematics, in domains such as microscopy and stereology; see \cite{Kiderlen_Vedel_Jensen}. Crofton formulas exist not only on flat spaces, but also on manifolds. In this case, we need a family of sufficiently nice subsets, endowed with a measure. Then the Crofton integral is given by the integral of the Euler characteristic of the intersection with respect to the measure. Under certain conditions, given in \cite{fu_alesker_product}, it yields a smooth valuation on the manifold in the sense of Alesker \cite{alesker_val_man2}. On spheres and hyperbolic spaces, a natural class of subsets are the totally geodesic submanifolds of a fixed dimension, endowed with the invariant measure. On the $2$-dimensional unit sphere, we have a formula similar to \eqref{eq_crofton_2dim}, with affine lines replaced by equators. This formula is the main ingredient in the proof of the F\'ary-Milnor theorem that the total curvature of a knot in $\mathbb{R}^3$ is greater than $4\pi$ if the knot is non-trivial. In higher dimensions, the formula becomes slightly more complicated. On the $n$-dimensional unit sphere we have \begin{equation} \label{eq_spherical_crofton} \int_{\Geod_{n-k}(S^{n})} \chi(M \cap E)dE=\sum_j \frac{1}{\pi \omega_{k+2j-1}} \binom{-\frac{k}{2}}{j} \mu_{k+2j}(M). \end{equation} Here $\Geod_{n-k}(S^{n})$ denotes the space of totally geodesic submanifolds of dimension $(n-k)$, $\mu_j(M)$ is the $j$-th intrinsic volume of $M$ (which can be defined as the restriction of the $j$-th intrinsic volume on $\mathbb{R}^{n+1}$ under the isometric embedding $S^n \hookrightarrow \mathbb{R}^{n+1}$), and $\omega_n$ denotes the volume of the $n$-dimensional unit ball. A similar formula holds on hyperbolic space. See \cite{fu_wannerer, fu_barcelona} for more on the integral geometry of real space forms. Moving on to Lorentzian signature, few results are available. The main challenge to overcome is the non-compactness of the isotropy group, which in general renders the Crofton integral divergent. Some special Crofton-type formulas in Lorentzian spaces of constant curvature, applicable under certain rather restrictive geometric conditions, appeared in \cite{birman84, langevin_chaves_bianconi03, solanes_teufel, ye_ma_wang16}. \subsection{Results} We are going to prove Crofton formulas on flat spaces, spheres and hyperbolic spaces of arbitrary signatures. Let us recall the definition of these manifolds, referring to \cite{oneill,wolf61} for more information. \begin{Definition}\label{def:space_forms} \begin{enumerate} \item The \emph{pseudo-Euclidean space of signature $(p,q)$} is $\mathbb{R}^{p,q}=\mathbb{R}^{p+q}$ with the quadratic form $Q=\sum_{i=1}^p dx_i^2-\sum_{i=p+1}^{p+q} dx_i^2$. \item The \emph{pseudosphere of signature $(p,q)$ and radius $r>0$} is \begin{displaymath} S_r^{p,q}=\{v \in \mathbb{R}^{p+1,q}:Q(v)=r^2\}, \end{displaymath} equipped with the induced pseudo-Riemannian metric. Its sectional curvature equals $\sigma=\frac{1}{r^2}$. The pseudosphere $S_1^{n,1}\subset \mathbb{R}^{n+1,1}$ is called \emph{de Sitter space}. \item The \emph{pseudohyperbolic space of signature $(p,q)$ and radius $r>0$} is \begin{displaymath} H_r^{p,q}=\{v \in \mathbb{R}^{p,q+1}:Q(v)=-r^2\}. \end{displaymath} Its sectional curvature equals $\sigma=-\frac{1}{r^2}$. The pseudohyperbolic space $H_1^{n,1}$ is called the \emph{anti-de Sitter space}. \end{enumerate} \end{Definition} We will colloquially call these spaces \emph{generalized pseudospheres}.
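For instance, $S_r^{n,0}$ is the round $n$-sphere of radius $r$; $H_r^{n,0}\subset \mathbb{R}^{n,1}$ is the two-sheeted hyperboloid, each connected component of which is a copy of hyperbolic $n$-space of curvature $-\frac{1}{r^2}$; and de Sitter space $S_1^{n,1}$ is a Lorentzian manifold of constant curvature $1$.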
The isometry groups of generalized pseudospheres are given by \begin{align*} \Isom(\mathbb{R}^{p,q}) & \cong \overline{\OO}(p,q)=\OO(p,q) \ltimes \mathbb{R}^{p,q},\\ \Isom(S_r^{p,q}) & \cong \OO(p+1,q),\\ \Isom(H_r^{p,q}) & \cong \OO(p,q+1). \end{align*} In each case the action is transitive, and the isotropy group is conjugate to $\OO(p,q)$. These spaces are \emph{isotropic} in the sense that the isotropy group acts transitively on the level sets of the metric in the tangent bundle. \begin{Definition} A complete connected pseudo-Riemannian manifold of constant sectional curvature is called a \emph{space form}. \end{Definition} Up to taking connected components and universal coverings, any space form is a generalized pseudosphere (cf. \cite[Chapter 8, Corollary 24]{oneill}). On a generalized pseudosphere $M$, we will formulate Crofton formulas using the space $\Geod_{n-k}(M)$ of totally geodesic subspaces. However, there is no isometry-invariant Radon measure on this space. Therefore we will use an isometry-invariant \emph{generalized} measure (also called a distribution). This causes some technical problems, as the function that we want to integrate is not smooth. Nevertheless, in many cases the integral can still be evaluated. The result is not a valuation anymore, but a \emph{generalized} valuation in the sense of Alesker \cite{alesker_val_man4}. The Crofton map is then a map \begin{displaymath} \Cr:\mathcal M^{-\infty}(\Geod_{n-k}(M)) \to \mathcal V^{-\infty}(M). \end{displaymath} In the Riemannian case, any isometry-invariant valuation admits an invariant Crofton measure. The corresponding statement in other signatures is also true, but much harder to prove. The second-named author proved this statement in \cite{faifman_crofton}, first for certain signatures by an explicit, but difficult, computation, and then by using the behavior of Crofton formulas under restrictions and projections to handle the general case. Furthermore, with the exception of Riemannian and Lorentzian signatures, the space of isometry-invariant generalized measures is of greater dimension than the space of isometry-invariant valuations. Thus we are forced to choose a distribution, and must take care to avoid the kernel of the Crofton map. Using results by Muro \cite{muro99} on analytic families of homogeneous generalized functions on the space of symmetric matrices, one can construct such an invariant generalized measure on $\Geod_{n-k}(M)$. We construct a particular generalized measure $m_k$ with the distinguishing property that it behaves well under restrictions of Crofton measures (see subsection \ref{sec:restriction_crofton}), and is independent (in a precise sense) of signature and dimension. There is some freedom in the normalization of a Crofton measure. We choose the normalization in such a way that the first coefficient in the Crofton formulas will always be $1$. Our Crofton formula will evaluate \begin{displaymath} \Cr_k^M:=\Cr_M(\widehat m_k), \end{displaymath} where $\sigma \neq 0$ is the curvature of $M$ and $\widehat m_k:=\pi\omega_{k-1}\sqrt{\sigma^{-1}}^k m_k$. The flat case $\sigma=0$ appears through a careful limiting procedure. The right hand side of the Crofton formula will be expressed in terms of the recently introduced intrinsic volumes on pseudo-Riemannian manifolds \cite{bernig_faifman_solanes}, which are complex-valued generalized valuations on $M$.
They satisfy a Hadwiger-type classification \cite{bernig_faifman_solanes_part2}, which allows us to use the template method to compute the coefficients in this formula. However, the resulting computations lead to distributional integrals on the space of symmetric matrices, which can be evaluated directly essentially only in the Lorentzian signature. To conclude the general case, we use techniques of meromorphic continuation and distributional boundary values of analytic functions. Due to the functorial properties of the constructed Crofton distribution mirroring those of the intrinsic volumes, namely their adherence to the Weyl principle, the resulting Crofton formulas are signature-independent. Remarkably, they are also holomorphic, i.e., they involve only the intrinsic volumes and not their complex conjugates. \begin{MainTheorem} Let $M$ be a generalized pseudosphere of sectional curvature $\sigma$. Then \begin{displaymath} \Cr_k^M= \sum_{j \geq 0} \frac {\omega_{k-1}} {\omega_{k+2j-1}} {-\frac k 2 \choose j}\sigma^{j} \mu_{k+2j}. \end{displaymath} \end{MainTheorem} The Crofton formulas should be understood formally, namely as the correspondence between distributions on the Grassmannian and the intrinsic volumes through an abstractly defined Crofton map. However, they can also be interpreted as explicit Crofton-type formulas applicable to sufficiently nice subsets of the generalized pseudospheres. By a strictly convex subset of a non-flat generalized pseudosphere $M^m$ we mean its intersection with a strictly convex cone in $\mathbb{R}^{m+1}$, with $M\subset\mathbb{R}^{m+1}$ embedded as in Definition \ref{def:smooth_crofton}. For the Riemannian round sphere and hyperbolic space, this coincides with the standard definition of strict convexity. \begin{Corollary} Let $A \subset M$ be a smooth and strictly convex domain in $M$. Then the generalized measure $\widehat m_k$ can be applied to the function $E \mapsto \chi(A \cap E), E \in \Geod_{n-k}(M)$, and \begin{displaymath} \int_{\Geod_{n-k}(M)} \chi(A \cap E) d\widehat m_k(E)=\sum_{j \geq 0} \frac{\omega_{k-1}} {\omega_{k+2j-1}} {-\frac k 2 \choose j}\sigma^{j} \mu_{k+2j}(A). \end{displaymath} \end{Corollary} Note that the spherical Crofton formula \eqref{eq_spherical_crofton} is a special case of our theorem. We also note that we will prove a slightly more general statement in Corollary \ref{cor:can_compute_on_LC}, using the notion of LC-regularity from \cite{bernig_faifman_solanes}. \subsection{Plan of the paper} After covering the preliminaries, we turn in Section \ref{sec:crofto} to the study of general Crofton formulas with a distributional Crofton measure, utilizing the Alesker-Radon transform on valuations. In particular, we study under which conditions such formulas can be applied directly to a given subset. In Section \ref{sec:LC_crofton_WF} we consider LC-regular domains and hypersurfaces of space forms, and deduce that they are in good position for the evaluation of intrinsic volumes through Crofton integrals, once the corresponding distributions are constructed. The latter construction is carried out in Section \ref{sec:opq_distributions}. Moreover, these distributions are embedded in a meromorphic family of measures on a complex domain as a distributional boundary value, and some delicate, though central to our analysis, convergence questions are investigated and settled.
Finally, in Section \ref{sec:template}, the Hadwiger-type description of intrinsic volumes combined with the template method is applied to yield the explicit Crofton formulas in all cases. \subsection*{Acknowledgments} We wish to thank Gautier Berck for several inspiring talks and discussions. \section{Preliminaries} \label{sec_preliminaries} \subsection{Notations} By \begin{displaymath} \omega_n=\frac{\pi^\frac{n}{2}}{\Gamma\left(\frac{n}{2}+1\right)} \end{displaymath} we denote the volume of the $n$-dimensional unit ball. The space of smooth $k$-forms on a manifold is denoted by $\Omega^k(M)$. The space of smooth measures on $M$ is $\mathcal M^\infty(M)$. The space of generalized measures, also called distributions, is denoted by \begin{displaymath} \mathcal M^{-\infty}(M):=(C^\infty_c(M))^*, \end{displaymath} where here and in the following the subscript $c$ denotes compactly supported objects. Similarly, for $m=\dim M$, we denote the space of $k$-dimensional currents on $M$ by $$\Omega_{-\infty}^{m-k}(M):=(\Omega^{k}_c(M))^*.$$ The elements of this space can also be thought of as generalized $(m-k)$-forms. For an oriented $k$-dimensional submanifold $X \subset M$, we let $\llbracket X\rrbracket$ be the $k$-current given by integration over $X$. By $\mathbb{P}_M:=\mathbb P_+(T^*M)$ we denote the cosphere bundle of $M$, which consists of all pairs $(p,[\xi]), p \in M, \xi \in T_p^*M \setminus 0$, where $[\xi]=[\xi']$ if there is some $\lambda>0$ with $\xi=\lambda \xi'$. When no confusion can arise, we use the same notation for subsets of $\mathbb P_M$ and their lifts to $T^*M$. The natural involution on $\mathbb{P}_M$ is the fiberwise antipodal map $s(p,[\xi]):=(p,[-\xi])$. The wave front set of a generalized form $\omega\in\Omega_{-\infty}(M)$ is a closed subset of $\mathbb{P}_M$, denoted by $\WF(\omega)$, and we refer to \cite{hoermander_pde1} or \cite{duistermaat_book96} for details. For a generalized pseudosphere $M$, we denote by $\Geod_k(M)$ the space of totally geodesic $k$-dimensional submanifolds of $M$. \subsection{Smooth valuations} Let $M$ be a smooth manifold of dimension $m$, which we assume oriented for simplicity. Let $\mathcal P(M)$ be the set of compact differentiable polyhedra on $M$. To $A\in\mathcal P(M)$ we associate two subsets of $\mathbb P_M$. The conormal cycle, denoted $\nc(A)$, is the union of all conormal cones to $A$. It is an oriented closed Lipschitz submanifold of dimension $(m-1)$, and naturally stratified by locally closed smooth submanifolds corresponding to the strata of $A$. The conormal bundle, denoted $N^*A$, is the union of the conormal bundles of all smooth strata of $A$. It holds that $\nc(A) \subset N^*A$. By definition, two stratified spaces intersect transversally if all pairs of smooth strata are transversal. A smooth valuation is a functional $\mu:\mathcal P(M)\to \mathbb{R}$ of the form \begin{displaymath} \mu(A)=\int_A \phi+\int_{\nc(A)} \omega, \quad \phi \in \Omega^m(M), \omega \in \Omega^{m-1}(\mathbb{P}_M). \end{displaymath} We will write $\mu=[[\phi,\omega]]$ in this case. The space of smooth valuations is denoted by $\mathcal V^\infty(M)$. It admits a natural filtration \begin{displaymath} \mathcal V^\infty(M)=\mathcal W_0^\infty(M) \supset \mathcal W_1^\infty(M) \supset \ldots \supset \mathcal W_m^\infty(M) =\mathcal M^\infty(M). \end{displaymath} It is compatible with the Alesker product of valuations.
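For example, any smooth measure $\phi\in\mathcal M^\infty(M)$ is a smooth valuation of the form $[[\phi,0]]$, in line with the equality $\mathcal W_m^\infty(M)=\mathcal M^\infty(M)$ above; the Euler characteristic is another basic example of a smooth valuation.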
\subsection{Generalized valuations} The space of generalized valuations is \begin{displaymath} \mathcal V^{-\infty}(M):=(\mathcal V^\infty_c(M))^*. \end{displaymath} By Alesker-Poincar\'e duality we have a natural embedding $\mathcal V^\infty(M) \hookrightarrow \mathcal V^{-\infty}(M)$. There is a natural filtration \begin{displaymath} \mathcal V^{-\infty}(M)=\mathcal W_0^{-\infty}(M) \supset \mathcal W_1^{-\infty}(M) \supset \ldots \supset \mathcal W_m^{-\infty}(M) =\mathcal M^{-\infty}(M). \end{displaymath} In particular, we may consider a generalized measure as a generalized valuation. A compact differentiable polyhedron $A$ defines a generalized valuation $\chi_A$ by \begin{displaymath} \langle \chi_A,\mu\rangle=\mu(A), \quad \mu \in \mathcal V_c^\infty(M). \end{displaymath} A generalized valuation $\psi$ can be represented by two generalized forms $\zeta \in C^{-\infty}(M), \tau \in \Omega_{-\infty}^m(\mathbb{P}_M)$ such that \begin{displaymath} \langle \psi,[[\phi,\omega]]\rangle=\langle \zeta,\phi\rangle+\langle \tau,\omega\rangle. \end{displaymath} We refer to $\zeta$ and $\tau$ as the \emph{defining currents}. For instance, the defining currents of $\chi_A, A \in \mathcal P(M)$ are $\zeta=\mathbbm 1_A, \tau=\llbracket \nc(A) \rrbracket$. The wave front set of $\psi$ is defined as the pair $\Lambda \subset \mathbb{P}_M, \Gamma \subset \mathbb{P}_{\mathbb{P}_M}$ of the wave front sets of $\zeta$ and $\tau$. The space of all generalized valuations with wave front sets contained in $\Lambda,\Gamma$ is denoted by $\mathcal V^{-\infty}_{\Lambda,\Gamma}(M)$. Consider $\psi \in \mathcal V^{-\infty}(M)$. We say that $A \in \mathcal P(M)$ is WF-transversal to $\psi$, denoted $A \pitchfork \psi$, if the conditions of \cite[Theorem 8.3]{alesker_bernig} hold for \begin{align*} (\Lambda_1, \Gamma_1) & :=\WF(\chi_A),\\ (\Lambda_2,\Gamma_2) & :=\WF(\psi). \end{align*} These conditions imply that Alesker's product of smooth valuations can be extended to a jointly sequentially continuous product \begin{displaymath} \mathcal{V}_{\Lambda_1,\Gamma_1}^{-\infty}(M) \times \mathcal{V}_{\Lambda_2,\Gamma_2}^{-\infty}(M) \to \mathcal{V}^{-\infty}(M), \end{displaymath} and in particular the pairing $\psi(A):=\langle \psi, \chi_A\rangle=\int_M \psi \cdot \chi_A$ is well-defined. Let us write a sufficient set of conditions in a particular case. \begin{Proposition} \label{prop:WF_transversal} Assume $\WF(\psi) \subset(N^*D, N^*L)$ for some submanifolds with corners $D\subset M$, $L\subset\mathbb{P}_M$, and take $A \in \mathcal P(M)$. Assume further \begin{enumerate} \item $A\pitchfork D$. \item $\nc(A)\pitchfork \pi^{-1}D$, where $\pi:\mathbb P_M\to M$ is the natural projection. \item $\pi^{-1}A\pitchfork L$. \item $\nc(A) \pitchfork s(L)$, where $s:\mathbb P_M\to\mathbb P_M$ is the antipodal map. \end{enumerate} Then $A\pitchfork \psi$. \end{Proposition} \proof These conditions imply the conditions in \cite[Theorem 8.3]{alesker_bernig}. \endproof \subsection{Intrinsic volumes on pseudo-Riemannian manifolds} In \cite{bernig_faifman_solanes} we constructed a sequence of complex-valued generalized valuations $\mu_0^M,\ldots,\mu_m^M$ naturally associated to a pseudo-Riemannian manifold $M$ of dimension $m$. They are invariant under isometries and called the \emph{intrinsic volumes of $M$}. The intrinsic volume $\mu_0$ equals the Euler characteristic, while $\mu_m$ is the volume measure of $M$, multiplied by $\mathbf i^q$ where $q$ is the negative index of the signature.
For other values of $k$, $\mu_k$ is typically neither real nor purely imaginary. The wave front set of $\mu_k$ is contained in $(\emptyset,N^*(\LC^*_M))$, where $\LC^*_M \subset \mathbb{P}_M$ is the dual light cone of the metric, i.e. the set of all pairs $(p,[\xi]) \in \mathbb{P}_M$ such that $g|_p(\xi,\xi)=0$. Here we use the metric to identify $TM$ and $T^*M$. A subset $A\in\mathcal P(M)$ is \emph{LC-transversal} if $\nc(A)\pitchfork \LC^*_M$. \begin{Lemma} Assume $D=\emptyset, L=\LC^*_M \subset \mathbb{P}_M$, and $A \in \mathcal P(M)$. Then the conditions of Proposition \ref{prop:WF_transversal} are equivalent to the LC-transversality of $A$. In particular, the intrinsic volume $\mu_k$ may be evaluated at LC-transversal $A$. \end{Lemma} \proof The first two conditions hold vacuously, since $D=\emptyset$. The third condition is satisfied for arbitrary $A$, since the tangent space to $\pi^{-1}A$ contains all vertical directions, while the tangent space of $\LC^*_M$ contains all horizontal directions. The fourth condition is precisely LC-transversality. \endproof We will also need the notion of LC-regularity, which was introduced in \cite{bernig_faifman_solanes}. \begin{Definition} Let $X$ be a smooth manifold equipped with a smooth field $g$ of quadratic forms over $TX$. We say that $(X,g)$ is LC-regular if $0$ is a regular value of the function $g \in C^\infty(TX \setminus \underline{0})$, $v\mapsto g(v,v)$. \end{Definition} It was shown in \cite[Proposition 4.9]{bernig_faifman_solanes} that the extrinsic notion of LC-transversality and the intrinsic notion of LC-regularity coincide: a submanifold of a pseudo-Riemannian manifold, equipped with the field of quadratic forms induced from the metric, is LC-regular if and only if it is LC-transversal. The most important property of the intrinsic volumes is that they satisfy a Weyl principle: for any isometric immersion $M \looparrowright \widetilde M$ of pseudo-Riemannian manifolds we have \begin{displaymath} \mu_k^{\widetilde M}|_M=\mu_k^M, \end{displaymath} in particular the restriction on the left hand side is well-defined. Conversely, we have shown in \cite{bernig_faifman_solanes_part2} that any family of valuations associated to pseudo-Riemannian manifolds that satisfies the Weyl principle must be a linear combination of intrinsic volumes. \section{Distributional Crofton formulas}\label{sec:crofto} Let $M^m$ be a manifold. A Crofton formula for a smooth valuation $\phi\in\mathcal V^\infty(M)$ has the form $\phi(A)=\int_{S}\chi(X(s)\cap A)d\mu(s)$, where $S$ is a smooth manifold parametrizing a smooth family of submanifolds of $M$, and $\mu$ is a smooth measure on $S$. Similarly, a distributional Crofton formula has $\phi\in\mathcal V^{-\infty}(M)$, and $\mu$ is a distribution. In this section we study some general properties of such formulas, when $M\subset V\setminus\{0\}$ is a submanifold without boundary in a $d$-dimensional linear space $V$, and $S=\Gr_{d-k}(V), k <d, X(s)=s \cap M, s \in S$. We utilize the Radon transform on valuations, introduced in \cite{alesker_intgeo}. Loosely speaking, the Crofton map is simply the Radon transform of a measure with respect to the Euler characteristic. However, there are technical difficulties in applying this formalism directly to distributions, and a large part of this section is concerned with resolving those difficulties. The main results to this end are Propositions \ref{prop:smooth_on_grassmannian} and \ref{prop:yomdin}.
In the last part, we describe the Crofton wave front of sufficiently nice sets in Proposition \ref{prop:base_of_induction}, which controls the applicability of an explicit Crofton integral to the given set. \subsection{The general setting}\label{sec:crofton_functorial} \label{Preliminaries} For a submanifold with corners $X\subset M$, define $Z_X\subset X\times \Gr_{d-k}(V)$ by $Z_X=\{(x,E): x\in X\cap E\}$. Then $Z_X$ is a manifold with corners; more precisely, it is the total space of the fiber bundle over $X$ with fiber $\Gr_{d-k-1}(V/\mathbb{R} x)$ at $x\in X$. Write \begin{displaymath} \xymatrix@=1em{ & Z_X \ar[dl]_{\pi_X} \ar[dr]^{\tau_X} & \\ X & & \Gr_{d-k}(V)} \end{displaymath} for the natural projections. Denote by $W_X\subset\Gr_{d-k}(V)$ the set of subspaces intersecting $X$ transversally in $V$. We will need a simple fact from linear algebra, which we state in a rather general form that will be useful for us in several places. \begin{Lemma}\label{lem:pair_of_spaces} Let $V$ be a vector space, $L_0 \in \Gr_l(V), E_0 \in \Gr_k(V)$ with $L_0 \subset E_0$. Denote by $i:L_0\hookrightarrow E_0$ the inclusion, and $\pi: V/L_0\to V/E_0$ the projection. \begin{enumerate} \item Let $E(t)\in\Gr_k(V)$ be a smooth path with $E(0)=E_0$ and $A:L_0\to V/L_0$ a linear map. Then there is a smooth path $L(t) \in \Gr_l(E(t))$ with $L(0)=L_0$ and $L'(0)=A$ if and only if the following diagram commutes: \begin{displaymath} \xymatrix{L_0 \ar[r]^-{A} \ar[d]^{i} & V/L_0 \ar[d]^{\pi}\\ E_0 \ar[r]^-{E'(0)} & V/E_0\\ } \end{displaymath} \item Let $L(t) \in \Gr_l(V)$ be a smooth path with $L(0)=L_0$. Let $B:E_0\to V/E_0$ be a linear map. Then there is a smooth path $E(t) \in \Gr_k(V)$ with $L(t) \subset E(t)$, $E(0)=E_0$ and $E'(0)=B$ if and only if the following diagram commutes: \begin{displaymath} \xymatrix{L_0 \ar[r]^-{L'(0)} \ar[d]^{i} & V/L_0 \ar[d]^{\pi}\\ E_0 \ar[r]^-{B} & V/E_0\\ } \end{displaymath} \end{enumerate} \end{Lemma} \begin{Remark} The ``only if'' statement obviously remains true if instead of $L(t)\subset E(t)$, we have $\measuredangle (L(t), E(t))=o(t)$ with respect to any Euclidean structure. \end{Remark} \proof Consider the partial flag manifold $Z=\{ L\subset E\}\subset \Gr_l(V)\times\Gr_k(V)$. The group $\GL(V)$ acts transitively on $Z$, and any smooth path $F(t)=(L(t) \subset E(t))\in Z$ can be lifted to a smooth curve $g(t)\in\GL(V)$ with $g(0)=\id$ and $F(t)=g(t)F(0)$. Thus $E'(0):E_0\to V/E_0$ and $L'(0):L_0\to V/L_0$ are both projections of $g'(0):V\to V$, and the diagram commutes. In the other direction, write $\pi_W:V\to V/W$ for the natural projection. It follows by the above that the set of velocity vectors $L'(0)$ for all curves $L(t)\subset E(t)$ is the affine space $\{\pi_{L_0}\circ T|_{L_0}\in\mathrm{Hom}(L_0, V/L_0): T\in\mathfrak{gl}(V), \pi_{E_0}\circ T|_{E_0}=E'(0)\}$, which is of dimension $\binom{k}{2}-\binom{l}{2}-\binom{k-l}{2}=l(k-l)$. This is also the dimension of the affine space of all $A$ such that the diagram commutes, which finishes the proof of the first part. The second part follows from the first one by taking orthogonal complements. \endproof We need the following technical statement appearing in \cite[Proposition 5.1.3]{alesker_intgeo}. \begin{Lemma}\label{lem:radon_condition} The natural projection $\pi:N^*Z_M\setminus 0\to T^*M\setminus 0$ is a submersion. \end{Lemma} \proof Let $(p_t,\xi_t)$ be a smooth path in $T^*M\setminus 0$.
We will lift it to a smooth path $(p_t,E_t,\xi_t,\eta_t)\in T^*(M\times\Gr_{d-k}(V))$ such that $p_t \in E_t$, and $(\xi_t,\eta_t)\in N^*_{p_t,E_t}Z_M$. Now for $v\in T_pM$, $B \in T_E\Gr_{d-k}(V)=\mathrm{Hom}(E,V/E)$, we have by Lemma \ref{lem:pair_of_spaces} (applied with $l=1, L_0=\mathbb{R} p$) that $(v,B)\in T_{p,E}Z_M$ if and only if $v+E=B(p)$. Hence \begin{displaymath} N^*_{p,E} Z_M=\{(\xi,\eta) \in T_p^*M \times T_E^*\Gr_{d-k}(V): \langle \xi,v\rangle+\langle \eta,B\rangle=0 \text{ whenever } v+E=B(p)\}. \end{displaymath} Fix a Euclidean structure on $V$, inducing Euclidean structures on the spaces $\mathrm{Hom}(E_t, V/E_t)$. Let us choose some $E_t$ such that $p_t\in E_t$, and $T_{p_t}(M\cap E_t)\subset \Ker(\xi_t)$, which evidently can be done. Consider the linear subspace \begin{displaymath} W_t=\{ B \in T_{E_t}\Gr_{d-k}(V): B(p_t)\in T_{p_t}M+E_t\}, \end{displaymath} and for each $B \in W_t$ define $\langle \eta_t,B\rangle=-\langle \xi_t,B(p_t)\rangle$, which is well-defined as $T_{p_t}M\cap E_t \subset \ker \xi_t$. Extend $\eta_t$ by zero to $W_t^\perp$. It follows that $(\xi_t,\eta_t)\in N^*_{p_t,E_t}Z_M$, completing the proof. \endproof It follows by \cite[Corollary 4.1.7]{alesker_intgeo} that the Radon transforms with respect to the Euler characteristic, $\mathcal R_M=(\tau_M)_*\pi_M^*:\mathcal V_c^{-\infty}(M)\to \mathcal V^{-\infty}(\Gr_{d-k}(V))$ and $\mathcal R_M^T=(\pi_M)_*\tau_M^*:\mathcal V^\infty(\Gr_{d-k}(V))\to \mathcal V^\infty(M)$, are well-defined and continuous. \begin{Definition} For any $\phi\in\mathcal V_c^{-\infty}(M)$, let $\widehat \phi\in C^{-\infty}(\Gr_{d-k}(V))$ be the defining current of $\mathcal R_M\phi$ (on the base manifold). Equivalently, using \cite[Proposition 7.3.6]{alesker_val_man4} we have \begin{displaymath} \widehat\phi=[\mathcal R_M\phi]\in\mathcal W_0^{-\infty}(\Gr_{d-k}(V))/\mathcal W_1^{-\infty}(\Gr_{d-k}(V))=C^{-\infty}(\Gr_{d-k}(V)). \end{displaymath} \end{Definition} \begin{Remark}\label{rem:radon_fail1} It is false in general that $\widehat \phi$ is a smooth function when $\phi$ is a smooth valuation, see Remark \ref{rem:radon_fail2}. \end{Remark} \begin{Definition}\label{def:smooth_crofton} The Crofton map $\Cr_M: \mathcal M^{\infty}(\Gr_{d-k}(V))\to \mathcal W_k^\infty(M)$ is the restriction of $\mathcal R_M^T$ to $\mathcal M^\infty(\Gr_{d-k}(V))$. More explicitly, \begin{displaymath} \Cr_M(\mu)(X)=\int_{\Gr_{d-k}(V)}\widehat \chi_X(E)d\mu(E), \quad X \in \mathcal P(M). \end{displaymath} \end{Definition} We will see in Proposition \ref{prop:yomdin} below that $\widehat \chi_X(E)=\chi(X \cap E)$. \subsection{Distributional Crofton measures} To allow distributional Crofton measures, it seems essential to require that all intersections $E\cap M$ are transversal, for $E\in\Gr_{d-k}(V)$. This is easily seen to be equivalent, for any $k>0$, to having $\mathbb{R} x\oplus T_xM=V$ for all $x\in M$. In particular $\dim V=\dim M+1=m+1$. By further restricting attention to connected manifolds, we deduce that $M$ is a hypersurface that is locally diffeomorphic to an open subset of $\mathbb P_+(V)$ through the radial projection. In other words, $M$ is locally a strictly star-shaped hypersurface around the origin. \begin{Proposition}\label{prop:smooth_on_grassmannian} \begin{enumerate} \item For all $0\leq k\leq m$ and $\psi \in\mathcal V^\infty_c(M)$, it holds that $E \mapsto \psi(E\cap M)$ is a smooth function on $\Gr_{m+1-k}(V)$. \item The image in $C^{-\infty}(\Gr_{m+1-k}(V))$ of this function equals $\widehat\psi$. 
\end{enumerate} \end{Proposition} \proof \begin{enumerate} \item Let us first show $E\mapsto\psi(E\cap M)$ is smooth. By choosing an open cover of $M$ by star-shaped charts, and using the partition of unity property of smooth valuations \cite{alesker_val_man4}, we may assume $M$ projects diffeomorphically to an open subset of $\mathbb P_+(V)$, which we henceforth identify with $M$. By Boman's theorem \cite{boman67}, it suffices to prove that $\psi(E_t \cap M)$ is a smooth function of $t\in (-\epsilon,\epsilon)$ for all smooth curves $E_\bullet:(-\epsilon,\epsilon)\to \Gr_{m+1-k}(V)$. It suffices in fact to show smoothness in some open interval around $0$ for any such given curve. Let us lift $E_t$ to a smooth curve $g_t\in \mathrm{GL}(V)$ with $g_0=\id$ and $E_t=g_tE_0$. Then $\psi(E_t\cap M)= g_t^*\psi(E_0 \cap M)$ for sufficiently small $t$ such that $g_t(\Supp(\psi))\subset M$, establishing the first part. \item Let us check $\psi(\bullet \cap M)=\widehat\psi$ in $C^{-\infty}(\Gr_{m+1-k}(V))$. Take $\mu\in\mathcal M^\infty(\Gr_{m+1-k}(V))$, and write \begin{displaymath} \mu=\int_{\Gr_{m+1-k}(V)}\delta_Ed\mu(E)= \int_{\Gr_{m+1-k}(V)}\chi_{\{E\}}d\mu(E)\in\mathcal V^\infty(\Gr_{m+1-k}(V)). \end{displaymath} \textit{Claim.} $\tau_M^{-1}E\pitchfork \pi_M^*\psi$. To see this, write $Z=Z_M$ and identify $W:=Z\times_M\mathbb P_M$ with its image in $\mathbb P_{Z}$ under $d\pi_M^*$. Explicitly, $W|_{(x,E)}=\mathbb P_+(\mathrm{Ker}(d_{(x,E)}\pi_M)^\perp)$, so $W$ is the union of the conormal bundles to all fibers of $\pi_M$. It follows from \cite[Proposition 3.3.3]{alesker_intgeo} that $\WF(\pi_M^*\psi)\subset(\emptyset, N^*W)$. By Proposition \ref{prop:WF_transversal}, it suffices to check that two intersections in $\mathbb P_{Z}$ are transversal: $\pi^{-1}(\tau_M^{-1}E)\pitchfork W$ and $N^*(\tau_M^{-1}E)\pitchfork W$. Denote $z=(x,E)$, and let $(z,\zeta)$ be an intersection point. The first intersection is easy to analyze: $T_{z,\zeta}\pi^{-1}(\tau_M^{-1}E)$ contains all vertical directions of $\mathbb P_{Z}$, while $T_{z,\zeta}W$ contains all horizontal directions. To analyze the second intersection, we lift all manifolds from $\mathbb P_Z$ to $T^*Z$, and retain all notation for the corresponding objects. As in the previous case, the image of $T_{z,\zeta}W$ under the natural projection $\pi:T_{z,\zeta} T^*Z\to T_zZ$ is all of $T_zZ$, and so it suffices to show $T_{z,\zeta}(T_z^*Z)\subset T_{z,\zeta}W+ T_{z,\zeta}N^*(\tau_M^{-1}E)$. Since $N_z^*(\pi_M^{-1}x) \subset W$, it suffices to show that \begin{displaymath} T_{z,\zeta}(T_z^*Z)\subset T_{z,\zeta}N_z^*(\pi_M^{-1}x)+ T_{z,\zeta}N_z^*(\tau_M^{-1}E), \end{displaymath} which is the same as \begin{displaymath} T_z^*Z\subset N_z^*(\pi_M^{-1}x)+ N_z^*(\tau_M^{-1}E)=N_z^*(T_z\pi_M^{-1}x\cap T_z\tau_M^{-1}E). \end{displaymath} The proof of the claim is completed by noting that the intersection $T_z\pi_M^{-1}x\cap T_z\tau_M^{-1}E$ is trivial. Consider the set \begin{displaymath} X:=\left\{(E,[\xi]): E \in \Gr_{m+1-k}(V), [\xi] \in \WF(\chi_{\tau_M^{-1}E})\right\} \subset \Gr_{m+1-k}(V) \times \mathbb{P}_+\mathbb{P}_Z. \end{displaymath} We claim that it is compact. If $X \subset \bigcup_{i \in I} U_i$ is an open cover, then for each $E \in \Gr_{m+1-k}(V)$ we find a finite subcover $X_E \subset \bigcup_{i \in I_E} U_i$ of the compact set \begin{displaymath} X_E:=X\cap (\{E\} \times \mathbb{P}_+\mathbb{P}_Z)=\{E\} \times \WF(\chi_{\tau_M^{-1}E}).
\end{displaymath} The map $g \mapsto X_{gE}$ is $\GL(V)$-equivariant, hence there exists some open neighborhood $V_E \subset \Gr_{m+1-k}(V)$ of $E$ such that $X_{E'} \subset \bigcup_{i \in I_E} U_i$ for all $E' \in V_E$. Now $\Gr_{m+1-k}(V)$ is compact, hence finitely many $V_{E_j}$ cover $\Gr_{m+1-k}(V)$. Then $X \subset \bigcup_j \bigcup_{i \in I_{E_j}} U_i$ is a finite subcover, proving the claim. The image of $X$ in $\mathbb{P}_+ \mathbb{P}_Z$ is then a compact set disjoint from $\WF(\pi_M^*\psi)$. Thus we can find a closed cone $\Gamma\subset T^*\mathbb P_{Z}\setminus \underline 0$ such that for all $E\in\Gr_{m+1-k}(V)$, $\chi_{\tau_M^{-1}E}\in\mathcal V^{-\infty}_{(\emptyset, \Gamma)}(Z)$, and $\pi_M^*\psi$ acts as a continuous functional on the latter space. Thus we can write \begin{align*} \langle \widehat \psi,\mu\rangle= \langle \pi_M^*\psi, \tau_M^*\int_{\Gr_{m+1-k}(V)} \chi_{\{E\}}d\mu(E)\rangle = \int_{\Gr_{m+1-k}(V)} \langle \pi_M^*\psi,\chi_{\tau_M^{-1}(E)}\rangle d\mu(E). \end{align*} It remains to check that \begin{equation}\label{eq:psi_hat} \langle \pi_M^*\psi,\chi_{\tau_M^{-1}(E)}\rangle= \psi(E\cap M). \end{equation} For a compact submanifold with boundary $A \subset M$ that is transversal to $E\cap M$, we have by \cite[Theorem 5]{alesker_bernig}, \begin{displaymath} \langle \pi_M^*\chi_A,\chi_{\tau_M^{-1}(E)}\rangle=\chi(\pi_M^{-1}A\cap \tau_M^{-1}(E))=\chi(A\cap E)=\chi_A(E\cap M). \end{displaymath} It follows by linearity that any smooth valuation of the form $\psi=\int_{\mathcal A} \chi_Ad\nu(A)$, where $\mathcal A$ is a family of submanifolds $A$ as above and $\nu$ a smooth measure, satisfies \eqref{eq:psi_hat}. This family $\mathbf{Cr}_E$ of valuations spans a dense subset of $\mathcal V^\infty(M)$. Indeed, we may approximate $\chi_A$ in $\mathcal V^{-\infty}(M)$ by a sequence in $\mathbf{Cr}_E$ for any $A$ transversal to $E\cap M$. Were $\mathbf{Cr}_E$ not dense, by Alesker-Poincar\'e duality one could find a non-zero smooth valuation $\phi$ annihilating $\mathbf{Cr}_E$, and thus also vanishing on all submanifolds with boundary that are transversal to $E\cap M$. By the genericity of transversality and continuity, $\phi$ would vanish on all submanifolds with boundary. But this is impossible by \cite{bernig_broecker07}. It follows that equality in \eqref{eq:psi_hat} holds for all $\psi$. \end{enumerate} \endproof \begin{Corollary}\label{coro:smooth_on_grassmannian} The map $\Gr_{m+1-k}(V)\to \mathcal V^{-\infty}(M)$, $E\mapsto \chi_{E\cap M}$ is smooth, and for $\mu\in\mathcal M^\infty(\Gr_{m+1-k}(V))$ it holds that $\Cr(\mu)=\int_{\Gr_{m+1-k}(V)}\chi_{E\cap M}d\mu(E)$. \end{Corollary} \begin{Proposition} \label{prop_extension_crofton_map} The map $\Cr:\mathcal M^\infty(\Gr_{m+1-k}(V))\to\mathcal W_k^\infty(M)$ extends to a continuous map $\Cr: \mathcal M^{-\infty}(\Gr_{m+1-k}(V))\to \mathcal W_k^{-\infty}(M)$, by setting, for all $\psi \in \mathcal V^\infty_c(M)$, \begin{displaymath} \left\langle \Cr(\mu), \psi\right\rangle := \int_{\Gr_{m+1-k}(V)} \psi(E\cap M)d\mu(E). \end{displaymath} \end{Proposition} \proof The right hand side is well-defined for a generalized measure $\mu$, since the function $E \mapsto \psi(E \cap M)$ is smooth by Proposition \ref{prop:smooth_on_grassmannian}. Take $\mu\in\mathcal M^\infty(\Gr_{m+1-k}(V))$, $\psi\in\mathcal V_c^\infty(M)$.
To verify that this new definition extends the smooth one, we ought to check that \begin{displaymath} \langle (\pi_M)_*\tau_M^*\mu, \psi\rangle = \int_{\Gr_{m+1-k}(V)} \psi(E\cap M)d\mu(E), \end{displaymath} which is the content of Corollary \ref{coro:smooth_on_grassmannian}. Continuity is equally evident. \endproof \begin{Remark} \label{rem:radon_fail2} It is tempting to define $\Cr(\mu)$ as a Radon transform: $\Cr(\mu)=\mathcal R_M^T\mu$, as defined in \cite{alesker_intgeo}. Unfortunately the conditions of \cite[Corollary 4.1.7]{alesker_intgeo}, which guarantee that the transform is well-defined on generalized valuations, do not hold for general $k$, as can be seen by a simple dimension count. \end{Remark} \subsection{Functorial properties of Crofton measures} \label{sec:restriction_crofton} The following is a partial summary of the results of \cite[Appendix B]{faifman_crofton} (adapted from the affine to the linear Grassmannian), to which we refer the reader for further details. Let $j: U^r\hookrightarrow V^d$ be an inclusion of a linear subspace. There is then a well-defined operation of restriction \begin{displaymath} j^*:\mathcal M^\infty(\Gr_k(V)) \to \mathcal M^\infty(\Gr_{k-(d-r)}(U)), \end{displaymath} which is the pushforward under the (almost everywhere defined) map $J_U: E\mapsto j^{-1}(E)=E\cap U$. Let $S_U\subset\Gr_{k}(V)$ be the collection of subspaces intersecting $U$ non-generically, and fix a closed cone $\Gamma \subset T^*\Gr_k(V)\setminus 0$ such that $\Gamma\cap N^*S_U=\emptyset$. Given $k\geq d-r$, let $\mathcal M^{-\infty}_\Gamma(\Gr_k(V))$ denote the set of generalized measures (distributions) $\mu$ whose wave front sets lie in $\Gamma$, equipped with the H\"ormander topology. The map $j^*$ extends as a sequentially continuous map \begin{displaymath} j^*:\mathcal M_\Gamma^{-\infty}(\Gr_k(V)) \to \mathcal M^{-\infty}(\Gr_{k-(d-r)}(U)). \end{displaymath} Similarly, if $\pi :V\to W$ is a quotient map, there is a natural pushforward operation \begin{displaymath} \pi_*: \mathcal M^\infty(\Gr_k(V))\to \mathcal M^\infty(\Gr_{k}(W)), \end{displaymath} which is the pushforward under the (almost everywhere defined) map $\Pi_W: E\mapsto \pi(E)$. It extends to distributions whose wave front sets are disjoint from the conormal cycle of the collection of subspaces intersecting $\Ker \pi$ non-generically. The following proposition captures the intuitively obvious fact that the pullback of distributions/valuations under embeddings commutes with the Crofton map. We prove a weak version which suffices for our purposes. Recall that $M$ is a locally star-shaped hypersurface around the origin. \begin{Proposition}\label{prop:restrictions_commute} Take a submanifold $M^r\subset V^d$, a linear subspace $j: U\hookrightarrow V$ such that $Z:=M\cap j(U)$ is a submanifold, and a distribution $\mu\in\mathcal M_\Gamma^{-\infty}(\Gr_{d-k}(V))$. Assume $\Cr_M(\mu)$ is transversal to $Z$ in the sense of \cite[Definition 3.5.2]{alesker_intgeo}. Then $\Cr_M(\mu)|_Z=\Cr_Z(j^*\mu)$. \end{Proposition} \proof Choose an approximate identity $\rho_i\in\mathcal M^\infty(\GL(V))$, and set $\mu_i=\mu\ast\rho_i\in\mathcal M^\infty(\Gr_{d-k}(V))$. For all $A\in \mathcal P(Z)$ we have \begin{displaymath} \Cr_M(\mu_i)(A)= \int_{\Gr_{d-k}(V)} \chi(A \cap E)d\mu_i(E)=\int_{\Gr_{r-k}(U)} \chi(A \cap E)d((J_U)_*\mu_i)(E), \end{displaymath} and therefore $\Cr_M(\mu_i)|_Z=\Cr_Z(j^*\mu_i)$.
The restriction of valuations to a submanifold is continuous in the H\"ormander topology on the space of valuations with wave front set contained in $\WF(\Cr_M(\mu))$, see \cite[Claim 3.5.4]{alesker_intgeo}. Thus the left hand side weakly converges to $\Cr_M(\mu)|_Z$. The right hand side weakly converges to $\Cr_Z(j^*\mu)$. \endproof \subsection{Applying generalized Crofton formulas to subsets}\label{sec:crofton_functorial2} Let $M^m\subset V=\mathbb{R}^{m+1}$ be a strictly star-shaped hypersurface around the origin. Given $A\in\mathcal P(M)$ and a Crofton distribution $\mu\in\mathcal M^{-\infty}(\Gr_{m+1-k}(V))$, we would like to evaluate $\Cr(\mu)$ on $A$ using an explicit Crofton integral, whenever $A\pitchfork\Cr(\mu)$. The following proposition provides some a priori regularity for $\widehat\chi_A$. \begin{Proposition}\label{prop:yomdin} For $A\in\mathcal P(M)$, it holds that $\chi(A\cap \bullet)\in L^1(\Gr_{m+1-k}(V))$, and that it is finite and locally constant on $W_A:=\{E:E\pitchfork A\}$. Furthermore, $\widehat \chi_A=\chi(A \cap \bullet)$. \end{Proposition} \proof Let us first check that $\chi(A \cap \bullet)\in L^1(\Gr_{m+1-k}(V))$. Fix a Euclidean structure on $V$ and identify $M$ with the unit sphere. By \cite[Lemma A.2]{bernig_fu_solanes}, for a fixed $E_0\in\Gr_{m+1-k}(V)$ we have $[g \mapsto \chi(A\cap gE_0)] \in L^1(\SO(V))$. Let $dg, dE$ be the Haar measures on $\SO(V)$ and $\Gr_{m+1-k}(V)$ respectively, and $p:\SO(V)\to \Gr_{m+1-k}(V)$ given by $g\mapsto gE_0$. Then $p_*(\chi(A\cap gE_0)dg)=\chi(A\cap E)dE$, and so $\chi(A\cap E)$ is integrable. It is evidently finite and locally constant on $W_A$. It remains to check that $\widehat \chi_A=\chi(A\cap \bullet)$. Take an approximate identity $\rho_j\in\mathcal M^\infty(\SO(V))$, which for convenience we assume invariant under inversion. Consider the convolution $\phi_j:=\chi_A\ast \rho_j\in\mathcal V^{-\infty}(M)$. As $\SO(V)$ is transitive on $M$ and $\mathbb P_M$, it follows that the defining currents of $\phi_j$ are smooth, and therefore $\phi_j\in\mathcal V^\infty(M)$. By \cite[Theorem A.1]{bernig_fu_solanes}, $\tilde \phi_j:=\int_{\SO(V)}\chi(gA\cap \bullet)d\rho_j(g)$ is a well-defined smooth valuation. Let us show that $\tilde\phi_j=\phi_j$. Take $\psi\in\mathcal V_c^\infty(M)$ and compute: \begin{displaymath} \langle\tilde\phi_j, \psi\rangle= \int_{\SO(V)}\psi(gA\cap M)d\rho_j(g) =\int_{\SO(V)}\psi(gA)d\rho_j(g) \end{displaymath} by \cite{bernig_fu_solanes}, while \begin{displaymath} \langle \phi_j, \psi\rangle=\langle \chi_A, \psi\ast \rho_j\rangle =(\psi\ast\rho_j)(A)=\int_{\SO(V)}\psi(gA)d\rho_j(g). \end{displaymath} Equality now follows by Alesker-Poincar\'e duality. We thus have the following equalities of functions on $\Gr_{m+1-k}(V)$: \begin{displaymath} \phi_j(\bullet \cap M)=\tilde\phi_j(\bullet\cap M)=\int_{\SO(V)}\chi(gA\cap \bullet)d\rho_j(g)=\chi(A\cap \bullet)\ast \rho_j, \end{displaymath} where the right hand side is the convolution of $\chi(A\cap \bullet)\in L^1(\Gr_{m+1-k}(V))$ with $\rho_j$. It follows that $\phi_j(E\cap M)\to \chi(A\cap E)$ in $L^1(\Gr_{m+1-k}(V))$. Fix $\mu\in\mathcal M^\infty(\Gr_{m+1-k}(V))$.
By Proposition \ref{prop:smooth_on_grassmannian} and $\GL(V)$-equivariance, \begin{align*} \langle \widehat \chi_A,\mu \rangle = \lim_{j\to\infty} \langle \widehat \chi_A \ast \rho_j, \mu\rangle = \lim_{j\to\infty} \langle \widehat{\chi_A\ast \rho_j}, \mu\rangle &=\lim_{j\to\infty} \int_{\Gr_{m+1-k}(V)} \phi_j(E\cap M) d\mu(E)\\ &=\int_{\Gr_{m+1-k}(V)} \chi(A\cap E) d\mu(E), \end{align*} and so $\widehat \chi_A=\chi(A\cap \bullet)$. \endproof \begin{Definition} The \emph{$k$-Crofton wave front} of $A\in\mathcal P(M)$ is $\Cr\WF^k(A):=\WF(\widehat\chi_A)\subset T^*\Gr_{m+1-k}(V)$. \end{Definition} \begin{Proposition}\label{prop:apply_crofton_general} Assume $\Cr\WF^k(A)\cap s^*\WF(\mu)=\emptyset$, where $s$ is the fiberwise antipodal map, and that $A\pitchfork\Cr(\mu)$. Then \begin{equation}\label{eq:crofton_applicable} \Cr(\mu)(A)=\int_{\Gr_{m+1-k}(V)}\chi(A\cap E)d\mu(E). \end{equation} \end{Proposition} \proof We identify $M$ with $\mathbb P_+(V)$. Then $\mathcal V^{-\infty}(M)\to C^{-\infty}(\Gr_{m+1-k}(V))$, $\phi\mapsto\widehat \phi$ is $\GL(V)$-equivariant. Consider the sequence of smooth valuations $\psi_j$ given by $\psi_j=\int_{\GL(V)}g^*\chi_A\cdot d\rho_j(g)$, where $\rho_j$ is a compactly supported approximate identity on $\GL(V)$. Clearly $\psi_j\to\chi_A$ in the H\"ormander topology of $\mathcal V^{-\infty}_{\WF(\chi_A)}(M)$. By $\GL(V)$-equivariance we have that \begin{displaymath} \widehat\psi_j= \int_{\GL(V)}g^*\widehat \chi_A \cdot d\rho_j(g) \to\widehat\chi_A \end{displaymath} in $C^{-\infty}_{\WF(\widehat\chi_A)}(\Gr_{m+1-k}(V))$. It holds by Propositions \ref{prop_extension_crofton_map} and \ref{prop:smooth_on_grassmannian} that \begin{displaymath} \langle \Cr(\mu), \psi_j\rangle = \int_{\Gr_{m+1-k}(V)} \psi_j(E\cap M)d\mu(E)=\langle \mu,\widehat \psi_j\rangle. \end{displaymath} As $j \to \infty$, the left hand side converges to $\langle \Cr(\mu),\chi_A\rangle=\Cr(\mu)(A)$, as $A \pitchfork \Cr(\mu)$. The right hand side converges to $\langle \mu,\widehat \chi_A\rangle$ (since $\WF(\widehat \chi_A) \cap s^*\WF(\mu)=\emptyset$), which is the same as $\int_{\Gr_{m+1-k}(V)}\chi(A\cap E)d\mu(E)$ by Proposition \ref{prop:yomdin}. \endproof Determining $\Cr\WF^k(A)$ precisely appears to be difficult in general. Let us focus on a subset $A\in\mathcal P(M)$ which is either a compact domain with smooth boundary, or a compact hypersurface without boundary. For the following, we write $H=H(A)$ for $\partial A$ if $A$ is of full dimension, and for $A$ when it is a hypersurface. Write $\widehat E:=E\cap M$, and note that $E$ intersects $H$ transversally in $V$ if and only if $\widehat E$ intersects $H$ transversally in $M$. Denote \begin{align*} \widetilde B_H & :=\{(x,E)\in Z_H: T_x\widehat E\subset T_xH\}, \\ B_H & :=\tau_H(\widetilde B_H)\subset\Gr_{m+1-k}(V). \end{align*} It is not hard to see that $\widetilde B_H$ is an embedded submanifold of $Z_H$ of dimension \begin{equation} \label{eq_dimension_tildebh} \dim \widetilde B_H= \dim H+(m-k)(\dim H-(m-k))=k(m+1-k)-1=\dim \Gr_{m+1-k}(V)-1. \end{equation} If $(x,E)\in \widetilde B_H$, we say that $x\in H$ is a tangent point for $E$. Observe also that $W_A= B_H^c$. Write $\tilde \tau_H$ for the restriction $\tau_H|_{\widetilde B_H}:\widetilde B_H\to \Gr_{m+1-k}(V)$. We sometimes write $B^{m+1-k}_H$, etc. to specify the dimension. \begin{Definition} We say that $E\in\Gr_{m+1-k}(V)$ is a \emph{regular tangent} to $A$ if $\tilde\tau_H$ is immersive on $\tilde\tau_H^{-1}(E)$.
\end{Definition} Note that if $E\notin B_H$ then it is automatically regular. For a subset $A\subset V$ we denote by $\mathbb P(A)$ its image in the projective space $\mathbb P(V)$. The regularity of the tangent is equivalent to the non-vanishing of the Gauss curvature of the corresponding section, as follows. \begin{Lemma} Fix $(p,E)\in \widetilde B_H$. Choose any line $N\subset T_pM\setminus T_pH$, and set $F=E\oplus N$. Then $\tilde\tau_H:\widetilde B_H\to \Gr_{m+1-k}(V)$ is an immersion at $(p,E)$ if and only if $\mathbb P(H\cap F)\subset \mathbb P(F)$ has non-degenerate second fundamental form at $p$. In particular, all tangents to $A$ are regular if and only if $\mathbb P(H)\subset \mathbb P(V)$ is a strictly convex hypersurface. \end{Lemma} \proof Let us sketch the argument; see \cite[Lemma 1(ii)]{teufel} for details. Clearly $d\tilde \tau_H$ is injective on the subspace of directions where $p$ moves transversally to $E$. Namely, fixing any subspace $\overline E\subset T_pH$ such that $E\oplus\overline E=T_pH\oplus \mathbb{R} p$, $d\tilde\tau_H$ is injective on $\{ (v,A)\in T_pH\times T_E\Gr_{m+1-k}(V): v\in \overline E\}\cap T\widetilde B_H$. That the injectivity is retained as the remaining directions are added corresponds to the non-degeneracy of the Gauss map of the section $H\cap F$. \endproof We now describe the Crofton wave front near regular tangents. For an immersed manifold $X\subset Y$, by $N^*X$ we understand the union of all conormal spaces to individual embedded parts of $X$. \begin{Proposition}\label{prop:base_of_induction} Assume $E_0\in B^{m+1-k}_H$ is a regular tangent. Then $\Cr\WF^k_{E_0}(A)\subset N^*_{E_0}B_H$. \end{Proposition} That $\Cr\WF^k_{E_0}(A)$ is contained in the {\em sum} of the conormal spaces of the embedded parts of $B_H$ follows from the fact that $\widehat\chi_A$ is locally constant on the complement of $B_H$. However, showing that it is actually contained in the {\em union} of those conormal spaces will require a more precise description of $\widehat\chi_A$, which we develop in the following proof. \proof In the following, by a ball (centered at a point) we mean a compact contractible neighborhood (of the point) with smooth boundary. Since $\widetilde B_H$ is a submanifold of $Z_H$, by assumption $B_H\subset\Gr_{m+1-k}(V)$ is an immersed submanifold in a neighborhood of $E_0$, which is a hypersurface by \eqref{eq_dimension_tildebh}. The preimage $\tilde \tau_H^{-1}(E_0)$ must be finite, or else we could find a sequence of distinct points $(q_j, E_0)\in \widetilde B_H$, which then has a limit point $(q_0, E_0)$, and $\tilde \tau_H$ would fail to be injective in a neighborhood of $(q_0, E_0)$, contradicting the assumed immersivity of $\tilde \tau_H$ there. Denote $\tilde\tau_H^{-1}(E_0)=\{(q_j, E_0), 1\leq j\leq N\}$. We can now find a ball $W \subset \Gr_{m+1-k}(V)$ centered at $E_0$, such that $B_H\cap W$ is a finite union of embedded hypersurfaces $F_j$, each diffeomorphic to a Euclidean ball, with $E_0\in F_j$ and $\partial F_j\subset\partial W$ for all $j$. Note that we have no control on how these hypersurfaces intersect each other. Denote by $C_j^\pm$ the connected components of $W\setminus F_j$. The indices are matched by requiring that a neighborhood of $(q_j, E_0)\in\widetilde B_H$ is mapped to $F_j$ by $\tilde\tau_H$.
Fix small balls $K_j\subset M$ around $q_j$ such that \begin{displaymath} \tilde\tau_H:\pi_H^{-1}(K_j)\cap \tilde\tau_H^{-1}(W) \to F_j \end{displaymath} is an embedding, $\partial K_j\pitchfork H$ and $\partial K_j\pitchfork\widehat E_0$ in $M$, and $(\widehat E_0\cap H) \pitchfork (\partial K_j \cap H)$ in $H$. We may further assume all $K_j$ are pairwise disjoint, and $H\cap K_j$ is diffeomorphic to a Euclidean ball. Denote by $\frac12 K_j$ a smaller ball centered at $q_j$. Taking $W$ sufficiently small, we may assume that for any $E\in W$, $\widehat E \pitchfork (H\setminus \cup_j \frac12 K_j)$, and for all $j$ we have $\widehat E \pitchfork \partial K_j$ in $M$ and $(\widehat E\cap H) \pitchfork (\partial K_j\cap H)$ in $H$. For $\epsilon\in \{\pm\}^N$, denote $C_\epsilon=\cap_{j=1}^N C_j^{\epsilon_j}$. Recall that $\widehat{\chi}_A$ is locally constant on $W_A=B_H^c$, and so is constant on any connected component of a non-empty set $C_\epsilon$. Let us show that there are integers $e_j=e_j(E_0)$ such that for any $\epsilon,\epsilon' \in \{\pm\}^N$ and any connected components $C\subset C_\epsilon$, $C'\subset C_{\epsilon'}$ one has $$ \widehat \chi_A|_{C'}-\widehat\chi_A|_{C}=\sum_{j:\epsilon_j<\epsilon_j'}e_j-\sum_{j:\epsilon_j'<\epsilon_j}e_j.$$ For $E\in W$, denote $\Sigma_j(E):=\widehat E\cap \partial K_j\cap H$. As it is the transversal intersection of $\widehat E\cap H$ and $\partial K_j \cap H$ in $H$, it is a closed manifold of dimension $(m-k-2)$, and $\chi(\Sigma_j(E))$ is independent of $E\in W$. Let us distinguish the two cases under consideration. Assume first $A=H$ is a hypersurface. Since $K_i \cap K_j = \emptyset$ for $i\neq j$, we have \begin{displaymath} \mathbbm 1_M=\prod_{j=1}^N (\mathbbm 1_{K_j}+\mathbbm 1_{\overline {K_j^c}}-\mathbbm 1_{\partial K_j})=\sum_{j=1}^N (\mathbbm 1_{K_j}-\mathbbm 1_{\partial K_j})+\mathbbm 1_{\overline {K^c}}, \end{displaymath} with $K:=\bigcup_{j=1}^N K_j$. Hence for $E\in W\setminus B_H$ we have \begin{displaymath} \chi(E\cap A) =\sum_{j=1}^N \chi(E\cap K_j\cap A) -\sum_{j=1}^N\chi(\Sigma_j(E))+\chi(E\cap \overline{K^c} \cap A). \end{displaymath} The last summand is constant on $W$ by construction. Consequently, for $E\in C$, $E'\in C'$ we have \begin{displaymath} \widehat\chi_A(E')-\widehat \chi_A(E)=\sum_{j=1}^N \left(\chi(E'\cap A\cap K_j)- \chi(E\cap A\cap K_j)\right). \end{displaymath} The function $\chi(\bullet\cap A\cap K_j)$ is locally constant on $W\setminus F_j$, and it remains to define \begin{displaymath} e_j:=\chi(\bullet\cap A\cap K_j)|_{C_j^+}- \chi(\bullet\cap A\cap K_j)|_{C_j^-}. \end{displaymath} The case of full-dimensional $A$ is only slightly more involved. If $(m-k)$ is odd, we have $\widehat{\chi_A}=\frac12\widehat{\chi_{\partial A}}$, reducing to the previous case. Thus assume $(m-k)$ is even. Write as before, whenever $E\in W\setminus B_H$, \begin{displaymath} \chi(E\cap A)=\sum_{j=1}^N \chi(E\cap K_j\cap A) -\sum_{j=1}^N\chi(E\cap \partial K_j\cap A)+\chi(E\cap \overline{K^c}\cap A). \end{displaymath} Note that for $E\in W\setminus B_H$, all intersections are manifolds with corners. We have $\chi(E\cap \partial K_j\cap A)=\frac12\chi(E\cap \partial K_j\cap \partial A)=\frac12\chi(\Sigma_j(E))$, thus it is constant in $W$. Set \begin{displaymath} S_j:=\widehat E\cap \partial K_j, \quad S:=\widehat E\cap \partial A. \end{displaymath} $S_j$ is a transversal intersection in $M$ for all $E\in W$ and hence a smooth hypersurface, while $S$ is given by a transversal intersection in $M$ and hence smooth for $E\in W\setminus B_H$.
Moreover, $S$ is a smooth hypersurface outside of $\bigcup_j \frac12 K_j$ for all $E\in W$. We claim that the intersection $S_j \cap S=\Sigma_j(E)$ is transversal in $\widehat E$ for all $E\in W$. For if the intersection is not transversal at $x$, then $T_x(\widehat E\cap \partial K_j)=T_x(\widehat E\cap \partial A)$. But by assumption $\widehat E \cap \partial A$ and $\partial K_j \cap \partial A$ intersect transversally in $\partial A$, in particular \begin{displaymath} T_x(\widehat E \cap \partial A)+T_x(\partial K_j \cap \partial A)=T_x\partial A. \end{displaymath} In conjunction with the previous equality, we get $T_x\partial A \subset T_x\partial K_j$, which is false. Let $X, Y\subset P$ be smooth domains in a manifold $P$, and assume $\partial X\pitchfork \partial Y$ and $X$ is compact. Let $Z\subset X\cap Y$ be the closure of a connected component of $X\cap Y$. Then $\chi(Z)$ is constant as $X, Y$ are perturbed while maintaining transversality. Taking $P=\widehat E$, $X=E\cap A$ with $\partial X=S$ (which is a manifold for $E \in W \setminus B_H$), and $Y=E \cap K_j$ with $\partial Y=S_j$ we get that $\chi(E\cap A\cap K_j)$ is locally constant in $W\setminus B_H$. Taking $X=E\cap A$ with $\partial X=S$ (which is a manifold outside $\bigcup_j \frac12 K_j$ for all $E \in W$), $Y=E\cap \overline{K^c}$ with $\partial Y=\bigcup_j S_j$, it follows that $\chi(E\cap A\cap \overline{K^c})$ is constant in $W$. It follows in both cases that for $E\in W$, $\widehat \chi_A(E)$ is a linear combination of the indicator functions of the complements of the hypersurfaces $F_j$. Therefore $\WF_{E_0}(\widehat\chi_A)\subset \bigcup_j N^*F_j$, concluding the proof. \endproof \section{The Crofton wave front of LC-regular hypersurfaces}\label{sec:LC_crofton_WF} Let $(W,Q)$ be a vector space equipped with a quadratic form. We denote by $\Lambda_{k}^\nu(W) \subset \Gr_{k}(W)$ the collection of subspaces $E\subset W$ where $Q|_E$ has nullity $\nu$. We will need to describe those sets in several cases. \begin{Proposition} \label{prop_dimension_lambda} Assume $(V,Q)$ has dimension $d$. \begin{enumerate} \item If $Q$ is non-degenerate, then $\Lambda_k^\nu(V)\subset\Gr_k(V)$ is a submanifold of dimension \begin{equation} \label{eq_dimension_lambda} \dim \Lambda_{k}^\nu(V)=k(d-k)-\binom{\nu+1}{2}. \end{equation} Writing $E_0:=E \cap E^Q$, we have \begin{equation} \label{eq_tangent_lambda} T_E \Lambda_{k}^\nu(V)=\{A \in \Hom(E,V/E): Q(Au,u)=0, \forall u \in E_0\}. \end{equation} \item If $Q$ has nullity $1$ and $E \in \Lambda_k^\nu(V)$ is such that $\Ker Q \cap E=\{0\}$, then $\Lambda_k^\nu(V)$ is a manifold near $E$ whose dimension is given by Eq. \eqref{eq_dimension_lambda}. \end{enumerate} \end{Proposition} \proof \begin{enumerate} \item See \cite[Proposition 4.2]{bernig_faifman_opq}. \item Write $L_0:=\Ker(Q)$. Consider $W:=V \oplus \mathbb{R}$, and extend $Q$ as a non-degenerate quadratic form $\widetilde Q$ on $W$. Let us verify that the submanifolds $\Lambda_k^\nu(W)$ and $\Gr_k(V)$ intersect transversally in $\Gr_k(W)$ at $E$. As $L_0 \not \subset E$, also $E^{\widetilde Q} \not \subset L_0^{\widetilde Q}=V$. Thus we can find a line $L \subset E^{\widetilde Q} \setminus V$. Now any linear map $A:E \to W/E$ decomposes as a sum $A=A_1+A_2$ with $A_1 \in \Hom(E,V/E)=T_E \Gr_k(V), A_2 \in \Hom(E,(E+L)/E) \subset \Hom(E,W/E)=T_E \Gr_k(W)$. Since $\widetilde Q(A_2u,v)=0$ for all $u,v \in E_0$, we have $A_2\in T_E \Lambda^\nu_k(W)$ by \eqref{eq_tangent_lambda}. This proves the claim.
As $\Lambda_k^\nu (V)=\Lambda_k^\nu(W) \cap \Gr_k(V)$, it is a manifold near $E$. The formula for the dimension then follows from the previous case. \end{enumerate} \endproof \begin{Corollary} \label{cor:degenerate_linear_algebra_bundle} Let $B$ be a smooth manifold, and $W$ a real vector bundle of rank $d$ over $B$. Let $Q\in\Gamma(B, \Sym^2(W^*))$ be a smooth field of quadratic forms, of nullity at most $1$ for all $x\in B$. Let $\Gr_k(W)$ be the corresponding bundle of $k$-subspaces over $B$, and consider $\Lambda_k^\nu (W)=\{(x,E)\in \Gr_k(W): E\in\Lambda^\nu_k(W_x,Q_x)\}$. If $(p,E)\in \Lambda_k^\nu (W)$ and $\Ker(Q_x)\cap E=\{0\}$, then $\Lambda_k^\nu (W)$ is a manifold near $(p,E)$, of dimension \begin{displaymath} \dim \Lambda_k^\nu (W)=\dim B+k(d-k)-{\nu+1 \choose 2}. \end{displaymath} \end{Corollary} \proof Using a local trivialization, this reduces to Proposition \ref{prop_dimension_lambda}. \endproof \begin{Lemma}\label{lem:degenerate_linear_algebra} Let $W$ be a $d$-dimensional vector space equipped with a quadratic form $Q$ of nullity $1$ with kernel $L_0$. Assume $L_0\subset E_0\in\Lambda_k^\nu(W)$, and define the set $C \subset T_{E_0} \Gr_k(W)$ of all velocity vectors $E'(0)$ of smooth curves $E(t) \in\Lambda_k^\nu(W)$ with $E(0)=E_0$. Then $C$ is a cone of dimension $k(d-k)-{\nu+1\choose 2}$. \end{Lemma} \proof Assume $L_0=\Span(v_0)$, and fix a $(d-1)$-dimensional complement $W_0$ such that $W=W_0\oplus L_0$. Denote $\Lambda_k^\nu(W, W_0)=\{E\in \Lambda_k^\nu(W): \Ker(Q|_E)\subset W_0\}$. Consider $I:\Gr_k(W)\setminus\Gr_k(W_0)\to \Gr_{k-1}(W_0)$ given by $I(E)=E\cap W_0$. We claim that the restriction of $I$, \begin{displaymath} \tilde I:\Lambda_k^\nu(W)\setminus\Lambda_k^\nu(W, W_0)\to \Lambda_{k-1}^{\nu-1}(W_0), \end{displaymath} is well-defined. Indeed, for $E$ of nullity $\nu$, the nullity of $I(E)$ is at least $(\nu-1)$. Now if also $E\notin \Lambda_k^\nu(W, W_0)$, take $w\in \Ker(Q|_E)\setminus W_0$ so that $E=I(E)\oplus \Span(w)$. If $I(E)$ contains a $\nu$-dimensional subspace $U$ that is $Q$-orthogonal to $I(E)$, then $U$ is also $Q$-orthogonal to $E$ as $Q(w, I(E))=0$. Hence $U \oplus \Span(w) \subset \Ker(Q|_E)$, and consequently the nullity of $E$ is at least $(\nu+1)$, a contradiction. Let us describe the fiber $\tilde I^{-1}(F)$ of $F\in\Lambda_{k-1}^{\nu-1}(W_0)$. Denote $\Lambda_k^\nu(W,L_0)=\{E\in \Lambda_k^\nu(W): L_0\subset E\}$, and let $\pi_0:W\to W_0$ be the projection along $L_0$. If $E\in \tilde I^{-1}(F)\cap \Lambda^\nu_k(W, L_0)$, then $E=F\oplus L_0$. If $E\in \tilde I^{-1}(F)\setminus \Lambda^\nu_k(W, L_0)$, we take $w$ as above and decompose $w=w_1+w_2, w_1 \in W_0, w_2 \in L_0$. Then $w_1 \neq 0$ and $w_1 = w-w_2 \in F^Q$. It follows that there is a unique line $L\in \Lambda_1^1(F^Q\cap W_0/F^Q\cap F)$ such that $\pi_0(E)=F+\tilde L$ for any lift $\tilde L\in\Lambda_1^1(F^Q\cap W_0)$ of $L$. Thus for any fixed $L$, the corresponding part of $\tilde I^{-1}(F) \setminus \Lambda^\nu_k(W, L_0)$ can be identified with $\mathbb P_+(\tilde L \oplus L_0)\simeq \mathbb P_+(L \oplus L_0)$ with $\pm L, \pm L_0$ excluded. Now let us see how the various parts of the fiber fit together. Consider \begin{displaymath} \widetilde M_F=\widetilde\Lambda_1^1(F^Q\cap W_0/F^Q\cap F)\subset \mathbb P_+(F^Q\cap W_0/F^Q\cap F), \end{displaymath} where we write $\widetilde\Lambda^1_1$ for oriented null lines. Let $\widetilde \Sigma \widetilde M_F$ be its suspension with poles at two copies of $L_0$ with opposite orientations.
Finally, set $\Sigma M_F=\widetilde \Sigma \widetilde M_F/\mathbb Z_2$, where $\mathbb Z_2$ acts by the antipodal map, and let $L_F\in \Sigma M_F$ be the image of $L_0$. Put \begin{displaymath} U_\Sigma(F)=\Sigma M_F\setminus \mathbb P(F^Q\cap W_0/F^Q\cap F), \end{displaymath} which is a neighborhood of $L_F$. There is then a natural diffeomorphism $U_\Sigma(F)\simeq \tilde I^{-1}(F)\subset\Lambda_k^\nu(W)\setminus\Lambda_k^\nu(W, W_0)$. We get the fibration \begin{displaymath} \xymatrix{ \tilde I^{-1}(F) \simeq U_\Sigma(F)\ar@{^{(}->}[r]& \Lambda_k^\nu(W)\setminus\Lambda_k^\nu(W,W_0)\ar@{->>}[d]^{I}\\ & \Lambda_{k-1}^{\nu-1}(W_0)} \end{displaymath} and the section $F\mapsto L_F$, or equivalently $F\mapsto F\oplus L_0$, coincides with $\Lambda_k^\nu(W,L_0)$. In particular, the cone $C\subset T_{E_0}\Gr_k(W)$ has a linear factor that can be identified with $T_{E_0\cap W_0}\Lambda_{k-1}^{\nu-1}(W_0)$. Denoting $F_0=E_0\cap W_0$, the cone $C/T_{F_0}\Lambda_{k-1}^{\nu-1}(W_0)$ is then naturally identified with the abstract cone over $\widetilde \Lambda_1^1(F_0^Q\cap W_0/F_0^Q\cap F_0)$. The dimension of $C$ can now be readily computed. We have $\dim \Lambda_{k-1}^{\nu-1}(W_0)=(k-1)(d-k)-\binom{\nu}{2}$ by Proposition \ref{prop_dimension_lambda}. Since the restriction of $Q$ to $F_0^Q \cap W_0/F_0^Q \cap F_0$ is non-degenerate, we have $\dim \widetilde \Lambda_1^1(F_0^Q\cap W_0/F^Q_0\cap F_0)=\dim \left(F^Q_0\cap W_0/F^Q_0\cap F_0\right)-2=d-k-(\nu-1)-2$. Hence \begin{displaymath} \dim C=\dim \Lambda_{k-1}^{\nu-1}(W_0)+\dim \widetilde \Lambda_1^1(F^Q_0\cap W_0/F^Q_0\cap F_0)+1=k(d-k)-\binom{\nu+1}{2}. \end{displaymath} \endproof We now turn to LC-regular submanifolds. First, we will need a simple fact on LC-regular metrics. \begin{Lemma}\label{lem:LC_gram_determinant} Let $(M,g)$ be LC-regular, and assume $g$ is degenerate on $T_pM$. Let $v_1(x),\dots,v_m(x)$ be any local frame near $p$, with Gram matrix $A(x)=(g(v_i, v_j))_{i,j=1}^m\in\Sym_m(\mathbb{R})$. Then the condition $d_p(\det A)\neq 0$ is independent of the choice of the frame $(v_j)$. Moreover, if the nullity of $g_p$ is $\nu=1$, then $d_p(\det A)\neq 0$, and the degenerate subset of the metric near $p$ is a smooth hypersurface. \end{Lemma} \proof Let $\tilde v_1(x),\dots,\tilde v_m(x)$ be a different local frame with corresponding Gram matrix $\widetilde A(x)$. Then the change of basis matrix $U(x)\in \GL(m)$ satisfies $\widetilde A(x)=U(x)^TA(x) U(x)$. By assumption, $\det A(p)=\det \widetilde A(p)=0$. Thus $d_p(\det \widetilde A)=\det U(p)^2\, d_p(\det A)$, which implies the first statement. For the second statement, choose coordinates $x_1,\dots,x_m$ on $M$ near $p$, and take $v_j=\frac{\partial}{\partial x_j}$. We may assume that $\Ker(g_p)=\Ker A(p)=\Span(v_m)$, and by assumption $A(p)$ has non-degenerate principal $(m-1)$-minor. By LC-regularity, we can choose a curve $p(t)\in M$ with $p(0)=p$ and a smooth vector field $v(t)$ along it with $v(0)=v_m$ such that $\left.\frac{d}{dt}\right|_{t=0}g(v(t),v(t))=\left.\frac{d}{dt}\right|_{t=0}\langle A(p(t))e_m, e_m\rangle\neq 0$. It follows that $$\left.\frac{d}{dt}\right|_{t=0}\det A(p(t))=\left.\frac{d}{dt}\right|_{t=0}\langle A(p(t))e_{m},e_{m}\rangle\cdot \det (A(p))_{i,j=1}^{m-1}\neq 0.$$ As the degenerate subset of $g$ near $p$ is $\{x:\det A(x)=0\}$, the last assertion follows. \endproof The following is the main result of the section. We use the notation and terminology of Sections \ref{sec:crofton_functorial} and \ref{sec:crofton_functorial2}.
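Before stating it, we record the simplest instance of the sets $\Lambda_k^\nu$ for orientation; this is immediate and included only as a sanity check. For a non-degenerate indefinite quadratic form $Q$ on $V$, the set $\Lambda^1_1(V)\subset \Gr_1(V)=\mathbb P(V)$ is the projectivized null cone $\{[v]: Q(v,v)=0\}$, a smooth quadric of dimension $d-2$, in agreement with \eqref{eq_dimension_lambda}:
\begin{displaymath}
\dim \Lambda^1_1(V)=1\cdot(d-1)-\binom{2}{2}=d-2.
\end{displaymath}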
\begin{Proposition}\label{prop:LC_regularity_implies_WF} Let $(V,Q)$ be a pseudo-Euclidean vector space of dimension $(n+1)$, and $M=Q^{-1}(r)$ with $r \in \{\pm1\}$ a pseudo-Riemannian space form. Let $H \subset M$ be an LC-regular hypersurface, and $E \in \Gr_{n-k+1}(V)$. Assume $E \in B_H$ is a regular tangent to $H$ of nullity $\nu$. Then each embedded part of $B_H$ through $E$ intersects $\Lambda^\nu_{n-k+1}(V)$ transversally at $E$. \end{Proposition} \proof Denote the signature of $M$ by $(p_M,q_M)$. Write $g=Q|_H$, $\widehat E=E\cap M$. Since $H$ is a hypersurface, the nullity of $g_x$ is at most one for all $x\in H$. Define $\widetilde\Lambda^\nu_{n+1-k}:=\tau_M^{-1}\Lambda^\nu_{n+1-k}(V)$, which is a submanifold of $Z_M$ as $\tau_M:Z_M\to\Gr_{n+1-k}(V)$ is a submersion. As $B_H$ is a hypersurface, it suffices to show for every embedded part $F$ of $B_H$ through $E$ that $T_E\Lambda^\nu_{n+1-k}(V) \not \subset T_E F$. Assuming the contrary, there is $p\in H\cap E$ such that $T_E \Lambda^\nu_{n+1-k}(V) \subset d\tau_H(T_{p,E}\widetilde B_H)$. Observe that $\Ker(Q|_E)\subset T_p\widehat E$. Before proceeding with the more complicated general case, we consider the case $\nu=1$. Since $\dim \Lambda^1_{n+1-k}(V) =\dim B_H=k(n+1-k)-1$ by Proposition \ref{prop_dimension_lambda}, we have $T_E \Lambda^1_{n+1-k}(V) =d\tau_H(T_{p,E}\widetilde B_H)$. Let $v_0 \in E \cap T_pH$ be in the kernel of $g|_{T_p\widehat E}$. For any smooth curve $v_t\in TH$ through $v_0$, we may find a smooth curve $E_t\in B_H$ through $E$ such that $v_t \in E_t$. Then $A:=\left.\frac{d}{dt}\right|_{t=0} E_t\in\Hom(E, V/E)$ lies in $d\tau_H(T_{p,E}\widetilde B_H)=T_E \Lambda^1_{n+1-k}(V)$, and hence $Q(Av_0, v_0)=0$ by \eqref{eq_tangent_lambda}. It follows that $\left.\frac{d}{dt}\right|_{t=0} g(v_t)=2Q(v_0, Av_0)=0$, contradicting the LC-regularity of $H$. Let us now consider the general case. Fix an auxiliary Riemannian metric $h$ on $M$, and let $\rho_H$ be the least distance projection to $H$, defined and smooth in a neighborhood of $p$. By our assumption $\tilde \tau_H$ is an immersion at $(p,E)$. Using Proposition \ref{prop_dimension_lambda} we see that \begin{equation}\label{eq:dimensions} \dim (d\tilde\tau_H)^{-1}T_{E}\Lambda^\nu_{n+1-k}(V)=\dim \Lambda^\nu_{n+1-k}(V)=k(n-k+1)-{\nu+1\choose 2}. \end{equation} By definition, $d_{p,E}\tilde\tau_H^{-1}T_{E}\Lambda^\nu_{n+1-k}(V)$ consists of all velocity vectors $(p'(0), E'(0))$ to curves $(p(t), E(t))\in \widetilde B_H$ through $(p, E)$ such that $E'(0)\in T_E\Lambda^\nu_{n-k+1}$. Let us check that equivalently one may take all velocity vectors to curves $(\tilde p(t), \widetilde E(t))\in \widetilde\Lambda^\nu_{n-k+1}$ with $(\tilde p'(0),\widetilde E'(0))\in T_{p,E} \widetilde B_H$. Indeed, given $(p(t), E(t))$ as above, one can find $\widetilde E(t)\in\Lambda^\nu_{n-k+1}(V)$ with $\widetilde E'(0)=E'(0)$, and then choose $\tilde p(t)\in \widetilde E(t)\cap M$ such that $\tilde p'(0)=p'(0)$. In the other direction, given $(\tilde p(t), \widetilde E(t))$, define $p(t)=\rho_H(\tilde p(t))$ and $E(t)=d_{\tilde p(t)}\rho_H(T_{\tilde p(t)}\widehat{\widetilde E}(t))\oplus \mathbb{R} p(t)$. \textit{Case 1:} $\Ker(g_p)\cap E=\{0\}$. Define \begin{displaymath} \widetilde B^{\nu}_H:=\tilde \tau_H^{-1}\Lambda^\nu_{n-k+1}(V)=\{(q,F)\in \widetilde B_H:T_q\widehat F\in\Lambda^\nu_{n-k}(T_qH)\}.
\end{displaymath} By Corollary \ref{cor:degenerate_linear_algebra_bundle} we see that $\widetilde B^\nu_H$ is a smooth manifold near $(p,E)$, of dimension \begin{align} \label{eq:smaller_dimension} \dim T_{p,E}\widetilde B_H^\nu&= \dim H + (n-k)(n-1-(n-k))-{\nu+1\choose 2}\\ &=k(n-k+1)-1 -{\nu+1\choose 2}. \nonumber \end{align} \textit{Claim 1.} $(d_{p,E}\tilde\tau_H)^{-1}T_{E}\Lambda^\nu_{n+1-k}(V)\subset T_{p,E}\widetilde B_H^\nu$. \noindent We postpone the proof. Combined with eqs. \eqref{eq:dimensions} and \eqref{eq:smaller_dimension} we get a contradiction. \textit{Case 2:} $T_pH$ has nullity one, and $\Ker(g_p)\subset E$. Let $S\subset H$ be the degenerate subset of $g$. It follows from Lemma \ref{lem:LC_gram_determinant} that $S$ is a smooth hypersurface. Define $\widetilde B^\nu_H(S)=\{(q,F)\in \widetilde B_H: q\in S,F\in \Lambda^\nu_{n-k+1}(V)\}$. It is a fiber bundle over $S$ with fiber $\Lambda^\nu_{n-k}(\mathbb{R}^{p_M-1,q_M-1,1})$. \textit{Claim 2.} For any $(w,\xi)\in d_{p,E}\tilde\tau_H^{-1}T_E\Lambda^\nu_{n-k+1}(V)$ with $w\in T_pS$ there is a curve $(q(t), F(t))\in \widetilde B^\nu_H(S)$ with $(q'(0), F'(0))=(w, \xi)$. \noindent Again we postpone the proof of the claim. The set of all vectors $(q'(0), F'(0))$ as in the claim defines, by Lemma \ref{lem:degenerate_linear_algebra}, a cone in $T_{p,E}\widetilde B_H$ of dimension \begin{align*} N&=\dim S+ (n-k)((n-2)-(n-k-1))-{\nu +1\choose 2} \\&= (n-k)(k-1)-{\nu+1\choose 2}+n-2<\dim d_{p,E}\tilde\tau_H^{-1}T_E\Lambda^\nu_{n-k+1}(V). \end{align*} It follows by the claim that we can find a curve $(p(t), E(t))\in \widetilde B_H$ through $(p,E)$ with $E'(0)\in T_{E}\Lambda_{n-k+1}^\nu(V)$ and $p'(0)\notin T_pS$. Let $v_{n-1}\in T_pH$ span $\Ker(g_p)$, and recall that $v_{n-1}\in T_p\widehat E$, in particular $v_{n-1}\in\Ker(Q|_{E})$. Choosing any smooth vector field $v_{n-1}(t)\in T_{p(t)}\widehat E(t)$, we find $\left.\frac{d}{dt}\right|_{t=0}Q(v_{n-1}(t))=2Q(v_{n-1}'(0), v_{n-1}(0))=0$. Choose a frame $(v_j(t))_{j=1}^{n-1}$ for $H$ along $p(t)$ with $v_{n-1}(0)=v_{n-1}$, such that $T_{p(t)}\widehat E(t)=\Span(v_k(t),\dots,v_{n-1}(t))$. Then \begin{align*} \left.\frac{d}{dt}\right|_{t=0}\det (Q(v_i(t),v_j(t)))_{i,j=1}^{n-1}&=\det (Q(v_i(t),v_j(t)))_{i,j=1}^{n-2}\cdot \left.\frac{d}{dt}\right|_{t=0}Q(v_{n-1}(t))=0, \end{align*} which means that $d_p(\det g)(p'(0))=0$ by Lemma \ref{lem:LC_gram_determinant}. Since $\Ker d_p(\det g)=T_pS$ and $p'(0) \notin T_pS$, we get a contradiction. This completes the proof of the proposition, modulo the two claims we now proceed to prove.\endproof \proof[Proof of Claim 1] Consider a curve $(p(t), E(t))\in \widetilde \Lambda^\nu_{n-k+1}$ with $(p'(0), E'(0))\in T_{p, E}\widetilde B_H$. We ought to find a curve $(q(t), F(t))\in \widetilde B^\nu_H$ through $(p,E)$ with $(q'(0), F'(0))=(p'(0), E'(0))$. Set $q(t)=\rho_H(p(t))$, evidently $q'(0)=p'(0)$. Fix a subspace $W_0\subset T_pH$ which is non-degenerate and contains $T_p \widehat E$. If $T_pH$ is non-degenerate, we can just take $W_0=T_pH$. Otherwise, $\dim \ker g_p=1$ and we may take any hyperplane $W_0 \subset T_pH$ which contains $T_p\widehat E$ and satisfies $W_0 \cap \ker g_p=\{0\}$. Now fix any linear map $A:W_0\to V/W_0$ which makes the following diagram commutative. 
\begin{displaymath}\xymatrixcolsep{5pc} \xymatrix{E \ar[r]^{E'(0)} \ar[d]^{i} & V/E \ar[d]_{\pi}\\ W_0\ar[r]^{A}\ar[d]^{i} & V/W_0\ar[d]_{\pi}\\ T_pH\oplus \mathbb{R} p \ar[r]^-{\left.\frac{d}{dt}\right|_{t=0}(T_{q(t)}H\oplus \mathbb{R} q(t))} & V/(T_pH\oplus \mathbb{R} p) } \end{displaymath} and use Lemma \ref{lem:pair_of_spaces} to find smooth paths $W(t) \supset E(t)$, $\widetilde W(t)\subset T_{q(t)}H\oplus \mathbb{R} q(t)$ with $W(0)=\widetilde W(0)=W_0$ and $W'(0)=\widetilde W'(0)=A$. For small $t$, $W(t), \widetilde W(t)$ are non-degenerate of fixed signature $(\alpha, \beta)$. Consider the manifold $Z=\{(x, W)\in M\times \Gr_{\alpha+\beta}(V): x\in W,\ \mathrm{sign}(Q|_W)=(\alpha, \beta)\}$. Clearly $Z$ is a homogeneous space for $\OO(V, Q)$, with the equivariant projection $\pi_Z:\OO(V,Q)\to Z$ normalized by $\pi_Z(\id)=(p,W_0)$. We can fix a smooth section $X_Z:Z\to \OO(V,Q)$ near $(p, W_0)$ with $X_Z(p, W_0)=\id$ such that $\pi_Z\circ X_Z=\id$. Now define the smooth path $R_t\in \OO(V,Q)$ by \begin{displaymath} R_t=X_Z(q(t),\widetilde W(t))\circ X_Z(p(t), W(t))^{-1}. \end{displaymath} Then $R_tp(t)=q(t)$, and $\left.\frac{d}{dt}\right|_{t=0}R_t=0$ since $\left.\frac{d}{dt}\right|_{t=0}W(t)=\left.\frac{d}{dt}\right|_{t=0}\widetilde W(t)$. Setting $F(t)=R_tE(t)$, we have $(q'(0), F'(0))=(p'(0), E'(0))$, and $(q(t), F(t)) \in\widetilde B^\nu_H$. This proves the claim. \endproof \proof[Proof of Claim 2] Consider a curve $(p(t), E(t))\in \widetilde\Lambda^\nu_{n-k+1}(V)$ through $(p, E)$, with $p'(0)=w\in T_pS$ and $(p'(0), E'(0))=(w,\xi)\in T_{p,E}\widetilde B_H$. Let $\rho_S:M\to S$ be the least distance projection with respect to $h$, well-defined and smooth in some neighborhood of $p$. Set $q(t)=\rho_S(p(t))$; clearly $q'(0)=p'(0)$. Denote $L_0=\Ker(g_p)\subset T_pH\cap E$, and extend to a smooth path of lines $L_t\subset E(t)\cap E(t)^{Q}\in\Lambda_\nu^\nu(T_{p(t)}M)$. Consider the manifold of pairs \begin{displaymath} Z=\{(x, L): x\in M,L\in\Lambda^1_1(T_xM)\}. \end{displaymath} Clearly $Z$ is a homogeneous space for $\OO(V, Q)$, with the equivariant projection $\pi_Z:\OO(V,Q)\to Z$ normalized by $\pi_Z(\id)=(p,L_0)$. We can fix a smooth section $X_Z:Z\to \OO(V,Q)$ near $(p, L_0)$ with $X_Z(p, L_0)=\id$ such that $\pi_Z\circ X_Z=\id$. Now define the smooth path $R_t\in \OO(V,Q)$ by \begin{displaymath} R_t=X_Z(q(t),\Ker(g_{q(t)}))\circ X_Z(p(t),L_t)^{-1}. \end{displaymath} Then $R_tp(t)=q(t)$, and $\left.\frac{d}{dt}\right|_{t=0}R_t=0$, provided that $\left.\frac{d}{dt}\right|_{t=0}L_t=\left.\frac{d}{dt}\right|_{t=0}\Ker(g_{q(t)})$. Let us verify that $L_t$ can be chosen in this fashion. In the following, we fix some Riemannian metric on various manifolds, and write $|x-y|_X$ for the corresponding distance between $x,y\in X$. We will also write, for two subspaces $E, F\subset V$, $\measuredangle(E,F)$ for the angle between them with respect to some Euclidean metric. This should not create ambiguity, as we will be concerned only with rough small scale asymptotics. As $(p'(0), E'(0))\in T_{p,E}\widetilde B_H$, we may find a curve $(\tilde p(t), \widetilde E(t))\in \widetilde B_H$ through $(p,E)$ with $(p'(0), E'(0))= (\tilde p'(0), \widetilde E'(0))$. Define $\widetilde H(t):=T_{\tilde p(t)}H\oplus \mathbb{R} \tilde p(t)$.
It follows that $\measuredangle(E(t),\widetilde H(t))=O(t^2)$, and by Lemma \ref{lem:pair_of_spaces} we have the commutative diagram \begin{displaymath}\xymatrixcolsep{3pc} \xymatrix{E \ar[r]^{E'(0)} \ar[d]^{i} & V/E \ar[d]_{\pi}\\ \widetilde H(0)\ar[r]^{\widetilde H'(0)} & V/\widetilde H(0) } \end{displaymath} Taking the dual diagram and identifying $V=V^*$ using $Q$, we get \begin{equation}\label{eq:diag1} \xymatrixcolsep{4pc}\xymatrix{L_0 \ar[r]^{f_0} \ar[d]^{i} & V/L_0 \ar[d]_{\pi}\\ E^{Q}\ar[r]^{(E^{Q})'(0)} & V/E^{Q}\\} \end{equation} where $f_0= (\widetilde H^{Q})'(0)$. As $\tilde p'(0)=p'(0)=q'(0)$, it is clear that \begin{displaymath} f_0= \left. \frac{d}{dt}\right|_{t=0}(T_{\tilde p(t)}H\oplus \mathbb{R} \tilde p(t))^{Q}=\left. \frac{d}{dt}\right|_{t=0}(T_{q(t)}H\oplus \mathbb{R} q(t))^{Q}=\left. \frac{d}{dt}\right|_{t=0}\Ker g_{q(t)}. \end{displaymath} By Lemma \ref{lem:pair_of_spaces}, we can find $L_t\subset E(t)^{Q}$ with $\left.\frac{d}{dt}\right|_{t=0}L_t=f_0$. Note that $L_t$ is not in general a null line of $Q$. We now proceed to modify the definition of $L_t$ to force it to be a null line. Observe that if $q\in S$, $\widetilde E\in\Gr_{n+1-k}(V)$ and $\Ker(g_{q})\subset \widetilde E$, then $\widetilde E^Q\subset T_qH\oplus \mathbb{R} q$. We have \begin{align*} |\tilde p(t)-q(t)|_M & =O(t^2), \\ |T_{\tilde p(t)}H-T_{q(t)}H|_{\Gr_{n-1}(V)} & =O(t^2),\\ |L_t-\Ker(g_{q(t)})|_{\mathbb P(V)} & =O(t^2). \end{align*} It follows that $\measuredangle(E(t)^{Q}, \widetilde H(t))=O(t^2)$, and so we may apply Lemma \ref{lem:pair_of_spaces} to get the commutative square \begin{displaymath} \xymatrixcolsep{3pc}\xymatrix{E^{Q} \ar[r]^{(E^{Q})'(0)} \ar[d]^{i} & V/E^Q \ar[d]_{\pi}\\ \widetilde H(0)\ar[r]^{\widetilde H'(0)} & V/\widetilde H(0), } \end{displaymath} and by duality also \begin{equation}\label{eq:diag2} \xymatrix{L_0 \ar[r]^{f_0} \ar[d]^{i} & V/L_0 \ar[d]_{\pi}\\ E\ar[r]^-{E'(0)} & V/E.} \end{equation} Denote $K(t)=E(t)\cap E(t)^Q$, $K_0=K(0)$. Observe that there is a natural inclusion $\alpha_K: V/K_0\hookrightarrow V/E\oplus V/E^Q$. It follows from Lemma \ref{lem:pair_of_spaces} applied to the inclusions $K(t)\subset E(t)$, $K(t)\subset E(t)^Q$ that $\alpha_K\circ K'(0):K_0\to V/E\oplus V/E^Q$ coincides with $E'(0)\oplus (E^Q)'(0):K_0 \to V/E \oplus V/E^Q$. Combining diagrams \eqref{eq:diag1} and \eqref{eq:diag2} then yields the commutative diagram \begin{equation*} \xymatrixcolsep{6pc}\xymatrix{L_0 \ar[r]^{f_0} \ar[d]^{i} & V/L_0 \ar[d]_{\pi}\\ K_0\ar[r]^-{\alpha_K\circ K'(0)} & V/E\oplus V/E^{Q},} \end{equation*} and so also \begin{equation*} \xymatrix{L_0 \ar[r]^{f_0} \ar[d]^{i} & V/L_0 \ar[d]_{\pi}\\ K_0\ar[r]^-{K'(0)} & V/K_0.} \end{equation*} By Lemma \ref{lem:pair_of_spaces}, we may redefine $L_t$ such that $\left.\frac{d}{dt}\right|_{t=0}L_t=f_0$ and $L_t \subset K(t)=E(t)\cap E(t)^{Q}$. In particular, $L_t$ is a null line of $Q$. Setting $F(t):=R_tE(t)$, we have $q(t) \in F(t)$, $T_{q(t)}\widehat F(t)\subset T_{q(t)}H$ since $E(t)\subset L_t^{Q}$, and $F'(0)=E'(0)$ since $\left.\frac{d}{dt}\right|_{t=0}R_t=0$. This proves the claim. \endproof \begin{Remark} It is easy to see that the conclusion of the proposition with $k=n-1$ is equivalent to the LC-regularity of $H$. \end{Remark} \begin{Corollary}\label{cor:CrWF_of_LC} Let $V,M, H, E$ be as in Proposition \ref{prop:LC_regularity_implies_WF} with $H$ compact without boundary, and let $A\subset M$ be either $H$ itself or a domain with $\partial A=H$. Then $\Cr\WF^k_{E}(A)\cap N_E^*\Lambda^\nu_{n-k+1}(V)=\emptyset$.
\end{Corollary} \proof Follows from Propositions \ref{prop:LC_regularity_implies_WF} and \ref{prop:base_of_induction}. \endproof \section{Construction of an invariant measure on the Grassmannian} \label{sec:opq_distributions} For $X \in \Sym_r(\mathbb{R})$ and $\lambda \in \mathbb{C}$ we set, as in \cite{muro99}, \begin{displaymath} |\det X|_p^\lambda:=\begin{cases} |\det X|^\lambda & \text{if } \mathrm{sign}(X)=(p,r-p)\\ 0 & \text{otherwise.} \end{cases} \end{displaymath} It is well-known, essentially due to Cayley and G\"arding, that $|\det X|_p^\lambda$ extends meromorphically in $\lambda$ to a family of generalized functions, which are in fact tempered distributions. We will use the set \begin{displaymath} U_{\mathbb C}:= \{\Re\zeta>\frac12 \} \cup \left\{\mathrm{Im}\zeta>0\right\} \subset \mathbb{C}, \end{displaymath} and write $\sqrt{z}$ for the unique branch of the square root function on $U_\mathbb{C}$ such that $\sqrt z>0$ for $z>\frac12$. \subsection{Construction of a holomorphic family of Crofton measures} For the following, let $\Sym_r^+(\mathbb{R})\subset\Sym_r(\mathbb{R})$ be the cone of positive-definite matrices, and $\mathfrak h_r=\Sym_r(\mathbb{R})\oplus \mathbf i\Sym_r^+(\mathbb{R})\subset\Sym_r(\mathbb{C})$ the Siegel upper half space. The following is well-known. \begin{Lemma}\label{lem:siegel_determinant} For $Z\in\mathfrak h_r$, $\det Z\neq 0$. In particular, we can define for every $\lambda \in \mathbb{C}$ the holomorphic function $Z \mapsto (\det Z)^\lambda$, normalized by $\lim_{\epsilon \to 0^+} \det (I_r+\mathbf i\epsilon I_r)^\lambda=1$. Moreover, all eigenvalues of $Z\in\mathfrak h_r$ lie in the upper half plane of $\mathbb{C}$. \end{Lemma} \proof Write $Z=X+\mathbf i Y$, $Y>0$. Let $Q_X(v)=\langle Xv, v\rangle$, $Q_Y(v)=\langle Yv,v\rangle$ be the corresponding quadratic forms. Choose a basis $u_j$ such that the Gram matrix of $Y$ is $I_r$, and of $X$ is diagonal: $D=\mathrm{diag}(d_j)$. Since $D+\mathbf i I_r=U^TZU$ with $U$ invertible, and $\det(D+\mathbf i I_r)=\prod (d_j+\mathbf i)\neq 0$, it follows that $\det Z\neq 0$. Since $\mathfrak h_r$ is simply connected, the second statement follows. For the last statement, we first note that there can be no real eigenvalues. Indeed, by the first statement, $\det(X+\mathbf i Y-\lambda I_r)=\det ((X-\lambda I_r) + \mathbf i Y)\neq 0$ for $\lambda\in\mathbb{R}$. Next we argue as before and select a diagonalizing basis, given by $U\in \mathrm{GL}(r)$. We furthermore may assume that $\det U>0$, by interchanging two basis elements. Choose a smooth path $U_t\in \mathrm{GL}(r)$ with $U_0=\mathrm{Id}$ and $U_1=U$. Then $U_t^TZU_t\in\mathfrak h_r$ is a smooth path. For $t=1$, the endpoint is $D+\mathbf i I_r$, which has all eigenvalues in the upper half plane. If $Z$ has eigenvalues in the lower half-plane, then by continuity for some $t$ there will be a real eigenvalue, a contradiction. \endproof Recall for the following that given a non-degenerate quadratic form $Q$ on $V$, a \textit{compatible Euclidean form} is any positive-definite form $P$ such that $V$ admits a decomposition $V=V_+\oplus V_-$ which is both $P$- and $Q$-orthogonal, and $Q|_{V_\pm}=\pm P|_{V_{\pm}}$. From here on, let $V=\mathbb{R}^{p} \oplus \mathbb{R}^{q}=\mathbb{R}^{n+1}$ with the standard quadratic form $Q$ of signature $(p,q)$ and the corresponding compatible Euclidean form $P_0$. Define a family of complex-valued quadratic forms $Q_\zeta$ on $V$ with $\zeta\in\mathbb C$, by \begin{displaymath} Q_\zeta:=Q+2\zeta P_0.
\end{displaymath} We then have \begin{displaymath} Q_\zeta(x,y):= \begin{cases} (2\zeta+1) P_0(x,y) & x,y \in \mathbb{R}^p\\ (2\zeta-1) P_0(x,y) & x,y \in \mathbb{R}^q\\ 0 & x \in \mathbb{R}^p, y \in \mathbb{R}^q. \end{cases} \end{displaymath} Observe that $Q_\zeta$ is real and positive-definite for $\zeta>\frac12$, and $Q_0=Q$. Furthermore by Lemma \ref{lem:siegel_determinant}, $\det Q_\zeta\neq 0$ for $\zeta\in U_{\mathbb C}$, as either $Q_\zeta$ or $\mathbf i Q_\zeta$ lies in $\mathfrak h_{n+1}$. Note that a complex-valued non-degenerate quadratic form $Q$ on a real vector space $E$ defines an element $\vol_Q^2\in \Dens_\mathbb{C}(E)^2$, and given a branch of square root we also get a complex-valued density $\vol_Q\in \Dens_\mathbb{C}(E)$. For a subspace $E\in\Gr_{n+1-k}(V)$ and $\zeta\in U_{\mathbb C}$, let $X^P_\zeta(E)$ be the Gram matrix of $Q_\zeta|_E$ with respect to a Euclidean structure $P$. It depends on a local choice of a section of $P$-orthonormal bases, and locally defines a map $X^P_\zeta:\Gr_{n+1-k}(V)\to \Sym_{n+1-k}(\mathbb{C})$. Observe that $\det(X^P_\zeta)$ is independent of the choice of $P$-orthonormal bases of $E$. Moreover, either the real or imaginary part of $Q_\zeta|_E$ is positive-definite, and consequently by Lemma \ref{lem:siegel_determinant}, $\det X^P_\zeta(E)\neq 0$. The function $\det(X^P_\zeta)^\lambda\in C^\infty(\Gr_{n+1-k}(V),\mathbb{C})$ is thus well-defined for all $P$ and $\lambda\in\mathbb{C}$, and analytic in $\zeta\in U_{\mathbb C}$, once the normalization $\det(X^P_1)^\lambda>0$ is fixed, as $U_{\mathbb C}$ is simply-connected. Define the smooth measure $\widetilde m^P_\zeta$ on the Grassmannian $\Gr_{n+1-k}(V)$ by \begin{displaymath} d\widetilde m^P_\zeta(E):=\det(X^P_\zeta)^{-\frac{n+1}{2}}(E)d\sigma_P(E), \end{displaymath} where $d\sigma_P(E)$ is the $\OO(P)$-invariant probability measure on the Grassmannian. \begin{Proposition} The complex-valued smooth measure \begin{displaymath} m^\zeta_k:=(2\zeta+1)^{\frac{p(n+1-k)}{2}}(2\zeta-1)^{\frac{q(n+1-k)}{2}}\widetilde m_\zeta^{P_0}, \quad \zeta\in U_{\mathbb C}. \end{displaymath} depends analytically on $\zeta$ and is normalized, i.e. \begin{displaymath} \int_{\Gr_{n+1-k}(V)} dm^\zeta_k=1. \end{displaymath} \end{Proposition} \proof The first statement is clear. For the second, we first see how $\widetilde m^P_\zeta$ depends on $P$. Let $P_1,P_2$ be two Euclidean structures on $V$. From the natural identification $T_E\Gr_{n+1-k}(V)=E^*\otimes V/E$ we obtain that \begin{displaymath} \Dens(T_E\Gr_{n+1-k}(V))=\Dens^*(E)^{n+1}\otimes \Dens(V)^{n+1-k}. \end{displaymath} Spelling this out gives \begin{displaymath} \frac{d\sigma_{P_1}(E)}{d\sigma_{P_2}(E)}=\left(\frac{\vol_{P_1|_E}}{\vol_{P_2|_E}}\right)^{-(n+1)} \left(\frac{\vol_{P_1}}{\vol_{P_2}}\right)^{n+1-k}, \end{displaymath} Since \begin{displaymath} \det X_\zeta^{P_i}(E)=\frac{\vol^2_{Q_\zeta|_E}}{\vol^2_{P_i|_E}}, \end{displaymath} we find that \begin{displaymath} \widetilde m^{P_1}_\zeta=\left(\frac{\vol_{P_1}}{\vol_{P_2}}\right)^{n+1-k} \widetilde m^{P_2}_\zeta. \end{displaymath} For $\zeta>\frac{1}{2}$, $Q_\zeta$ is a Euclidean structure. Then \begin{align*} 1 & =\int \widetilde m_\zeta^{Q_\zeta}= \left(\frac{\vol_{Q_\zeta}}{\vol_{P_0}}\right)^{n+1-k} \int \widetilde m_\zeta^{P_0}\\ & = \sqrt{2\zeta+1}^{p(n+1-k)} \sqrt{2\zeta-1}^{q(n+1-k)} \int \widetilde m_\zeta^{P_0}=\int m^\zeta_k. \end{align*} By analytic continuation, this formula also holds for general $\zeta\in U_{\mathbb C}$. 
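As an elementary consistency check of the normalizing prefactor (the degenerate case $k=0$, which is not used in the sequel): $\Gr_{n+1}(V)=\{V\}$ is a single point, and a $P_0$-orthonormal basis adapted to $V=V_+\oplus V_-$ gives $X^{P_0}_\zeta(V)=\mathrm{diag}((2\zeta+1)I_p,(2\zeta-1)I_q)$, whence \begin{displaymath} m^\zeta_0(\Gr_{n+1}(V))=(2\zeta+1)^{\frac{p(n+1)}{2}}(2\zeta-1)^{\frac{q(n+1)}{2}}\left((2\zeta+1)^{p}(2\zeta-1)^{q}\right)^{-\frac{n+1}{2}}=1.\end{displaymath}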
\endproof \subsection{Construction of an $\OO(p,q)$-invariant Crofton measure} \begin{Lemma}\label{lem_f_lambda} The meromorphic family of generalized functions \begin{displaymath} f_\lambda(X):=\sum_{h=0}^r e^{\mathbf i \pi h\lambda}|\det X|^\lambda_{r-h} \in C^{-\infty}(\Sym_r(\mathbb{R})) \end{displaymath} is analytic in $\lambda\in\mathbb C$ and satisfies \begin{equation} \label{eq_reflection_f_lambda} f_\lambda(-X)=e^{\mathbf i \pi r \lambda} \overline{f_\lambda(X)}. \end{equation} \end{Lemma} \proof We recall some results from \cite{muro99}. Consider a linear combination \begin{displaymath} g_\lambda(X):=\sum_{h=0}^r a_h |\det X|^\lambda_{r-h} \end{displaymath} with constant coefficients $a_h \in \mathbb{C}$ and set $\vec a:=(a_0,\ldots,a_r) \in \mathbb{C}^{r+1}$. Then $g_\lambda \in C^{-\infty}(\Sym_r(\mathbb{R}))$ is meromorphic with possible poles in the set $\left\{-m, -\frac{2m+1}{2}: m\geq 1\right\}$. The order of the pole at $s$ in this set can be obtained as follows. Set $\epsilon=-1$ if $s$ is an even integer and $\epsilon=1$ otherwise. Define inductively linear maps $d^{(m)}=(d^{(m)}_0,\ldots,d^{(m)}_{r+1-m}):\mathbb{C}^{r+1} \to \mathbb{C}^{r+1-m}$ by setting \begin{align*} d^{(0)}_h(\vec a) & := a_h\\ d^{(1)}_h(\vec a) & :=a_h+\epsilon a_{h+1} \\ d_h^{(2l+1)}(\vec a) & := d^{(2l-1)}_h-d^{(2l-1)}_{h+2}, \quad l=1,2,\ldots\\ d_h^{(2l)}(\vec a) & := d^{(2l-2)}_h+d^{(2l-2)}_{h+2}, \quad l=1,2,\ldots \end{align*} Then $g_\lambda$ has a pole of order $p$ at $s=-\frac{2m+1}{2}$ if and only if $d^{2p}(\vec a) \neq 0, d^{2p+2}(\vec a)=0$. Similarly, $g_\lambda$ has a pole of order $p$ at $s=-m$ if and only if $d^{2p-1}(\vec a) \neq 0, d^{2p+1}(\vec a)=0$. Here we use the convention that $d^{(m)}=0$ if $m >r+1$ and that a pole of order $0$ is a point of analyticity. In our situation, the coefficients $a_h=a_h(\lambda)=e^{\mathbf i \pi h \lambda}$ depend on $\lambda$ and we cannot apply Muro's result directly. However, writing \begin{displaymath} f_\lambda(X)=\sum_{h=0}^r a_h(\lambda) |\det X|^\lambda_{r-h}=\sum_{j=0}^\infty \frac{(\lambda-s)^j}{j!} \sum_{h=0}^r a_h^{(j)}(s) |\det X|^\lambda_{r-h}, \end{displaymath} we see that it is enough to prove that the order of the pole of $\sum_{h=0}^r a_h^{(j)}(s) |\det X|^\lambda_{r-h}$ at $\lambda=s$ is at most $j$ for all $j$. By induction we find that for all $l=0,1,\ldots$ \begin{align*} d^{2l}_h(\vec a(\lambda)) & = e^{\mathbf i \pi h \lambda} (1+e^{2\pi \mathbf i \lambda})^l, \\ d^{2l+1}_h(\vec a(\lambda)) & = e^{\mathbf i \pi h \lambda} (1+\epsilon e^{\mathbf i \pi \lambda}) (1-e^{2\pi \mathbf i \lambda})^l, \end{align*} and hence \begin{align*} d^{2j+2}_h\left(\vec a^{(j)}\left(-\frac{2m+1}{2}\right)\right) & = \left.\frac{d^j}{d\lambda^j}\right|_{\lambda=-\frac{2m+1}{2}} e^{\mathbf i \pi h \lambda} (1+e^{2\pi \mathbf i \lambda})^{j+1}=0, \\ d^{2j+1}_h\left(\vec a^{(j)}(-m)\right) & = \left.\frac{d^j}{d\lambda^j}\right|_{\lambda=-m} e^{\mathbf i \pi h \lambda} (1+\epsilon e^{\mathbf i \pi \lambda}) (1-e^{2\pi \mathbf i \lambda})^{j}=0, \end{align*} which finishes the proof. \endproof \begin{Proposition}\label{prop:upper_complex_limit} Let $C' \subset \Sym_r^+(\mathbb{R})$ be a closed convex cone. Then \begin{displaymath} \lim_{Y\to 0, Y \in C'} \det(X+\mathbf i Y)^\lambda=f_\lambda(X) \end{displaymath} in the strong topology on tempered distributions on $\Sym_r(\mathbb{R})$. \end{Proposition} \proof For $\Re\lambda\geq 0$ the statement is easy, so in the following we assume $\Re\lambda<0$. First we claim that the limit exists. 
We will show that there are constants $\alpha=\alpha(\lambda)\geq 0$ and $b'=b(\lambda,C')$ such that \begin{equation}\label{eq:det_bound} |\det(X+\mathbf i Y)^\lambda| \leq b' \|Y\|^{-\alpha},\qquad \forall X\in\Sym_r(\mathbb{R}), \forall Y\in C'\setminus\{0\} \end{equation} First take $Y=I_r$. Letting $(\mu_j)_{j=1}^r\subset\mathbb{R}$ be the eigenvalues of $X$, we get \begin{align*} |\det(X+\mathbf i I_r)^\lambda| &= \prod|\mu_j+\mathbf i |^{{\mathrm{Re}}\lambda} e^{-\mathrm{Im}\lambda\cdot\mathrm{Arg}(\mu_j+i)}\\ % & \leq e^{\pi r|\mathrm{Im}\lambda|}\prod(\mu_j^2+1)^{\frac{\Re\lambda}{2}}\\ % & \leq e^{\pi r|\mathrm{Im}\lambda|}. \end{align*} Now for general $Y$, we have \begin{displaymath} |\det (X+\mathbf i Y)^\lambda|=|\det Y|^{\Re\lambda}|\det(\sqrt Y^{-1} X \sqrt Y^{-1} +\mathbf i I)^\lambda| \leq e^{\pi r {|\mathrm {Im}\lambda|}}|\det Y|^{\Re\lambda}, \end{displaymath} and letting $c:=\sup\left\{ \frac{\|Y\|^r}{|\det Y|}: Y\in C'\right\}$, we conclude that \eqref{eq:det_bound} holds with $b'=c^{ -\Re\lambda} e^{\pi r |\mathrm {Im}\lambda|}$, and $\alpha=-r\Re\lambda\geq 0$. It now follows from \cite[Section 26.3]{vladimirov66} that the limit \begin{displaymath} \det(X+\mathbf i 0)^\lambda:=\lim_{Y\to 0, Y \in C'}\det(X+\mathbf i Y)^\lambda\in \mathcal S' \end{displaymath} exists in the strong topology on the space of tempered distributions of order $\lceil r|\Re\lambda|\rceil+{r+1\choose 2}+3$. It remains to verify that $\det(X+\mathbf i 0)^\lambda=f_\lambda(X)$ for $\Re\lambda<0$. Denote \begin{displaymath} H_\epsilon=\{X+\mathbf i\epsilon I_r: X\in \Sym_r(\mathbb{R})\}\subset \Sym_r(\mathbb{C}). \end{displaymath} Let $\psi(X)$ be a Schwartz function on $\Sym_r(\mathbb{R})$, which is the Fourier transform of a compactly supported smooth function, in particular it has an analytic extension to $\Sym_r(\mathbb{C})$. Writing $dZ=\mbox{\Large $\wedge$}_{i=1}^r \mbox{\Large $\wedge$}_{j=i}^r dz_{ij}$, the integral $\int_{H_\epsilon} \psi(Z)\det(Z)^\lambda dZ$ is convergent, since $\psi$ is rapidly decaying at infinity and $\det(Z)^\lambda$ of polynomial growth. It is clearly analytic in $\lambda\in \mathbb{C}$. Furthermore, its value is independent of $\epsilon$ as the integrand is a closed form, rapidly decaying at infinity. For $\lambda>0$, we have \begin{displaymath} \det(X+\mathbf i \epsilon I_r)^\lambda=\prod_{j=1}^r (\mu_j+\mathbf i \epsilon)^\lambda \to \prod |\mu_j|^\lambda e^{\mathbf i\pi \#\{\mu_j<0\}\lambda}=f_\lambda(X), \end{displaymath} and so \begin{displaymath} \int_{H_\epsilon} \psi(Z)\det(Z)^\lambda dZ\to \int_{\Sym_r(\mathbb{R})} \psi(X) f_\lambda(X)dX \end{displaymath} for $\lambda>0$. By analytic extension we conclude that for all $\lambda\in\mathbb{C}$ and $\epsilon>0$, \begin{displaymath} \int_{H_\epsilon} \psi(Z)\det(Z)^\lambda dZ =\int_{\Sym_r(\mathbb{R})} \psi(X) f_\lambda(X)dX, \end{displaymath} that is \begin{displaymath} \int_{\Sym_r(\mathbb{R})} \psi(X+\mathbf i \epsilon I_r)\det(X+\mathbf i \epsilon I_r)^\lambda dX =\int_{\Sym_r(\mathbb{R} )} \psi(X) f_\lambda(X)dX. \end{displaymath} As $\epsilon\to 0$, we have $\psi(X+\mathbf i \epsilon I_r)\to \psi(X)$ in $\mathcal S$, while $\det(X+\mathbf i \epsilon I_r)^\lambda\to\det(X+\mathbf i 0)^\lambda$ in $\mathcal S'$. It follows by continuity that \begin{displaymath} \int_{\Sym_r(\mathbb{R})} \psi(X)\det(X+\mathbf i 0)^\lambda dX = \int_{\Sym_r(\mathbb{R})} \psi(X) f_\lambda(X)dX. 
\end{displaymath} Finally, noting that the set of Schwartz functions such as $\psi$ is dense, we conclude that $\det(X+\mathbf i 0)^\lambda=f_\lambda$ for all $\lambda\in\mathbb{C}$, as claimed. \endproof Henceforth we use $f_\lambda$ and $\det(X+\mathbf i 0)^\lambda$ interchangeably. The following statement shows that the convergence along $Y\in \mathbb{R}_+ I_r$ holds in a finer topology, namely the \emph{normal H\"ormander topology}. We refer to \cite{brouder_dang_helein} for its definition (where it is called \emph{normal topology}). The main point for us is that the operation of pull-back of generalized sections is continuous in this topology, provided some condition on wave fronts is satisfied. We do not know if an analogue of the following proposition holds for an arbitrary distributional boundary value; the proof below is tailored to our particular case, and in essence leverages strong convergence by induction on dimension. \begin{Proposition}\label{prop:hormander_convergence} Denote $N^*\Gamma^r=\cup_{\nu=0}^r N^* \Gamma^r_\nu\subset T^*\Sym_r(\mathbb{R})$, where $\Gamma^r_\nu$ consists of all matrices of nullity $\nu$. It then holds for all $\lambda\in \mathbb C$ that $\det(X+\mathbf i\epsilon I_r)^\lambda\to f_\lambda(X)$ in $C^{-\infty}_{N^*\Gamma^r}(\Sym_r(\mathbb{R}))$ in the normal H\"ormander topology . \end{Proposition} \proof First note that $g^*f_\lambda=\det(g)^{2\lambda}f_\lambda$ for all $g\in\GL(r)$. Thus we have the differential equations $(\underline A-2\lambda\tr(A))f_\lambda=0$, where $\underline A$ is the vector field defined by the infinitesimal action of $A\in\mathfrak{gl}(r)$. It follows from \cite[Theorem 8.3.1]{hoermander_pde1} that $\WF(f_\lambda)\subset N^*\Gamma^r$. We proceed by induction on $r$, the case $r=1$ being trivial. Since $\det(X+\mathbf i\epsilon I_r)^\lambda\to \det (X+\mathbf i0)^\lambda$ in the strong topology by Proposition \ref{prop:upper_complex_limit} and $N^*_0\Gamma^r_r=T_0^*\Sym_r(\mathbb{R})$, it remains to consider convergence in $\Sym_r(\mathbb{R})\setminus\{0\}$. Consider a matrix $Y\in\Sym_r(\mathbb{R})$ of nullity $\nu<r$. Let $E_0$ be its kernel, and $F_0=E_0^\perp$. There is then a unique map $E:U\to \Gr_\nu(\mathbb{R}^r)$ in a neighborhood $U$ of $Y$ such that $E(Y)=E_0$, and $E(X)$ is an invariant subspace of $X$. Here and in the following, $U$ is assumed sufficiently small for various purposes. We claim $E=E(X)$ is smooth. Indeed, consider $Z=\{(X,F): X(F)= F\}\subset\Sym_r(\mathbb{R})\times\Gr_{r-\nu}(\mathbb{R}^r)$. Clearly $Z$ is the graph of a unique function $F=F(X)$ near $(Y, F_0)$. Let us check that $Z$ is a manifold near $(Y, F_0)$. Define $\alpha:U\times\Gr_{r-\nu}(\mathbb{R}^r)\to\Gr_{r-\nu}(\mathbb{R}^r)\times\Gr_{r-\nu}(\mathbb{R}^r)$ by $\alpha(X, F)=(F, X(F))$. Then $Z=\alpha^{-1}( \Delta)$, where $\Delta$ is the diagonal. Let us verify that $\alpha$ is a submersion at $(Y, F_0)$. For $M\in\Sym_r(\mathbb{R})$ and $H\in T_{F_0}\Gr_{r-\nu}(\mathbb{R}^r)=\mathrm{Hom}(F_0,\mathbb{R}^r/F_0)$, one computes $d_{Y,F_0}\alpha(M, H)=(H, Y\circ H+M|_{F_0\to \mathbb{R}^r/F_0})=(H, M|_{F_0\to \mathbb{R}^r/F_0})$, since by construction $Y: \mathbb{R}^r/F_0\to \mathbb{R}^r/F_0$ is the zero map. Noting that any linear map $F_0\to \mathbb{R}^r/F_0$ is induced by a symmetric matrix mapping $M:\mathbb{R}^r \to \mathbb{R}^r$, it follows that $\alpha$ is submersive and $Z$ is a manifold. Further, \begin{displaymath} T_{Y,F_0}Z=\{(M,H):M\in\Sym_r(\mathbb{R}), H=M|_{F_0\to \mathbb{R}^r/F_0} \}. 
\end{displaymath} In particular if $(0, H)\in T_{Y,F_0}Z$, then we must have $H=0$. It follows that $F(X)$ is smooth in $U$, and therefore so is $E(X)=F(X)^\perp$. Choose arbitrary orthonormal frames $e_i(X)$ for $E(X)$ and $f_i(X)$ for $F(X)=E(X)^\perp$ depending smoothly on $X$. Define \begin{displaymath} A:U\to\Sym_\nu(\mathbb{R}),\quad B:U\to \Sym_{r-\nu}(\mathbb{R}) \end{displaymath} by $$A(X)=(\langle Xe_i(X), e_j(X)\rangle ),\quad B(X)=(\langle X f_i(X), f_j(X)\rangle ).$$ Then $A$ is a submersion in $U$. Indeed, one has \begin{displaymath} d_YA(M)_{i,j}=\langle M e_i(Y), e_j(Y)\rangle+\langle Ye_i(Y), d_Ye_j(M)\rangle +\langle Ye_j(Y), d_Ye_i(M)\rangle, \end{displaymath} and the last two summands vanish as $e_i(Y), e_j(Y)\in E_0$. It follows that $d_YA:\Sym_r(\mathbb{R})\to \Sym_\nu(\mathbb{R})$ is surjective, and so $A$ is submersive near $Y$. It holds that \begin{displaymath} \det(X+\mathbf i\epsilon I_r)^\lambda=A^*\det(X_1+\mathbf i\epsilon I_\nu) ^\lambda \det(B(X)+\mathbf i\epsilon I_{r-\nu})^\lambda, \quad X_1 \in \Sym_{\nu}(\mathbb{R}). \end{displaymath} As $B(X)$ is non-degenerate, the second factor is a smooth function in $(X,\epsilon)\in U\times\mathbb{R}$. For the first factor, we have by the induction assumption that $\det(X_1+\mathbf i\epsilon I_\nu) ^\lambda\to \det(X_1+\mathbf i0)^\lambda$ in the normal topology on $C^{-\infty}_{N^*\Gamma^\nu}(\Sym_\nu(\mathbb{R}))$. It then holds that \begin{displaymath}\WF(A^*\det(X_1+\mathbf i0)^\lambda)\subset A^*(N^*\Gamma^\nu) =N^*(A^{-1}\Gamma^\nu)= N^*(\Gamma^r\cap U),\end{displaymath} and by \cite{brouder_dang_helein}, $A^*\det(X_1+\mathbf i\epsilon I_\nu)^\lambda\to A^*\det (X_1+\mathbf i0)^\lambda$ in the normal H\"ormander topology on $C^{-\infty}_{N^*\Gamma^r}(\Sym_r(\mathbb{R}))$. We conclude that \begin{displaymath}\det(X+\mathbf i\epsilon I_r)^\lambda\to \det(X+\mathbf i0)^\lambda \end{displaymath} in the normal H\"ormander topology on $C^{-\infty}_{N^*\Gamma^r}(\Sym_r(\mathbb{R}))$. \endproof \begin{Remark} Using the Hilbert-Schmidt inner product to identify $T_0^*\Sym_r(\mathbb{R})=\Sym_r(\mathbb{R})$, the statement of the proposition in fact holds with all conormal cones intersected with $\Sym_r^+(\mathbb{R})$, which follows from \cite[Theorem 8.1.6]{hoermander_pde1}. \end{Remark} In \cite[Proposition 4.9]{faifman_crofton}, an $\OO(p,q)$-invariant distribution was constructed on $\Gr_{n+1-k}(V)$. Let us briefly recall the construction. Write \begin{displaymath} \kappa=\min(p,q,k,n+1-k),\quad p'=\max(0, p-k),\quad q'=\max(0, q-k). \end{displaymath} Then \begin{equation} \label{eq_equation_kappa} \kappa+p'+q'=n+1-k. \end{equation} Let $P$ be a $Q$-compatible Euclidean structure, and let $\sigma_P$ be the corresponding $\OO(P)$-invariant probability measure on $\Gr_{n+1-k}(V)$. Decompose $V=V_P^+\oplus V_P^-$ such that $Q|_{V_P^\pm}=\pm P|_{V_P^\pm}$. For a subspace $E\in\Gr_{n+1-k}(V)$, choose a $P$-orthonormal basis $u_1,\dots, u_\kappa, v^+_1,\dots, v^+_{p'}, v^-_1,\dots, v^-_{q'}$ such that $v^{\pm}_\nu\in V^\pm_P$. Let $\widehat X^{P}_\zeta(E)$ be the Gram matrix of $Q_\zeta$ with respect to $P$, restricted to $\Span(u_\nu)$. It holds that $\det X_\zeta^P(E)=(2\zeta+1)^{p'}(2\zeta-1)^{q'}\det \widehat X^P_\zeta(E)$. Then $\widetilde m_k^0|_{U_P}:=(\widehat X^P_0)^* f_{-\frac {n+1}2}\cdot d\sigma_{P}(E)$ is a well-defined distribution in the open and dense set $U_P$ where $\widehat X_0^P:\Gr_{n+1-k}(V)\to \Sym_\kappa(\mathbb{R})$ is a submersion, explicitly $U_P=\{E: \pm1\notin\mathrm{Spec}(X^P_0(E))\}$.
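To illustrate \eqref{eq_equation_kappa} with a quick example (an aside, not needed below): for the Lorentzian signature $p=n$, $q=1$ and hyperplanes, $k=1$, one has $\kappa=\min(n,1,1,n)=1$, $p'=n-1$, $q'=0$, while for lines, $k=n$ (assuming $p,q\geq 1$), one has $\kappa=1$, $p'=q'=0$; in both cases \begin{displaymath} \kappa+p'+q'=n+1-k, \end{displaymath} as claimed.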
We now choose several Euclidean structures $P_j$ as above, $1\leq j\leq N$, such that the sets $U_{P_j}$ cover the Grassmannian. Then $\widetilde m_k^0|_{U_{P_j}}$ patch to a globally defined distribution $\widetilde m_k^0$, which is invariant under $\OO(Q)$. We will be approximating $\widetilde m_k^0$ by smooth measures. For this purpose we will need to simplify the construction above, namely we will only use a single Euclidean structure. This is made possible by monitoring the wave front set. Write $P=P_0$, $X_0=X^{P}_0$, $\widehat X_0=\widehat X^{P}_0$, $dE=d\sigma_{P}(E)$. \begin{Proposition}\label{prop:m0_explicit} The distribution $$\widetilde m^0_k:=\widehat X_0^*f_{-\frac{n+1}{2}}(E)\cdot dE\in\mathcal M^{-\infty}(\Gr_{n+1-k}(V))$$ is well-defined and $\OO(Q)$-invariant. \end{Proposition} \proof Denote by $\mathrm{mult}(\mu, X)$ the multiplicity of $\mu$ in the spectrum of $X$, and by $E_\mu(X)$ the eigenspace of $X$ with eigenvalue $\mu$. Consider \begin{displaymath} B^a_\mu=\{X: \mathrm{mult}(\mu, X)=a\}\subset\Sym_\kappa(\mathbb{R}). \end{displaymath} This is a locally closed submanifold: $B^a_0$ locally coincides with an orbit of the action of $\GL(\kappa)$ on $\Sym_\kappa(\mathbb{R})$ by $(g, X)\mapsto g^TXg$, and $B^a_\mu=\mu I+B^a_0$. Using the Hilbert-Schmidt Euclidean structure $\langle X,Y\rangle=\tr(XY)$ to identify $T_X\Sym_\kappa(\mathbb{R})=T_X^*\Sym_\kappa(\mathbb{R})=\Sym_\kappa(\mathbb{R})$, let us describe the set $N^*_XB_\mu^a$. As $T_XB_0^a=\{A^TX+XA: A\in\mathfrak{gl}_\kappa(\mathbb{R})\}$, we have $\Xi\in N^*_XB_0^a\iff \tr(\Xi A^T X+\Xi X A)=0$ for all $A$, or equivalently $N^*_XB_0^a=\{\Xi\in\Sym_\kappa(\mathbb{R}): \Xi X=0\}$. It follows that $N^*_XB_\mu^a= N^*_{X-\mu I}B_0^a=\{\Xi: \Xi X=\mu \Xi\}$, and one easily computes that $\mathrm{codim} B_\mu^a=\binom{a+1}{2}$. Let us check that $\widetilde m_k^0$ is well-defined. For this purpose, writing \begin{displaymath} L_{E,Y}=\mathrm{Ker}(d\widehat X_0^*:T^*_Y\Sym_\kappa(\mathbb{R})\to T_E^*\Gr_{n+1-k}(V)), \end{displaymath} we should check \cite{duistermaat_book96} that $\WF(f_{\lambda})\cap L_{E,Y}=\emptyset$ whenever $\widehat X_0(E)=Y$, where $\lambda=-\frac{n+1}{2}$. We may assume $Y$ lies in the singular support of $f_{\lambda}$, and $\widehat X_0$ is not submersive at $E$. This implies that $Y\in B^r_0\cap B^s_1\cap B^t_{-1}$ with $r>0$ and $s+t>0$. The intersections $B^{s,t}_{1,-1}:=B^s_1\cap B^t_{-1}$ and $B^r_0\cap B^{s,t}_{1,-1}$ are transversal. Indeed, $N_Y^*B_1^s\cap N_Y^*B_{-1}^t=\{\Xi: \Xi Y=\Xi =-\Xi \}=\{0\}$, so $B^s_1\pitchfork B^t_{-1}$, $N^*_YB^{s,t}_{1,-1}=\{ \Xi_1+\Xi_2: \Xi_1Y=\Xi_1, \Xi_2Y=-\Xi_2\}$ and $\mathrm{codim}T_YB^{s,t}_{1,-1}=\mathrm{codim}T_Y B_1^s+\mathrm{codim}T_Y B_{-1}^t={s+1\choose 2}+{t+1\choose 2}$. Next if $\Xi= \Xi_1+\Xi_2\in N^*_YB^{s,t}_{1,-1}\cap N_Y^*B_0^r$, then $\Xi_1-\Xi_2=\Xi_1Y+ \Xi_2Y=\Xi Y=0$, so that $ \Xi_1= \Xi_2$, which can only happen if $\Xi_1=\Xi_2=0$, thus $\Xi=0$. Therefore $B_0^r \pitchfork B_{1,-1}^{s,t}$. By Proposition \ref{prop:hormander_convergence}, $\WF_Y(f_\lambda)\subset N_Y^*B_0^r$. Set $E_Y:=E_1(Y)\oplus E_{-1}(Y)$, and define $W\subset B^{s,t}_{1,-1}$ as the set of matrices $X$ of the same spectrum as $Y$ that satisfy $E_\mu(X)=E_\mu(Y)$ for all $\mu\neq \pm1$. It is clearly a manifold that can be identified with $\Gr_{s}(E_Y)=\Gr_s(\mathbb{R}^{s+t})$, in particular $\dim W=st$. Evidently $W\subset B^r_0$. 
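As a low-dimensional sanity check of the above codimension counts (not needed for the argument): in $\Sym_2(\mathbb{R})$ the condition $\mathrm{mult}(\mu,X)=2$ forces $X=\mu I_2$, a single point, which indeed has codimension $3=\binom{3}{2}$; and for $s=t=1$ the dimension identity used below reads \begin{displaymath} \binom{s+1}{2}+\binom{t+1}{2}+st=1+1+1=3=\binom{s+t+1}{2}. \end{displaymath}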
For $1\leq j\leq \kappa$, choose a curve $\gamma_j$ through $E$ given by \begin{displaymath} \gamma_j(t)=\Span(u_1,u_2,\dots,u_j(t),\ldots,u_\kappa, v^+_1,..., v^+_{p'}, v_1^-, \dots, v_{q'}^-) \end{displaymath} with all vectors orthonormal and fixed except for $u_j$, such that $u_{\kappa-s-t+1},\dots, u_{\kappa-t}\in V_P^+$, $u_{\kappa-t+1},\dots, u_{\kappa}\in V_P^-$, and $\xi=u_j'(0)\in E^P$ arbitrary. By \cite[Lemma 4.1]{faifman_crofton}, we have \begin{displaymath} d_E\widehat X_0 (\gamma_j'(0))= \begin{pmatrix} 0&\cdots &0 &Q(\xi, u_1)&0&\cdots&0 \\ \vdots &\ddots&\vdots &\vdots&\vdots&\ddots &\vdots \\ 0&\cdots & 0& Q(\xi, u_{j-1}) &0&\cdots&0\\ Q(\xi, u_1)&\cdots &Q(\xi, u_{j-1})& Q(\xi, 2u_j) & Q(\xi, u_{j+1}) & \cdots & Q(\xi,u_\kappa) \\ 0&\cdots & 0& Q(\xi, u_{j+1})&0 & \cdots&0\\ \vdots &\ddots&\vdots &\vdots&\vdots&\ddots&\vdots \\ 0&\cdots &0 &Q(\xi, u_\kappa)&0&\cdots&0 \end{pmatrix} \end{displaymath} Note that $E\cap(E^P)^Q=\Span(u_{\kappa-s-t+1},\dots, u_\kappa, v^+_1,..., v^+_{p'}, v^-_1,..., v_{q'}^-)$ by assumption. Hence $Q(\xi, u_1), Q(\xi,u_2),\ldots, Q(\xi,u_{\kappa-s-t})$ are linearly independent functionals in $\xi\in E^P$. Thus the bottom right $(s+t)\times (s+t)$ submatrix of $d_E \widehat X_0 (\gamma_j'(0))$ vanishes, while all entries on row $j$ in the first $(\kappa-s-t)$ columns are arbitrary. It follows that $\mathrm{codim}(\mathrm{Image}(d_E\widehat X_0))={s+t+1\choose 2}$, and $\mathrm{Image}(d_E\widehat X_0)\cap T_YW=\{0\}$. As $-P\leq Q\leq P$, it holds that $\mathrm{Image}(d_E\widehat X_0)\subset T_YB^{s,t}_{1,-1}$. Furthermore ${s+1\choose 2}+{t+1\choose 2}+st={s+t+1\choose 2}$, so that $\dim \mathrm{Image}(d_E\widehat X_0)+\dim T_{Y}W=\dim T_Y B^{s,t}_{1,-1}$, and so \begin{displaymath} \mathrm{Image}(d_E\widehat X_0)\oplus T_YW=T_Y B^{s,t}_{1,-1}. \end{displaymath} Since $T_YW\subset T_YB^r_0$ and $T_YB^r_0 + T_Y B^{s,t}_{1,-1}=T_Y\Sym_\kappa(\mathbb{R})$, we conclude that $ \mathrm{Image}(d_E\widehat X_0)+T_YB^r_0=T_Y\Sym_\kappa(\mathbb{R})$ and so $N^*B_0^r\cap L_{E,Y}=\{0\}$. We conclude that $\WF(f_{\lambda})\cap L_{E,Y}=\emptyset$, and so $\widetilde m_k^0$ is well-defined. As the proof above is independent of the value of $\lambda$, invariance under $\OO(Q)$ follows by analytic continuation as in the original construction in \cite{faifman_crofton}, and we omit the details. \endproof \begin{Definition}\label{def_tilde_m} Set \begin{displaymath} m_k:=e^{\frac{\mathbf i\pi}{2} \left((n+1)\min(k,q)-qk\right)}\widetilde m_k^0\in\mathcal M^{-\infty}(\Gr_{n+1-k}(V))^{\OO(Q)}. \end{displaymath} \end{Definition} \begin{Lemma}\label{lem:crofton_sign} Let $j\colon \mathbb{R}^{p,q}\to\mathbb{R}^{q,p}$ be given by $j(x,y)=(y,x)$ where $x\in \mathbb{R}^p,y\in\mathbb{R}^q$. Let us also denote by $j$ the induced map $\Gr_{p+q-k}(\mathbb{R}^{p,q})\to \Gr_{p+q-k}(\mathbb{R}^{q,p})$. Then \begin{displaymath} j^*m_k=\overline{m_k}. \end{displaymath} \end{Lemma} \begin{proof} We have $m_k=\mathbf i^{(n+1)\min(k,p)-pk} {\widehat X_0^*f_{-\frac{n+1}2}}d\sigma_{P_0}$ on $\mathbb{R}^{q,p}$. Since $\widehat X_{ 0} \circ j=-\widehat X_{ 0}$, \eqref{eq_reflection_f_lambda} implies that $j^*\widehat X_{ 0}^*f_{-\frac{n+1}2}=\mathbf i^{-\kappa(n+1)}\overline{\widehat X_{ 0}^*f_{-\frac{n+1}2}}$. 
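(As a quick illustration of \eqref{eq_reflection_f_lambda}, consider the simplest case $r=1$ with $\lambda$ real, where $f_\lambda(x)=x_+^\lambda+e^{\mathbf i\pi\lambda}x_-^\lambda$ is the classical distribution $(x+\mathbf i 0)^\lambda$; then \begin{displaymath} f_\lambda(-x)=x_-^\lambda+e^{\mathbf i\pi\lambda}x_+^\lambda=e^{\mathbf i\pi\lambda}\left(x_+^\lambda+e^{-\mathbf i\pi\lambda}x_-^\lambda\right)=e^{\mathbf i\pi\lambda}\,\overline{f_\lambda(x)}, \end{displaymath} in accordance with the lemma.)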
Using \eqref{eq_equation_kappa} we get \begin{align*} j^*m_k&=\mathbf i^{(n+1)\min(k,p)-pk-\kappa(n+1)} \overline{\widehat X_{ 0}^*f_{-\frac{n+1}2}}d\sigma_{P_0}\\ &=\mathbf i^{-(n+1)\min(k,q)+qk}\overline{\widehat X_{ 0}^*f_{-\frac{n+1}2}}d\sigma_{P_0} \end{align*} which is the conjugate of $m_k$ in $\mathbb{R}^{p,q}$. \end{proof} \begin{Proposition}\label{prop:boundary_value_grassmannian} Define \begin{displaymath} N^*\Lambda:=\cup_{\nu\geq1}N^*\Lambda^\nu_{n+1-k}(V). \end{displaymath} \begin{enumerate} \item The wave front set of $m_k$ is contained in $N^*\Lambda$. \item $m^{\mathbf i\epsilon}_k\to m_k$ in $\mathcal M_{N^*\Lambda}^{-\infty}(\Gr_{n+1-k}(V))$ as $\epsilon\to 0^+$ in the normal H\"ormander topology. \end{enumerate} \end{Proposition} \proof Write $\lambda=-\frac{n+1}{2}$. For $\zeta\in U_{\mathbb C}$ we have $$(2\zeta+1)^{-\frac{n+1-k}{2}p}(2\zeta-1)^{-\frac{n+1-k}{2}q}m^\zeta_k(E)= \det (X^{P_0}_\zeta)^{\lambda}dE.$$ We compute using the special basis described above adapted to $P_0$, \begin{align*}\det(X^{P_0}_{\mathbf i\epsilon}(E))^{\lambda}&= \det(X_{0}(E)+2\mathbf i\epsilon I_{n+1-k} )^{\lambda}=X_0^*\det(X+2\mathbf i\epsilon I_{n+1-k})^{\lambda}\\&=(1+2\mathbf i\epsilon)^{\lambda p'}(-1+2\mathbf i\epsilon)^{\lambda q'}\widehat X_0^*\det(X+2\mathbf i\epsilon I_\kappa)^\lambda.\end{align*} By Proposition \ref{prop:hormander_convergence}, we have $\det(X+2\mathbf i\epsilon I_\kappa)^{\lambda}\to f_{\lambda}(X)$ in the normal H\"ormander topology on $C^{-\infty}_{\Gamma^\kappa}(\Sym_\kappa(\mathbb{R}))$. By the proof of Proposition \ref{prop:m0_explicit}, we may use the continuity of the pull-back $\widehat X_0^*$ in the normal H\"ormander topology \cite{brouder_dang_helein}. Noting that $\widehat X_0^{-1}(\Gamma^\kappa_\nu)\subset \Lambda_{n-k+1}^\nu(V)$ so that $\widehat X_0^*N^*\Gamma^{\kappa}\subset N^*\Lambda$, we find that $$ \widehat X_0^*\det(X+2\mathbf i\epsilon I_\kappa)^{\lambda} dE\to \widetilde m_k^0$$ in the normal H\"ormander topology as stated. \endproof \begin{Corollary}\label{cor:can_compute_on_LC} Let $M \subset V^{n+1}$ be a pseudosphere or a pseudohyperbolic space, and $A\subset M$ either a smooth domain with LC-regular boundary, or a smooth LC-regular hypersurface without boundary. Assume all $Q$-degenerate tangents to $A$ of codimension $k$ are regular. Then \begin{equation}\label{eq:crofton_general}\Cr(m_k)(A)=\int_{\Gr_{n+1-k}(V)}\chi(A\cap E)dm_k(E). \end{equation} \end{Corollary} \proof First note that $\Cr(m_k)\in\mathcal V^{-\infty}(M)$ is isometry invariant and by \cite[Theorem C]{bernig_faifman_solanes_part2} is given by a linear combination of the intrinsic volumes. Thus $A$ is WF-transversal to $\Cr(m_k)$. The assertion now follows from Corollary \ref{cor:CrWF_of_LC}, and Propositions \ref{prop:boundary_value_grassmannian} part i) and \ref{prop:apply_crofton_general}. \endproof \begin{Corollary}\label{cor:can_compute_mu1} Let $M^{n}\subset V^{n+1}$ be a pseudosphere or a pseudohyperbolic space, and $A\subset M$ either a smooth domain or a hypersurface without boundary. Denote $H=H(A)$. \begin{enumerate} \item Assume that for each $x\in H$, $H$ is either pseudo-Riemannian at $x$ or tangentially regular at $x$. Then \eqref{eq:crofton_general} holds for $k=1$. \item If $\mathbb P(H)\subset\mathbb P(V)$ is strictly convex, then \eqref{eq:crofton_general} holds for all $k$. 
\end{enumerate} \end{Corollary} \proof In both cases, it follows from \cite[Lemma 4.7]{bernig_faifman_solanes} that $H$ is LC-regular, and we can apply Corollary \ref{cor:can_compute_on_LC}. \endproof \textbf{Example.} The complex-valued distribution $m_{n}\in\mathcal M^{-\infty}(\mathbb P(V))$ is invariant under the group of projective transformations preserving the quadric $[Q]=\{Q=0\}$. Its singular support is $[Q]$, and $\WF(m_{n})\subset N^*[Q]$. In particular, $m_{n}(A)$ is well defined for any domain $A\subset \mathbb P(V)$ that is smooth near $[Q]$ and transversal to it. When $Q$ is definite, the quadric $[Q]$ has no real points and $m_{n}$ is the Haar measure on the round projective space. \subsection{The flat case} Next we construct a translation- and $\OO(p,q)$-invariant distribution on the affine Grassmannian $\overline\Gr_{p+q-k}(\mathbb{R}^{p,q})$. \begin{Proposition}\label{prop:derivative_crofton} Let $P$ be a $Q$-compatible Euclidean structure in $W=\mathbb{R}^{p+1,q}=W_+\oplus W_-$. Let $x\in W_+\cap S^{p,q}$, $T=T_xS^{p,q}$, and define \begin{displaymath} s\colon\overline{\Gr}_{p+q-k}(T) \longrightarrow\Gr_{p+q+1-k}(W),\qquad s(v+F)=F\oplus \mathbb{R}(x+v),\quad {F\in\Gr_{p+q-k}(T)}, \end{displaymath} which is a diffeomorphism onto its open image. Given $t>0$, consider the homothety $v\mapsto tv$ on $T$ and the induced map $h_t$ on $\overline{\Gr}_{p+q-k}(T)$. \begin{enumerate} \item[i)] Let $dE$ be an $\OO(P)$-invariant measure on $\Gr_{p+q+1-k}(W)$, thus given by a smooth density. Then \begin{displaymath} d\overline F=\frac1{k!}\left.\frac{d^k}{dt^k}\right|_{t=0}h_{t}^* s^*dE \end{displaymath} is an $\overline{\OO(P|_T)}$-invariant measure on $\overline\Gr_{p+q-k}(T)$. \item[ii)] Let $\widehat X_0\colon\Gr_{p+q+1-k}(W)\to \Sym_{\kappa}(\mathbb{R})$ where $\kappa=\min(p+1,q,k,p+q+1-k)$ be as in Proposition \ref{prop:m0_explicit}, and let $\widehat X_0'$ be the corresponding map on $\Gr_{p+q-k}(T)$. Then \begin{displaymath} \frac1{k!}\left.\frac{d^k}{dt^k}\right|_{t=0} (h_{1/t})_* (s^{-1})_* (\widehat X_0^*f_{-\frac{p+q+1}2}(E)dE)=\widehat X_0'^* f_{-\frac{p+q+1}2}(F) d\overline F, \end{displaymath} and this generalized measure is $\overline{\OO(Q|_T)}$-invariant. Here $(s^{-1})_*$ denotes the push-forward by $s^{-1}$ of the restriction to the open set $\mathrm{Image}(s)$. \end{enumerate} \end{Proposition} \begin{proof} \begin{enumerate} \item For $g \in \OO(P|_T)=\Stab_{\OO(P)}(x)\subset \OO(P)$, since $g\circ s\circ h_t=s\circ h_t\circ g$ and $g^* dE=dE$, we have $g^*h_{t}^* s^*dE=h_t^* s^*g^*dE=h_t^* s^*dE$, which yields $\OO(P|_T)$-invariance. As for translation invariance, let $\rho_U\colon U\times \mathbb{R}^k\to \overline{\Gr}_{p+q-k}(T)$ be a local trivialization of the bundle $\pi\colon\overline{\Gr}_{p+q-k}(T)\to{\Gr_{p+q-k}}(T)$, and put $\eta=\rho_U^*s^*dE$. Then $h_t\circ \rho_U(F,w)=\rho_U(F,tw)$ and thus \begin{displaymath} \rho_U^*h_t^*s^*(dE)_{(F,w)}=t^k\eta_{(F,tw)}=t^k\eta_{(F,0)} +O(t^{k+1}). \end{displaymath} Since the induced action of a translation of $T$ on $U\times \mathbb{R}^k$ has the form $(F,w)\mapsto (F, w+\varphi(F))$, the translation invariance of $d\overline F$ follows. \item On each $E\in\Gr_{p+q+1-k}(W)$ take a $P$-orthonormal basis $u_1,\dots, u_\kappa$, $v^+_1,\dots, v^+_{p'}$, $v^-_1,\dots, v^-_{q'}$ such that $v_i^+\in W_+, v_i^-\in W_-$. Let $X_0$ be the Gram matrix of $Q$ on this basis, and let $\widehat X_0$ be the submatrix corresponding to the $u_i$.
Then $X_0=\diag(\widehat X_0, 1,\stackrel{p'}{\ldots}, 1,-1,\stackrel{q'}{\ldots},-1)$. In particular $X_0$ and $\widehat X_0$ have the same determinant up to sign, and their signatures differ by $(p',q')$. Let now $f_1,\ldots, f_{p+q-k}$ be a $P$-orthonormal basis of $F\in \Gr_{p+q-k}(T)$, and let $w\in T$ be $P$-orthogonal to $F$. A $P$-orthonormal basis of $s(tw+F)$ is $(1+t^2P(w))^{-\frac12}(x+tw), f_1,\ldots, f_{p+q-k}$. Hence, the Gram matrices $X_0',X_0$ of $Q$ restricted to $F,s(tw+F)$ have the same signature and $$\det X_0=(1+t^2P(w))^{-1}Q(x+tw)\det X_0'.$$ Moreover, arguing as in the previous paragraph we have $\det X_0'=\pm\det\widehat X_0'$ and $\sign(X_0')=\sign(\widehat X_0')+(p',q')$ whenever $Q(x+tw)>0$. Therefore, for $\lambda>0$, $F\in\Gr_{p+q-k}(T)$ and $w\in F^P\cap T$ we have \begin{align*} f_\lambda (\widehat X_0(s(tw+F)))&=e^{\mathbf i \pi q'\lambda}f_\lambda(X_0(s(tw+F)))\\ &=e^{\mathbf i \pi q'\lambda}(1+t^2P(w))^{-\lambda}Q({x+tw})^\lambda f_\lambda(X_0'(F))\\ &=(1+t^2P(w))^{-\lambda}Q({x+tw})^\lambda f_\lambda(\widehat X_0'(F)) \end{align*} whenever $Q(x+tw)>0$, where $f_\lambda$ is defined by Lemma \ref{lem_f_lambda} on $\Sym_\kappa(\mathbb{R})$ or $\Sym_{p+q-k}(\mathbb{R})$ depending on the argument. By analytic continuation we get \begin{displaymath} s^*\widehat X_0^*f_{\lambda}(tw+F)=(1+t^2P(w))^{-\lambda}Q({x+tw})^\lambda (\widehat X_0')^*f_\lambda(tw+F) \end{displaymath} for all $\lambda$. Hence, \begin{displaymath} \lim_{t\to 0} h_t^* s^*\widehat X_0^* f_\lambda= (\widehat X_0')^* f_\lambda. \end{displaymath} In the proof of $i)$ we have seen $(h_{1/t})_*(s^{-1})_*dE=O(t^k)$. Hence, by continuity \begin{align*} \left.\frac{d^k}{dt^k}\right|_{t=0} (h_{1/t})_* (s^{-1})_* (\widehat X_0^*f_{-\frac{p+q+1}2}(E)dE)&=\lim_{t\to 0}h_t^*s^*\widehat X_0^*f_{-\frac{p+q+1}2}\left.\frac{d^k}{dt^k}\right|_{t=0} (h_{1/t})_* (s^{-1})_* dE\\ &=k!(\widehat X_0')^* f_{-\frac{p+q+1}2} d\overline F. \end{align*} Translation invariance is clear. Further, if $g\in \OO(Q|_T)\subset \OO(Q)$, then $g\circ s\circ h_t=s\circ h_t\circ g$. Since $\widehat X_0^*f_{-\frac{p+q+1}2}(E)dE$ is $\OO(Q)$-invariant, this yields $\OO(Q|_T)$-invariance. \end{enumerate} \end{proof} Translation-invariance and ${\OO(P|_T)}$-invariance characterize $d\overline F$ uniquely up to normalization. The normalization can be deduced from Theorem \ref{thm_crofton_formula} in the case $q=0$. As for the translation-invariant and $\OO(Q|_T)$-invariant generalized measure obtained on $T\cong \mathbb{R}^{p,q}$, we take the normalization of Definition \ref{def_tilde_m} as follows. \begin{Definition}On $\overline \Gr_{p+q-k}(\mathbb{R}^{p,q})$ we fix the following translation-invariant and ${\OO(p,q)}$-invariant generalized measure \begin{displaymath} \check{m}_k:=e^{\frac{\mathbf i\pi}2[(p+q+1)\min(k,q)-qk]}\,\widehat X_0'^* f_{-\frac{p+q+1}2}(F)\, d\overline F. \end{displaymath} \end{Definition} The Crofton map in the flat case is $$ \Cr: \mathcal M^{-\infty}(\overline{\Gr}_{p+q-k}(V))\to \mathcal V^{-\infty}(V),$$ given by $\langle \Cr(\mu), \psi\rangle= \int_{\overline{\Gr}_{p+q-k}(V)}\psi(\overline E)d\mu(\overline E)$ for all $\psi\in\mathcal V_c^\infty(V)$. The results of the present section and sections \ref{sec:crofto}, \ref{sec:LC_crofton_WF} can be easily adapted to the flat pseudo-Euclidean setting. Let us state explicitly Corollary \ref{cor:can_compute_mu1} in the flat case. \begin{Corollary} Let $A\subset \mathbb{R}^{p,q}$ be either a smooth domain or a hypersurface without boundary.
Denote by $H=H(A)$ the corresponding closed hypersurface. \begin{enumerate} \item Assume that for each $x\in H$, $H$ is either pseudo-Riemannian near $x$ or has non-zero Gauss curvature at $x$. Then \begin{equation}\label{eq:crofton_affine}\Cr(\check m_k)(A)=\int_{\overline{\Gr}_{p+q-k}(V)}\chi(A\cap E)d\check m_k(E). \end{equation} holds for $k=1$. \item If $H$ is strictly convex, then \eqref{eq:crofton_affine} holds for all $k$. \end{enumerate} \end{Corollary} \section{Crofton formulas for generalized pseudospheres} \label{sec:template} For the de Sitter space embedded in Lorentz space, one can compute the Crofton formulas through a direct computation of the restriction of the measures to subspaces, combined with the Hadwiger theorem and the template method. However for general signatures, an explicit computation appears to be hard. Instead, we carry out an analytic extension argument, which recovers the Crofton formulas for all signatures in a unified fashion. For $\zeta>\frac12$, we denote by $S_\zeta={Q_\zeta^{-1}(1)}\subset \mathbb{R}^{p+q}=\mathbb{R}^{n+1}$ the unit sphere in the Euclidean space $(\mathbb{R}^{n+1},Q_\zeta)$. For $\zeta=0$ we have $S^{p-1,q}=Q_0^{-1}(1)$ with the induced pseudo-Riemannian metric $Q_{0}$. We will also denote by $S^{n}$ the unit sphere in $\mathbb{R}^{n+1}$ with respect to some fixed Euclidean structure (which is independent of $\zeta$). In the following we make use of the operation of restriction of Crofton distributions, as described in Section \ref{sec:crofton_functorial}. \begin{Proposition} \label{prop:restrictions} For the standard inclusion $e\colon \mathbb{R}^{p,q}\hookrightarrow \mathbb{R}^{p+l,q+r}$, we have $e^*m_k=m_k$. \end{Proposition} \begin{proof} For $\zeta>\frac12$, $Q_\zeta$ is positive definite, and so $e^*m_k^\zeta=m_k^\zeta$ by the uniqueness of probability measure on the Grassmannian invariant under the positive definite orthogonal group, as $e^*:\mathcal M^\infty(\Gr_{p+q+l+r-k}(\mathbb{R}^{p+l,q+r}))\to \mathcal M^\infty(\Gr_{p+q-k}(\mathbb{R}^{p,q}))$ is essentially the pushforward operation under intersection with $\mathbb{R}^{p,q}$. The statement then follows by analytic extension in $\zeta$, combined with Proposition \ref{prop:boundary_value_grassmannian}. \end{proof} \begin{Proposition}\label{prop:weak_continuity} Given $A\in\mathcal P( S^{p-1,q})$, let $\overline A\in \mathcal P(S^{n})$ be its radial projection. Assume $A$ is either an LC-regular hypersurface, or a smooth domain with LC-regular boundary. Assume further that either all $Q_0$-degenerate tangents of codimension $k$ are regular, or $\chi(A\cap E)$ is constant for a.e. plane $E$ of codimension $k$. Then \begin{displaymath} \lim_{\epsilon\to 0^+} \Cr_{S^{n}}(m_k^{\mathbf i\epsilon})(\overline A)=\Cr_{S^{p-1,q}}(m_k)(A). \end{displaymath} \end{Proposition} \begin{proof} We have by Proposition \ref{prop:apply_crofton_general} \begin{displaymath} \Cr_{S^{n}}(m_k^{\mathbf i\epsilon})(\overline A)=\langle m_k^{\mathbf i\epsilon}, \chi(\overline A\cap \bullet)\rangle. \end{displaymath} By Corollary \ref{cor:can_compute_on_LC}, resp. by Proposition \ref{prop:apply_crofton_general}, it holds that \begin{align*} \Cr_k^{p-1,q}(A) = \Cr(m_k)(A)=\langle m_k, \chi(A\cap \bullet)\rangle. \end{align*} By part ii) of Proposition \ref{prop:boundary_value_grassmannian}, $m_k^{\mathbf i \epsilon}$ tends to $m_k$ as $\epsilon \to 0^+$ in the normal H\"ormander topology on $\mathcal M^{-\infty}_{N^*\Lambda}(\Gr_{n+1-k}(V))$. 
Combining Corollary \ref{cor:CrWF_of_LC} and Proposition \ref{prop:boundary_value_grassmannian} part i), we see that evaluating at $\chi(\overline A\cap \bullet)=\chi( A\cap \bullet)$ is continuous in this topology, and the statement follows. \end{proof} We consider for a moment the case $q=1$. We will use two types of templates in the de Sitter sphere $S^{p-1,1}$. The first one is the Riemannian $(p-1)$-unit sphere \begin{displaymath} R^{p-1,0}=S^{p-1,1}\cap \{x_{p+1}=0\}. \end{displaymath} Fix $\theta\in (0,\pi/ 4)$. Our second template is \begin{displaymath} R^{p-1,1}=R^{p-1,1}(\theta)=\{x\in S^{p-1,1}\colon x_{p+1}^2 \leq \tan^2\theta (x_1^2+\cdots+x_{p}^2)\}. \end{displaymath} The points of $\partial R^{p-1,1}$ lie at (time-like) distance $\rho=\mathrm{arctanh}(\tan\theta)$ from $R^{p-1,0}$. For each $\zeta>\frac 12$ and $s=0,1$, we denote by $T_\zeta^{p-1,s}$ the radial projection of $R^{p-1,s}$ on $S_\zeta$. Thus $T_\zeta^{p-1,0}$ is a totally geodesic $(p-1)$-sphere in $S_\zeta$, and the points of $\partial T_\zeta^{p-1,1}$ lie at distance $\epsilon=\arctan(\sqrt\xi \tan\theta)$ from $T_\zeta^{p-1,0}$, where $\xi=\frac{2\zeta-1}{2\zeta+1}$. We then have \begin{displaymath} \frac{d\rho}{d\theta}=\frac{1+\tan^2 \theta}{1-\tan^2 \theta}, \quad \frac{d\epsilon}{d\theta}=\sqrt{\xi} \frac{1+\tan^2 \theta}{1+\xi\tan^2 \theta}. \end{displaymath} We will denote by $\mu_k^\zeta\in\mathcal V^\infty(S_\zeta)$ the Riemannian intrinsic volumes in $S_\zeta$, and by $\mu_k\in\mathcal V^{-\infty}(S^{p-1,1})\otimes \mathbb{C}$ the (complex-valued) intrinsic volumes on $S^{p-1,1}$. Note that $\mu_k^\zeta(T_\zeta^{p-1,0})=\mu_k(R^{p-1,0})$ for all $\zeta>\frac12$. \begin{Proposition}\label{prop:continuation_mu} For $s=0,1$, the function $\zeta \mapsto \mu_k^\zeta(T_\zeta^{p-1,s})$ extends to a holomorphic function $f_{k,s}(\zeta)$ on $U_\mathbb{C}$ such that $\lim_{\zeta\to 0}f_{k,s}(\zeta)=\mu_k(R^{p-1,s})$. \end{Proposition} \begin{proof} For $s=0$, the statement is trivial as $\mu_k^\zeta(T_\zeta^{p-1,0})$ does not depend on $\zeta$. Let us consider $s=1$. The radial projections $\pi_\zeta:S^{n}\to S_\zeta$ and $\pi_{0}:S^{n}\to S^{p-1,1}$ have Jacobians \begin{align*} \mathrm{Jac}\pi_\zeta &= \left(\frac{\cos\epsilon}{\cos\theta}\right)^{p-1}\frac{d\epsilon}{d\theta}=\xi^{\frac12}\left(\frac{1+\tan^2\theta}{1+\xi\tan^2\theta}\right)^{\frac{p-1}2+1}, \quad \xi=\xi(\zeta)=\frac{2\zeta-1}{2\zeta+1}\\ % \mathrm{Jac}\pi_{0} &= \left(\frac{\cosh\rho}{\cos\theta}\right)^{p-1}\frac{d\rho}{d\theta}=\left(\frac{1+\tan^2\theta}{1-\tan^2\theta}\right)^{\frac{p-1}2+1}. \end{align*} Since $\xi(\zeta)=\frac{2\zeta-1}{2\zeta+1}$ is continuous on $\mathbb{C}\setminus\{-\frac12\}$, it maps $U_\mathbb{C}$ to a simply connected region in $\mathbb{C}\setminus(-\infty,0]$. Moreover $\xi(\zeta)\in\mathbb{R}$ if and only if $\zeta\in\mathbb{R}$, and $\xi(\zeta)>0$ for $\zeta>\frac12$, so that $\xi(\zeta),1+\xi(\zeta)\tan^2\theta\neq 0$ for $\zeta\in U_\mathbb{C}$. It follows that the right hand side of the first equation extends to a holomorphic function in $U_\mathbb{C}$ whose limit as $\zeta\to 0$ equals the right hand side of the second equation multiplied by $\mathbf i$. The statement follows for $k=p$ since $\mu_{p}^\zeta=\vol_{p}$, and $\mu_{p}=\mathbf i \vol_{p}$ on $S^{p-1,1}$. Consider now $k=p-1$.
Since $\mu_{p-1}(R^{p-1,1})=\frac12\vol_{p-1}(\partial R^{p-1,1})$, $\mu_{p-1}(T_\zeta^{p-1,1})=\frac12\vol_{p-1}(\partial T_\zeta^{p-1,1})$, and \begin{align*} \frac{d}{d\theta}\vol (R^{p-1,1}(\theta))&=\frac{d\rho}{d\theta}\frac{d}{d\rho}\vol (R^{p-1,1}(\theta))=\frac{1+\tan^2\theta}{1-\tan^2\theta}\vol_{p-1}(\partial R^{p-1,1}) \\ \frac{d}{d\theta}\vol (T_\zeta^{p-1,1}(\theta)) &=\frac{d\epsilon}{d\theta}\frac{d}{d\epsilon}\vol (T_\zeta^{p-1,1}(\theta)) =\sqrt\xi\frac{1+\tan^2\theta}{1+\xi\tan^2\theta}\vol_{p-1}(\partial T_\zeta^{p-1,1}), \end{align*} this case follows from the previous one. For $(p-1-k)$ positive and odd, since $N^*R^{p-1,1}$ is contained in the time-like orbit of the cosphere bundle of $S^{p-1,1}$, we have \begin{align} \label{eq:mu_k_expansion} \mu_k(R^{p-1,1}) & = \sum_\nu \mathbf i^{p-1-k-2\nu}c_{p,k,\nu} [[0,\phi_{k+2\nu,\nu}^-]](R^{p-1,1}) +\mathbf{i} d_{p,k} \vol(R^{p-1,1})\\ \mu_k^\zeta(T_\zeta^{p-1,1}) & = \sum_\nu c_{p,k,\nu} [[0,\phi_{k+2\nu,\nu}^\zeta]] (T_\zeta^{p-1,1}) +d_{p,k} \vol(T_\zeta^{p-1,1}),\label{eq:mu_k_expansion2} \end{align} for certain constants $c_{p,k,\nu},d_{p,k}$, where $\phi_{k,r}^-$ is the smooth form given in Lemma 5.1 of \cite{bernig_faifman_solanes} when $M=S^{p-1,1}$, and $\phi_{k,r}^\zeta$ is the form $\phi_{k,r}^+$ in the same lemma when $M=S_\zeta$. For $p-1-k\geq 0$ and even, equations \eqref{eq:mu_k_expansion}, \eqref{eq:mu_k_expansion2} hold with the volume term removed. Since $S_\zeta$ and $S^{p-1,1}$ have constant curvature 1, we have $\phi_{k,r}^-=\phi_{k,0}^-$ and $\phi_{k,r}^\zeta=\phi_{k,0}^\zeta$. By the structure equations (see \cite[eqs. (32),(33)]{bernig_faifman_solanes}) we have \begin{align*} d\phi_{k,0}^-&=\theta_0 \wedge (- k\phi_{k-1,0}^- -(p-1-k) \phi_{k+1,0}^-)\\ d\phi_{k,0}^\zeta&=\theta_0 \wedge (k\phi_{k-1,0}^\zeta -(p-1-k) \phi_{k+1,0}^\zeta), \end{align*} where $\theta_0$ is the contact 1-form defined by the pseudo-Riemannian metric. Now take $M=S^{p-1,1}$ and assume $\omega\in\Omega^{\dim M-1}(\mathbb P_M)$, and $d\omega=\theta_0\wedge \omega'$. Let $\nu:\partial R^{p-1,1}(\theta)\to\mathbb P_M$ be the outer normal map, and extend it smoothly to $M$. We then have \begin{align*} \frac{d}{d\theta}[[0,\omega]](R^{p-1,1}(\theta)) & =\frac{d}{d\theta} \left\langle \omega, \llbracket N^*R^{p-1,1}(\theta) \rrbracket \right\rangle \\ % & = \frac{d}{d\theta} \left\langle \nu^*\omega, \llbracket \partial R^{p-1,1}(\theta) \rrbracket \right\rangle\\ % &=\frac{d}{d\theta} \left\langle \nu^*\theta_0\wedge \nu^*\omega', \llbracket R^{p-1,1}(\theta) \rrbracket \right\rangle \\ % & =\left\langle \nu^*\theta_0\wedge \nu^*\omega', \frac{\partial}{\partial \theta}\cdot \llbracket \partial R^{p-1,1}(\theta) \rrbracket \right\rangle\\ % & = \left\langle \nu^*\theta_0, \frac{\partial}{\partial \theta} \right\rangle \cdot \left\langle \nu^*\omega', \llbracket \partial R^{p-1,1}(\theta) \rrbracket \right\rangle \\ % & =\frac{d\rho}{d\theta} \cdot [[0,\omega']] (R^{p-1,1}(\theta)). \end{align*} Hence, \begin{align*} \frac{d}{d\theta}[[0,\phi^-_{k,0}]] (R^{p-1,1}(\theta))&=\frac{1+\tan^2\theta}{1-\tan^2\theta}\left(- k[[0,\phi_{k-1,0}^-]] (R^{p-1,1}(\theta)) -(p-1-k) [[0,\phi_{k+1,0}^-]] (R^{p-1,1}(\theta))\right),\end{align*} and similarly \begin{align*} \frac{d}{d\theta}[[0,\phi_{k,0}^\zeta]] (T_\zeta^{p-1,1}(\theta)) &=\sqrt\xi \frac{1+\tan^2\theta}{1+\xi\tan^2\theta} \left(k[[0,\phi_{k-1,0}^\zeta]](T_\zeta^{p-1,1}(\theta)) -(p-1-k) [[0,\phi_{k+1,0}^\zeta]] (T_\zeta^{p-1,1}(\theta))\right).
\end{align*} It follows by induction on $k=p,\dots,0$ that $[[0,\phi_{k,0}^\zeta]](T_\zeta^{p-1,1}(\theta))$ is holomorphic in $\zeta\in U_\mathbb{C}$ and \begin{equation}\label{eq:induction_analytic} \lim_{\zeta\to 0} [[0,\phi_{k,0}^\zeta]](T_{\zeta}^{p-1,1}(\theta))=\mathbf i^{p-1-k} [[0,\phi_{k,0}^-]](R^{p-1,1}(\theta)). \end{equation} By \eqref{eq:mu_k_expansion} and \eqref{eq:mu_k_expansion2} this completes the proof. \end{proof} In order to normalize the leading coefficient in the Crofton formulas we rescale the measures $m_k, \check m_k$ as follows. \begin{Definition} \label{def_normailzation} Let $M\subset \mathbb{R}^{p,q}$ be the pseudosphere of curvature $\sigma>0$, or the pseudohyperbolic space of curvature $\sigma<0$. We define \begin{displaymath} \Cr_k^M={\pi\omega_{k-1}}\sqrt{\sigma^{-1}}^k\Cr_M(m_k). \end{displaymath} In the flat pseudo-Euclidean space $M=\mathbb{R}^{p,q}$ we take \begin{displaymath} {\Cr_k^M={\pi\omega_{k-1}}\Cr_M(\check m_k).} \end{displaymath} \end{Definition} \begin{Theorem}[Crofton formula]\label{thm_crofton_formula} Let $M$ be a pseudosphere, a pseudohyperbolic space or a pseudo-Euclidean space. Then, independently of the signature of $M$, \begin{equation} \label{eq:crofton} {\Cr_k^M= \sum_{j=0}^{\lfloor\frac{n-k}{2}\rfloor}\frac {\omega_{k-1}}{\omega_{k+2j-1}} {-\frac k 2 \choose j}\sigma^{j} \mu_{k+2j}} \end{equation} where $\sigma$ is the sectional curvature of $M$ and $n$ its dimension. \end{Theorem} \begin{proof} Take first the pseudosphere $M=S^{p-1,q}$ of curvature $\sigma=1$. We can assume $q>0$ as the formula is known in $S^{n}$ (cf. e.g. \cite{fu_wannerer}). We know that \begin{displaymath} \Cr_k^{M}=\sum_{j=0}^{\lfloor\frac{n-k}2\rfloor} (a_{j,p,q}\mu_{k+2j}+b_{j,p,q}\overline{\mu_{k+2j}}) \end{displaymath} for certain coefficients $a_{j,p,q},b_{j,p,q}\in\mathbb{C}$. Indeed, by \cite[Theorem C]{bernig_faifman_solanes_part2} we may express $\Cr_k^{M}$ as a linear combination of the intrinsic volumes and their complex conjugates. Since both $\mu_r$ and $\Cr_r^M$ are the restrictions of elements in $\Val_r^{-\infty,+}$ and thus belong to the $(-1)^r$-eigenspace of the Euler-Verdier involution, only the displayed terms appear. Let $e\colon S^{p-1,q}{\hookrightarrow}S^{p-1+l,q}$ and $\tilde e\colon S^{p-1+l,1}{\hookrightarrow}S^{p-1+l,q}$ be standard inclusions. By Proposition \ref{prop:restrictions}, we have \begin{align*} \sum_{j=0}^{\lfloor\frac{p-1+q-k}2\rfloor} a_{j,p,q}\mu_{k+2j}+b_{j,p,q}\overline{\mu_{k+2j}} &= \Cr_k^{S^{p-1,q}}=e^*(\Cr_k^{S^{p-1+l,q}})\\ &=\sum_{j=0}^{\lfloor\frac{p-1+q-k}2\rfloor} a_{j,p+l,q}\mu_{k+2j}+b_{j,p+l,q}\overline{\mu_{k+2j}}\\ \sum_{j=0}^{\lfloor\frac{p+l-k}2\rfloor} a_{j,p+l,1}\mu_{k+2j}+b_{j,p+l,1}\overline{\mu_{k+2j}}&= \Cr_k^{S^{p-1+l,1}}=\tilde e^*(\Cr_k^{S^{p-1+l,q}})\\ &=\sum_{j=0}^{\lfloor\frac{p+l-k}2\rfloor} a_{j,p+l,q}\mu_{k+2j}+b_{j,p+l,q}\overline{\mu_{k+2j}}. \end{align*} By the linear independence of $\{\mu_{i}\}_i\cup\{\overline\mu_i\}_i$ \cite[Corollary 7.4]{bernig_faifman_solanes}, and taking $l\geq q-1$, this yields \begin{align*}a_{j,p,q}&=a_{j,p+l,q}=a_{j,p+l,1},\\ b_{j,p,q}&=b_{j,p+l,q}=b_{j,p+l,1} \end{align*} for all $j\leq \frac{p-1+q-k}2$. It suffices then to determine $a_j:=a_{j,p,1},b_j:=b_{j,p,1}$; i.e. to prove the statement in the de Sitter sphere $M=S^{p-1,1}$. To this end we evaluate both sides on the templates $R^{p-1,s}\subset M$ with $s=0,1$. 
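(As an elementary plausibility check of \eqref{eq:crofton}, independent of the proof below: for $\sigma=0$ only the $j=0$ summand survives, and the right hand side reduces to \begin{displaymath} \frac{\omega_{k-1}}{\omega_{k-1}}{-\frac k 2 \choose 0}\mu_k=\mu_k, \end{displaymath} consistent with the normalization of the leading coefficient fixed in Definition \ref{def_normailzation}.)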
In order to compute $\Cr_k^M(R^{p-1,s})$ we use the spherical Crofton formula: \begin{equation}\label{eq_crofton_zeta} \Cr_{S_\zeta}(\pi\omega_{k-1}m_k^\zeta)= \sum_{j\geq 0}\frac{\omega_{k-1}} {\omega_{k+2j-1}} {-\frac k 2 \choose j}\mu^{\zeta}_{k+2j}=:\sum_{j\geq 0}c_{j}\mu^{\zeta}_{k+2j}, \end{equation}for $\zeta>\frac12$. Given $p\geq k$ and $s=0,1$, let $S^{p}$ be the unit sphere of an arbitrary Euclidean structure in $\mathbb{R}^{p,1}=\mathbb{R}^{p+1}$ and let $T^{p-1,s}$ be the radial projection on $S^{p}$ of $R^{p-1,s}$. By Definition \ref{def_normailzation}, Proposition \ref{prop:weak_continuity}, and applying analytic continuation to \eqref{eq_crofton_zeta} via Proposition \ref{prop:continuation_mu}, we have \begin{align*} \Cr_k^M(R^{p-1,s}) &={\pi \omega_{k-1}}\lim_{\epsilon\to 0^+} \Cr_{S^p}(m_k^{\mathbf{i}\epsilon})(T^{p-1,s})\\ &= \lim_{\epsilon\to 0^+} \sum_{j\geq 0} c_j f_{k+2j,s}(\mathbf{i}\epsilon)\\ &= \sum_{j\geq 0} c_j \mu_{k+2j}(R^{p-1,s}). \end{align*} Now, for $s=0,1$, taking $p=k+2l-s+1$ we get \begin{align*} \sum_{j\geq 0} c_j \mu_{k+2j}(R^{k+2l-s,s})&=\sum_{j\geq 0} a_j \mu_{k+2j}(R^{k+2l-s,s})+b_j \overline{\mu_{k+2j}(R^{k+2l-s,s})}\\ &=\sum_{j\geq 0} (a_j+(-1)^{s}b_j) \mu_{k+2j}(R^{k+2l-s,s}), \end{align*} since $\mu_{k+2j}(R^{k+2l-s,s})\in \mathbf i^s\mathbb{R}$ by \eqref{eq:mu_k_expansion}. For $l=0$ we have $\mu_{k+2j}(R^{k -s,s})=0$ for all $j \geq 1$ and hence $ c_0=a_0+(-1)^{s}b_0$ for $s=0,1$, and thus $a_0= c_0,b_0=0$. Suppose that $a_j= c_j, b_j=0$ for all $j< j_0$. Taking $l=j_0$ we deduce $ c_{j_0}=a_{j_0}+(-1)^{s}b_{j_0}$ for $s=0,1$. By induction we deduce $a_j= c_j$ and $b_j=0$ for all $j$, which completes the proof for $\sigma=1$. For $\sigma>0$ the theorem follows by the homogeneity of the $\mu_k$ (cf. \cite[Proposition 1.2. iii)]{bernig_faifman_solanes}) . Let us now turn to $\sigma=-1$, i.e. to $H^{p,q-1}\subset \mathbb{R}^{p,q}$. Note that the anti-isometry $j\colon \mathbb{R}^{q,p}\to \mathbb{R}^{p,q}$ of Lemma \ref{lem:crofton_sign} maps $S^{q-1,p}$ to $H^{p,q-1}$. Therefore, by Lemma \ref{lem:crofton_sign} and the homogeneity of the $\mu_k$, \begin{align*} \Cr_k^{H^{p,q-1}}(j(A)) &=\pi\omega_{k-1}\mathbf i^k\int_{\Gr_{n+1-k}} \chi(E\cap j(A)) {dm_k(E)} \\ % &=\pi\omega_{k-1}\mathbf i^k\int_{\Gr_{n+1-k}} \chi(E\cap A) dj^*m_k(E)\\ % &=\mathbf i^k\overline{\Cr_k^{S^{{q-1,p} }}(A)}\\ % &=\mathbf i^k\sum_\nu c_{\nu} \overline{\mu_{k+2\nu}(A)}\\ % &=\mathbf i^k\sum_\nu c_{\nu} \mathbf i^{-k-2\nu}{\mu_{k+2\nu}(j(A))}. \end{align*} This proves the statement for $\sigma=-1$. The case $\sigma<0$ follows as before from the homogeneity of the $\mu_k$. Finally we consider the case $\sigma=0$. Let us identify $M=\mathbb{R}^{p-1,q}$ with the tangent space $T_x S^{p-1,q}$ at some $x\in S^{p-1,q}$. Let $\Lambda_k^x\colon \mathcal V^{-\infty}(S^{p-1,q})^{\OO(p,q)}\to \Val^{-\infty}(T_x S^{p-1,q})^{O(p-1,q)}$ be given by (cf. \cite[Proposition 3.1.5]{alesker_val_man1}) \begin{displaymath} \Lambda_k^x(\varphi)=\frac{1}{k!}\left. \frac{d^k}{dt^k}\right|_{t=0} h_t^*\phi^*\varphi \end{displaymath} where $\phi\colon U\subset T_x S^{p-1,q}\to S^{p-1,q}$ is defined on a neighborhood of $x$ by $$\phi(w)=Q(x+w)^{-\frac12}(x+w)$$ and $h_t(w)=tw$. By Proposition \ref{prop:derivative_crofton} we have \begin{displaymath} \Lambda_k^x \Cr_k^{S^{p-1,q}}=\Cr_k^{\mathbb{R}^{p-1,q}}. 
\end{displaymath} On the other hand, denoting by $g$ the metric on $S^{p-1,q}$, since $\mu_k\in\mathcal W_k^{-\infty}$ behaves naturally with respect to isometries and is $k$-homogeneous, we have \begin{align*} \Lambda_k^x \mu_k^g&= \lim_{t\to 0} t^{-k}(\phi\circ h_t)^*\mu_k^g=\lim_{t\to 0} \mu_k^{(\phi\circ h_t)^*g/t^2}. \end{align*} Since $(\phi\circ h_t)^*g/t^2$ converges, $C^\infty$-uniformly on compact sets, to the flat metric $g_0$, we conclude by \cite[Proposition 1.2 ii)]{bernig_faifman_solanes} that \begin{displaymath} \Lambda_k^x \mu_k^g =\mu_k^{g_0}. \end{displaymath} Applying $\Lambda_k^x$ to both sides of \eqref{eq:crofton}, the case $\sigma=0$ follows. \end{proof} Recall from \cite{bernig_faifman_solanes} that the intrinsic volumes $\mu_k$ were defined in terms of certain generalized curvature measures $C_{k,p}^0,C_{k,p}^1$. On a manifold of constant curvature $\sigma$, these fulfill $C_{k,p}^i=\sigma^p C_{k,0}^i$. Using this and \cite[Eq. (61)]{bernig_faifman_solanes}, the Crofton formula \eqref{eq:crofton} becomes \begin{displaymath} \Cr_k^M =\mathbf i^q \sum_j d_{k,j}\sigma^j \glob(C_{k+2j,0}^0+\mathbf i C_{k+2j,0}^1), \end{displaymath} where $\glob:\mathcal C^{-\infty}(M)\to \mathcal V^{-\infty}(M)$ is the globalization map (cf. \cite[Section 2]{bernig_faifman_solanes}). The constants $d_{k,j}$ are independent of the signature and the curvature and can thus be deduced from the case of Euclidean spheres. Therefore, by \cite[\S3.2]{fu_wannerer} we obtain \begin{equation}\label{eq:Cr_to_C} \Cr_k^M =\frac{\pi^k}{k!\omega_k}\mathbf i^q \sum_j \left(\frac{\sigma}4\right)^j \glob(C_{k+2j,0}^0+\mathbf i C_{k+2j,0}^1). \end{equation} \begin{Remark} It is interesting to note that \eqref{eq:Cr_to_C} yields \begin{displaymath} \chi -\frac{\sigma}{2\pi} \Cr_2^M = \mathbf i^q \glob(C_{0,0}^0 + \mathbf i C_{0,0}^1), \end{displaymath} which can be seen as a generalization of the fact that the angular excess of a spherical triangle is proportional to its area. \end{Remark}
\section{Introduction}\label{intro} Let $T:A\to B$ be a mapping between C$^*$-algebras. We say that $T$ is \emph{orthogonality preserving} if it maps orthogonal elements in $A$ to orthogonal elements in $B$. Along this paper, elements $a,b$ in a C$^*$-algebra $A$ are called orthogonal if $a b^* = b^* a= 0$. If $T(a^*) = T(a)^*$ for all $a\in A$, the mapping $T$ is called symmetric. Bounded linear orthogonality preserving operators between C$^*$-algebras were fully determined by M. Burgos, F.J. Fern{\'a}ndez-Polo, J. Mart{\'i}nez and the authors of this note in \cite{BurFerGarMarPe2008}; previously, M. Wolff \cite{Wolff94} had determined the precise form of all symmetric bounded linear orthogonality preserving operators between unital C$^*$-algebras. The just quoted reference also contains a detailed study of uniformly continuous one-parameter groups of symmetric orthogonality preserving operators on unital C$^*$-algebras.\smallskip Let us recall that a \emph{one-parameter semigroup} of bounded linear operators on a Banach space $X$ is a correspondence $\mathbb{R}_0^{+} \to B(X),$ $t\mapsto T_t$ satisfying $T_{t+s} = T_{s} T_{t}$ for all $s,t\in \mathbb{R}_0^{+}$ and $T_0 =I$, where $B(X)$ stands for the Banach space of all bounded linear operators on $X$. A one-parameter semigroup $\{T_t: t\in\mathbb{R}_0^+ \}$ is uniformly continuous at the origin, i.e. $\displaystyle \lim_{t\to 0} \|T_t -I\| =0$, if and only if there exists a bounded linear operator $R\in B(X)$ such that $T_t =e^{t R}$ for all $t\in \mathbb{R}_0^+$, and in such a case, $T_t$ extends to a uniformly continuous one-parameter group on $\mathbb{R}$ (compare \cite[Proposition 3.1.1]{BratRob1987}).\smallskip Theorem 2.6 in \cite{Wolff94} asserts that if $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of orthogonality preserving symmetric operators on a unital C$^*$-algebra $A$, then there exists a uniquely determined element $h$ in the center of $A$, and also a uniquely determined uniformly continuous group $\{S_t: t\in \mathbb{R}_0^{+}\}$ of $^*$-automorphisms on $A$ such that $T_t (a) = e^{th} S_t(a)$ for all $a\in A$, $t\in \mathbb{R}_0^+$.\smallskip In a recent note we determined all uniformly continuous one-parameter semigroups of orthogonality preserving operators on general C$^*$-algebras (see \cite{GarPeUnitCstaralg20}). In the general case the conclusion is technically more complex. Namely, suppose $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of orthogonality preserving operators on a C$^*$-algebra $A$; then there exists a uniformly continuous one-parameter semigroup $\{S_t: t\in \mathbb{R}_0^{+}\}$ of surjective linear isometries {\rm(}i.e. triple isomorphisms{\rm)} on $A$ such that the identities $$ h_{t+s} = h_t r_t^* S_t^{**} (h_s)=T^{**}_t (h_s),\ h_t^* S_t(x)= S_t(x^*)^* h_t, \ h_t S_t(x^*)^* = S_t(x) h_t^*,$$ $$h_t r_t^* S_t(x) = S_t(x) r_t^* h_t, \hbox{ and } T_t(x) = h_t r_t^* S_t(x) = S_t(x) r_t^* h_t,$$ hold for all $s,t\in \mathbb{R},$ $x\in A$, where $h_t= T_t^{**} (1)$ and $r_t$ is the range partial isometry of $h_t$ in $A^{**}$ (cf. \cite[Theorem 3]{GarPeUnitCstaralg20}). Among the tools employed in the proof of this result is a description of the structure of all surjective (respectively, bijective) orthogonality preserving bounded linear operators between C$^*$-algebras. In the general case the sets $\{r_t:t\in \mathbb{R}\}$ and $\{h_t:t\in \mathbb{R}\}$ need not be one-parameter groups.
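Before passing to the Jordan setting, the following minimal commutative sketch may help to visualize the objects appearing in the previous statement (the choice of $\Omega$ and $h$ below is ours and purely illustrative): let $\Omega$ be a locally compact Hausdorff space, let $h$ be a bounded real-valued continuous function on $\Omega$, and define on $A=C_0(\Omega)$ \begin{displaymath} T_t(f) := e^{t h} f \qquad (f\in C_0(\Omega),\ t\in \mathbb{R}_0^{+}). \end{displaymath} Then $T_0=I$, $T_{t+s}=T_t T_s$, and $\|T_t - I\|= \|e^{t h}-1\|_{\infty}\to 0$ as $t\to 0$, so $\{T_t\}$ is a uniformly continuous one-parameter semigroup; moreover, $f g^* =0$ implies $(e^{th} f)(e^{th} g)^* = e^{2th} f g^* = 0$, hence every $T_t$ preserves orthogonality. In this example $S_t$ is the identity, $h_t = T_t^{**}(1)=e^{th}$, $r_t$ is the unit, and the displayed identities reduce to $e^{(t+s)h} = e^{th}\, e^{sh}$.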
\smallskip C$^*$-algebras are contained in the wider class of JB$^*$-algebras, and all these structures are inside the class of complex Banach spaces known as JB$^*$-triples, where the notion of orthogonality also makes sense (see Subsection \ref{subsec:defi} for a detailed review of these notions). Actually, as shown in \cite{BurFerGarMarPe2008} and \cite{BurFerGarPe09}, the setting and terminology of JB$^*$-triples seem to be the appropriate language to describe continuous orthogonality preserving linear maps between C$^*$- and JB$^*$-algebras. The main result in \cite{BurFerGarPe09} describes those continuous linear maps preserving orthogonality from a JB$^*$-algebra into a JB$^*$-triple. The present note contains one of the first studies on uniformly continuous one-parameter semigroups of orthogonality preserving operators on JB$^*$-algebras. Our main goal is to extend to the Jordan setting the conclusions we recently obtained in the case of uniformly continuous one-parameter semigroups of orthogonality preserving operators on general C$^*$-algebras commented above.\smallskip As it was shown in \cite{BurFerGarMarPe2008,BurFerGarPe09}, the terminology of JB$^*$-triples seems to be the optimal language to understand continuous linear orthogonality preservers between C$^*$-algebras and, more generally, from a JB$^*$-algebra into a JB$^*$-triple. All bounded linear operators preserving orthogonality from a JB$^*$-algebra into a JB$^*$-triple were completely determined in terms of Jordan and triple homomorphisms, up to multiplication by an element in the second dual of the codomain satisfying certain commutativity identities (cf. Theorem \ref{thm characterization of OP JBstar} below). The recent paper \cite{GarPeUnitCstaralg20} shows that, in the case of bounded linear orthogonality preserving operators between C$^*$-algebras, those which are surjective or bijective enjoy additional properties. In Section \ref{sec: surjective OP JBstar} we study surjective and bijective continuous orthogonality preserving maps from a JB$^*$-algebra into a JB$^*$-triple $E$. We prove that if $T$ is a bijective bounded linear operator from a JB$^*$-algebra $\mathcal{A}$ to a JB$^*$-triple $E$, and we set $h=T^{**} (1)\in E^{**}$ and let $r$ denote the range tripotent of $h$ in $E^{**}$, then the following statements are equivalent:\begin{enumerate}[$(a)$] \item $T$ is orthogonality preserving; \item The elements $h$ and $r$ belong to $M(E)$ with $h$ invertible and $r$ unitary. Consequently, $E^{**}$ is a JBW$^*$-algebra with respect to the Jordan product and involution defined by $a\circ_r b = \{a,r,b\}$ and $a^{*_r} = \{r,a,r\}$, respectively, and contains $E$ as a JB$^*$-subalgebra. Furthermore, there exists a Jordan $^*$-isomorphism $S: \mathcal{A}\to (E,\circ_{r},*_{r})$ such that $h$ lies in $Z(E^{**},\circ_{r},*_{r})$, and $T(x)=h\circ_{r(h)} S(x) = U_{h^{\frac12}} S(x),$ for every $x\in \mathcal{A}$, where $h^{\frac12}$ denotes the square root of the positive element $h$ in the JB$^*$-algebra $E_2^{**} (r)$; \item $T$ is biorthogonality preserving; \item $T$ is orthogonality preserving on $\mathcal{A}_{sa}$; \item $T$ is orthogonality preserving on $\mathcal{A}^{+}$; \item $T$ preserves zero-triple-products, i.e. $$\{a,b,c\}=0 \Rightarrow \{T(a),T(b),T(c)\}=0;$$ \item $T$ preserves zero-triple-products in both directions, i.e.
$$\{a,b,c\}=0 \Leftrightarrow \{T(a),T(b),T(c)\}=0,$$ \end{enumerate} (see Propositions \ref{p surjective OP preserves multipliers JB} and \ref{p Ortpreserving on Asa JBstar} and Corollary \ref{c Characterization bd OP plus bijective Jordan}). The proofs in the Jordan setting require a completely new argument which is not a mere aesthetic adaptation of those valid for maps between C$^*$-algebras.\smallskip Section \ref{sec: one-parametri groups OP Jordan} is devoted to the study of one-parameter groups of orthogonality preserving operators on JB$^*$-algebras. We shall see in this note how the terminology and tools from JB$^*$-triple theory make the study of one-parameter groups more accessible (and perhaps more complete). Firstly, uniformly continuous one-parameter groups of surjective isometries on a general JB$^*$-triple are precisely given by the exponential of real multiples of triple derivations (see Lemma \ref{l -parameter group of iso on a JB*-triple}). The paper culminates with the following characterization of one-parameter groups of orthogonality preserving operators on a general JB$^*$-algebra $\mathcal{A}$: Suppose $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a family of orthogonality preserving bounded linear bijections on $\mathcal{A}$ with $T_0=Id$. For each $t\geq 0$ let $h_t = T_t^{**} (1)$ and let $r_t$ be the range tripotent of $h_t$ in $\mathcal{A}^{**}$. According to what we prove in Section \ref{sec: surjective OP JBstar}, $h_t$ and $r_t$ belong to $M(\mathcal{A})$ with $h_t$ invertible and $r_t$ unitary, and there exists a Jordan $^*$-isomorphism $S_t: \mathcal{A} \to (\mathcal{A},\circ_{r_t},*_{r_t})$ such that $h_t$ lies in $Z(\mathcal{A}^{**},\circ_{r_t},*_{r_t})$, and $T_t(x)=h_t\circ_{r_t} S_t(x) = U_{h_t^{\frac12}} S_t(x),$ for every $x\in \mathcal{A}$. We shall show that under these hypotheses the following statements are equivalent:\begin{enumerate}[$(a)$]\item $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of orthogonality preserving operators on $\mathcal{A}$; \item $\{S_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of surjective linear isometries {\rm(}i.e. triple isomorphisms{\rm)} on $\mathcal{A}$ {\rm(}and hence there exists a triple derivation $\delta$ on $\mathcal{A}$ such that $S_t = e^{t \delta}$ for all $t\in \mathbb{R}${\rm)}, the mapping $t\mapsto h_t $ is continuous at zero, and the identity \begin{equation}\label{eq new idenity in the statement of theorem 1 on one-parameter Jordan} h_{t+s} = h_t \circ_{r_t} S_t^{**} (h_s)= \{ h_t , {r_t}, S_t^{**} (h_s) \}, \end{equation} holds for all $s,t\in \mathbb{R}$ \end{enumerate} (see Theorem \ref{t Wolff one-parameter for OP JBstar}). \subsection{JB$^*$-algebras and JB$^*$-triples}\label{subsec:defi} A real or complex Jordan algebra is a not necessarily associative algebra $\mathcal{B}$ over $\mathbb{R}$ or $\mathbb{C}$ whose product (denoted by $\circ$) is commutative and satisfies the so-called \emph{Jordan identity}: $( x \circ y ) \circ x^2 = x\circ ( y\circ x^2 ).$ Given an element $a \in \mathcal{B}$, the symbol $U_a$ will stand for the linear mapping on $\mathcal{B}$ defined by $U_a (b) := 2(a\circ b)\circ a - a^2\circ b$. A Jordan Banach algebra $\mathcal{B}$ is a Jordan algebra equipped with a complete norm satisfying $\|a\circ b\|\leq \|a\| \cdot \|b\|$ for all $a,b\in \mathcal{B}$.
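For later orientation let us record a standard computation (valid whenever the Jordan product comes from an associative product via $a\circ b =\frac12(ab+ba)$, as will be the case for C$^*$-algebras below): in that situation the operator $U_a$ takes the familiar form \begin{displaymath} U_a(b) = 2(a\circ b)\circ a - a^2\circ b = \tfrac12(ab+ba)a+\tfrac12 a(ab+ba)-\tfrac12(a^2 b+ b a^2)= aba. \end{displaymath}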
A JB$^*$-algebra is a complex Jordan Banach algebra $\mathcal{B}$ equipped with an algebra involution $^*$ satisfying $\|U_a ({a^*}) \|= \|a\|^3$, $a\in \mathcal{B}$. A \emph{JB-algebra} is a real Jordan algebra $J$ equipped with a complete norm satisfying \begin{equation}\label{eq axioms of JB-algebras} \|a^{2}\|=\|a\|^{2}, \hbox{ and } \|a^{2}\|\leq \|a^{2}+b^{2}\|\ \hbox{ for all } a,b\in J. \end{equation} A celebrated result due to J.D.M. Wright shows that every JB-algebra $J$ corresponds uniquely to the self-adjoint part $\mathcal{B}_{sa}=\{x\in \mathcal{B} : x^* =x\}$ of a JB$^*$-algebra $\mathcal{B}$ \cite{Wright77}.\smallskip Every C$^*$-algebra is a JB$^*$-algebra with respect to its original norm and involution and the natural Jordan product given by $a \circ b := \frac12 (a b + ba)$. A JB$^*$-algebra is said to be a \emph{JC$^*$-algebra} if it is a JB$^*$-subalgebra of some C$^*$-algebra.\smallskip As seen in many previous references, the notion of orthogonality and the study of orthogonality preservers is easier when these structures are regarded as elements in the class of JB$^*$-triples. There are strong motivations, coming from the theory of holomorphic functions on general complex Banach spaces, to be attracted by the notion of JB$^*$-triple (see, for example, \cite{Harris74,Ka}). As introduced by W. Kaup in \cite{Ka}, a JB$^*$-triple is a complex Banach space $E$ admitting a continuous triple product $\J ... : E\times E\times E \to E,$ which is conjugate linear in the middle variable and symmetric and bilinear in the outer variables satisfying: \begin{enumerate}[{\rm (a)}] \item (Jordan identity) $$L(a,b) L(x,y) = L(x,y) L(a,b) + L(L(a,b)x,y) - L(x,L(b,a)y),$$ for all $a,b,x,y\in E$, where $L(a,b)$ is the linear operator on $E$ defined by $L(a,b) x = \J abx;$ \item For each $a\in E$, $L(a,a)$ is a hermitian operator with non-negative spectrum; \item $\|\{a,a,a\}\| = \|a\|^3$ for all $a\in E$. \end{enumerate} The triple product \begin{equation}\label{eq Cstar triple product} \J xyz := \frac{1}2 (xy^*z + zy^*x),\ \ \ \ \ \ \ \ (x,y,z\in A), \end{equation} equips every C$^*$-algebra $A$, as well as the space $B(H,K)$ of all bounded linear operators between two complex Hilbert spaces $H$ and $K$, with a structure of JB$^*$-triple (see \cite[Fact 4.1.41]{Cabrera-Rodriguez-vol1}). Furthermore, each JB$^*$-algebra is a JB$^*$-triple with the same norm and triple product \begin{equation}\label{eq triple product JBstar algebra} \J xyz = (x\circ y^*) \circ z + (z\circ y^*)\circ x - (x\circ z)\circ y^*\end{equation} (see \cite[Theorem 4.1.45]{Cabrera-Rodriguez-vol1}). By a JB$^*$-subtriple of a JB$^*$-triple $E$ we mean a norm closed subspace of $E$ which is also closed for the triple product.\smallskip A JBW$^*$-triple is a JB$^*$-triple which is also a dual Banach space. The bidual of every JB$^*$-triple is a JBW$^*$-triple \cite{Di86}. The alter ego of Sakai's theorem asserts that every JBW$^*$-triple admits a unique (isometric) predual and its product is separately weak$^*$ continuous \cite{BaTi} (see also \cite[Theorems 5.7.20 and 5.7.38]{Cabrera-Rodriguez-vol2}).\smallskip A \emph{triple homomorphism} between JB$^*$-triples $E$ and $F$ is a linear mapping $T:E\to F$ satisfying $T\{a,b,c\} = \{T(a),T(b), T(c)\}$, for all $a,b,c\in E$.\smallskip An element $e$ in a JB$^*$-triple $E$ is called a \emph{tripotent} if $\{e,e,e\} =e$.
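Two standard observations on the C$^*$-case may help to digest these axioms: with the triple product \eqref{eq Cstar triple product} one has $\{a,a,a\}=aa^*a$, so axiom {\rm (c)} is just a reformulation of the C$^*$-identity, since \begin{displaymath} \|aa^*a\|^2=\|(aa^*a)(aa^*a)^*\|=\|(aa^*)^3\|=\|aa^*\|^3=\|a\|^6; \end{displaymath} similarly, $\{e,e,e\}=ee^*e$, so the tripotents of a C$^*$-algebra are precisely its partial isometries (in particular, all projections and all unitaries).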
Each tripotent $e\in E$ induces a \emph{Peirce decomposition} of $E$ in the form $$E= E_{2} (e) \oplus E_{1} (e) \oplus E_0 (e),$$ where for $j=0,1,2,$ $E_j (e)$ is the $\frac{j}{2}$ eigenspace of the operator $L(e,e)$. For $j\in \{0,1,2\}$ the corresponding \emph{Peirce $j$-projection} of $E$ onto $E_j (e)$ is denoted by $P_{j} (e)$. The Peirce 2-subspace $E_2 (e)$ is a JB$^*$-algebra with Jordan product $x\circ_e y := \J xey$ and involution $x^{*_e} := \J exe$ (cf. \cite[\S 4.2.2, Fact 4.2.14 and Corollary 4.2.30]{Cabrera-Rodriguez-vol1}). The reader should be warned that in \cite{Cabrera-Rodriguez-vol1} the Peirce subspaces $E_0(e)$, $E_1(e)$ and $E_2(e)$ are denoted by $E_0(e)$, $E_{\frac12}(e)$ and $E_1(e)$, respectively.\smallskip Suppose $0\neq a$ is an element in a JB$^*$-triple $E$. We set $a^{[1]}= a$, $a^{[3]} = \J aaa$, and $a^{[2n+1]} := \J aa{a^{[2n-1]}},$ $(n\in {\mathbb{N}})$. The norm closure of the linear span of the odd powers $a^{[2n+1]}$ defines a JB$^*$-subtriple of $E$ which coincides with the JB$^*$-subtriple generated by $a,$ and will be denoted by $E_a$. There exists a triple Gelfand theory for singly generated JB$^*$-subtriples assuring that $E_a$ is JB$^*$-triple isomorphic (and hence isometric) to some $C_0 (\Omega_{a})$ for some (unique) compact Hausdorff space $\Omega_{a}$ contained in the set $[0,\|a\|],$ such that $0$ cannot be an isolated point in $\Omega_a$. Here and throughout the paper, the symbol $C_0 (\Omega_{a})$ will stand for the Banach space of all complex-valued continuous functions on $\Omega_a$ vanishing at $0.$ We can further find a triple isomorphism $\Psi : E_a \to C_{0}(\Omega_a)$ satisfying $\Psi (a) (t) = t$ $(t\in \Omega_a)$ (cf. \cite[Corollary 4.8]{Ka0}, \cite[Corollary 1.15]{Ka} and \cite{FriRu85} or \cite[Lemma 3.2, Corollary 3.4 and Proposition 3.5]{Ka96}). The set $\Omega_a$ is called \emph{the triple spectrum} of $a$ (in $E$), and it does not change when computed with respect to any JB$^*$-subtriple $F\subseteq E$ containing $a$ \cite[Proposition 3.5]{Ka96}. As in \cite{Ka96}, for $a=0$ we set $\Omega_a =\emptyset$.\smallskip Let us recall some properties of the \emph{triple functional calculus}. Given an element $a$ in a JB$^*$-triple $E$, let $\Omega_a$ and $\Psi : E_a \to C_{0}(\Omega_a)$ denote the triple spectrum of $a$ and the triple isomorphism given in previous paragraphs. For each continuous function $f\in C_{0}(\Omega_a)$, $f_t(a)$ will denote the unique element in $E_a$ such that $\Psi (f_t(a)) = f$. The element $f_t(a)$ will be called the \emph{continuous triple functional calculus} of $f$ at $a$. For example, the function $g(t) =\sqrt[3]{t}$ produces $g_t(a) = a^{[\frac{1}{3}]}$. \smallskip Among the consequences of this local triple Gelfand theory, for each $a$ in a JB$^*$-triple $E$, there exists a unique \emph{cubic root} of $a$ in $E_a$, i.e. an element $a^{[\frac13 ]}\in E_a$ satisfying \begin{equation}\label{eq existence of cubic roots} \J {a^{[\frac13 ]}}{a^{[\frac13 ]}}{a^{[\frac13 ]}}=a. \end{equation} The sequence $(a^{[\frac{1}{3^n}]})_n$ can be recursively defined by $a^{[\frac{1}{3^{n+1}}]} = \left(a^{[\frac{1}{3^{n}}]}\right)^{[\frac 13]}$, $n\in {\mathbb{N}}$. The sequence $(a^{[\frac{1}{3^n}]})_n$ converges in the weak$^*$-topology of $E^{**}$ to a (unique) tripotent denoted by $r(a)$. The tripotent $r(a)$ is named the \emph{range tripotent} of $a$ in $E^{**}$.
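The simplest illustration of the previous constructions (a standard one, not needed in this generality later) is the case of a positive element $a$ in a C$^*$-algebra $A$ regarded as a JB$^*$-triple via \eqref{eq Cstar triple product}: here \begin{displaymath} a^{[3]}=aa^*a=a^3,\qquad a^{[2n+1]}=a^{2n+1},\qquad a^{[\frac13]}=a^{\frac13}, \end{displaymath} so $(a^{[\frac{1}{3^n}]})_n=(a^{\frac{1}{3^n}})_n$, and its weak$^*$ limit $r(a)$ is the usual range (support) projection of $a$ in $A^{**}$.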
Alternatively, $r(a)$ is the smallest tripotent $e\in E^{**}$ satisfying that $a$ is positive in the JBW$^*$-algebra $E^{**}_{2} (e)$ (compare \cite[Lemma 3.3]{EdRu96}).\smallskip The relation ``being orthogonal'' extends naturally from the C$^*$-setting to the setting of JB$^*$-triples, and thus to JB$^*$-algebras. Elements $a,b$ in a JB$^*$-triple $E$ are \emph{orthogonal} (denoted by $a \perp b$) if $L(a,b) = 0$. Lemma 1 in \cite{BurFerGarMarPe2008} contains several reformulations of the relation ``being orthogonal'' which will be applied without any explicit mention. It should also be noted that elements $a,b$ in a C$^*$-algebra $A$ are orthogonal in the C$^*$-sense if and only if they are orthogonal when $A$ is regarded as a JB$^*$-triple.\smallskip Let $\mathcal{B}$ be a JB$^*$-algebra and $I\subseteq \mathcal{B}$ a (norm closed) subspace of $\mathcal{B}$. We shall say that $I$ is a (closed Jordan) ideal of $\mathcal{B}$ if $I\circ \mathcal{B} \subseteq I$. A (closed) \emph{triple ideal} or simply an \emph{ideal} of a JB$^*$-triple $E$ is a (norm closed) subspace $I\subseteq E$ satisfying $\{E,E,I\}+\{E,I,E\}\subseteq I$, equivalently, $\{E,E,I\}\subseteq I$ or $\{E,I,E\}\subseteq I$ or $\{E,I,I\}\subseteq I$ (see \cite[Proposition 1.3]{BuChu92}).\smallskip We refer to \cite{Cabrera-Rodriguez-vol1,HOS} for the basic background on JB$^*$-triples and JB$^*$-algebras. \section{Orthogonality preserving linear surjections from a JB$^*$-algebra}\label{sec: surjective OP JBstar} A (linear) mapping $T$ between JB$^*$-triples is \emph{orthogonality preserving} (respectively, \emph{biorthogonality preserving}) if $T(a) \perp T(b)$ for every $a\perp b$ in the domain (respectively, $T(a)\perp T(b) \Leftrightarrow a\perp b$).\smallskip For each element $a$ in a JB$^*$-algebra $\mathcal{B}$ the (Jordan) multiplication operator by $a$, $M_a,$ is defined by $M_a (x) = a\circ x$ ($x\in \mathcal{B}$). Elements $a$ and $b$ in $\mathcal{B}$ are said to \emph{operator commute} in $\mathcal{B}$ if the multiplication operators $M_a$ and $M_b$ commute in $B(\mathcal{B})$, i.e., $$(a\circ x) \circ b = a\circ (x\circ b), \hbox{ for all $x$ in $\mathcal{B}$.}$$ The center of $\mathcal{B}$, $Z(\mathcal{B})$, is the set of all elements $z$ in $\mathcal{B}$ such that $z$ and $b$ operator commute for every $b$ in $\mathcal{B}$.\smallskip Let us assume that $h$ and $k$ are two self-adjoint elements in a JB$^*$-algebra $\mathcal{B}$. It is known that $h$ and $k$ generate a JB$^*$-subalgebra of $\mathcal{B}$ which can be realised as a JC$^*$-subalgebra of some $B(H)$ (cf. \cite[Corollary 2.2]{Wright77} and Macdonald's and Shirshov-Cohn's theorems \cite[Theorems 2.4.13 and 2.4.14]{HOS}), and under this identification, $h$ and $k$ commute as elements of $B(H)$ in the usual sense whenever they operator commute in $\mathcal{B}$ (cf. \cite[Proposition 1]{Topping}). Similarly, it can be shown that $h$ and $k$ operator commute in $\mathcal{B}$ if and only if the following identity holds: \begin{equation}\label{eq ideintity for operator commutativity of two hermitian elements}\hbox{ $h^2 \circ k =\J hkh$ (equivalently, $h^2 \circ k = 2 (h\circ k)\circ h - h^2 \circ k$).} \end{equation} It is known that each closed Jordan ideal $I$ of a JB$^*$-algebra $\mathcal{B}$ is self-adjoint --i.e. $I^* = I$-- (see \cite[Theorem 17]{Young78} or \cite[Proposition 3.4.13]{Cabrera-Rodriguez-vol1}). Therefore, every Jordan ideal of $\mathcal{B}$ is a triple ideal.
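A commutative example may clarify the notions in the last two paragraphs (standard; $\Omega$ stands for an arbitrary locally compact Hausdorff space): in $E=C_0(\Omega)$ the triple product \eqref{eq Cstar triple product} reduces to $\{f_1,f_2,f_3\}=f_1\overline{f_2}f_3$, two functions are orthogonal exactly when their cozero sets are disjoint, and for every open set $U\subseteq \Omega$ the subspace \begin{displaymath} I_U=\{f\in C_0(\Omega) : f\equiv 0 \hbox{ on } \Omega\setminus U\} \end{displaymath} satisfies $\{E,E,I_U\}\subseteq I_U$, and hence is a closed triple ideal, which is obviously self-adjoint.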
A norm closed subspace $I$ of a JB$^*$-algebra $\mathcal{B}$ is a triple/Jordan ideal if and only if it is the same kind of substructure in the unitization of $\mathcal{B}$. So we can assume that $\mathcal{B}$ is unital. If $I$ is a triple ideal of $\mathcal{B}$, the identities $\{ \mathcal{B}, 1, I \} = \mathcal{B} \circ I$ and $\{ 1, I, 1\} = \{ x^* : x \in I \}= I^*$ show that $I$ is a self-adjoint Jordan ideal of $\mathcal{B}$.\label{page Jordan and triple ideals coincide}\smallskip Let $\mathcal{B}$ be a JB$^*$-algebra. The \emph{(Jordan) multipliers algebra} of $\mathcal{B}$ is defined as $$M_{J}( \mathcal{B}):=\{x\in \mathcal{B}^{**}: x\circ \mathcal{B} \subseteq \mathcal{B}\}$$ (cf. \cite{Ed80}). The space $M_{J}(\mathcal{B})$ is a unital JB$^*$-subalgebra of $\mathcal{B}^{**}$. Moreover, $M_{J}(\mathcal{B})$ is the (Jordan) idealizer of $\mathcal{B}$ in $\mathcal{B}^{**}$, that is, the largest JB$^*$-subalgebra of $\mathcal{B}^{**}$ which contains $\mathcal{B}$ as a closed Jordan ideal. If we realize $ \mathcal{B}$ as a JB$^*$-triple we can also consider its \emph{triple multipliers} in $\mathcal{B}^{**}$ given by $$M (\mathcal{B}):=\{ x\in \mathcal{B}^{**}: \{x, \mathcal{B}, \mathcal{B}\}\subseteq \mathcal{B}\}$$ as defined in \cite{BuChu92}. In this case $M(\mathcal{B})$ is the largest JB$^*$-subtriple of $\mathcal{B}^{**}$ which contains $\mathcal{B}$ as a closed triple ideal (see \cite[Theorem 2.1]{BuChu92}). Let us observe that these two notions do not conflict: the set of triple multipliers of $\mathcal{B}$ is clearly a JB$^*$-subalgebra of $\mathcal{B}^{**}$, since it contains the unit of $\mathcal{B}^{**},$ and since Jordan and triple ideals of a JB$^*$-algebra are the same (cf. the comments on page \pageref{page Jordan and triple ideals coincide}), both notions above coincide.\smallskip The determination of all bounded linear operators preserving orthogonality between JB$^*$-triples remains an open problem. However, as shown in \cite{BurFerGarPe09}, the description is affordable if the domain is a JB$^*$-algebra.\smallskip \begin{theorem}\label{thm characterization of OP JBstar}\cite[Theorem 4.1 and Corollary 4.2]{BurFerGarPe09} Let $T:\mathcal{A}\to E$ be a bounded linear operator from a JB$^*$-algebra to a JB$^*$-triple, let $h=T^{**}(1)$ and let $r=r(h)$ denote the range tripotent of $h$ in $E^{**}$. The following assertions are equivalent: \begin{enumerate}[$a)$] \item $T$ is orthogonality preserving; \item There exists a unital Jordan $^*$-homomorphism $S:M(\mathcal{A})\to E_2^{**}(r)$ such that $S(x)$ and $h$ operator commute in the JB$^*$-algebra $E_2^{**}(r)$ and \begin{equation}\label{eq fund equation conts OP JBSTAR} T(x)=h\circ_{r} S(x)= \{h,r,S(x)\}= U_{h^{\frac12}} (S(x)), \end{equation} for every $x\in M(\mathcal{A})$, where $h^{\frac12}$ is the square root of the positive element $h$ in the JB$^*$-algebra $E_2^{**} (r)$ and the $U$ operator is the one given by this JB$^*$-algebra {\rm(}in particular $T(\mathcal{A})\subseteq E_2^{**}(r)${\rm)}; \item $T$ preserves zero-triple-products. \end{enumerate} \end{theorem} Let $\mathcal{A}$ be a JB$^*$-algebra whose positive part will be denoted by $\mathcal{A}^+$. Fix $a\in \mathcal{A}^+$.
The JB$^*$-subalgebra of $\mathcal{A}$ generated by $a$ coincides with the JB$^*$-subtriple of $\mathcal{A}$ generated by $a,$ and will be denoted by the same symbol $\mathcal{A}_a$.\label{page ref subalgebra and subtriple generaed coincide} To see this we shall simply observe that the range tripotent of $a$ in $\mathcal{A}^{**}$ lies in the triple multipliers of the JB$^*$-subtriple generated by $a$, and thus $a^2=\{a,r(a),a\}\in \mathcal{A}_a$. It is well known that $\mathcal{A}_a$ is isometrically isomorphic to an abelian C$^*$-algebra (cf. \cite[3.2.3]{HOS}). Moreover, the JB$^*$-subalgebra of $\mathcal{A}$ generated by a self-adjoint element $b$ is isometrically JB$^*$-isomorphic to a commutative C$^*$-algebra (cf. \cite[The spectral theorem 3.2.4]{HOS}). We can define in this way a continuous functional calculus at the element $b$. \begin{lemma}\label{l AkPed for JBstar} Let $I$ be a closed Jordan ideal of a JB$^*$-algebra $\mathcal{A}$. Suppose $x+I , y +I $ are two positive orthogonal elements in $\mathcal{A}/I$. Then there exist $a,b\in I^{+}$ satisfying $(x-a)\perp (y-b)$. \end{lemma} \begin{proof} First observe that we can assume that $x,y$ are positive elements in $\mathcal{A}.$ Indeed, pick $a+I\in (\mathcal{A}/I)_{sa}$ (with $a\in \mathcal{A}_{sa}$) such that $x+I=(a+I)^2=a^2+I,$ and the same for $y$.\smallskip Though the rest of the proof literally follows the arguments in \cite[Proposition 2.3]{AkPed77}, a sketch of the ideas is included here for the sake of completeness. Define, via continuous functional calculus, $x_1=(x-y)_{+}$ and $y_1=(x-y)_{-}.$ By construction $x_1,y_1\geq 0$ and $x_1 \circ y_1= 0$. Since $(x+I),(y+I)\geq 0$ with $(x+I) \circ (y+I) =0$, we deduce that $((x+I)-(y+I))_{+}=x+I$, and hence $$ x_1+I=(x-y)_{+}+I=((x+I)-(y+I))_{+}=x+I.$$ Similarly we have $y_1+I=y+I.$ Setting $a=x-x_1$ and $b=y-y_1$ we get the desired conclusion. \end{proof} As pointed out in \cite[Section 2]{GarPeUnitCstaralg20}, Goldstein's characterisation of orthogonal bilinear forms in \cite{Gold} turned out to be a very useful tool in the study of orthogonality preserving operators. The Jordan version of this result has been studied in \cite{JamPeSidd2015}. More concretely, let $V:\mathcal{A}\times \mathcal{A}\to {\mathbb{C}}$ be a bilinear form on a JB$^*$-algebra $\mathcal{A}.$ Following \cite{JamPeSidd2015} we say that $V$ is orthogonal (respectively, orthogonal on $\mathcal{A}_{sa}$) if $V(a,b^*)=0$ for every $a,b$ in $\mathcal{A}$ (respectively, in $\mathcal{A}_{sa}$) with $a\perp b.$ Corollary 3.14 (Propositions 3.8 and 3.9) in \cite{JamPeSidd2015} shows that a bilinear form on a JB$^*$-algebra $\mathcal{A}$ is orthogonal if and only if $V$ is orthogonal on $\mathcal{A}_{sa}.$ Actually, it is not hard to see that the latter is equivalent to $V$ being orthogonal on $\mathcal{A}^+$. This result enables us to obtain an appropriate Jordan version of \cite[Proposition 1]{GarPeUnitCstaralg20}. \begin{proposition}\label{p Ortpreserving on Asa JBstar} Let $T:\mathcal{A}\to E$ be a bounded linear operator from a JB$^*$-algebra to a JB$^*$-triple. The following statements are equivalent: \begin{enumerate}[$(a)$] \item $T$ preserves orthogonality; \item $T$ preserves orthogonality on $\mathcal{A}_{sa}$; \item $T$ preserves orthogonality on $\mathcal{A}^{+}.$ \end{enumerate} \end{proposition} \begin{proof} $(a) \Rightarrow (b) \Rightarrow (c)$ are clear.
Now let us assume that $T:\mathcal{A}\to E$ preserves orthogonality on $\mathcal{A}^+.$ Let us fix $x\in E$ and $\phi \in E^*$ and define $V_{\phi}:\mathcal{A}\times \mathcal{A}\to {\mathbb{C}}$ by $V_{\phi}(a,b):=\phi (\{T(a),T(b^*),x \}).$ By assumption, $V_{\phi}$ is orthogonal on $\mathcal{A}^+.$ An analogous argument to that in \cite[Proposition 1]{GarPeUnitCstaralg20} shows that $V_{\phi}$ is orthogonal on $\mathcal{A}_{sa}$ (just apply that if $x\perp y$ in $\mathcal{A}_{sa}$, and we write these elements as differences of orthogonal positive elements $x = x^+-x^-$ and $y= y^+ - y^-$, we have $x^\sigma \perp y^{\tau}$ for $\sigma, \tau\in \{\pm\}$). Thus, by \cite[Corollary 3.14, Propositions 3.8 and 3.9]{JamPeSidd2015} we deduce that $V_{\phi}$ is orthogonal on $\mathcal{A}$. Let us fix $a\perp b$ in $\mathcal{A}.$ Then $V_{\phi}(a,b^*)=\phi(\{T(a),T(b),x \} )=0. $ The Hahn-Banach theorem shows that $\{T(a),T(b),x\}=0.$ It follows from the arbitrariness of $x$ in $E$ that $L(T(a),T(b))=0,$ equivalently, $T(a)\perp T(b),$ witnessing that $T$ preserves orthogonality on $\mathcal{A}.$ \end{proof} An element $a$ in a unital JB$^*$-algebra $\mathcal{A}$ is \emph{invertible} if there exists $b\in \mathcal{A}$ such that $a \circ b = 1$ and $a^2 \circ b = a.$ The element $b$ is unique and will be denoted by $a^{-1}$ (cf. \cite[3.2.9]{HOS} or \cite[Definition 4.1.2]{Cabrera-Rodriguez-vol1}). It is known that $a\in \mathcal{A}$ is invertible if and only if the mapping $U_a$ is invertible and in such a case $U_a^{-1} = U_{a^{-1}}$ \cite[Theorem 4.1.3]{Cabrera-Rodriguez-vol1}. For $a\in \mathcal{A}$ invertible, the mapping $M_a$ need not be, in general, invertible. However, if $a\in Z(\mathcal{A})$ is invertible, it can be easily checked that $M_{a^2} = U_a$ is invertible. If we further assume that $a$ is invertible and positive in $Z(\mathcal{A})$, the mapping $M_a = U_{a^{\frac12}}$ is invertible too.\smallskip Let $u$ be an element in $\mathcal{A}$. We say that $u$ is a \emph{unitary} if it is invertible in the Jordan sense and its inverse coincides with $u^*$ (cf. \cite[3.2.9]{HOS} and \cite[Definition 4.1.2]{Cabrera-Rodriguez-vol1}). Proposition 4.3 in \cite{BraKaUp78} assures that an element $u$ in $\mathcal{A}$ is a unitary if and only if it is a unitary in the JB$^*$-triple sense, that is, $\mathcal{A}_2 (u) = \mathcal{A}$. \smallskip We have already gathered the tools required to establish a generalization of \cite[Lemma 1]{GarPeUnitCstaralg20}. \begin{lemma}\label{l kernel is an ideal JBstar} Let $T : \mathcal{A}\to E$ be an orthogonality preserving bounded linear operator from a JB$^*$-algebra to a JB$^*$-triple. Let $h=T^{**} (1)\in E^{**}$, $r$ the range tripotent of $h$ in $E^{**}$, and let $S: \mathcal{A} \to E^{**}$ denote the triple homomorphism given by Theorem \ref{thm characterization of OP JBstar}. Then $\ker(T) =\ker (S)$ is a norm closed (triple) ideal of $\mathcal{A}$. Moreover, the quotient mapping $\widehat{T}: \mathcal{A}/\ker(T)\to E,$ $\widehat{T}(x+\ker(T)) = T(x)$ is an orthogonality preserving bounded linear mapping. \end{lemma} \begin{proof} The relation $\ker(T) \supseteq \ker (S)$ is absolutely clear.
Let us assume that $x\in \ker(T).$ Then $h\circ_r S(x)=0.$ We recall that $h$ is positive in $E_2^{**}(r(h)).$ Thus, by Lemma 4.1 in \cite{BurFerGarPe09}, $h\circ_r S(x)=0$ is equivalent to $h\perp S(x).$ Now, by \cite[Lemma 1]{BurFerGarMarPe2008} $S(x)\perp r(h)=r.$ Since $S(x)$ lies in $E_2^{**}(r)$ we have $S(x)=0.$ Therefore $\ker(S)=\ker(T).$ Having in mind that $S$ is a triple homomorphism we deduce that $\ker(S)$ is a norm closed triple ideal, and hence a Jordan ideal, of $\mathcal{A}$ (see comments on page \pageref{page Jordan and triple ideals coincide}).\smallskip Clearly $\mathcal{A}/\ker(T)$ is a JB$^*$-algebra (cf. comments on page \pageref{page Jordan and triple ideals coincide}) and $\widehat{T}$ is well defined. Proposition \ref{p Ortpreserving on Asa JBstar} shows that in order to see that $\widehat{T}$ is orthogonality preserving it suffices to show that $\widehat{T}$ preserves orthogonality on $\left(\mathcal{A}/\ker(T)\right)^{+}$. Let us take $x+\ker(T), y+\ker(T)\in \left(\mathcal{A}/\ker(T)\right)^{+}$ with $(x+\ker(T)) \circ (y+\ker(T))=0$. Applying Lemma \ref{l AkPed for JBstar} we find $a,b\in \ker(T)^{+}$ satisfying $(x-a)\perp (y-b)$. By hypothesis $\widehat{T}(x+\ker(T)) =T(x-a)\perp T(y-b) = \widehat{T}(y+\ker(T))$. \end{proof} Let $a$ and $b$ be two hermitian elements in a JB$^*$-algebra $\mathcal{A}$. We have already commented that the JB$^*$-subalgebra generated by $a$ and $b$ is a JC$^*$-subalgebra of some $B(H)$ \cite[Corollary 2.2]{Wright77}. We can further conclude that when $a$ and $b$ are regarded as elements in $B(H)$, they commute in the usual sense whenever they operator commute in $\mathcal{A}$ \cite[Proposition 1]{Topping}. \begin{lemma}\label{l equation for multiplier JB} Let $T:\mathcal{A}\to E$ be a bounded linear operator preserving orthogonality from a JB$^*$-algebra to a JB$^*$-triple. Let $h$, $r$ and $S$ be those given by Theorem \ref{thm characterization of OP JBstar}. Then the identity \begin{equation}\label{eq identity multiplier JB} \{T(a), T(a),T(a)\}=h^{[3]} \circ_r S(a^{3}), \end{equation} holds for every $a\in \mathcal{A}_{sa}$. Furthermore, the operator $\mathcal{A}\ni a \mapsto h^{[3]} \circ_r S(a)$ is $E$-valued. \end{lemma} \begin{proof} We know that $h$ is positive in $ E_2^{**}(r)$, and hence $h^{[3]} = (h\circ_r h)\circ_r h = h^3$. Let us fix $a\in \mathcal{A}_{sa}.$ Proposition 4.1 in \cite{BurFerGarPe09} shows that $T(a)\in E_2^{**}(r)_{sa}$ and that the sets $\{T(a) , h\}$ and $\{ S(a), h\}$ are formed by pairs of elements which operator commute in $ E_2^{**}(r)$ (see also Theorem \ref{thm characterization of OP JBstar}). Therefore $S(a)$ and $h$ generate a JC$^*$-subalgebra of some $B(H),$ and they commute in the usual sense. Let us write $x\cdot y$ to denote the product of two elements $x,y$ in this particular $B(H)$ space. Then we have $x\circ_r y =\frac{1}{2}(x\cdot y+y\cdot x),$ and $T(a) = h\circ_r S(a) = h\cdot S(a).$ By the uniqueness of the triple product (cf. \cite[Proposition 5.5]{Ka}) we have $$\{T(a),T(a),T(a)\}=T(a)\cdot T(a) \cdot T(a)=h\cdot S(a)\cdot h\cdot S(a)\cdot h\cdot S(a) $$ $$=h^3\cdot S(a)^3= h^3 \cdot S(a^3) =h^{[3]}\circ_r S(a^3), $$ which proves the desired identity.\smallskip Let us fix $b\in \mathcal{A}_{sa}$; it follows from the previous identity that $$h^{[3]}\circ_r S(b)=h^{[3]}\circ_r S((b^{[\frac{1}{3}]})^3)=\{T(b^{[\frac{1}{3}]}),T(b^{[\frac{1}{3}]}),T(b^{[\frac{1}{3}]})\}\in E,$$ thus $h^{[3]}\circ_r S(\mathcal{A}_{sa})\subseteq E,$ and hence the second statement follows.
\end{proof} The next result is a consequence of the properties commented before the previous Lemma \ref{l equation for multiplier JB}. \begin{lemma}\label{l lemma funcional calculus operator commute} Let $a$ and $b$ be two hermitian elements in a JB$^*$-algebra $\mathcal{A}.$ Suppose that $a$ and $b$ operator commute. Let $c$ and $d$ be elements in the JB$^*$-subalgebras of $\mathcal{A}$ generated by $a$ and $b$, respectively. Then the elements $c$ and $d$ operator commute. \end{lemma} Our next goal is a Jordan version of \cite[Propositions 2 and 3]{GarPeUnitCstaralg20}. The proof will require some translations to the Jordan terminology. \begin{proposition}\label{p surjective OP preserves multipliers JB} Let $T : \mathcal{A}\to E$ be a surjective bounded linear operator preserving orthogonality from a JB$^*$-algebra to a JB$^*$-triple. Let $h=T^{**} (1)\in E^{**},$ $r$ the range tripotent of $h$ in $E^{**}$, and let $S: \mathcal{A}\to E_2^{**}(r)$ denote the Jordan $^*$-homomorphism given by Theorem \ref{thm characterization of OP JBstar}. Then the following statements hold: \begin{enumerate}[$(a)$] \item $r$ is a unitary in $E^{**};$ \item $h$ belongs to $M(E)$; \item $h$ is invertible (and positive) in the JBW$^*$-algebra ${E}^{**}= E^{**}_2 (r)$; \item $r$ belongs to $M(E)$, and consequently $E$ is a JB$^*$-algebra; \item The triple homomorphism $S$ is $E$-valued and surjective; \item If $x\in M(\mathcal{A})$ then $T^{**}(x)\in M(E) $; \item The quotient mapping $\widehat{T}: \mathcal{A}/\ker(T)\to E,$ $\widehat{T}(x+\ker(T)) = T(x)$ is an orthogonality preserving bounded linear bijection; \item There exist a triple homomorphism $S: \mathcal{A}\to E$ and a triple monomorphism $\widehat{S}: \mathcal{A}/\ker(S)\to E$ satisfying \begin{enumerate}[$(1)$] \item $\ker(T)= \ker(S)$; \item $\widehat{S} (x+{\ker(S)}) =S(x)$; \item $\widehat{S}(x)$ and $h$ operator commute in $E^{**}_2(r)$, for all $x\in \mathcal{A};$ \item $\widehat{T}(x+\ker(T)) =T(x) = h \circ_r \widehat{S}(x),$ for all $x\in \mathcal{A}$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} $(a)$ Proposition 4.1 in \cite{BurFerGarPe09} proves that $T(\mathcal{A})\subseteq E^{**}_2 (r)$, and thus the surjectivity of $T$ implies that $E \subseteq E^{**}_2 (r) = P_2(r) (E^{**})$. Having in mind that $E$ is weak$^*$ dense in $E^{**}$, $E^{**}_2 (r)$ is weak$^*$-closed, and the triple product of $E^{**}$ is separately weak$^*$-continuous, we conclude that $E^{**} = E^{**}_2 (r),$ witnessing that $r$ is a unitary in $E^{**}$ and the latter is a JBW$^*$-algebra. \smallskip $(b)$ We shall next show that $h$ lies in $ M(E)$. Namely, since $T$ is surjective, it is enough to prove that $\{h,T(a),T(b)\}$ lies in $E,$ for every $a,b\in \mathcal{A}.$ Having in mind that $S$ and $T$ are symmetric operators from $\mathcal{A}$ into $E_2^{**}(r),$ Lemma \ref{l lemma funcional calculus operator commute} implies that for each $a\in \mathcal{A}_{sa}$ both $S(a)$ and $T(a)$ operator commute in $E_2^{**}(r)$ with every power of $h.$ Now for $a\in \mathcal{A}$ we write $a=a_1+ia_2$ with $a_1,a_2\in \mathcal{A}_{sa}$: $$(h^n\circ_r x)\circ_r S(a)=(h^n\circ_r x)\circ_r (S(a_1)+iS(a_2))=(h^n\circ_r x)\circ_r S(a_1)+i (h^n\circ_r x)\circ_r S(a_2) $$ $$=h^n\circ_r(x \circ_r S(a_1))+ih^n\circ_r (x\circ_r S(a_2))=h^n\circ_r (x\circ_r S(a)),$$ for all $x\in E^{**} = E^{**}_2 (r)$, witnessing that $h^n$ and $S(a)$ operator commute for all $n\in {\mathbb{N}}$ and $a\in \mathcal{A}.$ Powers of $h$ are taken in the JB$^*$-algebra $E_2^{**}(r)$.
Clearly, the same holds for $T(a).$\smallskip Now we claim that \begin{equation}\label{eq for h multiplier JBstar} \{h,T(a),T(b)\}=h^3\circ_r S(a^*\circ b) \end{equation} holds for every $a,b\in \mathcal{A}.$ Indeed, for $a,b\in \mathcal{A}$ we have $$\{h,T(a),T(b)\}=(h\circ_r T(a)^{*_r})\circ_r T(b)+(T(b)\circ_r T(a)^{*_r})\circ_r h-(h\circ_r T(b))\circ_r T(a)^{*_r} $$ $$=(h\circ_r T(a^*))\circ_r T(b)+(T(b)\circ_r T(a^*))\circ_r h-(h\circ_r T(b))\circ_r T(a^*) .$$ We shall show that each one of the summands on the right hand side equals $h^{[3]}\circ_r S(a^*\circ b).$ $$(h\circ_r T(a^*))\circ_r T(b) =(h \circ_r (h \circ_r S(a^*) ))\circ_r T(b)=(h^2\circ_r S(a^*))\circ_r T(b)$$ $$ =S(a^*)\circ_r (h^2\circ_r T(b))= S(a^*)\circ_r (h^2\circ_r (h\circ_r S(b) ))=S(a^*)\circ_r (h^3\circ_r S(b))$$ $$=(S(a^*)\circ_r S(b))\circ_r h^3=h^3\circ_r S(a^*\circ b) = h^{[3]} \circ_r S(a^* \circ b). $$ Similar computations yield $$(T(b)\circ_r T(a^*))\circ_r h =h^{[3]}\circ _r S(a^*\circ b), \hbox{ and } (h\circ_r T(b))\circ_r T(a^*)=h^{[3]}\circ _r S(a^*\circ b),$$ which prove the claim.\smallskip Since by Lemma \ref{l equation for multiplier JB} the operator $a\mapsto h^{[3]}\circ_r S(a)$ is $E$-valued, the equation \eqref{eq for h multiplier JBstar} shows that $h\in M(E).$\smallskip $(c)$ By Lemma \ref{l kernel is an ideal JBstar}, $\ker(T) = \ker(S)$, and the quotient mapping $\widehat{T} : \mathcal{A}/\ker(T)\to E$, $\widehat{T} (x+\ker(T)) = T(x)$, is a bijective bounded linear operator preserving orthogonality from $\mathcal{A}/\ker(T)$ to $E$. The quotient mapping $\widehat{S} : \mathcal{A}/\ker(T)\to (E^{**},\circ_r,*_{r})$ is a Jordan $^*$-isomorphism, and $\widehat{T} (a+\ker(T)) = T(a) = h\circ_r S(a) = h\circ_r \widehat{S} (a+\ker(T))$ for all $a\in \mathcal{A}$.\smallskip We shall employ a similar argument to that in the proof of \cite[Proposition 2$(c)$]{GarPeUnitCstaralg20}. By the arguments above we can assume that $T$ (and hence $T^{**}$) is a bijection. Let us find $c$ in $\mathcal{A}^{**}$ such that $T^{**} (c) =1$. We can also find, via Goldstine's theorem, a bounded net $(c_{\lambda})_{\lambda}$ in $\mathcal{A}$ converging to $c$ in the weak$^*$ topology of $\mathcal{A}^{**}$. Consequently, $(T(c_{\lambda}))_{\lambda}\to T^{**} (c)$ in the weak$^*$ topology of $E^{**}$. On the other hand, the net $(S(c_{\lambda}))_{\lambda}$ is bounded in $E^{**}$; by the Banach--Alaoglu theorem, there exists a subnet $(S(c_{\mu}))_{\mu}$ converging to some $z\in E^{**}$ in the weak$^*$ topology of $E^{**}$. Obviously the subnet $(T(c_{\mu}))_{\mu}\to T^{**} (c)$ in the weak$^*$ topology of $E^{**}$. Since the identity $T(c_{\mu}) = h\circ_r S(c_{\mu})$ holds for every $\mu$, we deduce from the separate weak$^*$ continuity of the product of $E_2^{**}(r)$ and the above facts that $r = T^{**} (c) = h\circ_r z$.\smallskip Having in mind that, for each $a\in \mathcal{A}$, $h$ and $S(a)$ operator commute in the JBW$^*$-algebra $E_2^{**}(r)$ and that $z = \omega^*\hbox{-}\lim_{\mu} S(c_{\mu})$, we conclude that $h$ and $z$ operator commute in $E_2^{**}(r)$. By combining this fact with the identity $r = T^{**} (c) = h\circ_r z$, it follows that $h$ is invertible in $(E^{**},\circ_r,*_{r})$ with $h^{-1} = z\in E^{**}$.\smallskip $(d)$ Let us observe that $h$ is positive and invertible in the JBW$^*$-algebra $E^{**}=E_2^{**}(r)$ (see $(a)$, $(b)$ and $(c)$). The inverse of $h$ and the unit element $r$ both lie in the JB$^*$-subalgebra of $E_2^{**}(r)$ generated by $h$.
We recall that the JB$^*$-subalgebra of $E_2^{**}(r)$ generated by $h$ coincides with the JB$^*$-subtriple of $E_2^{**}(r)$ generated by $h$ (see the comments on page \pageref{page ref subalgebra and subtriple generaed coincide}). By applying that $M(E)$ is a JB$^*$-subtriple of $E^{**}=E_2^{**}(r)$ containing $h$, we deduce that $h^{-1}$ and $r$ both belong to $M(E)$. \smallskip Finally it trivially follows from $r\in M(E)$ that for each $a,b\in E$ we have $E\ni \{a,r,b\} = a\circ_r b$ and $E\ni \{r,a,r\} =a^{*_r}$ (for the latter we observe that $E$ is a triple ideal of $M(E)$), and thus $E$ is a JB$^*$-subalgebra of $E^{**} = E_2^{**} (r)$.\smallskip $(e)$ Having in mind that $h\in M(E)$ is positive and invertible in the JBW$^*$-algebra $E^{**}_2(r)= E^{**}$ and operator commutes with every element in the images of $S$ and $T$, and the identity in \eqref{eq fund equation conts OP JBSTAR}, the desired conclusion follows from the identity $$ S(a)=U_{h^{\frac12}}^{-1} T(a)= U_{h^{-\frac12}} T(a) = \{h^{-\frac{1}{2}},T(a)^{*_r},h^{-\frac{1}{2}}\}\in E,$$ for all $a\in \mathcal{A}$. It can be also deduced from the fact that $$S(a) = h^{-1}\circ_r T(a) = \{h^{-1}, r, T(a)\}\in E,$$ for all $a\in \mathcal{A}$, because $r,h\in M(E)$ with $h$ positive and invertible, $h^{-1}\in M(E)$, $E$ is a triple ideal of $M(E)$ and $h$ (and hence $h^{-1}$) operator commutes with every element in the images of $S$ and $T$.\smallskip $(f)$ Let us fix $x\in M(\mathcal{A})$ and $a\in \mathcal{A}.$ Let us recall that, by Theorem \ref{thm characterization of OP JBstar}, the identity $$T(x) = h\circ_r S(x) = S(x)\circ_r h,$$ holds for all $x\in \mathcal{A}$. We know from $(e)$ that $S: \mathcal{A}\to (E, \circ_r, *_r)$ is a Jordan $^*$-homomorphism. By the separate weak$^*$-continuity of the Jordan product of ${E}^{**} = E_2^{**}(r)$, the weak$^*$ continuity of $T^{**},S^{**} :\mathcal{A}^{**}\to E^{**}$ and the weak$^*$ density of $\mathcal{A}$ in $\mathcal{A}^{**}$ we can easily deduce that $$T^{**}(x) = h\circ_r S^{**}(x) = S^{**}(x)\circ_r h, \hbox{ for all } x\in \mathcal{A}^{**}.$$ Similar arguments are valid to show that $S^{**}:\mathcal{A}^{**}\to E^{**} = E_2^{**}(r)$ is a Jordan $^*$-homomorphism.\smallskip We observe that $h$ and every element in the image of $S^{**}$ operator commute in the JBW$^*$-algebra $E^{**}_2(r)= E^{**}$ (essentially because $h$ and every element in the image of $S$ operator commute, cf. Theorem \ref{thm characterization of OP JBstar}). Since for each $a\in \mathcal{A}_{sa}^{**}$, the element $S^{**} (a)$ belongs to the self-adjoint part of $E^{**}_2(r)= E^{**}$ and operator commutes with $h$, we deduce from \cite[Lemma 5]{BurFerGarMarPe2008} that $h \circ_r S^{**} (a)$ and $h$ operator commute for every $a\in \mathcal{A}_{sa}^{**}$, and by linearity, for all $a\in \mathcal{A}^{**}$.\smallskip Now, since $h$ lies in $ M(E)$ and $x\in M(\mathcal{A})$, by applying the above facts, we have $$\begin{aligned} T^{**}(x)\circ_r T(a) &= (h\circ_r S^{**}(x))\circ_r (h\circ_r S(a)) = h\circ_r ( (h\circ_r S^{**}(x)) \circ_r S(a)) \\ &= h\circ_r ( ( S(a) \circ_r S^{**}(x)) \circ_r h) = h\circ_r ( S^{**} (a\circ x) \circ_r h)\\ &=h\circ_r ( S (a\circ x) \circ_r h) = h\circ_r T(x\circ a)\in {E}, \end{aligned}$$ for all $a\in \mathcal{A}$. The surjectivity of $T$ proves that $T^{**}(x)\circ_r E\subseteq E.$ We have shown that $T^{**} (x)$ is a (Jordan) multiplier of the JB$^*$-algebra $(E,\circ_r, *_{r})$. The uniqueness of the triple product (cf.
\cite[Proposition 5.5]{Ka}) assures that $T^{**} (x)\in M(E).$\smallskip Finally, $(g)$ follows from Lemma \ref{l kernel is an ideal JBstar} and $(h)$ is a straightforward consequence of the previous statements. \end{proof} We can now finish this section with a complete description of all bijective bounded linear operators preserving orthogonality from a JB$^*$-algebra onto a JB$^*$-triple, which follows as a consequence of Propositions \ref{p surjective OP preserves multipliers JB} and \ref{p Ortpreserving on Asa JBstar}. \begin{corollary}\label{c Characterization bd OP plus bijective Jordan} Let $T : \mathcal{A}\to E$ be a bijective bounded linear operator from a JB$^*$-algebra to a JB$^*$-triple. Let $h=T^{**} (1)\in E^{**}$ and let $r$ be the range tripotent of $h$ in $E^{**}$. Then the following statements are equivalent:\begin{enumerate}[$(a)$] \item $T$ is orthogonality preserving; \item The elements $h$ and $r$ belong to $M(E)$ with $h$ invertible and $r$ unitary and there exists a Jordan $^*$-isomorphism $S: \mathcal{A}\to (E,\circ_{r},*_{r})$ such that $h$ lies in $Z(E^{**},\circ_{r},*_{r})$, and $T(x)=h\circ_{r(h)} S(x) = U_{h^{\frac12}} S(x),$ for every $x\in \mathcal{A}$, where $h^{\frac12}$ denotes the square root of the positive element $h$ in the JB$^*$-algebra $E_2^{**} (r)$; \item $T$ is biorthogonality preserving; \item $T$ is orthogonality preserving on $\mathcal{A}_{sa}$; \item $T$ is orthogonality preserving on $\mathcal{A}^{+}$; \item $T$ preserves zero-triple-products, i.e. $$\{a,b,c\}=0 \Rightarrow \{T(a),T(b),T(c)\}=0;$$ \item $T$ preserves zero-triple-products in both directions, i.e. $$\{a,b,c\}=0 \Leftrightarrow \{T(a),T(b),T(c)\}=0.$$ \end{enumerate} \end{corollary} \section{The centroid of a JB$^*$-triple in the study of one-parameter semigroups}\label{sec: one-parametri groups OP Jordan} This section aims to establish a Jordan version of the description of one-parameter semigroups of orthogonality preserving operators on C$^*$-algebras developed in \cite[Theorem 3]{GarPeUnitCstaralg20}. To facilitate the arguments we shall employ some results and terminology developed by S. Dineen and R. Timoney in the wider setting of JB$^*$-triples (see \cite{DiTi88}). According to the nomenclature in the just quoted paper, given a JB$^*$-triple $E$, the \emph{centroid}, $C(E),$ of $E$ is the set of all bounded linear operators $T:E\to E$ satisfying $$ T\{x,y,z\} = \{T(x), y,z\}, \hbox{ for all } x,y,z\in E.$$ It is known that $T\in B(E)$ lies in $C(E)$ if and only if $T$ commutes with all operators of the form $L(x,x)$ ($x\in E$) if and only if $T$ commutes with all operators of the form $L(x,y)$ ($x,y\in E$). It is further known that the centroid of $E$ coincides with the centralizer of the underlying Banach space $E$ in the sense of \cite{Beh79,Cunn67}, and it is precisely the center of the subalgebra of $B(E)$ generated by the Hermitian operators (cf. \cite[Theorem 2.8 and Corollary 2.10]{DiTi88}).\smallskip We have already employed the center of a JB$^*$-algebra $\mathcal{A}$ in the previous section. The \emph{centroid} of $\mathcal{A}$ as JB$^*$-algebra is the set of all bounded linear operators $T:\mathcal{A} \to \mathcal{A}$ satisfying $T(x\circ y ) = T(x) \circ y$ for all $x,y\in \mathcal{A}$. To reassure the reader, we note that all notions and terminology are perfectly compatible.
Actually, the centroid of $\mathcal{A}$ as JB$^*$-algebra coincides with the centroid of $\mathcal{A}$ as JB$^*$-triple \cite[Proposition 3.4]{DiTi88}, and moreover, a bounded linear operator $T$ on $\mathcal{A}$ belongs to the centroid if and only if $T= M_a$ for some element $a$ in the center of $M(\mathcal{A})$ \cite[Proposition 3.5]{DiTi88}. Actually, \cite[Proposition 3.5]{DiTi88} is only valid for unital JB$^*$-algebras; however, if $T$ is an element in the centroid of a non-unital JB$^*$-algebra $\mathcal{A}$, then $T^{**}$ is an element in the centroid of $\mathcal{A}^{**}$, so there exists an element $a$ in the center of $\mathcal{A}^{**}$ such that $T^{**}(x) = M_{a} (x) = a\circ x,$ for all $x\in \mathcal{A}^{**}$. Since $T= T^{**}|_{\mathcal{A}}$ is $\mathcal{A}$-valued, the element $a$ belongs to $M(\mathcal{A})$.\smallskip We shall need the following result gathered from different papers. \begin{lemma}\label{l technical centroid with two different products} Let $u$ be a unitary in a unital JB$^*$-algebra $\mathcal{A}$. Let $(\mathcal{A},\circ_u,*_{u})$ be the JB$^*$-algebra associated with $u$. The following statements hold:\begin{enumerate}[$(a)$] \item The identity $$\begin{aligned}\J xyz &= (x\circ y^*) \circ z + (z\circ y^*)\circ x - (x\circ z)\circ y^* \\ &= (x\circ_u y^{*_u}) \circ_u z + (z\circ_u y^{*_u})\circ_u x - (x\circ_u z)\circ_u y^{*_u} \end{aligned}$$ holds for all $x,y,z\in \mathcal{A}$; \item If $h\in Z(\mathcal{A},\circ_u,*_{u})$, and $M_h^u$ denotes the Jordan multiplication operator by $h$ in $(\mathcal{A},\circ_u,*_{u})$ {\rm(}i.e., $M_h^u (x) =h \circ_{u} x${\rm)}, the equality $\{ M_h^u (x) , y, z\} = M_h^u \{x,y,z\}$ holds for all $x,y,z\in \mathcal{A}.$ \end{enumerate} \end{lemma} \begin{proof} $(a)$ As we commented in \eqref{eq triple product JBstar algebra}, $\mathcal{A}$ and $(\mathcal{A},\circ_u,*_{u})$ are JB$^*$-algebras with respect to the original norm and the corresponding Jordan products and involutions. Therefore, the identity mapping is a surjective linear isometry between these two JB$^*$-triples, and thus it follows from \cite[Proposition 5.5]{Ka} and \eqref{eq triple product JBstar algebra} that the desired identity holds.\smallskip $(b)$ Since $h\in Z(\mathcal{A},\circ_u,*_{u})$, the mapping $M_h^u$ is a centralizer of the JB$^*$-triple $(\mathcal{A},\{.,.,.\})$ (cf. \cite[Proposition 3.5]{DiTi88} and $(a)$). Consequently, $$M_h^u \{x,y,z\} = \{ M_h^u (x) , y, z\} \hbox{ for every } x,y,z\in \mathcal{A}.$$ \end{proof} We recall that a \emph{triple derivation} on a JB$^*$-triple $E$ is a linear mapping $\delta: E\to E$ satisfying the so-called \emph{triple Leibniz' rule}: $$\delta\{a,b,c\} = \{\delta(a) , b , c\} + \{a, \delta( b ) , c\} + \{a,b,\delta(c)\} \ \ \ (a,b,c\in E).$$ Let us give an example. Fix two elements $a,b\in E$. By the Jordan identity, the mapping $\delta (a,b) := L(a,b)-L(b,a)$ is a triple derivation on $E$ and obviously continuous. It is known that every triple derivation on a JB$^*$-triple is automatically continuous (see \cite[Corollary 2.2]{BarFri90}).
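The verification that $\delta(a,b)$ satisfies the triple Leibniz rule, usually left to the reader, is a direct application of the Jordan identity; we spell it out since the same manipulation appears tacitly several times. Applying both sides of the Jordan identity to an element $z$ gives \begin{displaymath} L(a,b)\{x,y,z\} = \{L(a,b)x,y,z\} - \{x,L(b,a)y,z\} + \{x,y,L(a,b)z\}, \end{displaymath} and subtracting the same identity with the roles of $a$ and $b$ interchanged yields \begin{displaymath} \delta(a,b)\{x,y,z\} = \{\delta(a,b)(x),y,z\} + \{x,\delta(a,b)(y),z\} + \{x,y,\delta(a,b)(z)\}. \end{displaymath}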
Furthermore, by the separate weak$^*$ continuity of the triple product of every JBW$^*$-triple, we can conclude that for each triple derivation $\delta$ on a JB$^*$-triple $E$, the bitranspose $\delta^{**} : E^{**}\to E^{**}$ is a triple derivation too.\smallskip Under the terminology of JB$^*$-triples and triple derivations, Proposition 4 in \cite{GarPeUnitCstaralg20} and some related results in \cite{PedS88} admit the next extension, which, as we shall see below, is even more natural and complete in this wider setting. \begin{lemma}\label{l -parameter group of iso on a JB*-triple} Let $\{U_t : t\in\mathbb{R}_0^+\}$ be a uniformly continuous one-parameter semigroup of surjective isometries on a JB$^*$-triple $E$. Then there exists a triple derivation $\delta : E\to E$ satisfying $U_t = e^{t \delta}$ for all $t\in \mathbb{R}$. Furthermore, for each triple derivation $\delta$ on $E$ the assignment $t\mapsto e^{t \delta}$ is a one-parameter semigroup of surjective isometries {\rm(}triple automorphisms{\rm)} on $E$. \end{lemma} \begin{proof} Let us find $R\in B(E)$ such that $U_t = e^{t R}$ for all $t\in \mathbb{R}$. Since every surjective isometry on $E$ is a triple isomorphism \cite[Proposition 5.5]{Ka}, for each real $t$ the mapping $U_t$ satisfies $$e^{t R} \{a,b,c \} =U_t \{a,b,c \} = \{ U_t(a), U_t(b), U_t(c) \} = \{e^{t R}(a),e^{t R}(b), e^{t R}(c)\} \ \ (\forall t\in \mathbb{R}).$$ Taking derivatives at $t=0$ we get $$R \{a,b,c \} = \{R(a),b,c \} +\{a,R(b),c \} + \{a,b,R(c) \}, \hbox{ for all } a,b,c\in E,$$ which proves that $R= \delta$ is the desired triple derivation.\smallskip For the last statement let us observe that every triple derivation $\delta$ on $E$ is a dissipative mapping (cf. \cite[Theorem 2.1]{BarFri90}). By \cite[Corollary 10.13]{BonsDun73} $\|e^{t \delta}\|\leq 1$ for all $t\in \mathbb{R}^+_0$. Since $-\delta$ also is a triple derivation on $E$, the just quoted result implies that $\|e^{t \delta}\|\leq 1$ for all $t\in \mathbb{R}$, and thus $e^{t \delta}$ is a surjective isometry (equivalently, a triple automorphism) on $E$ for all $t\in \mathbb{R}$. \end{proof} Let $\mathcal{B}$ be a JB$^*$-algebra. A linear mapping $D : \mathcal{B} \to \mathcal{B}$ is said to be a \emph{Jordan derivation} if $D(a \circ b) = D(a) \circ b + a \circ D(b)$, for every $a,b$ in $\mathcal{B}$. A Jordan $^*$-derivation on $\mathcal{B}$ is a Jordan derivation $D$ satisfying $D(a^*) = D(a)^*$ for all $a\in \mathcal{B}$. If $1$ is a unit in $\mathcal{B}$, we have $D(1) =0$. Every Jordan derivation on a JB$^*$-algebra is automatically continuous (see \cite[Corollary 2.3]{HejNik96}). Actually, the results in \cite[Lemmata 1 and 2]{HoMarPeRu} (see also \cite[Proposition 3.7]{HoPeRu}) show that a linear mapping $\delta$ on a unital JB$^*$-algebra $\mathcal{B}$ is a triple derivation if and only if $\delta(1)^* = -\delta(1)$ and $\delta-\delta(\frac12 \delta(1), 1)$ is a Jordan $^*$-derivation on $\mathcal{B}$ (where $\delta(\frac12 \delta(1), 1) (x) = \delta(1)\circ x$ for all $x\in \mathcal{B}$), that is, $\delta$ is a triple derivation if and only if $\delta$ is the sum of a Jordan $^*$-derivation and a Jordan multiplication operator by a skew symmetric element in $\mathcal{B}$.\smallskip We can continue with a Jordan version of one of the statements in \cite[Remark 2]{GarPeUnitCstaralg20}. \begin{lemma}\label{l Jordan *-derivations vanish on the center} Let $d:\mathcal{A}\to \mathcal{A}$ be a Jordan $^*$-derivation on a JB$^*$-algebra.
Then $d$ vanishes on the center of $\mathcal{A},$ and consequently $d$ commutes with $M_z$ for every $z\in Z(\mathcal{A})$. \end{lemma} \begin{proof} We have already justified in the comments prior to this lemma that $d$ is a triple derivation on $\mathcal{A}$. Therefore $d|_{\mathcal{A}_{sa}} : \mathcal{A}_{sa}\to \mathcal{A}_{sa}$ is a Jordan derivation on the JB-algebra $\mathcal{A}_{sa}$. Since the center of $\mathcal{A}$ is $^*$-invariant, it suffices to prove that $d$ vanishes on every element in $Z(\mathcal{A})_{sa}= Z(\mathcal{A}_{sa}) = Z(\mathcal{A})\cap \mathcal{A}_{sa}.$\smallskip The Approximation Theorem \cite[Approximation Theorem 4.2]{Upmeier0} guarantees that every Jordan derivation on a JB$^*$-algebra can be approximated in the strong operator topology by inner derivations (i.e. by derivations which are finite sums of maps of the form $x\mapsto [M_a,M_b] (x) = (M_a M_b - M_b M_a)(x)$). Fix $z\in Z(\mathcal{A})_{sa}$ and an arbitrary $\varepsilon>0$. It follows from the above result that there exist $a_1,b_1,\ldots,a_m,b_m\in \mathcal{A}_{sa}$ satisfying $\displaystyle \left\| d(z) - \sum_{j=1}^m [M_{a_j},M_{b_j}] (z)\right\|<\varepsilon.$ Having in mind that $z$ is central we obtain $[M_{a_j},M_{b_j}] (z) = a_j\circ (b_j\circ z) - b_j\circ (a_j\circ z)= z\circ (b_j\circ a_j) - z\circ (a_j\circ b_j) =0,$ for all $j$. It then follows that $\left\| d(z) \right\|<\varepsilon,$ and the arbitrariness of $\varepsilon>0$ implies that $d(z) =0$.\smallskip An alternative proof can be obtained as follows: The hermitian part of $\mathcal{A}$ is a real JB$^*$-triple in the sense employed in \cite{HoMarPeRu} and $d|_{\mathcal{A}_{sa}} : \mathcal{A}_{sa}\to \mathcal{A}_{sa}$ is a triple derivation. By the Jordan identity, a typical example of a triple derivation on $\mathcal{A}_{sa},$ regarded as a real JB$^*$-triple, is one given by $\delta(a,b) (x) = L(a,b) (x) -L(b,a)(x) =\{a,b,x\}- \{b,a,x\}$ ($x\in \mathcal{A}_{sa}$), where $a,b$ are fixed elements in $\mathcal{A}_{sa}$. Let us pick $z\in Z(\mathcal{A}_{sa})$. It is easy to check that $$ \begin{aligned} \delta(a,b) (z) &=\{a,b,z\}- \{b,a,z\}=(a\circ b) \circ z + (z\circ b)\circ a - (a\circ z)\circ b \\ &- (b\circ a) \circ z - (z\circ a)\circ b + (b\circ z)\circ a\\ &= 2 (z\circ b)\circ a - 2 (a\circ z)\circ b = 2 (a\circ b) \circ z - 2 (b\circ a) \circ z =0. \end{aligned}$$ Those triple derivations on a real JB$^*$-triple which are expressed as finite sums of triple derivations of the form $\delta(a,b)$ are called inner derivations. Theorem 5 in \cite{HoMarPeRu} proves that every triple derivation on a real JB$^*$-triple can be approximated by inner derivations with respect to the strong operator topology. Therefore, given $z\in Z(\mathcal{A}_{sa})$ and $\varepsilon>0$ there exist $a_1,\ldots,a_m$, $b_1,\ldots,b_m$ in $\mathcal{A}_{sa}$ such that $\displaystyle \varepsilon > \left\| d (z) - \sum_{j=1}^m \delta(a_j,b_j) (z) \right\| = \left\| d (z) \right\|.$ The arbitrariness of $\varepsilon>0$ assures that $d(z)=0$, as desired.\smallskip Finally for $z\in Z(\mathcal{A})$ we have $$d M_z (x) = d (z\circ x) = d(z) \circ x + z\circ d(x) = M_z d(x), \hbox{ for all } x\in \mathcal{A}.$$ \end{proof} We recall next the definition and basic properties of the strong$^*$ topology for general JBW$^*$-triples.
Let us suppose that $\varphi$ is a norm one functional in the predual, $M_*$, of a JBW$^*$-triple $M.$ If $z$ is any norm one element in $M$ with $\varphi (z) =1$, Proposition 1.2 in \cite{barton1987grothendieck} proves that the mapping $$(x,y)\mapsto \varphi\J xyz$$ is a positive sesquilinear form on $M,$ which does not depend on the choice of $z$. We find in this way prehilbertian seminorms on $M$ given by $\|x\|_{\varphi}^2:= \varphi\J xxz,$ ($x\in M$). The \emph{strong*-topology} of $M$ is the topology generated by the family $\{ \|\cdot\|_{\varphi}:\varphi\in {M_*}, \|\varphi \| =1 \}$ (cf. \cite{BarFri90}). As in the setting of von Neumann algebras, the triple product of every JBW$^*$-triple is jointly strong$^*$-continuous on bounded sets (see \cite[Theorem]{RodPa91} and \cite[\S 4 and Theorem 9]{PeRo2001}). It is known that a linear map between JBW$^*$-triples is strong$^*$ continuous if and only if it is weak$^*$ continuous (cf. \cite[Corollary 3]{RodPa91} and \cite[page 621]{PeRo2001}).\smallskip Let us go back to the triple spectrum. For each non-zero element $a$ in a JB$^*$-triple $E$ we set $m_q (a) := \min \{\lambda : \lambda\in \Omega_a\}$, where $\Omega_a$ denotes the triple spectrum of $a$. We set $m_q (0) =0$. The mapping $m_q : E\to \mathbb{R}_0^{+}$, $a\mapsto m_q(a)$ has been considered in \cite{JamPeSiddTah2015} in the study of the $\lambda$-function in the case of JBW$^*$-triples. One of the consequences of Theorem 3.1 in the just mentioned reference is that \begin{equation}\label{eq mq is 1Lipschitz} |m_q (a) -m_q (b)| \leq \|a-b\|, \hbox{ for all } a,b\in E, \end{equation} (cf. \cite[Theorem 3.1 and $(3.2)$]{JamPeSiddTah2015}).\smallskip Let $z$ be an element in a JB$^*$-triple $E$. Back to the local Gelfand theory, we consider the JB$^*$-subtriples $E_z$ and $E_{z^{[3]}}$ generated by $z$ and $z^{[3]}$, respectively. It is known that $E_z = E_{z^{[3]}}$ (cf. \cite[comments before Proposition 2.1]{BunChuZal2000}). The following property holds: \begin{equation}\label{eq uniqueness of cubic root}\hbox{ $z^{[3]} = a$ for some $a$ in $E$ implies that $z=a^{[\frac13]}\in E_a$.} \end{equation} Namely, clearly $a\in E_z$ and thus $E_a \subseteq E_z$. On the other hand $z^{[3]}\in E_a$ and hence $E_z = E_{z^{[3]}} \subseteq E_a \subseteq E_z.$ The local Gelfand theory gives the statement.\smallskip We are now in a position to describe the uniformly continuous one-parameter semigroups of orthogonality preserving operators on a general JB$^*$-algebra. \begin{theorem}\label{t Wolff one-parameter for OP JBstar} Let $\mathcal{A}$ be a JB$^*$-algebra. Suppose $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a family of orthogonality preserving bounded linear bijections on $\mathcal{A}$ with $T_0=Id$. For each $t\geq 0$ let $h_t = T_t^{**} (1)$, let $r_t$ be the range tripotent of $h_t$ in $\mathcal{A}^{**}$ and let $S_t: \mathcal{A} \to (\mathcal{A},\circ_{r_t},*_{r_t})$ denote the Jordan $^*$-isomorphism associated with $T_t$ given by Corollary \ref{c Characterization bd OP plus bijective Jordan}. Then the following statements are equivalent:\begin{enumerate}[$(a)$]\item $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of orthogonality preserving operators on $\mathcal{A}$; \item $\{S_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of surjective linear isometries {\rm(}i.e.
triple isomorphisms{\rm)} on $\mathcal{A}$ {\rm(}and hence there exists a triple derivation $\delta$ on $\mathcal{A}$ such that $S_t = e^{t \delta}$ for all $t\in \mathbb{R}${\rm)}, the mapping $t\mapsto h_t $ is continuous at zero, and the identity \begin{equation}\label{eq new idenity in the statement of theorem 1 on one-parameter Jordan} h_{t+s} = h_t \circ_{r_t} S_t^{**} (h_s)= \{ h_t , {r_t}, S_t^{**} (h_s) \}, \end{equation} holds for all $s,t\in \mathbb{R}.$ \end{enumerate} \end{theorem} \begin{proof} Let us begin with a common property employed in both implications. We claim that the identity in \eqref{eq new idenity in the statement of theorem 1 on one-parameter Jordan} implies that \begin{equation}\label{eq 30 for ranges} r_{t+s} = r_t \circ_{r_t} S_t^{**} (r_s)= S_t^{**} (r_s), \hbox{ for all } s,t\in \mathbb{R}. \end{equation} Indeed, since the elements $h_{t}$ and $h_{t}^{[\frac{1}{3^n}]}$ belong to $Z(\mathcal{A}^{**},\circ_{r_{t}},*_{r_{t}})$ and $S_t : \mathcal{A}\to (\mathcal{A},\circ_{r_t}, *_{r_t})$ is a Jordan $^*$-isomorphism, an induction argument can be applied to deduce that the identity \begin{equation}\label{eq 3n cubic roots h last theorem} h_{t+s}^{[\frac{1}{3^n}]} = h_{t}^{[\frac{1}{3^n}]} \circ_{r_t} S_t^{**}(h_s^{[\frac{1}{3^n}]}), \end{equation} holds for all $n\in\mathbb{N}$. Let us briefly convince the reader. The uniqueness of the triple product (cf. \cite[Proposition 5.5]{Ka}) proves that $$\{x,y,z\} = (x \circ_{r_t} y^{*_{r_t}}) \circ_{r_t} z + (z \circ_{r_t} y^{*_{r_t}}) \circ_{r_t} x - (x \circ_{r_t} z) \circ_{r_t} y^{*_{r_t}}, \hbox{ for all } x,y,z\in \mathcal{A},$$ which combined with the commuting properties of $h_{t}$ and $h_{t}^{[\frac{1}{3^n}]}$ in $(\mathcal{A},\circ_{r_t}, *_{r_t})$ assures that $$ \left(h_{t}^{[\frac{1}{3^{n+1}}]} \circ_{r_t} S_t^{**}(h_s^{[\frac{1}{3^{n+1}}]})\right)^{[3]} = \left(h_{t}^{[\frac{1}{3^{n+1}}]}\right)^{[3]} \circ_{r_t} S_t^{**}\left(h_s^{[\frac{1}{3^{n+1}}]}\right)^{[3]} $$ $$ = h_{t}^{[\frac{1}{3^{n}}]} \circ_{r_t} S_t^{**}\left( \left(h_s^{[\frac{1}{3^{n+1}}]}\right)^{[3]}\right) = h_{t}^{[\frac{1}{3^{n}}]} \circ_{r_t} S_t^{**}\left(h_s^{[\frac{1}{3^{n}}]} \right) = h_{t+s}^{[\frac{1}{3^n}]},$$ where in the last equality we applied the induction hypothesis. The discussion in \eqref{eq uniqueness of cubic root} proves that $h_{t}^{[\frac{1}{3^{n+1}}]} \circ_{r_t} S_t^{**}(h_s^{[\frac{1}{3^{n+1}}]}) = h_{t+s}^{[\frac{1}{3^{n+1}}]},$ which concludes the induction argument leading to \eqref{eq 3n cubic roots h last theorem}.\smallskip Now, since $(h_{t+s}^{[\frac{1}{3^n}]})_n\to r_{t+s}$, $(h_{t}^{[\frac{1}{3^n}]})_n\to r_{t}$ and $(h_{s}^{[\frac{1}{3^n}]})_n\to r_{s}$ in the strong$^*$ topology of $\mathcal{A}^{**},$ $S_t^{**}$ is strong$^*$ continuous, and the triple product of every JBW$^*$-triple is jointly strong$^*$ continuous on bounded sets (cf. \cite[Theorem]{RodPa91} and \cite[\S 4 and Theorem 9]{PeRo2001}), by taking strong$^*$ limits in \eqref{eq 3n cubic roots h last theorem} we get $r_{t+s} = r_t \circ_{r_t} S_t^{**} (r_s) = S_t^{**}(r_s),$ for all $s,t\in \mathbb{R}$, which concludes the proof of \eqref{eq 30 for ranges}. \smallskip If we apply that $S_t : \mathcal{A}\to (\mathcal{A},\circ_{r_t}, *_{r_t})$ is a Jordan $^*$-isomorphism we also derive \begin{equation}\label{eq rt+s last theorem} r_{t+s}^{*_{r_t}}= S_t^{**}(r_s)^{*_{r_{t}}} = S_t^{**} (r_s^*), \hbox{ for all } s,t\in \mathbb{R}.
\end{equation} $(a)\Rightarrow (b)$ It follows from the assumptions that $$h_{s+t} = T^{**}_{s+t} (1) = T^{**}_{t} (T^{**}_{s}(1)) = T^{**}_{t} (h_s) = h_t\circ_{r_t} S_t^{**} (h_s) =U^{t}_{h_t^{\frac12}} S_t^{**} (h_s),$$ for all $s,t\in\mathbb{R}$, where $U^{t}$ stands for the $U$ operator in the JB$^*$-algebra $(\mathcal{A},\circ_{r_t}, *_{r_t})$. We can apply \eqref{eq 30 for ranges} and \eqref{eq rt+s last theorem} to deduce $r_{t+s} = S_t^{**} (r_s),$ and $r_{t+s}^{*_{r_t}}= S_t^{**}(r_s)^{*_{r_{t}}} = S_t^{**} (r_s^*),$ for all $s,t\in \mathbb{R}$.\smallskip It follows from the above conclusions that \begin{equation}\label{eq St Jordan *-isom last theorem} S_t: (\mathcal{A}, \circ_{r_s}, *_{r_{s}}) \to (\mathcal{A}, \circ_{r_{t+s}}, *_{r_{t+s}}) \end{equation} is a (unital and isometric) triple isomorphism, and hence a Jordan $^*$-isomorphism.\smallskip We know from Corollary \ref{c Characterization bd OP plus bijective Jordan} that each $h_t$ is a positive invertible element in $Z(\mathcal{A}, \circ_{r_t}, *_{r_{t}})$. Therefore, the mapping $M_{h_t}^{t} (x) := h_t\circ_{r_t} x$ is invertible in $B(\mathcal{A})$. If we fix an arbitrary $a\in \mathcal{A}$, we deduce from the hypotheses that \begin{equation}\label{eq one 08032020} M_{h_{t+s}}^{t+s} S_{t+s} (a) = T_{t+s} (a) = T_t T_s (a) = M_{h_t}^{t} S_{t} M_{h_s}^s S_{s} (a) \end{equation} $$ = M_{h_t}^{t} S_{t} ({h_s}\circ_{r_s} S_{s} (a)) = M_{h_t}^{t} (S_t^{**} (h_s)\circ_{r_{t+s}} S_t S_s(a)),$$ where in the last equality we applied \eqref{eq St Jordan *-isom last theorem}. We focus next on the left-hand side term in the first row and we expand it to get \begin{equation}\label{eq two 08032020} M_{h_{t+s}}^{t+s} S_{t+s} (a) = \{h_{t+s}, r_{t+s}, S_{t+s} (a) \} = \hbox{(by \eqref{eq new idenity in the statement of theorem 1 on one-parameter Jordan})} \end{equation} $$= \{h_t \circ_{r_t} S_t^{**} (h_s), r_{t+s}, S_{t+s} (a) \} = \{M_{h_t}^{t} S_t^{**} (h_s), r_{t+s}, S_{t+s} (a) \} $$ $$= M_{h_t}^{t} \{S_t^{**} (h_s), r_{t+s}, S_{t+s} (a) \}= M_{h_t}^{t} ( S_t^{**} (h_s) \circ_{r_{t+s}} S_{t+s} (a)),$$ where in the penultimate step we applied Lemma \ref{l technical centroid with two different products}$(b)$ and the fact that $h_t$ belongs to $Z(\mathcal{A}, \circ_{r_t}, *_{r_{t}})$.\smallskip Since the mapping $M_{h_t}^{t}$ is invertible in $B(\mathcal{A})$, we deduce from \eqref{eq one 08032020} and \eqref{eq two 08032020} that \begin{equation}\label{eq three 08032020} M_{S_t^{**} (h_s)}^{t+s} S_{t+s} (a) = S_t^{**} (h_s) \circ_{r_{t+s}} S_{t+s} (a) \end{equation} $$= S_t^{**} (h_s)\circ_{r_{t+s}} S_t S_s(a) =M_{S_t^{**} (h_s)}^{t+s} S_t S_s(a).$$ Since, by \eqref{eq St Jordan *-isom last theorem}, $S_t: (\mathcal{A}, \circ_{r_s}, *_{r_{s}}) \to (\mathcal{A}, \circ_{r_{t+s}}, *_{r_{t+s}})$ is a (unital) Jordan $^*$-isomorphism, and $h_s$ is positive, central and invertible in $(\mathcal{A}, \circ_{r_s}, *_{r_{s}})$, the element $S_t^{**} (h_s)$ is positive, central and invertible in $(\mathcal{A}, \circ_{r_{t+s}}, *_{r_{t+s}})$, and thus the mapping $M_{S_t^{**} (h_s)}^{t+s}$ is invertible in $B(\mathcal{A})$. It follows from \eqref{eq three 08032020} that $S_{t+s} (a) = S_t S_s(a).$\smallskip We have therefore shown that $\{S_s: s\in \mathbb{R}_0^{+}\}$ is a one-parameter semigroup of surjective linear isometries (i.e. triple isomorphisms) on $\mathcal{A}$. It only remains to show that it is uniformly continuous.
The uniform continuity of the semigroup $\{T_s: s\in \mathbb{R}_0^{+}\}$ proves that the mapping $s\mapsto h_s = T_s^{**} (1)$ is continuous at zero. \smallskip For each real $s$, the element $h_s$ is positive, central and invertible in the JB$^*$-algebra $(\mathcal{A}, \circ_{r_s}, *_{r_{s}})$, and thus $m_q (h_s)>0$ for all $s\in \mathbb{R}$. Since $h_0 =1$, we can deduce from \eqref{eq mq is 1Lipschitz} the existence of $\rho>0$ and $0<\theta_1\leq \theta_2$ in $\mathbb{R}$ such that $\Omega_{h_s} \subseteq [\theta_1, \theta_2]$ (and hence $\theta_1\leq m_q (h_s) \leq \theta_2$) for all $|s|<\rho$. In particular, for each natural $n$, the mapping $s\mapsto h_s^{[2 n-1]}$ is continuous at zero. Consequently, for each odd polynomial with zero constant term $p(\lambda)$, the mapping $s\mapsto p_{t}(h_s)$ also is continuous at zero (where we employ the triple polynomial calculus). Fix a natural $m$. By the Stone-Weierstrass theorem the function $g_m: [\theta_1, \theta_2] \to \mathbb{R}$, $g_m(\lambda )= \lambda^{\frac{1}{3^m}}$ can be uniformly approximated by an odd polynomial with zero constant term. By combining the previous facts we prove that the mapping $s\mapsto (g_m)_t(h_s) = h_{s}^{[\frac{1}{3^m}]}$ is continuous at zero (for all $m\in \mathbb{N}$). Since the sequence $(g_m)_m$ converges uniformly to the unit element $\textbf{1}$ in $C[\theta_1, \theta_2],$ and $\textbf{1}_t (h_s) = r(h_s)$ for all $|s|<\rho$, it can be easily checked that the mapping $s\mapsto r(h_{s}) = r_s$ is continuous at zero.\smallskip We can therefore conclude that the mapping $s\mapsto L(h_s,r_s) = M_{h_s}^{s}$ must be continuous at zero, where $M_{h_s}^{s}$ is an invertible element in $B(\mathcal{A})$. Having in mind that $S_{s} = \left(M_{h_s}^{s}\right)^{-1} T_{s}$ ($s\in \mathbb{R}_0^{+}$), we deduce that $\{S_s: s\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of surjective linear isometries, which finishes the proof of the first implication. \smallskip $(b)\Rightarrow(a)$ The identity in \eqref{eq new idenity in the statement of theorem 1 on one-parameter Jordan} holds by assumption; it then follows from \eqref{eq 30 for ranges} that $r_{t+s} = S_t^{**} (r_s)$ for all $s,t\in \mathbb{R}$. As before, this implies that $$S_t: (\mathcal{A}, \circ_{r_s}, *_{r_{s}}) \to (\mathcal{A}, \circ_{r_{t+s}}, *_{r_{t+s}})$$ is a Jordan $^*$-isomorphism. Fix an arbitrary $a\in \mathcal{A}$ to compute $$\begin{aligned} T_{t+s} (a) & = h_{t+s} \circ_{r_{t+s}} S_{t+s} (a) = \{ h_{t+s}, r_{t+s}, S_{t+s} (a)\} = \{ h_{t+s}, r_{t+s}, S_{t} S_{s} (a)\} \\ & = \{ h_t \circ_{r_t} S_t^{**} (h_s) , S_{t}^{**} (r_{s}), S_{t} S_{s} (a)\} = \{ M_{h_t}^{t} S_t^{**} (h_s) , S_{t}^{**} (r_{s}), S_{t} S_{s} (a)\} \\ &= M_{h_t}^{t}\{ S_t^{**} (h_s) , S_{t}^{**} (r_{s}), S_{t} S_{s} (a)\} = M_{h_t}^{t} S_t^{**} \{ h_s , r_{s}, S_{s} (a)\} \\ & = {h_t}\circ_{r_t} S_t^{**} \left( h_s \circ_{r_{s}} S_{s} (a)\right) = T_t^{**} T_s^{**} (a) = T_t T_s (a), \end{aligned}$$ where in the fourth and sixth equalities we applied \eqref{eq new idenity in the statement of theorem 1 on one-parameter Jordan} and Lemma \ref{l technical centroid with two different products}$(b)$ with $h_t\in Z(\mathcal{A}, \circ_{r_t}, *_{r_t})$, respectively. We have proved that $\{T_t: t\in \mathbb{R}_0^+\}$ is a one-parameter semigroup of orthogonality preserving operators on $\mathcal{A}$.
The uniform continuity of the semigroup can be easily deduced from the corresponding property of the one-parameter semigroup $\{S_t: t\in \mathbb{R}_0^+\},$ the continuity of the mapping $t\mapsto h_t$ at zero, and the identity $T_t (\cdot) = \{h_t ,r_t , S_t (\cdot)\}$ with the same arguments we gave in the final part of the proof of $(a)\Rightarrow (b)$. \end{proof} As in the case of C$^*$-algebras, in the above Theorem \ref{t Wolff one-parameter for OP JBstar} the sets $\{r_t: t\in \mathbb{R}_0^+\}$ and $\{h_t: t\in \mathbb{R}_0^+\}$ need not be one-parameter semigroups for the Jordan product (cf. \cite[Remark 3]{GarPeUnitCstaralg20}). However, assuming that each $T_t$ is a symmetric mapping, these sets become semigroups for the Jordan product. \begin{corollary}\label{c Wolff one-parameter for OP JBstar symmetric} Let $\mathcal{A}$ be a JB$^*$-algebra. Suppose $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a family of symmetric orthogonality preserving bounded linear bijections on $\mathcal{A}$ with $T_0=Id$. For each $t\geq 0$ let $h_t = T_t^{**} (1)$, let $r_t$ be the range tripotent of $h_t$ in $\mathcal{A}_{sa}^{**}$ and let $S_t: \mathcal{A} \to (\mathcal{A},\circ_{r_t},*_{r_t})$ denote the Jordan $^*$-isomorphism associated with $T_t$ given by Corollary \ref{c Characterization bd OP plus bijective Jordan}. The following statements are equivalent:\begin{enumerate}[$(a)$]\item $\{T_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of orthogonality preserving operators on $\mathcal{A}$; \item $\{S_t: t\in \mathbb{R}_0^{+}\}$ is a uniformly continuous one-parameter semigroup of surjective linear isometries {\rm(}i.e. triple isomorphisms{\rm)} on $\mathcal{A}$ {\rm(}and hence there exists a triple derivation $\delta$ on $\mathcal{A}$ such that $S_t = e^{t \delta}$ for all $t\in \mathbb{R}${\rm)}, the mapping $t\mapsto h_t $ is continuous at zero, $h_t,r_t\in Z(M(\mathcal{A}))$ and the identities $$h_{t+s} = h_t \circ h_s, \ r_{t+s} = r_t \circ r_s, \hbox{ and }$$ $$h_{t+s} = h_t \circ_{r_t} S_t^{**} (h_s)= \{ h_t , {r_t}, S_t^{**} (h_s) \},$$ hold for all $s,t\in \mathbb{R}.$ \end{enumerate} \noindent Moreover, if any of the previous equivalent statements holds, then there exist $h\in Z(M(\mathcal{A}))$ and a Jordan $^*$-derivation $d$ on $\mathcal{A}$ such that $r_t\circ S_t = e^{t d},$ and $$T_t (a) = e^{t h}\circ (r_t\circ S_t) (a) = e^{t (M_h+ d)} (a),$$ for all $a\in \mathcal{A}$, $t\in \mathbb{R}.$ \end{corollary} \begin{proof} We shall only prove the extra assertions in $(a)\Rightarrow (b)$. We begin by observing that the additional hypothesis on $T_{t}$ (i.e. $T_t$ symmetric) shows that $h_t,r_t\in M(\mathcal{A}_{sa})$. Therefore $r_t$ is a symmetric unitary in $\mathcal{A}^{**}$, and hence $r_t^2 = 1$. In particular, $S_t: \mathcal{A}\to \mathcal{A}$ is a symmetric operator too. Namely, we have shown in the comments before Lemma \ref{l kernel is an ideal JBstar} that the mapping $M_{h_t}^t (x) = h_t \circ_{r_t} x$ is invertible in $B(\mathcal{A})$. Then, by the symmetry of $T_t$, $h_t$ and $r_t$, we get $$\begin{aligned}M_{h_t}^t S_t(x^*) &= h_t \circ_{r_t} S_t(x^*)= T_t(x^*) = T_t (x)^* = \left( h_t \circ_{r_t} S_t(x) \right)^*\\ &= \{ h_t, r_t, S_t(x)\}^* = \{ h_t, r_t, S_t(x)^*\} = M_{h_t}^t \left(S_t(x)^*\right), \end{aligned} $$ and thus $S_t(x^*) = S_t(x)^*$ for all $x\in \mathcal{A}$.
\smallskip As in the proof of Theorem \ref{t Wolff one-parameter for OP JBstar}$(a)\Rightarrow (b)$, the identity $h_{t+s} = h_t \circ_{r_t} S_t^{**} (h_s)$ implies that $r_{t+s} = S_t^{**} (r_s),$ for all $s,t\in \mathbb{R}$ (cf. \eqref{eq 30 for ranges}).\smallskip Since $S_t : \mathcal{A}\to (\mathcal{A},\circ_{r_t}, *_{r_t})$ is a Jordan $^*$-isomorphism we get $$\begin{aligned} r_t^2 \circ S_t (a)= \{r_t, r_t, S_t (a) \} &= S_t (a) = S_t (a)^{*_{r_t}} = \{r_t, S_t (a), r_t \} \end{aligned}$$ for all $a\in \mathcal{A}_{sa}$, which guarantees that $r_t$ and $S_t(a)$ operator commute for all $a\in \mathcal{A}_{sa}$ (compare \eqref{eq ideintity for operator commutativity of two hermitian elements}). Having in mind that $S_t (\mathcal{A}_{sa}) = \mathcal{A}_{sa}$ we conclude that $r_t$ lies in the center of $M(\mathcal{A})$.\smallskip Now, by \cite[Corollary 4.1$(a)$]{BurFerGarPe09} for each $a\in \mathcal{A}_{sa}$ we have $$\begin{aligned}h_t^2 \circ T_t(a) &=\{T_t (a),h_t,h_t\} =\{ h_t, T_t (a), h_t\}, \end{aligned} $$ which, combined with the surjectivity of $T_t$ and \eqref{eq ideintity for operator commutativity of two hermitian elements}, suffices to deduce that $h_t$ lies in the center of $M(\mathcal{A})$.\smallskip We shall next show that $\{r_t \circ S_t\}_{t\in \mathbb{R}}$ is a one-parameter group of Jordan $^*$-isomorphisms on $\mathcal{A}$. Indeed, since $r_t\circ S_t: \mathcal{A}\to \mathcal{A}$ is a Jordan $^*$-isomorphism, $r_s\in Z(\mathcal{A}^{**})$ and $r_{t+s} = S_t^{**} (r_s)$ we get $$(r_t\circ S_t) (r_s\circ S_s (a)) = (r_t\circ S_t^{**}) (r_s) \circ (r_t\circ S_t)(S_s(a)) = r_t^2 \circ (S_t^{**} (r_s)\circ S_t S_s(a))$$ $$= S_t^{**} (r_s)\circ S_t S_s(a)= r_{t+s}\circ S_t S_s(a) = (r_{t+s}\circ S_{t+s}) (a), $$ for all $a\in \mathcal{A}$, which proves the desired statement.\smallskip Therefore $\{(r_t \circ S_t)|_{Z(\mathcal{A})} \}_{t\in \mathbb{R}}$ is a one-parameter group of Jordan $^*$-isomorphisms on the commutative C$^*$-algebra $Z(\mathcal{A})$, and thus Lemma \ref{l Jordan *-derivations vanish on the center} guarantees that it must be constantly equal to the identity. Finally, since $r_t,h_t\in Z(M(\mathcal{A}))$ we deduce that $$\begin{aligned} h_{t+s} &= T_{t+s}^{**} (1) = T_{t}^{**} T_{s}^{**} (1)= h_t \circ_{r_t} S_t^{**} (h_s) \\ &=\{ h_t, {r_t}, S_t^{**} (h_s)\} = h_t\circ ({r_t}\circ S_t^{**} (h_s)) = h_t\circ h_s \end{aligned}$$ and consequently $r_{t+s} = r_t\circ r_s,$ for all $t,s\in \mathbb{R}.$\smallskip We have proved that $\{ r_t ; t\in \mathbb{R}\}$ and $\{ h_t ; t\in \mathbb{R}\}$ are uniformly continuous semigroups in $Z(M(\mathcal{A}))$. Since $Z(M(\mathcal{A}))$ is a unital commutative C$^*$-algebra, we can proceed as in the proof of \cite[Corollary 2]{GarPeUnitCstaralg20} to find $h\in Z(M(\mathcal{A}))$ such that $h_t = e^{t h}$ for all $t\in \mathbb{R}$. Having in mind that $\{r_t \circ S_t\}_{t\in \mathbb{R}}$ is a uniformly continuous one-parameter group of Jordan $^*$-isomorphisms on $\mathcal{A}$, Lemma \ref{l -parameter group of iso on a JB*-triple} and subsequent comments assure the existence of a Jordan $^*$-derivation $d$ on $\mathcal{A}$ such that $r_t\circ S_t = e^{t d},$ and thus $$T_t (a) = e^{t h}\circ (r_t\circ S_t) (a) = e^{t h}\circ e^{t d} (a) = e^{t (M_h+ d)} (a),$$ for all $a\in \mathcal{A}$, $t\in \mathbb{R}$, where in the last equality we applied that $h \in Z(M(\mathcal{A}))$ and Lemma \ref{l Jordan *-derivations vanish on the center}. \end{proof} \textbf{Acknowledgements} A.M.
Peralta was partially supported by the Spanish Ministry of Science, Innovation and Universities (MICINN) and European Regional Development Fund project no. PGC2018-093332-B-I00, Junta de Andaluc\'{\i}a grant FQM375 and Proyecto de I+D+i del Programa Operativo FEDER Andaluc\'{\i}a 2014-2020, ref. A-FQM-242-UGR18. \smallskip \section*{Conflict of interest} The authors declare that they have no conflict of interest.
\section{Introduction} The modeling of neutron stars (NS) relies mostly on the present knowledge of nuclear physics, since nuclear properties determine the characteristics of their crusts -- together with the electrons -- and the energetics of the nucleon liquid in their outer cores~\cite{Lattimer2016, Haensel2007, Steiner2005}. Nowadays, however, most of the theoretical efforts necessary for the understanding of observational data, such as gravitational waves emitted from binary NS~\cite{ligo} or X-ray emissions from millisecond pulsars~\cite{nicer1a,nicer1b,nicer2a,nicer2b}, require for the most part the understanding of the NS inner core, where densities exceed several times the saturation density of nuclear matter ($\rho_{\mathrm{sat}}\approx 2.7\times 10^{14}$~g~cm$^{-3}$). These new data question the impact of nuclear physics constraints, operating at or around saturation density, on the properties of supra-saturation density matter. To what extent do global properties of neutron stars, such as their masses, radii or tidal deformabilities, require accurate experimental nuclear data as complementary constraints? Is the extrapolation of nuclear physics models to higher densities predominantly controlled by nuclear physics data at saturation density? What is the impact of other uncertainties, such as for instance the isospin symmetry dependence of the equation of state (EoS), which is for the most part unknown, except close to saturation density and to isospin symmetry ($(N-Z)/A\lesssim0.25$)? Such questions were recently addressed by analyzing the correlations between a few nuclear empirical parameters (NEP), namely the symmetry energy $E_{\mathrm{sym}}$, its slope $L_{\mathrm{sym}}$ and the incompressibility modulus $K_\mathrm{sat}$, and NS global observables for a set of microscopic nuclear EoS derived within the Brueckner-Hartree-Fock (BHF) formalism~\cite{Wei2020}. No correlations were found, except the one between NS radius and tidal deformability for a 1.4~M$_\odot$ NS ($R_{1.4}$ and $\Lambda_{1.4}$) and the pressure of beta-stable matter at twice saturation density, as initially suggested in Ref.~\cite{Prakash2001}. In a different analysis based on the relativistic mean field (RMF) description of dense matter, a linear correlation of $\Lambda_{1.4}$ with $K_\mathrm{sat}$ and $L_{\mathrm{sym}}$ was however found, as well as an anti-correlation with $E_{\mathrm{sym}}$ and the effective mass~$m^*$~\cite{Souza2020}. Such a controversy implies that the question of the role of low-energy nuclear physics in the prediction of global properties of NS is not yet clarified. This motivates the new analysis presented in this paper. We perform a statistical analysis based on a large number of nuclear physics models (415 in total). We explore the question of the model dependence of the results by investigating various types of modeling: the Skyrme nuclear force~\cite{Bender2003} and two types of relativistic mean field (RMF) approaches, the RMF with non-linear couplings (\mbox{RMF-NL})~\cite{Bender2003,Reinhard1989} and the RMF with density-dependent couplings (\mbox{RMF-DD})~\cite{Typel2018}. Note that RMF-DD models take into account, at least partially, the effect of BHF correlations at the mean field level. We have included such interactions in our analysis and we assume here that they serve as surrogates for more elaborate BHF models, such as those analysed in Ref.~\cite{Wei2020}.
At variance with the analysis presented in Ref.~\cite{Wei2020}, we directly compare the nuclear model predictions in finite nuclei to experimental data. Our approach is different from the one presented in Refs.~\cite{Stone2007,Dutra2012,Dutra2014}, where almost the same set of models was confronted with NS observables, ignoring their adequacy in describing low-energy nuclear physics properties. The first two papers compare non-relativistic Skyrme interactions~\cite{Stone2007,Dutra2012}, while the last one uses relativistic Hartree interactions as well as a smaller set of relativistic point interactions~\cite{Dutra2014}. In each study, only an extremely small number of the interactions were able to describe all of the nuclear matter properties considered. Surprisingly, none of the successful interactions are among those that provide the best fits to nuclear binding energies and charge radii. In the present paper we adopt a different strategy where we first select the models according to their ability to reproduce low-energy nuclear physics data. To do so, we perform a direct comparison in finite nuclei between model predictions and low-energy nuclear data, namely we consider nuclear binding energies, charge radii, giant monopole energies and a constraint on the density dependence of the symmetry energy. We end up with different groups of models passing various constraints with different accuracies, as in the previous analyses. In the second step of the analysis, we propagate the model predictions to higher densities. This allows us to analyse the impact of low-energy nuclear physics data on the predictions of global NS properties. We show that the model dispersion at high density is weakly impacted by low-energy nuclear physics data, except for the data associated with the symmetry energy, while the largest source of uncertainties lies in the density dependence of the EoS, which is not constrained by low-energy nuclear physics data. To be more precise, we find that the constraint to reproduce low-energy nuclear physics properties leads to the prediction that canonical-mass neutron stars should have a radius between 12 and 14~km, if they are made of nucleons and leptons only. However, increasing the accuracy of the reproduction of the low-energy data has a weaker impact on these predictions than the missing information about the density dependence of the EoS at two to four times the nuclear saturation density. So the confrontation of the nuclear EoS with low-energy nuclear physics data, while necessary, is not sufficient for an accurate prediction of the dense matter EoS. While such a result may have been anticipated, at least qualitatively, our analysis provides quantitative estimates of the link between the goodness of nuclear models assessed in finite nuclei and their predictions for NS global properties. In our analysis, we do not explore the impact of phase transitions on NS global properties, although they are even more uncertain than the density dependence of dense nucleonic matter. For massive neutron stars, the dominant source of uncertainties comes indeed from the lack of a precise prediction for the new phase(s). The question of whether present astrophysical data already indicate the existence of a phase transition is not yet a settled one, see for instance~\cite{Annala2020,Li2021,Somasundaram2022b} for a sample of recent papers on this subject.
In the present work, we focus on the nuclear physics uncertainties, although our conclusion concerning the weak impact of nuclear physics data becomes even stronger in the case of phase transition(s) in dense matter. The present paper is organised as follows: We first list the experimental data in Section~II, namely the nuclear binding energies, the nuclear charge radii, the isoscalar giant monopole resonance energy in $^{208}$Pb, and the density dependence of the symmetry energy, and discuss the uncertainty that we consider in the comparison between the models and the data. The binding energies and charge radii are considered only for a set of spherical nuclei in order to avoid the complication of including a pairing interaction and possible deformation effects. We then explain in Section~III how the EDFs are classified and we show that our best selection defines a clear correlation between $E_{\mathrm{sym}}$ and $L_{\mathrm{sym}}$. We then calculate in Section~IV masses and radii of NS based on the different groups. A further analysis of the density dependence of the symmetry energy is performed in Section~V, considering the constraint of the NS mass. We next determine NS global properties from our best set of models and analyse the correlation between the radius, the mass and the central pressure at beta-equilibrium in Section~VI. Other global properties such as tidal deformability and moment of inertia are studied in Section~VII. We present our conclusions in Section~VIII. \section{Low energy nuclear experimental data and modeling} \label{sec:data} In this section we start with a quick discussion of the models and then present and discuss the nuclear experimental data employed in the model selection. \subsection{Modeling nuclear low energy properties} In our analysis, we consider a set of Energy Density Functionals (EDFs), which have been found to be an effective tool for analysing the fundamental properties of finite nuclei and for connecting these to nuclear matter properties~\cite{Bender2003}. EDFs can be employed over the full nuclide chart, except for very light nuclei with mass number $A\lesssim 10$. They have a number of free parameters (typically from 5 to 10) which are adjusted to low-energy nuclear properties and are usually calibrated to reproduce the ground-state energies of spherical nuclei, or of the entire nuclear chart, together with charge radii. Some of them, however, are only adjusted to nuclear empirical parameters without being employed to describe finite nuclei. In our analysis, we consider a full set of existing EDFs (415 in total), independently of the way they have been adjusted. We consider both non-relativistic and relativistic mean field models. The former are employed in the Hartree-Fock or Hartree-Fock-Bogoliubov framework together with a zero-range Skyrme-type interaction or a finite-range Gogny or M3Y-type force~\cite{Bender2003,Stone2007,skyrmeligo}. The latter are typically used in a Hartree-Bogoliubov approach with a Lagrangian based on meson-exchange potentials~\cite{Bender2003,Reinhard1989,Typel2018}. In all cases the effective nucleon-nucleon interaction is the key to good agreement of the calculations with experimental data. EDFs also predict nucleon densities, deformations and skin thicknesses, as well as the nuclear EoS, which is a fundamental ingredient to determine the properties of neutron stars, see Ref.~\cite{Bender2003,Stone2007} for a complete review.
\subsection{Energies of doubly magic nuclei} Doubly magic nuclei are often used to calibrate EDF models since they are spherical (no deformation) and have closed shells (no pairing). The many-body complexity is therefore reduced, which accelerates the search for the best set of parameters reproducing the experimental data. Introducing pairing and deformation would lead to an increase of the number of parameters in the model and increase the subsequent uncertainties as well. There are about 13 doubly magic nuclei, see Tables~\ref{tab:data:magic} and \ref{tab:rchsample}, which span the nuclear mass table from light to heavy nuclei, as well as from isospin symmetric to asymmetric nuclei. They allow an easy and tractable search for possible sources of uncertainties in the confrontation of mean field interactions with experimental data. \begin{table}[tb] \centering \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{1.3} \caption{Binding energies $B$ for the 13 doubly magic nuclei which are considered in the present work. Here (-) stands for experimental error bars smaller than the accuracy given in the table and $^\#$ identifies interpolated numbers. We also compare our reference values~\cite{AMDC2016} to the ones from Ref.~\cite{unedf}.} \begin{ruledtabular} \begin{tabular}{rrrd{9}d{9}} $Z$ & $N$ & nucleus & \multicolumn{1}{c}{$B$ (MeV)} & \multicolumn{1}{c}{$B$ (MeV)} \\ & & & \multicolumn{1}{c}{Ref.~\cite{AMDC2016}} & \multicolumn{1}{c}{Ref.~\cite{unedf}} \\ \hline 8 & 8 & $^{16}$O & -127.6193(-) & -127.6172(-)\\ 14 & 20 & $^{34}$Si & -283.4289(140) & -283.4208(141) \\ 20 & 20 & $^{40}$Ca & -342.0521(-) & -342.0336(2) \\ 20 & 28 & $^{48}$Ca & -416.0009(1) & -415.9720(41) \\ 20 & 32 & $^{52}$Ca & -438.3279(7) & -436.5522(6986) \\ 20 & 34 & $^{54}$Ca & -445.3642(500) & -.- \\ 28 & 20 & $^{48}$Ni$^\#$ & -348.7275(5000) & -.- \\ 28 & 28 & $^{56}$Ni & -483.9956(4) & -483.9505(110) \\ 28 & 50 & $^{78}$Ni$^\#$ & -641.5470(6000) & -.- \\ 40 & 50 & $^{90}$Zr & -783.8972(1) & -783.7953(23) \\ 50 & 50 & $^{100}$Sn & -825.2944(3000) & -824.6295(7054) \\ 50 & 82 & $^{132}$Sn & -1102.8430(20) & -1102.6860(136) \\ 82 & 126 & $^{208}$Pb & -1636.4301(11) & -1635.8927(12) \\ \end{tabular} \end{ruledtabular} \label{tab:data:magic} \end{table} Let us first analyse the present situation in terms of the low-energy nuclear data. The experimental data we use in the present study are given in Table~\ref{tab:data:magic}. We have considered 13 doubly magic nuclei, including two for which the binding energy is not measured but extrapolated from neighbouring nuclei ($^{48}$Ni and $^{78}$Ni). Both of the latter are the first unmeasured-mass nucleus of their respective double beta-decay mass parabolas, which each contain five nuclei with measured masses and thus permit a fairly precise extrapolation of the unmeasured masses. The 13 nuclei are grouped into isospin symmetric ones (group S containing 4 nuclei) and the isospin asymmetric ones (group A with 9 nuclei). We also compare the binding energies we consider with the ones used by the UNEDF collaboration~\cite{unedf}, originating from averages of the AME2003~\cite{AME2003} mass table values with recent JYFLTRAP measurements~\cite{JYFLTRAP}. The latter values deviate from those we consider by less than about 0.2~MeV, except for $^{52}$Ca, $^{100}$Sn and $^{208}$Pb, where the differences are respectively 1.8, 0.7 and 0.5~MeV. These deviations are smaller than the criteria we will introduce in the following to assess the quality of the mean field interactions.
These differences in the experimental values thus have little impact on the definitions of the groups of interactions we define in the following. In the case of the Brussels-Montreal Skyrme interactions, a phenomenological Wigner correction $E_W$ is applied to the binding energy, which is given in terms of the following expression, \begin{eqnarray} E_W &=& V_W \exp \left\{ -\lambda\left(\frac{N-Z}{A}\right)^2\right\} \nonumber \\ &&+V_W^\prime \vert N-Z\vert \exp \left\{ -\left(\frac{A}{A_0}\right)^2\right\} \, . \end{eqnarray} A spin-orbit interaction is added to the non-relativistic Skyrme force, see Appendix~\ref{ap:sof}, while the relativistic approach generates it naturally from the scalar and time components of the self-energies~\cite{Bender2003}. In the following, we consider that the model accuracy in the prediction of binding energies $B$ is \begin{equation} \delta_{B}=2.0~\hbox{MeV}\, . \end{equation} This uncertainty is much larger than the experimental one, see Ref.~\cite{Dobaczewski2015} and references therein for more detailed discussions, and essentially reflects the limitation of the EDF approach. Note however that this uncertainty represents a per mil (0.1 \%) accuracy for $^{208}$Pb. \subsection{Charge radii of doubly magic nuclei} \begin{table*}[tb] \centering \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{1.3} \caption{Comparison of the experimental charge radii measured by different groups to those of a sample of effective nuclear interactions. Here we consider the values given in Ref.~\cite{Angeli2013}. $^\dagger$: Using the 1995 PDG data (see the text), as in Ref.~\cite{Chabanat1997}.} \begin{ruledtabular} \begin{tabular}{rrrccccccccc} $Z$ & $N$ & nucleus & $R_{ch}$~(fm) & $R_{ch}$~(fm) & $R_{ch}$~(fm) & SLy5 & BSk18 & UNEDF0 & DD-ME2 & NL3* & NLRA1 \\ & & & \rm{Ref.}~\cite{Angeli2013} & \rm{Ref.}~\cite{unedf} & \rm{Ref.}~\cite{Fricke1995} & \rm{Ref.}~\cite{Chabanat1998} & \rm{Ref.}~\cite{BSK18} & \rm{Ref.}~\cite{unedf0} & Ref.~\cite{ddme2} & \rm{Ref.}~\cite{nl3s} & \rm{Ref.}~\cite{NLRA1} \\ \hline 8 & 8 & $^{16}$O & 2.6991(52) & 2.7010 & -.- & 2.7975 & 2.8141 & 2.8138 & 2.7283 & 2.7346 & 2.7167\\ & & & & & & & 2.7825$^\dagger$ & 2.7992$^\dagger$ & \\ 20 & 20 & $^{40}$Ca & 3.4776(19) & 3.4780 & 3.4767(8) & 3.5059 & 3.5200 & 3.4980 & 3.4651 & 3.4701 & 3.4664\\ & & & & & & & 3.4939$^\dagger$ & 3.5081$^\dagger$ & \\ 20 & 28 & $^{48}$Ca & 3.4771(20) & 3.4790 & 3.4736(8) & 3.5262 & 3.5353 & 3.5204 & 3.4811 & 3.4701 & 3.4700 \\ & & & & & & & 3.5137$^\dagger$ & 3.5228$^\dagger$ & \\ 40 & 50 & $^{90}$Zr & 4.2694(10) & 4.2690 & 4.2692(10) & 4.2859 & 4.2919 & 4.2716 & 4.2733 & 4.2631 & 4.2717\\ & & & & & & & 4.2759$^\dagger$ & 4.2857$^\dagger$ & \\ 50 & 82 & $^{132}$Sn & 4.7093(76) & -.- & -.- & 4.7198& 4.7410 & 4.7221 & 4.7172 & 4.7031 & 4.7141 \\ & & & & & & & 4.7102$^\dagger$ & 4.7315$^\dagger$ & \\ 82 & 126 & $^{208}$Pb & 5.5012(13) & 5.4850 & 5.5013(7) & 5.5001 & 5.5184 & 5.5021 & 5.5180 & 5.5085 & 5.5233\\ & & & & & & & 5.4920$^\dagger$ & 5.5103$^\dagger$ & \\ \end{tabular} \end{ruledtabular} \label{tab:rchsample} \end{table*} We compare charge radii $R_{ch}$ from various compilations in Table~\ref{tab:rchsample}. They are in good agreement, with differences less than 0.01~fm, except for $^{208}$Pb, where the value taken in Ref.~\cite{unedf} is 0.016~fm smaller than those given in Refs.~\cite{Angeli2013,Fricke1995}.
We also show in Table~\ref{tab:rchsample} the charge radii predicted by a set of Skyrme and relativistic approaches and explore the impact of changing the proton radius of the SLy5 and BSk18 interactions from the presently adopted one to the one suggested in the 1995 PDG data. Some interactions have indeed been adjusted with different values for the proton radius, since its value has changed over time. The SLy5 interaction, for instance, was obtained with the 1995 PDG data~\cite{Chabanat1997}. The nuclear charge radius $R_{ch}$ is related to the mean square point-proton radius $\langle R_p^2\rangle$ as \cite{Bertozzi1972,Friar1975,Bender2003}, \begin{equation} \langle R_{ch}\rangle^2 = \langle R_p^2\rangle + \langle r_p^2\rangle + \frac N Z \langle r_n^2\rangle + \langle R_{ch}^{so}\rangle^2 + \langle R_{ch}^{DF}\rangle^2 \, , \label{eq:rch} \end{equation} where the second term $\langle r_p^2\rangle= \frac 3 2 \sigma^2$ originates from the convolution of the point particle proton density with a proton Gaussian form factor (with width $\sigma$). The proton radius is further discussed below. The third term in Eq.~\eqref{eq:rch} is a correction induced by the negative electromagnetic contribution of the neutron charge density. It is defined as $\langle r_n^2\rangle=\frac 3 2 \hbar^2 /(m_N c)^2 \mu_n $, with $\mu_n$ the neutron magnetic moment. The spin-orbit charge distribution furnishes a magnetic dipole moment correction to the nuclear rms charge radius, $\langle R_{ch}^{so}\rangle^2$, the fourth term in Eq.~\eqref{eq:rch}, which reads \begin{equation} \langle R_{ch}^{so}\rangle^2=\frac{1}{Ze}\frac{\hbar}{m_{p}c}\sum_{nlj\tau}v_{nlj\tau}^{2}\mu_{\tau}^\prime\left(2j+1\right)\left\langle \vec{\sigma}\cdot\vec{l}\right\rangle _{lj}\,, \label{eq:rso} \end{equation} where the $v_{nlj\tau}^2$ are the orbital occupation probabilities. The modified magnetic dipole moments $\mu_{\tau}^\prime$ are defined as $\mu_{n}^\prime=\mu_{n}$ and $\mu_{p}^\prime=\mu_{p}-1/2$~\cite{Friar1975}, and the $\mu_{\tau}$ are the intrinsic nucleon magnetic dipole moments, $\mu_{n}=-1.91304\mu_{N}$ and $\mu_{p}=2.79285\mu_{N}$, with $\mu_{N}= e\hbar/(2m_{p}c)$. Note that we have truncated the accuracy with which $\mu_n$ and $\mu_p$ are known since it does not impact the present analysis. Finally, the spin matrix elements in Eq.~\eqref{eq:rso} are given in Appendix~\ref{ap:so}. The last term $\langle R_{ch}^{DF}\rangle^2$ in Eq.~\eqref{eq:rch} is the Darwin-Foldy term, which is a relativistic correction considered only in non-relativistic approaches. We take it to have the value $\langle R_{ch}^{DF}\rangle^2=3\hbar^2/(4m_N^2c^2)=0.03311$~fm$^2$~\cite{Friar1997,Jentschura2011}. We note that its value can be almost three times larger when the relativistic effective mass $M^{*}\approx 0.6 M $ is used in the non-relativistic reduction~\cite{Nishizaki1988}. However, in either case, the correction provided by the Darwin-Foldy term is small. Although a center-of-mass correction should also be considered in the comparison to experimental data, it is neglected in most calculations since the correction is usually small. We take the proton and neutron charge radii from the 2020-2021 compilation of the Particle Data Group~\cite{pdg} (PDG), which provides for the proton $\sqrt{\langle r_p^2 \rangle} = 0.8409 \pm 0.0004$~fm and for the neutron $\langle r_n^2 \rangle = -0.1161\pm 0.0022$~fm$^2$. Note that the value for the proton charge radius is still under debate, see Ref.~\cite{Karr2020} for a presentation of the actual situation.
The PDG proton charge radius originates from muonic hydrogen ($\mu p$) experiments; it differs, however, from the one extracted from electronic ($ep$) experiments, which suggest $\sqrt{\langle r_p^2 \rangle} = 0.8751 \pm 0.0061$~fm. Considering the uncertainties in these values, they are incompatible and represent the largest uncertainty in the intrinsic nucleon properties. Interestingly, a global analysis of the proton and neutron elastic form factors in the light cone frame formulation has extracted $\sqrt{\langle r_p^2 \rangle} = 0.852 \pm 0.002_\mathrm{(stat.)}\pm0.009_\mathrm{(syst.)}$~fm and $\langle r_n^2 \rangle = -0.122\pm 0.004_\mathrm{(stat.)}\pm0.010_\mathrm{(syst.)}$~fm$^2$~\cite{Atac2021} in good agreement with the PDG compilation~\cite{pdg}. We have estimated the impact of the uncertainty in the proton charge radius on the nuclear charge radius as follows: Considering a typical uncertainty on the proton charge radius $\delta \sqrt{\langle r_p^2 \rangle}\approx 0.04$~fm and neglecting the smaller uncertainty from the neutron charge radius, the effect on the calculation of the nuclear charge radii is of the order of $\delta R_{ch} \approx \delta \langle r_p^2 \rangle / (2 R_{ch})\lesssim 2\times 10^{-4}$~fm (for a typical $R_{ch}\approx 5$~fm). This uncertainty is therefore much smaller than the experimental uncertainty across different groups~\cite{Angeli2013,unedf,Fricke1995}, see Table~\ref{tab:rchsample} for a set of nuclei from $^{16}$O to $^{208}$Pb, as well as the model uncertainties for this observable. We considered SLy5~\cite{Chabanat1998}, BSK18~\cite{BSK18}, UNEDF0~\cite{unedf0}, DD-ME2~\cite{ddme2}, NL3*~\cite{nl3s}, NLRA1~\cite{NLRA1} in Table~\ref{tab:rchsample}. The values of the nuclear charge radii obtained by elastic electron scattering from stable and exotic nuclei have been more systematically investigated for non-relativistic Skyrme and relativistic mean field interactions in Ref.~\cite{RocaMaza2008}. Note however that there is actually no estimate of the EDF uncertainty on the nuclear charge radius, to our knowledge, and we suggest below an empirical relation for it. We conclude that the present uncertainty in the proton charge radius has no impact on the following discussion. In the past, the fits of nuclear EDFs have considered older estimates for the proton and neutron charge radii, which have varied more substantially. For instance in 1997, the Saclay-Lyon Skyrme interactions~\cite{Chabanat1997}, e.g. SLy5, employed $\langle r_p^2 \rangle = 0.634$~fm$^2$ (with $\sigma=0.65$~fm) and $\langle r_n^2 \rangle = -0.12674558$~fm$^2$, originating from the 1995 PDG compilation. For SLy5 and BSK18, we compute the charge radii obtained by taking the values for the proton and neutron charge radii from the 1995 PDG compilation. Note also that in 2003 the values $\langle r_p^2 \rangle = 0.74$~fm$^2$ and $\langle r_n^2 \rangle = -0.117$~fm$^2$ were considered in Ref.~\cite{Bender2003}. Although larger, these variations of nucleon charge radii impact the nuclear charge radius by about $0.01$~fm (for a typical $R_{ch}\approx 5$~fm), which is still smaller than the uncertainty we associate in the following to the model predictions. The fluctuations of the proton charge radius reported in the past will thus not impact the present analysis.
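To make the bookkeeping of Eq.~\eqref{eq:rch} explicit, we include a minimal numerical sketch; it is an illustration only, not one of the analysis codes used in this work. It evaluates the nuclear charge radius from a given point-proton rms radius with the 2020-2021 PDG nucleon charge radii quoted above; the spin-orbit term of Eq.~\eqref{eq:rso}, which requires the orbital occupations, is left as an input and set to zero, and the point-proton radius used in the illustration is an assumed value.
\begin{verbatim}
# Minimal sketch of Eq. (eq:rch): nuclear charge radius (fm) from the
# point-proton rms radius Rp (fm). Illustration only, not our analysis code.
import math

HBARC = 197.327   # hbar*c in MeV fm
M_N = 939.0       # average nucleon mass in MeV (assumed value)

def charge_radius(Rp, N, Z, rp2=0.8409**2, rn2=-0.1161, Rso2=0.0):
    """rp2 and rn2 (fm^2) are the 2020-2021 PDG nucleon charge radii;
    Rso2 is the spin-orbit term of Eq. (eq:rso), set to zero here since
    it requires the orbital occupation probabilities."""
    rDF2 = 0.75 * (HBARC / M_N)**2   # Darwin-Foldy term, about 0.033 fm^2
    return math.sqrt(Rp**2 + rp2 + (N / Z) * rn2 + Rso2 + rDF2)

# Illustration with an assumed point-proton radius for 208Pb:
print(round(charge_radius(5.45, N=126, Z=82), 4))   # about 5.50 fm
\end{verbatim}
With these inputs the proton finite size increases the radius of a heavy nucleus by about $0.06$~fm, while the neutron and Darwin-Foldy terms act at the $0.01$~fm level, i.e. at the scale of the model uncertainty $\delta_{R_{ch}}$ introduced below.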
Finally, the following empirical expression for the charge radius, \begin{equation} \langle R_{ch}^\mathrm{emp}\rangle^2 \approx \langle R_p^2\rangle + 0.64~\mathrm{fm}^2\, , \label{eq:rchemp} \end{equation} has sometimes been considered instead of Eq.~\eqref{eq:rch}, see for instance the discussion in Ref.~\cite{Bender2003}. The difference between Eqs.~\eqref{eq:rch} and \eqref{eq:rchemp} is of the order of 0.02~fm for the lightest nuclei, e.g. $^{16}$O, and decreases to about 0.0001~fm for $^{132}$Sn and $^{208}$Pb. This is the largest source of theoretical uncertainty in the estimate of the nuclear charge radius. In summary, by considering both experimental and theoretical uncertainties and by including the uncertainties in using the empirical formula~\eqref{eq:rchemp} instead of \eqref{eq:rch}, we come to the following estimate of the nuclear charge radius uncertainties which can be used in the confrontation of EDF modeling with nuclear data, \begin{equation} \delta_{R_{ch}}\approx 0.1 A^{-1/3}~\mathrm{fm}. \end{equation} We will see in the following that such a loose uncertainty in the nuclear charge radius is still able to filter out many nuclear EDFs. \subsection{Isoscalar giant monopole resonance (ISGMR) collective mode} \begin{table*}[tb] \centering \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{1.3} \caption{Experimental value for the ISGMR centroid energy $E_\mathrm{GMR}$ in $^{208}$Pb compared to predictions from various nuclear EDFs. For consistency with the theoretical calculations, we report in this table the ISGMR experimental centroid energy defined as $\sqrt{m_1/m_{-1}}$ and provided in Ref.~\cite{Garg2018}. The incompressibility modulus $K_\mathrm{sat}$, the skewness parameter $Q_\mathrm{sat}$ and the parameters $p_\mathrm{c}$, $K_\mathrm{c}$, and $M_\mathrm{c}$ are also given for the EDFs.} \begin{ruledtabular} \begin{tabular}{rrrlccccccccc} $Z$ & $N$ & nucleus & $E_\mathrm{GMR}^\mathrm{exp.}$ (MeV) & SLy5 & BSk18 & UNEDF0 & RATP & SGII & SIII & DD-ME2 & NL3* & NLRA1\\ & & & $\sqrt{m_1/m_{-1}}$ & Ref.~\cite{Chabanat1998} & Ref.~\cite{BSK18} & Ref.~\cite{unedf0} & Ref.~\cite{ratp} & Ref.~\cite{sgii} & Ref.~\cite{siii} & Ref.~\cite{ddme2} & Ref.~\cite{nl3s} & Ref.~\cite{NLRA1}\\ \hline 82 & 126 & $^{208}$Pb & 13.50(10) \cite{Garg2018} & 13.77(1) & 14.02(0) & 13.65(1) & 14.12(1)& 13.44(1) & 16.79(1) & 14.08(1) & 14.77(1) & 15.50(1)\\ \hline & & & $K_\mathrm{sat}$ (MeV) & 230 & 242 & 230 & 240 & 215 & 355 & 251 & 258 & 285 \\ & & & $Q_\mathrm{sat}$ (MeV) & -364 & -364 & -404 & -350 & -381 & 101 & 479 & 122 & 279 \\ & & & $p_\mathrm{c}$ (MeV.fm$^{-3}$) & -0.653 & -0.675 & -0.659 & -0.673 & -0.608 & -0.822 & -0.589 & -0.650 & -0.678 \\ & & & $K_\mathrm{c}$ (MeV) & 35.3 & 36.0 & 36.7 & 35.4 & 34.8 & 27.4 & 23.4 & 35.7 & 31.9 \\ & & & $M_\mathrm{c}$ (MeV) & 1141 & 1202 & 1147 & 1188 & 1066 & 1717 & 992 & 1160 & 1271 \\ \end{tabular} \end{ruledtabular} \label{tab:data:gmr} \end{table*} The isoscalar giant monopole resonance energy is also used in the estimation of the adequacy of a nuclear EDF for NS properties, since it is correlated with the incompressibility modulus~\cite{Blaizot1980,Blaizot1995}. The latter determines the variation of the energy density as the nucleon density departs from the saturation density in symmetric nuclear matter (SM). It thus provides important information about the density dependence of the EoS, fundamental for the determination of NS properties.
For recent reviews of the incompressibility in finite nuclei and nuclear matter, see for instance Refs.~\cite{Stone2014,Garg2018}. The energy of the ISGMR can be calculated using the sum rule approach, which provides a fast and consistent way to get the centroid of the ISGMR energy in deeply bound nuclei. It is defined as~\cite{Bohigas1979} \begin{equation} E_\mathrm{ISGMR} = \sqrt{\frac{m_1}{m_{-1}}} \, , \end{equation} where the $k$th energy-weighted sum rule is \begin{equation} m_k = \sum_l (E_l)^k \vert\langle l\vert\hat{Q}\vert 0\rangle\vert^2 \, , \end{equation} with $E_l$ the collective excitation energy and $\hat{Q}=\sum_{i=1}^A r_i^2$ the isoscalar monopole transition operator. The moment $m_1$ is evaluated in terms of a double commutator using the Thouless theorem~\cite{Thouless1961}, \begin{equation} m_1 = 2 A \frac{\hbar^2}{m_N} \langle r^2 \rangle \, , \end{equation} where $A$ is the nucleon number, $m_N$ the nucleon mass, and $\langle r^2 \rangle$ the rms radius. In the constrained Hartree-Fock (CHF) approach~\cite{Bohigas1979,Colo2004} the moment $m_{-1}$ is obtained from the derivative of the expectation value of the monopole operator, \begin{equation} m_{-1} = -\frac 1 2 \left[\frac{\partial}{\partial \lambda} \langle\lambda\vert\hat{Q}\vert\lambda\rangle\right]_{\lambda=0}\, , \end{equation} where $\vert\lambda\rangle$ is the ground state of the constrained Hamiltonian, \begin{equation} \hat{H}_\mathrm{constr.} = \hat{H} + \lambda \hat{Q} \, . \end{equation} In Table~\ref{tab:data:gmr}, the experimental value and theoretical predictions for the ISGMR centroid are given for $^{208}$Pb. It has been estimated that an uncertainty of about 0.2-0.4~MeV in the centroid can be translated into an uncertainty of about 15~MeV in the incompressibility modulus~\cite{Avogadro2013}. Precision of the experimental results and of the theoretical calculations for the centroid energy is thus essential. Considering that the present uncertainty in $K_\mathrm{sat}$ is of the order of 20~MeV~\cite{Garg2018}, we have fixed the uncertainty in the model prediction for the ISGMR centroid energy to be \begin{equation} \delta_{\mathrm{ISGMR}}=0.7~\hbox{MeV}\, . \label{eq:deltagmr} \end{equation} We also report in Table~\ref{tab:data:gmr} a set of parameters defined in uniform matter. The incompressibility modulus $K_\mathrm{sat}$ and the skewness parameter $Q_\mathrm{sat}$ are nuclear empirical parameters (NEP) encoding the density dependence of the energy per particle in SM as, \begin{equation} e_\mathrm{SM}(n) = E_\mathrm{sat} + \frac 1 2 K_\mathrm{sat} x^2 + \frac 1 6 Q_\mathrm{sat} x^3 + \dots \end{equation} with $x=(n-n_\mathrm{sat})/(3n_\mathrm{sat})$. We can check that the models predicting $K_\mathrm{sat}=230\pm 20$~MeV~\cite{Garg2018} also predict in $^{208}$Pb $E_\mathrm{ISGMR}=13.50\pm 0.7$~MeV, confirming \textsl{a posteriori} the relation~\eqref{eq:deltagmr}. Note also the large differences predicted by these EDFs for the parameter $Q_\mathrm{sat}$ for the models with good incompressibilities: between -400 and -350~MeV for the non-relativistic EDFs and an opposite sign for the relativistic ones. It has been suggested that these systematic differences are at the origin of the model dependence in the $E_\mathrm{ISGMR}$-$K_\mathrm{sat}$ correlation~\cite{Khan2012, Margueron2018a}.
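The practical consequence of this $Q_\mathrm{sat}$ spread can be illustrated with a short numerical sketch based on the truncated expansion above. The saturation values $E_\mathrm{sat}=-16$~MeV and $n_\mathrm{sat}=0.16$~fm$^{-3}$ used below are illustrative assumptions, while the $(K_\mathrm{sat},Q_\mathrm{sat})$ pairs are taken from Table~\ref{tab:data:gmr}:
\begin{verbatim}
# Sketch: truncated expansion e_SM(n) = E_sat + K_sat x^2/2 + Q_sat x^3/6,
# with x = (n - n_sat)/(3 n_sat). Energies in MeV, densities in fm^-3.
E_SAT, N_SAT = -16.0, 0.16   # illustrative saturation point (assumption)

def e_sm(n, K_sat, Q_sat):
    x = (n - N_SAT) / (3.0 * N_SAT)
    return E_SAT + 0.5 * K_sat * x**2 + Q_sat * x**3 / 6.0

# (K_sat, Q_sat) pairs taken from the table above: SLy5 versus DD-ME2
for label, K, Q in (("SLy5", 230.0, -364.0), ("DD-ME2", 251.0, 479.0)):
    print(label, [round(e_sm(f * N_SAT, K, Q), 1) for f in (1, 2, 3)])
# SLy5 [-16.0, -5.5, 17.1]
# DD-ME2 [-16.0, 0.9, 63.4]
\end{verbatim}
Although the two parametrizations are indistinguishable around saturation, the truncated expansions already differ by more than 6~MeV per particle at $2n_\mathrm{sat}$, anticipating the dispersion of the high-density extrapolations discussed in the following sections.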
The other parameters $p_c$, $K_c$ and $M_\mathrm{c}$ reported in Table~\ref{tab:data:gmr} are the pressure, the incompressibility and the parameter $M(n)$, \begin{eqnarray} p(n) &=& n^2 \frac{\partial e}{\partial n} \, , \\ K(n) &=& \frac{18}{n} p(n) + 9 n^2 \frac{\partial^2 e}{\partial n^2} \, , \\ M(n) &=& 3 n \frac{\partial K(n)}{\partial n} \, , \label{eq:mc} \end{eqnarray} defined at the crossing density $n_\mathrm{c}=0.71(1)n_\mathrm{sat}$~\cite{Khan2012}. Note that the values of the pressure and incompressibilities, $p_\mathrm{c}$ and $K_\mathrm{c}$, are quite constant for the models with good incompressibilities. The value of $M_\mathrm{c}$ is in agreement with the one suggested in Ref.~\cite{Khan2012}, namely $M_\mathrm{c}=1100\pm70$~MeV. \subsection{The symmetry energy} \label{sec:esym} The symmetry energy is a crucial quantity guiding the exploration of asymmetric nuclear matter, such as beta-equilibrium matter in NSs. We shall, however, distinguish between the global symmetry energy $e_\mathrm{sym}$ defined as \begin{equation} e_\mathrm{sym}(n) = e_\mathrm{NM}(n) - e_\mathrm{SM}(n) \, , \end{equation} and its quadratic contribution $e_\mathrm{sym,2}$, \begin{equation} e_\mathrm{sym,2}(n) = \frac 1 2 \frac{\partial^2 e(n,\delta)}{\partial \delta^2}\bigg\vert_{\delta=0} \, , \end{equation} where $e(n,\delta)$ is the energy per particle in asymmetric matter, $\delta=(n_n-n_p)/n$ the isospin asymmetry, and $e_\mathrm{NM}$ the energy per particle in neutron matter (NM). The quadratic contribution to the symmetry energy $e_\mathrm{sym,2}$ is the quantity which is probed by nuclear physics experiments, since the isospin parameter remains small ($\delta\lesssim 0.25$), while the properties of neutron matter, with large asymmetries, are better described by $e_\mathrm{sym}$. The difference between the symmetry energy and its quadratic contribution, $e_\mathrm{sym}-e_{\mathrm{sym},2}$, represents the non-quadraticities, which are often found to be small (2-3\% of the symmetry energy), see Ref.~\cite{Somasundaram2021} and references therein for a recent study. In the literature, these two quantities are usually not distinguished, although, formally, they are different. The two representations of the symmetry energy $e_\mathrm{sym}$ and $e_{\mathrm{sym},2}$ can be expanded in terms of the density parameter $x=(n-n_\mathrm{sat})/(3n_\mathrm{sat})$ as, \begin{align} e_\mathrm{sym}(n) &= E_{\mathrm{sym}} + L_{\mathrm{sym}} x + \frac{1}{2}K_{\mathrm{sym}} x^2 + \frac{1}{6}Q_{\mathrm{sym}} x^3 + \dots, \label{eq:esym}\\ e_\mathrm{sym,2}(n) &= E_{\mathrm{sym},2} + L_{\mathrm{sym},2}\,x + \frac{1}{2}K_{\mathrm{sym},2}\, x^2 \nonumber\\ &+ \frac{1}{6}Q_{\mathrm{sym},2}\, x^3 + \dots, \label{eq:esym2} \end{align} where $E_{\mathrm{sym}}$, $L_{\mathrm{sym}}$, $K_{\mathrm{sym}}$, and $Q_{\mathrm{sym}}$ are nuclear empirical parameters (NEP) and $E_{\mathrm{sym},2}$, $L_{\mathrm{sym},2}$, $K_{\mathrm{sym},2}$, and $Q_{\mathrm{sym},2}$ are quadratic nuclear empirical parameters (QNEP). \begin{figure}[tb] \centering \includegraphics[width=0.50\textwidth]{plot_EsymLsym_exp.pdf} \caption{Correlation between the symmetry energy $E_{\mathrm{sym},2}$ and its slope $L_{\mathrm{sym},2}$ at saturation density. See text for more details on the various constraints.} \label{fig:ELsymExp} \end{figure} There are several experimental constraints for the symmetry energy in finite nuclei, see Refs.~\cite{Lattimer2013,Wei2020} for a detailed presentation of these.
Adopting the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ representation, we show a few of them in Fig.~\ref{fig:ELsymExp}, including the recent ones from the analyses of the PREX-II and CREX parity-violating electron scattering (PVES) experiments. Before discussing these recent results, let us first present the others: ``HIC'': constraints inferred from isospin diffusion in heavy ion collisions (HICs)~\cite{Tsang2009}; ``Polarizability'': constraints on the electric dipole polarizability of $^{208}$Pb, $^{120}$Sn and $^{68}$Ni~\cite{RocaMaza2015}; ``$\Delta r_{np}$(Sn)'': constraints deduced from the analysis of neutron skin thickness in Sn isotopes~\cite{Chen2010}; ``FRDM'': constraint from the finite-range droplet mass model calculations~\cite{FRDM}; ``IAS'': constraint deduced from the analysis of the excitation energy of the isobaric analog state (IAS) based on Skyrme-Hartree-Fock calculations~\cite{Danielewicz2013}; ``IAS+$\Delta r_{np}$'': combination of the IAS constraint and neutron skin in $^{208}$Pb~\cite{Danielewicz2013}. Two additional constraints are also represented, although they formally refer to the global symmetry energy NEP ($E_{\mathrm{sym}}$ and $L_{\mathrm{sym}}$): ``Neutron Stars'': horizontal constraint obtained from a Bayesian analysis of mass and radius observations of NSs by considering the 95\% confidence values for $L_{\mathrm{sym}}$~\cite{Steiner2013}; ``Unitary Gas'': the analysis of the unitary gas predictions for the symmetry energy parameters~\cite{Tews2017} permits the values to the right of the curve. Also shown in Fig.~\ref{fig:ELsymExp} are analyses of the PREX-II~\cite{Adhikari2021} and CREX~\cite{Adhikari2022} PVES experiments: There are indeed big differences between the analysis by Reed et al.~\cite{Reed2021} ($E_{\mathrm{sym},2}=38.1\pm 4.7$~MeV, $L_{\mathrm{sym},2}=106\pm 37$~MeV) and the one by Reinhard et al.~\cite{Reinhard2021} ($E_{\mathrm{sym},2}=32\pm 1$~MeV, $L_{\mathrm{sym},2}=54\pm 8$~MeV), which also includes the electric dipole polarizability. Another analysis by Zhang and Chen~\cite{Zhang2022} combining PREX-II and CREX using a Bayesian inference finds a very low centroid for $L_{\mathrm{sym},2}$ ($E_{\mathrm{sym},2}=30.2^{+3.0}_{-4.1}$~MeV, $L_{\mathrm{sym},2}=15.3^{+41.5}_{-46.8}$~MeV). It has indeed been pointed out that the results of PREX-II and CREX are in disagreement~\cite{Reinhard2022,Yuksel2022}. There are large differences among the various PVES analyses. One of the tightest constraints in the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ diagram shown in Fig.~\ref{fig:ELsymExp} is the one referred to as ``IAS+$\Delta r_{np}$''~\cite{Danielewicz2013}. We will investigate the role of this constraint in the following analysis. \section{Combined analysis of the modeling reproducing low energy nuclear data} Since we have different types of low-energy nuclear physics data, we face the difficulty of assembling them in a meaningful way. We suggest two ways of performing the assessment, each of them providing interesting results about the interactions.
\subsection{The groups $G_i$ and $D_i$}
The first method is a global assessment, in which all nuclei contribute equally to the variance of each type of observable. The variances $\sigma_i^2$ for the binding energies ($i=B$), the charge radii ($i=R_{ch}$) and the ISGMR energy ($i=\mathrm{ISGMR}$) are defined as,
\begin{eqnarray} \sigma_{B}^2 &=& \frac{1}{N_{B}} \sum_{i} \left[\frac{B_i(\mathrm{exp})-B_i(\mathrm{model})}{\delta_{B}} \right]^2 \, , \\ \sigma_{R_{ch}}^2 &=& \frac{1}{N_{R_{ch}}} \sum_{i} \left[\frac{R_{ch,i}(\mathrm{exp})-R_{ch,i}(\mathrm{model})}{\delta_{R_{ch}}(A_i)} \right]^2 \, , \\ \sigma_{\mathrm{ISGMR}}^2 &=& \frac{1}{N_{\mathrm{ISGMR}}} \times \nonumber\\ && \sum_{i} \left[\frac{E_{\mathrm{ISGMR},i}(\mathrm{exp})-E_{\mathrm{ISGMR},i}(\mathrm{model})}{\delta_{\mathrm{ISGMR}}}\right]^2,\qquad \end{eqnarray}
with $N_{B}=13$ (see Table~\ref{tab:data:magic}), $N_{R_{ch}}=6$ (see Table~\ref{tab:rchsample}), and $N_{\mathrm{ISGMR}}=1$ (see Table~\ref{tab:data:gmr}). The uncertainties $\delta_B$, $\delta_{R_{ch}}(A_i)$ and $\delta_{\mathrm{ISGMR}}$ have been introduced in Section~\ref{sec:data}. The groups built on this global assessment will be called G$_i$.
In the second method, the variances of the binding energy and of the charge radius of the symmetric $N=Z$ and asymmetric $N\ne Z$ nuclei are accumulated separately. We evaluate the following rms deviations for the symmetric nuclei,
\begin{eqnarray} \sigma_{B,S}^2 &=& \frac{1}{N_{B,S}} \sum_{i\in S} \left[\frac{B_i(\mathrm{exp})-B_i(\mathrm{model})}{\delta_{B}} \right]^2 \, , \\ \sigma_{R_{ch},S}^2 &=& \frac{1}{N_{R_{ch},S}} \sum_{i\in S} \left[\frac{R_{ch,i}(\mathrm{exp})-R_{ch,i}(\mathrm{model})}{\delta_{R_{ch}}(A_i)} \right]^2 \, , \end{eqnarray}
for the asymmetric nuclei,
\begin{eqnarray} \sigma_{B,A}^2 &=& \frac{1}{N_{B,A}} \sum_{i\in A} \left[\frac{B_i(\mathrm{exp})-B_i(\mathrm{model})}{\delta_{B}} \right]^2 \, , \\ \sigma_{R_{ch},A}^2 &=& \frac{1}{N_{R_{ch},A}} \sum_{i\in A} \left[\frac{R_{ch,i}(\mathrm{exp})-R_{ch,i}(\mathrm{model})}{\delta_{R_{ch}}(A_i)}\right]^2 \, , \end{eqnarray}
and finally for the ISGMR energy, which remains the same as in the previous case. We include calculations for the following nuclei in the groups described above,
\begin{itemize} \item $(B,S)$: $^{16}$O, $^{40}$Ca, $^{56}$Ni, $^{100}$Sn. \item $(B,A)$: $^{34}$Si, $^{48}$Ca, $^{52}$Ca, $^{54}$Ca, $^{48}$Ni, $^{78}$Ni, $^{90}$Zr, $^{132}$Sn, $^{208}$Pb. \item $(R_{ch},S)$: $^{16}$O, $^{40}$Ca. \item $(R_{ch},A)$: $^{48}$Ca, $^{90}$Zr, $^{132}$Sn, $^{208}$Pb. \item $(\mathrm{ISGMR})$: $^{208}$Pb. \end{itemize}
We thus have $N_{B,S}=4$ and $N_{B,A}=9$, $N_{R_{ch},S}=2$ and $N_{R_{ch},A}=4$, and $N_{\mathrm{ISGMR}}=1$. We note that the rms deviations of the global approach are simply the renormalized sums of the deviations of this second approach. In the following, the groups built upon this more detailed approach are called D$_i$.
\begin{figure}[tb] \centering \includegraphics[scale=0.47,trim=10 10 0 0, clip=true]{plot-corner.png} \caption{Representation of the rms deviations for the observables $B$, $R_{ch}$ and $E_{\mathrm{ISGMR}}$.} \label{fig:corner} \end{figure}
We show in Fig.~\ref{fig:corner} the distribution of the rms deviations $\sigma_i$ associated with the observables $i=B$, $i=R_{ch}$ and $i=\mathrm{ISGMR}$, for all the interactions considered (415 in total).
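For concreteness, the rms deviations defined above amount to the following few lines of Python; the arrays in the sketch are made-up placeholders, not the experimental and calculated values of the tables of Section~\ref{sec:data}.
\begin{verbatim}
import numpy as np

def rms_deviation(exp, model, delta):
    # sigma such that sigma^2 = (1/N) sum_i [(exp_i - model_i)/delta_i]^2
    z = (np.asarray(exp) - np.asarray(model)) / np.asarray(delta)
    return np.sqrt(np.mean(z**2))

# placeholder binding energies (MeV), standing in for the N_B nuclei
B_exp   = [127.6, 342.0, 483.9]
B_model = [126.9, 343.1, 482.1]
print(rms_deviation(B_exp, B_model, delta=[1.0, 1.0, 1.0]))
\end{verbatim}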
Note that for these three observables, the main peak is systematically located below $\sigma_i=2$, which supports our choices for the associated uncertainties presented in Section~\ref{sec:data}.
In the following, we sort the models according to their rms deviations and attribute to each of them a set of letters, namely the three letters $L_{B}L_{R_{ch}}L_{E_\mathrm{ISGMR}}$ for the groups G$_i$ and the five letters $L_{B,S}L_{B,A}:L_{R_{ch},S}L_{R_{ch},A}:L_{E_\mathrm{ISGMR}}$ for the groups D$_i$, where the letters $L$ are:
\begin{itemize} \item $L=A$, if $\sigma<1$; \item $L=B$, if $1<\sigma<2$; \item $L=C$, if $2<\sigma<3$; \item $L=D$, if $\sigma>3$. \end{itemize}
The complete list of the scores for each parametrization analysed in this work is given in the supplemental material. As examples, we obtain the following scores for the two approaches (global versus detailed) in the cases of the relativistic NLSV1 interaction, and of the non-relativistic RATP and SLy4 Skyrme forces:
\begin{itemize} \item NLSV1: ABC, BA:AB:C, \item RATP: BBA, BC:BB:A, \item SLy4: BBA, BB:BB:A. \end{itemize}
The relativistic NLSV1 interaction reproduces the binding energies better than the charge radii, which are in turn better reproduced than the ISGMR energy. In detail, the binding energies (charge radii) of the $N\ne Z$ nuclei are better (worse) reproduced than those of the $N=Z$ ones. For the non-relativistic models, we observe that they are scored identically (BBA) in the global analysis, but the more detailed analysis shows that SLy4 is better than RATP at reproducing the binding energies of $N\ne Z$ nuclei. This illustrates the differences between the global and detailed approaches, which will be further analysed in the following.
\begin{table}[tb] \tabcolsep=0.05cm \def\arraystretch{1.5} \centering \caption{Number of EDFs passing the filters imposed by the groups G$_i$, D$_i$ and D$_{4\mathrm{sym}}$. The number of EDFs in each group for which M$_{\mathrm{TOV}}\geq 1.6$M$_\odot$ and M$_{\mathrm{TOV}}\geq 2.0$M$_\odot$ are also counted. See the text for more details.}
\begin{tabular}{lcccccccccc} \hline\noalign{\smallskip} \rm & D$_0$/G$_0$ & D$_1$ & G$_1$ & D$_2$ & G$_2$ & D$_3$ & G$_3$ & D$_4$ & G$_4$ & D$_{4\mathrm{sym}}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} Total & 374 & 81 & 90 & 66 & 74 & 61 & 74 & 45 & 54 & 22 \\ M$_{\mathrm{TOV}}\geq 1.6$M$_\odot$ & 312 & 77 & 85 & 65 & 72 & 61 & 72 & 45 & 52 & 22 \\ M$_{\mathrm{TOV}}\geq 2.0$M$_\odot$ & 198 & 49 & 53 & 44 & 49 & 41 & 49 & 25 & 29 & 12 \\ \noalign{\smallskip}\hline \label{tabgroups} \end{tabular} \end{table}
Based on the criteria described above, we separate the interactions submitted to the finite nucleus constraints into the following groups (a code sketch of the scoring and filtering is given at the end of this subsection),
\begin{itemize} \item D$_0$ and G$_0$: groups containing all the interactions considered, \item D$_1$ and G$_1$: groups containing interactions with a letter rank from $A$ to $C$ over all types of data, \item D$_2$ and G$_2$: groups containing interactions with a letter rank of $A$ or $B$ for the binding energies, \item D$_3$ and G$_3$: groups containing interactions with a letter rank of $A$ or $B$ for the binding energies and charge radii, \item D$_4$ and G$_4$: groups containing interactions with a letter rank of $A$ or $B$ for the binding energies, charge radii and ISGMR energies, \item D$_{4\mathrm{sym}}$: this group imposes on top of D$_4$ the constraint ``IAS+$\Delta r_{np}$'', as detailed in the following.
\end{itemize}
\begin{figure}[tb] \centering \includegraphics[width=0.50\textwidth]{plot_EsymLsym_G_orders.pdf} \includegraphics[width=0.50\textwidth]{plot_EsymLsym_D_orders.pdf} \caption{Correlation between the symmetry energy and its slope for the groups G$_i$ (top panel) and D$_i$ (bottom panel). The unitary gas boundary is shown for reference.} \label{fig:ELsym_orders} \end{figure}
The number of interactions surviving the conditions imposed on the different groups D$_i$ and G$_i$ is shown in Table~\ref{tabgroups} in the line denoted ``Total''. We also count the number of interactions that permit a neutron star of mass M$_{\mathrm{TOV}}\geq 1.6$M$_\odot$ and M$_{\mathrm{TOV}}\geq 2.0$M$_\odot$ in the case of TOV hydrostatic equilibrium, as detailed in Section~\ref{sec:MR}. We remark that the D$_0$ (or G$_0$) group is composed of $374$ parametrizations rather than $415$, the total number of interactions: a number of problematic interactions have been discarded because of one of the following conditions, (i) spinodal instability (negative values of the squared sound speed) above $n_\mathrm{sat}$, or (ii) negative values of the pressure in stellar matter.
\begin{figure}[tb] \centering \includegraphics[scale=0.35]{esym.pdf} \caption{Symmetry energy as a function of the density $n$ for the interactions of the D$_4$ group confronted with the IAS contour and the IAS+$\Delta r_{np}$ one. Dashed curves: interactions satisfying the IAS + $\Delta r_{np}$ constraint. Full curves: interactions not compatible with this constraint. See text for more details.} \label{fig:esym} \end{figure}
\subsection{Impact of the groups $G_i$ and $D_i$ on the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ correlation} \label{sec:ELcor}
We compare in Fig.~\ref{fig:ELsym_orders} the impact of the different groups G$_i$ and D$_i$ on the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ correlation. It is clear that the groups D$_i$ are better correlated than the groups G$_i$, reflecting the constraint that $N=Z$ and $N\ne Z$ nuclei are reproduced with the same accuracy. Already the group D$_2$ is better correlated than the group G$_2$, showing that the accuracy with which the models reproduce the data (the difference between G$_1$ and G$_2$, or between D$_1$ and D$_2$) is less effective than the symmetric/asymmetric condition imposed on the D$_2$ models (the difference between D$_2$ and G$_2$). The additional condition also acts through the charge radii, i.e. D$_3$ removes the lower values of $L_{\mathrm{sym},2}$, while the additional constraint on the ISGMR in $^{208}$Pb plays a small but non-negligible role (difference between D$_3$ and D$_4$). By comparing the contours G$_4$ and D$_4$, we see clearly the impact of imposing that $N=Z$ and $N\ne Z$ nuclei are reproduced with the same accuracy. In G$_4$ for instance, a poor reproduction of $N\ne Z$ nuclei could be compensated by a better description of $N=Z$ ones, as occurs e.g. for the SKa interaction. This is not true in the group D$_4$, which creates a tighter correlation in the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ diagram, see the bottom panel of Fig.~\ref{fig:ELsym_orders}.
In conclusion, Fig.~\ref{fig:ELsym_orders} shows the effectiveness of the condition that $N=Z$ and $N\ne Z$ nuclei are reproduced with the same accuracy on the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ correlation. The constraints on the charge radii and on the ISGMR centroid energy also play an additional role for the D$_i$ groups, but have almost no impact on the G$_i$ groups.
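The letter scoring and group filters referred to above can be summarized by a short sketch; the $\sigma$ values in the example are invented for illustration, chosen so that they reproduce the five-letter score BB:BB:A quoted above for SLy4.
\begin{verbatim}
def letter(sigma):
    # A: sigma<1, B: 1<sigma<2, C: 2<sigma<3, D: sigma>3
    return ('A' if sigma < 1 else 'B' if sigma < 2
            else 'C' if sigma < 3 else 'D')

def detailed_score(s):
    # five-letter code L_{B,S} L_{B,A} : L_{Rch,S} L_{Rch,A} : L_{ISGMR}
    return (letter(s['B,S']) + letter(s['B,A']) + ':' +
            letter(s['R,S']) + letter(s['R,A']) + ':' +
            letter(s['GMR']))

def in_D4(s):
    # group D_4: letter A or B (i.e. sigma < 2) for all types of data
    return all(sigma < 2 for sigma in s.values())

sly4_like = {'B,S': 1.4, 'B,A': 1.6, 'R,S': 1.2, 'R,A': 1.5, 'GMR': 0.7}
print(detailed_score(sly4_like), in_D4(sly4_like))  # BB:BB:A True
\end{verbatim}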
\subsection{The group D$_{4\mathrm{sym}}$ with an additional symmetry energy constraint}\label{d4sym-esym}
\begin{figure}[tb] \centering \includegraphics[width=0.50\textwidth]{plot_EsymLsym_D.pdf} \caption{Correlation between the symmetry energy and its slope for the groups D$_i$. The contour of the group D$_0$=G$_0$ is shown in light grey, while the dark grey contour corresponds to the D$_4$ group. The blue (green) contour includes the D$_4$+IAS (D$_{4\mathrm{sym}}$) group, defined by the constraints shown in Fig.~\ref{fig:esym}. A few other experimental constraints from Fig.~\ref{fig:ELsymExp} are shown for reference.} \label{fig:ELsym} \end{figure}
We now detail how the group D$_{4\mathrm{sym}}$, see Table~\ref{tabgroups}, is obtained by adding symmetry energy constraints on top of the D$_4$ group, which has been shown in Section~\ref{sec:ELcor} to naturally constrain $E_{\mathrm{sym},2}$ and $L_{\mathrm{sym},2}$. Among the set of experimental constraints for the symmetry energy presented in Section~\ref{sec:esym}, we decided to investigate the impact of IAS+$\Delta r_{np}$~\cite{Danielewicz2013}, since it is obtained from low-energy nuclear data. Another reason for investigating this constraint is that it is provided as contours for the density dependence of the symmetry energy, see Fig.~\ref{fig:esym}. We can then filter the interactions of the D$_4$ group according to their ability to fit inside these contours, as illustrated in Fig.~\ref{fig:esym}. Two contours are shown in Fig.~\ref{fig:esym}: one representing the IAS constraint alone and one representing the combined IAS+$\Delta r_{np}$ constraint. The different colors reflect the different kinds of interactions analysed, namely, black for Skyrme, blue for RMF-NL, and red for RMF-DD. For each model, we compute a loss function defined as $\chi^2=\frac{1}{N}\sum_{i=1}^{N} \left[e_{\mathrm{sym},2}^{\mathrm{av}}(i)-e_{\mathrm{sym},2}^{\mathrm{model}}(i)\right]^2/\left[\Delta e_{\mathrm{sym},2}(i)\right]^2$, where the index $i$ runs over the data points. Models with $\chi^2<1$ are accepted. The interactions compatible with the IAS + $\Delta r_{np}$ constraint are represented by dashed curves and define the D$_{4\mathrm{sym}}$ group.
We now show in Fig.~\ref{fig:ELsym} the region of the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ diagram which is populated by the D$_{4\mathrm{sym}}$ group. For a better understanding, we also show the role of the groups D$_0$, D$_4$, D$_4$+IAS and finally D$_{4\mathrm{sym}}$. The contour D$_4$ isolates a subgroup allowed by HIC and is partially excluded by the unitary gas constraint. We show how the constraints from IAS (D$_4$+IAS) and IAS + $\Delta r_{np}$ (D$_{4\mathrm{sym}}$)~\cite{Danielewicz2013} reduce the region of viable models in the $E_{\mathrm{sym},2}$-$L_{\mathrm{sym},2}$ diagram. Including the IAS constraint (D$_4$+IAS) reduces the contour to a smaller region, making it compatible with the ``Neutron Stars'' constraint, and finally the contour D$_{4\mathrm{sym}}$ reduces it even further, inside the $\Delta r_{np}$(Sn) experimental constraint. All the new contours (D$_4$, D$_4$+IAS and D$_{4\mathrm{sym}}$) are given in the supplemental material.
\begin{figure}[tb] \centering \includegraphics[width=0.50\textwidth]{plot_EsymLsym_D_full_nomodel.pdf} \caption{Same as Fig.~\ref{fig:ELsymExp} including the new contours from the present analysis: D$_4$, D$_4$+IAS, D$_{4\mathrm{sym}}$.
The light-grey band represents the contour of all explored interactions (group D$_0$=G$_0$).} \label{fig:ELsymNew} \end{figure}
One could also remark that, despite the good overlap between the IAS experimental constraint and our D$_4$+IAS contour, some of our interactions are outside the IAS experimental constraint. This is because we have considered a larger number of interactions, including relativistic approaches, which were not considered in Ref.~\cite{Danielewicz2013}, where the contours are based solely on results from Skyrme models. The same remark applies to the comparison of the group D$_{4\mathrm{sym}}$ with the IAS+$\Delta r_{np}$ experimental constraint.
\begin{table}[tb] \tabcolsep=0.5cm \def\arraystretch{1.5} \centering \caption{$E_{\mathrm{sym},2}$ and $L_{\mathrm{sym},2}$ centroids and standard deviations evaluated for the interactions in the groups D$_4$, D$_4$+IAS and D$_{4\mathrm{sym}}$.}
\begin{tabular}{lcc} \hline\noalign{\smallskip} group & $E_{\mathrm{sym},2}$ & $L_{\mathrm{sym},2}$ \\ & (MeV) & (MeV) \\ \noalign{\smallskip}\hline\noalign{\smallskip} D$_4$ & 33.5$\pm$2.4 & 73.4$\pm$23.3 \\ D$_4$+IAS & 32.3$\pm$1.2 & 62.9$\pm$12.3 \\ D$_{4\mathrm{sym}}$ & 31.8$\pm$0.7 & 58.1$\pm$9.0 \\ \noalign{\smallskip}\hline \label{tab:centroids} \end{tabular} \end{table}
Finally, the centroids and the standard deviations evaluated among the interactions forming the groups D$_4$, D$_4$+IAS and D$_{4\mathrm{sym}}$ are given in Table~\ref{tab:centroids}. Note the small dispersion obtained for the D$_{4\mathrm{sym}}$ group.
\section{Masses and radii of neutron stars} \label{sec:MR}
We now reach the second stage of our analysis, where the EDFs selected for their ability to reproduce finite nuclei properties are confronted with their predictions for NS properties. The EoSs for dense matter and NSs are detailed in Appendix~\ref{sec:densematter}. In this section, we discuss the corresponding predictions.
\begin{figure}[tb] \centering \includegraphics[width=0.55\textwidth,trim=120 60 0 350, clip=true]{mr2.png} \includegraphics[width=0.55\textwidth,trim=120 100 0 350, clip=true]{mr2-band.png} \caption{Mass-radius diagram for various groups: the D$_0$ group is shown as a grey band, together with the D$_0$ group restricted by M$_\mathrm{TOV}$, namely $1.6$M$_\odot$ (left panels) and $2.0$M$_\odot$ (right panels). Curves in the top panels: interactions of the D$_4$ group, following the same notation as in Fig.~\ref{fig:esym}. The bottom panels show the contours (dark brown) constructed from the interactions of the D$_4$ group for which M$_\mathrm{TOV}\geq1.6$M$_\odot$ (left) and M$_\mathrm{TOV}\geq2.0$M$_\odot$ (right). Also in the bottom panels, we display the contours (green) defined by the interactions of the D$_{4\mathrm{sym}}$ subgroup and the curves that define it (dotted lines in the top panels). Finally, violet and magenta contours represent the mass-radius constraints of the NICER mission for PSR J0030+0451~\cite{nicer1a,nicer1b} and PSR J0740+6620~\cite{nicer2a,nicer2b} at the $90\%$ confidence level. Dashed (solid) lines indicate the data from Miller et al.~\cite{nicer1a,nicer2a} (Riley et al.~\cite{nicer1b,nicer2b}).
The constraint determined by the LIGO and Virgo Collaboration from the GW170817 event~\cite{ligo} is represented by brown contours.} \label{fig:mr2} \end{figure}
The properties of non-rotating NSs are obtained from the solution of the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{tov39,tov39a,glen} written as ($G=c=1$)
\begin{align} \frac{dp_\mathrm{tot}(r)}{dr}&=-\frac{[\rho_\mathrm{tot}(r) + p_\mathrm{tot}(r)][m(r) + 4\pi r^3p_\mathrm{tot}(r)]}{r^2[1-2m(r)/r]}, \label{eq:tov1} \\ \frac{dm(r)}{dr}&=4\pi r^2\rho_\mathrm{tot}(r), \label{eq:tov2} \end{align}
whose solution is determined by the initial conditions $p_\mathrm{tot}(0)=p_c$ (central pressure) and $m(0) = 0$. In Eqs.~\eqref{eq:tov1} and \eqref{eq:tov2}, the energy density $\rho_\mathrm{tot}$ and the pressure $p_\mathrm{tot}$ are given by Eqs.~\eqref{eq:totaled}-\eqref{eq:totalp}. The maximum value of $M$ for a given EoS is called M$_\mathrm{TOV}$. The radius corresponding to a given mass, e.g., $1.4$M$_\odot$, is called $R_{1.4}$.
The break-down density above which the nucleonic EoS is replaced by an EoS with new degrees of freedom, e.g. hyperons or quarks, is not known. It is however easier to discuss the break-down density in terms of NS masses, for which observational data exist. For instance, a NS with mass $1.2/1.6/2.0$M$_\odot$ corresponds to central densities of $\approx$1.7-3/2-4.5/2.3-6$n_\mathrm{sat}$, where the larger central densities are obtained for the softer EoSs. From these numbers, it is reasonable to extrapolate the nucleonic EoS up to about $1.6$M$_\odot$, while NSs with $2.0$M$_\odot$ are considered as an extreme nucleonic scenario. In the following, we explore the two cases M$_\mathrm{TOV}\geq1.6$M$_\odot$ and M$_\mathrm{TOV}\geq2.0$M$_\odot$.
At very low mass (below 1M$_\odot$) the core EoS is connected to a crust EoS. The necessity of having a unified approach for both the crust and the core~\cite{DH2001,fortin} has been pointed out. Nevertheless, a piece-wise approach, in which the EoS in the core is connected to another EoS in the crust, is also widely used when a precision of about 100~m in the NS radius is sufficient, or when detailed information on the crust-core transition is not required. In this work we adopt the procedure used in Ref.~\cite{Margueron2018b}, in which the SLY interaction of Ref.~\cite{DH2001}, based on the SLy4 Skyrme parametrization~\cite{Chabanat1998}, is used for the crust region up to $n=0.1n_\mathrm{sat}$. As in Ref.~\cite{Margueron2018b}, the uniform matter EoS starts at $n_\mathrm{sat}$ and a cubic spline in log scale smoothly connects the upper limit of the crust to the lower limit of the core. Such a prescription allows a good description of the crust and the core, provided they can be smoothly connected. Exceptions exist however: for instance, if the core EoS is described by an interaction with a value of $L_{\mathrm{sym},2}$ much larger than the one used to describe the crust, difficulties appear in connecting the pressure in the crust and core regions. This, however, is not the case for the interactions selected in the D$_4$ group.
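A schematic integration of Eqs.~\eqref{eq:tov1} and \eqref{eq:tov2} is sketched below in geometrized units ($G=c=1$); the polytropic $\rho(p)$ and all numerical values are placeholders for the tabulated, unified EoSs used in this work.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rho_of_p(p, K=100.0, gamma=2.0):
    # toy polytrope standing in for the tabulated EoS
    return (max(p, 0.0) / K) ** (1.0 / gamma)

def tov_rhs(r, y):
    p, m = y
    rho = rho_of_p(p)
    dpdr = -(rho + p)*(m + 4*np.pi*r**3*p) / (r**2*(1 - 2*m/r))
    dmdr = 4*np.pi*r**2*rho
    return [dpdr, dmdr]

def mass_radius(p_c, r0=1e-6, r_max=100.0):
    surface = lambda r, y: y[0] - 1e-12*p_c   # stop when p -> 0
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (r0, r_max), [p_c, 0.0],
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.t[-1], sol.y[1, -1]            # (R, M) for this p_c

print(mass_radius(p_c=1e-3))
\end{verbatim}
Scanning over the central pressure $p_c$ yields the mass-radius curve of a given EoS; M$_\mathrm{TOV}$ is then the maximum of $M(p_c)$.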
Mass-radius profiles are shown in Fig.~\ref{fig:mr2} for various sets of EoSs. The condition M$_\mathrm{TOV}\geq2.0$M$_\odot$ removes a small region of the MR diagram corresponding to the softer EoSs for which M$_\mathrm{TOV}\geq1.6$M$_\odot$. We represent the individual contributions of the interactions belonging to the D$_4$ group with the same convention as in Fig.~\ref{fig:esym}: dashed curves represent the interactions satisfying the IAS+$\Delta r_{np}$ constraint, namely, black for Skyrme and blue for RMF-NL interactions with constant coupling constants. In the bottom panels of Fig.~\ref{fig:mr2}, the envelopes of the groups D$_4$ (dark brown) and D$_{4\mathrm{sym}}$ (green) are compared. The stiffest EoSs from the D$_4$ group are excluded from the D$_{4\mathrm{sym}}$ group, since they require a symmetry energy outside the boundaries shown in Fig.~\ref{fig:esym}. We obtain $R^{\rm{mean}}_{1.4}=13.00\pm0.78$ (12.53$\pm0.69$)~km for the D$_4$ (D$_{4\mathrm{sym}}$) group for M$_\mathrm{TOV}\geq 1.6$M$_\odot$ and $R^{\rm{mean}}_{1.4}=13.10\pm1.00$ (12.38$\pm0.87$)~km for M$_\mathrm{TOV}\geq 2.0$M$_\odot$.
The contours related to the observational constraints from NICER~\cite{nicer1a,nicer1b,nicer2a,nicer2b} and from the GW170817 event detected by the LIGO and Virgo Collaboration~\cite{ligo} are indicated in the bottom panels. A good overlap between the NICER contours and the groups D$_4$ and D$_{4\mathrm{sym}}$ is obtained, illustrating the agreement between the present constraints from nuclear physics and the ones from observations of neutron stars. A similar conclusion was obtained in Ref.~\cite{Tews2018} for the gravitational wave constraint extracted from GW170817.
\begin{figure}[tb] \centering \includegraphics[scale=0.37]{stat-R.pdf} \caption{Radius dispersion represented by the largest radius uncertainty $\Delta R$ and the standard deviation $\sigma_R$ as a function of the mass M, for the groups D$_i$ (solid lines) and G$_i$ (dashed lines) with $i=1$ (black), 2 (red), 3 (blue), and 4 (orange). The results correspond to those interactions satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (closed symbols) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (open symbols). See text for more details.} \label{fig:statr} \end{figure}
We now perform a more detailed study of the different groups D$_i$ and G$_i$ ($i=1$ to 4) to understand the impact of the different low energy nuclear constraints we have considered. In Fig.~\ref{fig:statr}, we compare the largest radius uncertainty $\Delta R$, namely, the difference between the maximum and the minimum radius, with the standard deviation $\sigma_R$ defined as
\begin{eqnarray} \sigma^2_R=\frac{1}{n}\sum_{i=1}^{n}\big(R_i - \left<R\right>\big)^2\, , \end{eqnarray}
where the sum runs over the $n$ nuclear interactions belonging to the considered group D$_i$ (solid lines) or G$_i$ (dashed lines). As expected, we have $\sigma_R< \Delta R$ and both quantities increase as functions of the mass. For a canonical mass NS and the D$_4$ group, we obtain $\Delta R\approx 2.8$~km and $\sigma_R\approx 0.8$~km (assuming only that M$_\mathrm{TOV}\geq 1.6$M$_\odot$). The number of interactions belonging to each group is given in Table~\ref{tabgroups}. Comparing D$_1$/G$_1$ and D$_2$/G$_2$, one measures the impact of a better accuracy in the reproduction of the masses: the impact is very small in general. Note however a small reduction of $\Delta R$ between G$_1$ and G$_2$ at the mass 1.6M$_\odot$. Then, comparing D$_2$/G$_2$ and D$_3$/G$_3$ as well as D$_3$/G$_3$ and D$_4$/G$_4$, one can see the successive impact of an improved reproduction of the charge radii and of the ISGMR. Note the reduction of $\Delta R$ induced by the condition on the charge radii in the groups D$_i$, which is not visible for the groups G$_i$. This shows that the requirement to reproduce the charge radii of $N=Z$ and $N\ne Z$ nuclei with the same accuracy is the main condition which breaks the degeneracy between the groups G$_i$ and D$_i$.
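As an aside, the two dispersion measures of Fig.~\ref{fig:statr} amount to one-liners; the radii in the sketch are invented placeholders, not results of this work.
\begin{verbatim}
import numpy as np

R14 = np.array([12.4, 13.1, 12.8, 13.6, 12.1])  # placeholder R_1.4 (km)
delta_R = R14.max() - R14.min()  # largest radius uncertainty Delta R
sigma_R = R14.std()              # standard deviation (1/n normalization)
print(delta_R, sigma_R)
\end{verbatim}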
The striking result from Fig.~\ref{fig:statr} is however the very weak dependence of the radius uncertainty, represented here by $\sigma_R$ and $\Delta R$, on the increasing index $i$ of the groups D$_i$ and G$_i$. This feature indicates that a more accurate reproduction of the experimental masses, charge radii and ISGMR energy in $^{208}$Pb does not have a major impact on the modelling of global NS properties, here M and R. There is however still an impact from the requirement that $N=Z$ and $N\ne Z$ nuclei are described with the same accuracy, which is mainly driven by the charge radius data.
\begin{figure*}[tb] \begin{center} \includegraphics[width=0.475\textwidth]{hist-Esym.png} \includegraphics[width=0.475\textwidth]{hist-Lsym.png} \includegraphics[width=0.495\textwidth]{hist-Ksym.png} \includegraphics[width=0.495\textwidth]{hist-Qsym.png} \vspace{-0.5cm} \caption{Normalized distributions of several isovector QNEPs, in particular, the symmetry energy ($E_{\mathrm{sym},2}$) and its derivatives ($L_{\mathrm{sym},2}$, $K_{\mathrm{sym},2}$, $Q_{\mathrm{sym},2}$). The probability density functions (PDF) for the priors from interactions in the D$_0$ group (black dashed curves) are also presented in each panel.} \label{fig:hist} \end{center} \end{figure*}
Our findings suggest that a rough reproduction of the low-energy nuclear physics properties (experimental masses, charge radii and ISGMR energy), as realized in the D$_4$ group for instance, is sufficient, provided that $N=Z$ and $N\ne Z$ nuclei are described with the same accuracy. Further improving the reproduction of these experimental data is not effective for the prediction of NS global properties. The reason is that the extrapolation from finite nuclei, located around $n_{\mathrm{sat}}$ and close to isospin symmetry ($\delta\lesssim 0.25$), to canonical mass NSs, with densities above $2n_\mathrm{sat}$ where matter is neutron rich ($\delta\sim 1$), requires the knowledge of the density and isospin dependence of the nuclear EoS, which represents a large and effective source of uncertainties. However, low-energy nuclear physics properties should be more important in determining the properties of the crust, such as the mass and the charge of nuclear clusters ($A_\mathrm{cl}$, $Z_\mathrm{cl}$), see for instance Refs.~\cite{Wolf2013,Fortin2016,Antic2019,Grams2022a,Grams2022b}.
\section{Further analysis of the density dependence of the symmetry energy}
In the previous section, we illustrated the need for a better understanding of the density dependence of the nuclear EoS for the prediction of NS global properties. Since the symmetry energy is the most important term driving the density dependence of the EoS, and since the quadratic nuclear empirical parameters (QNEPs) allow for a simple representation of this density dependence, we now analyse the QNEPs directly. We present the normalized distributions (ND) of the QNEPs $E_{\mathrm{sym},2}$, $L_{\mathrm{sym},2}$, $K_{\mathrm{sym},2}$, and $Q_{\mathrm{sym},2}$ in Fig.~\ref{fig:hist} for various scenarios. We also present the prior distribution, represented by the black dashed lines, which is obtained from the D$_0$ group.
\begin{figure*}[tb] \centering \includegraphics[scale=0.55]{bivariate-scatter.png} \caption{Representation of the correlations between $K_{\mathrm{sym},2}$ ($Q_{\mathrm{sym},2}$) and $L_{\mathrm{sym},2}$ for the interactions of the D$_0$ group (left panels) and of the D$_4$ group with the restriction of reaching at least $1.6M_\odot$ (middle panels) and $2.0M_\odot$ (right panels).
The green symbols represent the interactions of the D$_{4\mathrm{sym}}$ subgroup. The boundary values (dashed lines) are given in Table~\ref{tab:boundaries}.} \label{fig:bivariate} \end{figure*}
The positions of the ND peaks for $E_{\mathrm{sym},2}$ are almost identical to those of the prior, showing no effect of the interaction selection or of the condition on M$_\mathrm{TOV}$ for this QNEP. The ND for the group D$_{4\mathrm{sym}}$ is more peaked than the others. For $L_{\mathrm{sym},2}$, the prior is almost flat while the posterior distributions indicate a preference for lower values of $L_{\mathrm{sym},2}$ (about 50-70~MeV). In the case of the group D$_{4\mathrm{sym}}$, the $L_{\mathrm{sym},2}$ normalized distribution is double peaked, as is the distribution for $K_{\mathrm{sym},2}$. For the latter, one maximum is located in the region $-50$ to $20$~MeV and another around $-100$~MeV. With regard to the D$_4$ group, notice that the relative size of the peaks in $K_{\mathrm{sym},2}$ changes with the mass constraint: the peak at $-100$~MeV is preferred if M$_\mathrm{TOV}\geq 2.0$M$_\odot$, while the peak around 0~MeV is preferred if M$_\mathrm{TOV}\geq 1.6$M$_\odot$. For $Q_{\mathrm{sym},2}$, the PDF is very broad, with most of the models lying between 100 and 600~MeV. The centroid of the normalized distribution conditioned by M$_\mathrm{TOV}\geq 2.0$M$_\odot$ is, however, larger than the one conditioned by M$_\mathrm{TOV}\geq 1.6$M$_\odot$. This shows that both $K_{\mathrm{sym},2}$ and $Q_{\mathrm{sym},2}$ are impacted by the condition on M$_\mathrm{TOV}$.
\begin{table}[tb] \tabcolsep=0.1cm \def\arraystretch{1.5} \centering \caption{Boundaries for the quantities presented in Fig.~\ref{fig:bivariate}. All quantities are in MeV.}
\begin{tabular}{lrrr} \hline\noalign{\smallskip} \rm & D$_0$ & D$_{4\,(1.6)} = $D$_{4\,(2.0)}$ & D$_{4\mathrm{sym}\,(1.6)}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $L_{\mathrm{sym},2}$ & 5.75 -- 160.06 & 45.36 -- 126.60 & 45.36 -- 73.23\\ $K_{\mathrm{sym},2}$ & -393.73 -- 159.57 & -121.90 -- 132.12 & -121.90 -- 12.96\\ $Q_{\mathrm{sym},2}$ & -786.14 -- 1361.45 & -166.17 -- 776.91 & 128.26 -- 524.75\\ \noalign{\smallskip}\hline \label{tab:boundaries} \end{tabular} \end{table}
Another view of these data is shown in Fig.~\ref{fig:bivariate}, where we plot the correlations between the QNEPs ($K_{\mathrm{sym},2}$ versus $L_{\mathrm{sym},2}$ in the top panels and $Q_{\mathrm{sym},2}$ versus $L_{\mathrm{sym},2}$ in the bottom panels) for the interactions in the D$_0$ group (left panels) and in the D$_4$ group (central and right panels). These two last cases also show the impact of M$_\mathrm{TOV}$. We observe that there are no strong correlations among the QNEPs visible in this figure. The reason is that a low value of $L_{\mathrm{sym},2}$ could be compensated by larger values of $K_{\mathrm{sym},2}$ and/or $Q_{\mathrm{sym},2}$. There is however a set of limits that we can extract from Fig.~\ref{fig:bivariate}. This set is given in Table~\ref{tab:boundaries}. The lower boundary of $L_{\mathrm{sym},2}$ is due to two constraints. The first one is the reproduction of the low-energy nuclear physics properties, which produces a lower limit of the order of 30~MeV, as shown in Fig.~\ref{fig:hist}. The requirement to reproduce large masses, such as $1.6M_\odot$ or $2.0M_\odot$, pushes this lower limit up to about 45~MeV.
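The boundaries of Table~\ref{tab:boundaries} are simply the extreme values reached within each group, schematically (with invented QNEP lists, not the paper's data):
\begin{verbatim}
import numpy as np

group = {'Lsym2': [45.4, 47.2, 60.9, 73.2, 126.6],   # MeV, invented
         'Ksym2': [-121.9, -115.6, -24.5, 12.9, 132.1]}
for name, values in group.items():
    v = np.asarray(values)
    print(f"{name}: {v.min():.2f} -- {v.max():.2f} MeV")
\end{verbatim}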
It is however important to keep in mind that the boundaries we obtain are strongly impacted by the hypothesis we have made concerning the absence of a phase transition above saturation density. The case of a phase transition has been studied in Ref.~\cite{Xie2021}.
The ranges of values for the isovector QNEPs reported in Table~\ref{tab:boundaries} can however be compared to other boundaries suggested in the literature. In the following we assume that the NEPs and QNEPs in the isospin channel are similar. For instance, by analysing non-relativistic Skyrme, relativistic mean field and relativistic Hartree-Fock models, it has been suggested that $K_{\mathrm{sym},2}=(-100\pm 100)$~MeV~\cite{Margueron2018a}. From observational constraints based on the X-ray emission from seven NSs in globular clusters, a value $K_{\mathrm{sym}}=-85^{+82}_{-70}$~MeV was preferred~\cite{Baillot2019}. These values are consistent with the recent analysis based on the GMR energies in $^{90}$Zr, $^{116}$Sn and $^{208}$Pb using the Skyrme model, leading to $K_{\mathrm{sym},2}=(-120\pm 40)$~MeV~\cite{Sagawa2019}. From the analysis of GW170817, it was suggested that $-259\mbox{ MeV} \leq K_{\mathrm{sym}} \leq 32$~MeV~\cite{Carson2019} and, in a similar analysis, the following values were suggested in Ref.~\cite{Guven2020}: $K_{\mathrm{sym}}=(440\pm 210)$~MeV, $K_{\mathrm{sym}}=(560\pm 150)$~MeV, and $K_{\mathrm{sym}}=(260\pm 240)$~MeV, depending on the observational PDF for $\tilde{\Lambda}$ extracted from GW170817~\cite{Abbott2019,De2018,Coughlin2019}. The value obtained for $K_{\mathrm{sym}}$, however, was shown to be strongly anti-correlated with $L_{\mathrm{sym}}$, mainly driven by the condition to reproduce 2M$_\odot$. Since the PDFs for $\tilde{\Lambda}$ from GW170817 prefer low values for $L_{\mathrm{sym}}$, this explains why large values for $K_{\mathrm{sym}}$ were obtained in Ref.~\cite{Guven2020}. More recently, a compilation of 16 results from independent analyses of neutron star observational data since GW170817 leads to the following expectation: $K_{\mathrm{sym}}=(-107\pm 88)$~MeV~\cite{Li2021}. We also mention a recent analysis based on the latest results from the NICER observatory, where it was found that the lower radius limit of J0740 by Riley et al.~\cite{nicer2b} only requires $K_{\mathrm{sym}}$ to be higher than about $-150$~MeV, depending somewhat on the value of the skewness of the symmetry energy (unconstrained by the data). All these results have to be taken with caution however, since they have been obtained with different nuclear models, whose corresponding systematic uncertainties are difficult to estimate. Different priors have also been employed in the Bayesian analyses, which impacts the results. We note, however, that our present findings for $K_{\mathrm{sym},2}$ are in agreement with the predictions from these analyses.
In conclusion, we have extracted constraints on the QNEPs determining the density dependence of the symmetry energy. These constraints are given in Table~\ref{tab:boundaries} for the different groups D$_0$, D$_4$ and D$_{4\mathrm{sym}}$, conditioned by M$_\mathrm{TOV}$. One could remark that these QNEPs are still largely unknown, although they are crucial for precise predictions of NS properties. Nevertheless, the ranges given in Table~\ref{tab:boundaries} represent the best evaluation of these QNEPs based on low energy nuclear experiments and conditioned by M$_\mathrm{TOV}$.
\section{Neutron star global properties}
In this section we further elaborate on the role of the symmetry energy in the determination of NS global properties. We then introduce a refined classification of the EDFs based on the properties of the symmetry energy, and we analyse the impact of this classification on NS global properties.
\subsection{Mass-radius relation of neutron stars}
\begin{figure}[tb] \centering \includegraphics[scale=0.35]{mr1.pdf} \caption{Mass-radius profiles obtained from the interactions of the D$_4$ group, restricted to those producing a neutron star of at least $1.6M_\odot$ (left panel) and $2.0M_\odot$ (right panel). The points show the values at which the central density corresponds to the saturation density $n_\mathrm{sat}$ (circles), $2n_\mathrm{sat}$ (squares), $3n_\mathrm{sat}$ (triangles up), $4n_\mathrm{sat}$ (triangles down) and $5n_\mathrm{sat}$ (crosses).} \label{fig:mr1} \end{figure}
The mass-radius (MR) relations for the interactions of the group D$_4$ are shown in Fig.~\ref{fig:mr1}, in a similar manner as in the top panels of Fig.~\ref{fig:mr2}. In addition, we have grouped the interactions in D$_4$ into six different sets according to their MR relations. These sets are shown in color as indicated in the legend.
\begin{table*}[tb] \centering \caption{Properties of the interactions belonging to the D$_4$ group. All entries are in MeV, except for the saturation density $n_\mathrm{sat}$ given in fm$^{-3}$, and the dimensionless effective mass $m^* = M^*(n_\mathrm{sat})/M_{\mbox{\tiny nuc}}$. $E_\mathrm{sat}$ is the binding energy, $K_{\rm NM}=K_\mathrm{sat}+K_{\mathrm{sym},2}$, $Q_{\rm NM}=Q_\mathrm{sat}+Q_{\mathrm{sym},2}$, and $K_{\rm \tau,v}=K_{\mathrm{sym},2}-6L_{\mathrm{sym},2}-Q_\mathrm{sat} L_{\mathrm{sym},2}/K_\mathrm{sat}$. The symbol~$\checkmark$ indicates those interactions that also produce neutron stars with M$_\mathrm{TOV} \geq 2.0$M$_\odot$ and/or belong to the D$_{4\mathrm{sym}}$ group. See the text for the definition of the subgroups I to VI.} \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{1.3} \begin{ruledtabular} \begin{tabular}{l|cc|l|c|c|c|c|c|c|c|c|c|c|c|c|c} & $2.0M_\odot$ & D$_{4\mathrm{sym}}$ & interaction & Ref.
& $n_\mathrm{sat}$ & $E_\mathrm{sat}$ & $K_\mathrm{sat}$ & $Q_\mathrm{sat}$ & $E_{\mathrm{sym},2}$ & $L_{\mathrm{sym},2}$ & $K_{\mathrm{sym},2}$ & $Q_{\mathrm{sym},2}$ & $K_{\rm NM}$ & $Q_{\rm NM}$ & $K_{\rm \tau,v}$ & $m^*$ \\ \hline \multirow{7}{*}{I} & $\checkmark$ & $\checkmark$ & SLy3 & \cite{Chabanat-thesis}& 0.160 & -15.94 & 229.51 & -362.56 & 31.97 & 45.36 & -121.90 & 524.75 & 107.61 & 162.18 & -322.39 & 0.70 \\ & $\checkmark$ & $\checkmark$ & SLy4 & \cite{Chabanat1998} & 0.160 & -15.97 & 229.91 & -363.11 & 32.00 & 45.94 & -119.73 & 521.53 & 110.18 & 158.43 & -322.83 & 0.69 \\ & $\checkmark$ & $\checkmark$ & SLy230b & \cite{Chabanat1997} & 0.160 & -15.97 & 229.91 & -363.10 & 32.01 & 45.97 & -119.72 & 521.50 & 110.19 & 158.40 & -322.92 & 0.69 \\ & $\checkmark$ & $\checkmark$ & SLy0 & \cite{Chabanat-thesis}& 0.160 & -15.97 & 229.66 & -364.01 & 31.98 & 47.11 & -116.23 & 508.68 & 113.43 & 144.67 & -324.23 & 0.70 \\ & $\checkmark$ & $\checkmark$ & SLy8 &\cite{Chabanat-thesis} & 0.160 & -15.97 & 229.89 & -363.27 & 32.00 & 47.18 & -115.59 & 509.88 & 114.31 & 146.61 & -324.09 & 0.70 \\ & $\checkmark$ & $\checkmark$ & SLy2 & \cite{Chabanat-thesis}& 0.161 & -15.99 & 229.92 & -364.21 & 32.00 & 47.46 & -115.13 & 506.52 & 114.79 & 142.31 & -324.69 & 0.70 \\ & $\checkmark$ & $\checkmark$ & SLy5 & \cite{Chabanat1998} & 0.161 & -15.99 & 229.92 & -364.16 & 32.01 & 48.15 & -112.76 & 500.67 & 117.16 & 136.51 & -325.38 & 0.70 \\ \hline \multirow{1}{*}{II} & & $\checkmark$ & SD1 & \cite{sd1} & 0.156 & -15.70 & 231.91 & -376.39 & 32.00 & 60.94 & -115.87 & 281.29 & 116.04 & -95.10 & -382.59 & 1.00 \\ % \hline \multirow{19}{*}{III} & & $\checkmark$ & IUFSU* & \cite{iufsus} & 0.150 & -16.02 & 235.67 & -259.49 & 29.85 & 50.30 & 12.20 & 388.15 & 247.87 & 128.67 & -234.20 & 0.61 \\ & & & SINPA & \cite{sinpab} & 0.151 & -16.00 & 202.55 & -58.73 & 31.20 & 53.85 & 26.58 & 334.85 & 229.13 & 276.12 & -280.92 & 0.58 \\ & & $\checkmark$ & BSR8 & \cite{BSR} & 0.147 & -16.04 & 230.95 & -290.85 & 31.08 & 60.25 & -0.74 & 238.23 & 230.22 & -52.62 & -286.36 & 0.61 \\ & & $\checkmark$ & BSR15 & \cite{BSR} & 0.146 & -16.03 & 226.82 & -512.29 & 30.97 & 61.79 & -21.36 & 128.26 & 205.47 & -384.03 & -252.54 & 0.61 \\ & & $\checkmark$ & FSUGZ06 & \cite{fsugz}& 0.146 & -16.05 & 225.06 & -503.17 & 31.18 & 62.42 & -24.49 & 153.31 & 200.57 & -349.86 & -259.47 & 0.61 \\ & & $\checkmark$ & BSR16 & \cite{BSR} & 0.146 & -16.05 & 224.98 & -503.17 & 31.24 & 62.33 & -24.17 & 152.29 & 200.82 & -350.88 & -258.75 & 0.61 \\ & & $\checkmark$ & BSR9 & \cite{BSR} & 0.147 & -16.07 & 232.50 & -297.11 & 31.61 & 63.89 & -11.32 & 202.86 & 221.18 & -94.25 & -313.03 & 0.60 \\ & & $\checkmark$ & FSUGZ03 & \cite{fsugz} & 0.147 & -16.07 & 232.48 & -297.13 & 31.54 & 63.98 & -11.66 & 203.43 & 220.82 & -93.69 & -313.79 & 0.60 \\ & & $\checkmark$ & BSR17 & \cite{BSR} & 0.146 & -16.05 & 221.67 & -489.45 & 31.98 & 67.44 & -31.58 & 176.65 & 190.09 & -312.80 & -287.31 & 0.61 \\ & & $\checkmark$ & BSR10 & \cite{BSR} & 0.147 & -16.06 & 227.41 & -255.13 & 32.72 & 70.83 & -16.51 & 205.04 & 210.89 & -50.09 & -362.04 & 0.60 \\ & & & SINPB & \cite{sinpab} & 0.150 & -16.05 & 206.40 & -449.21 & 33.96 & 71.55 & -50.60 & 552.47 & 155.80 & 103.26 & -449.21 & 0.59 \\ & & & BSR18 & \cite{BSR} & 0.146 & -16.05 & 221.13 & -485.73 & 32.74 & 72.65 & -42.24 & 199.39 & 178.89 & -286.35 & -318.55 & 0.61 \\ & $\checkmark$ & & SKa & \cite{ska} & 0.155 & -15.99 & 263.16 & 300.13 & 32.91 & 74.62 & -78.46 & 174.54 & 184.70 & 474.66 & -441.08 & 0.61 \\ & & & BSR12 & \cite{BSR} & 0.147 & 
-16.10 & 232.35 & -290.31 & 34.00 & 77.90 & -44.23 & 324.15 & 188.12 & 33.85 & -414.30 & 0.61 \\ & & & BSR11 & \cite{BSR} & 0.147 & -16.08 & 226.75 & -312.37 & 33.69 & 78.78 & -24.72 & 172.54 & 202.03 & -139.83 & -388.86 & 0.61 \\ & & & BSR19 & \cite{BSR} & 0.147 & -16.08 & 220.83 & -484.25 & 33.78 & 79.47 & -50.13 & 194.70 & 170.70 & -289.55 & -352.70 & 0.61 \\ & & & BSR20 & \cite{BSR} & 0.146 & -16.09 & 223.25 & -507.75 & 34.54 & 88.03 & -39.90 & 82.74 & 183.35 & -425.02 & -367.86 & 0.61 \\ & & & BSR13 & \cite{BSR} & 0.147 & -16.13 & 228.64 & -294.46 & 35.82 & 91.07 & -41.68 & 138.98 & 186.96 & -155.48 & -470.82 & 0.60 \\ & & & BSR21 & \cite{BSR} & 0.145 & -16.12 & 220.32 & -468.20 & 35.96 & 92.94 & -46.01 & 67.45 & 174.30 & -400.75 & -406.16 & 0.60 \\ & & & BSR14 & \cite{BSR} & 0.147 & -16.18 & 235.47 & -317.10 & 36.32 & 93.85 & -41.95 & 112.53 & 193.51 & -204.57 & -478.66 & 0.61 \\ \hline \multirow{11}{*}{IV} & & & FSUGarnet & \cite{fsugarnet} & 0.153 & -16.23 & 229.61 & -13.12 & 30.92 & 50.96 & 59.44 & 138.08 & 289.06 & 124.96 & -249.21 & 0.58 \\ & $\checkmark$ & & DD-ME2 & \cite{ddme2} & 0.152 & -16.14 & 250.92 & 478.75 & 32.30 & 51.25 & -87.19 & 776.91 & 163.73 & 1255.67 & -492.45 & 0.57 \\ & $\checkmark$ & & DD-ME1 & \cite{ddme1} & 0.152 & -16.20 & 244.72 & 316.66 & 33.06 & 55.45 & -101.05 & 705.59 & 143.66 & 1022.25 & -505.50 & 0.58 \\ & $\checkmark$ & $\checkmark$ & BSR1 & \cite{BSR} & 0.148 & -16.02 & 239.89 & -35.68 & 31.04 & 59.41 & 12.96 & 468.10 & 252.85 & 432.42 & -334.65 & 0.61 \\ & $\checkmark$ & $\checkmark$ & BSR2 & \cite{BSR} & 0.149 & -16.03 & 239.93 & -48.06 & 31.50 & 62.02 & -3.14 & 403.21 & 236.79 & 355.15 & -362.81 & 0.61 \\ & $\checkmark$ & $\checkmark$ & FSUGZ00 & \cite{fsugz} & 0.149 & -16.03 & 240.00 & -47.74 & 31.43 & 62.16 & -3.46 & 402.48 & 236.54 & 354.74 & -364.05 & 0.61 \\ & $\checkmark$ & $\checkmark$ & BSR3 & \cite{BSR} & 0.150 & -16.09 & 230.55 & -114.72 & 32.74 & 70.45 & -7.76 & 397.59 & 222.78 & 282.86 & -395.42 & 0.60 \\ & $\checkmark$ & $\checkmark$ & BSR4 & \cite{BSR} & 0.150 & -16.08 & 238.57 & 4.00 & 33.17 & 73.23 & -20.71 & 420.06 & 217.85 & 424.06 & -461.34 & 0.61 \\ & $\checkmark$ & & BSR5 & \cite{BSR} & 0.151 & -16.12 & 235.71 & -10.96 & 34.46 & 83.37 & -14.16 & 346.84 & 221.55 & 335.88 & -510.53 & 0.61 \\ & $\checkmark$ & & BSR6 & \cite{BSR} & 0.149 & -16.13 & 235.75 & -7.59 & 35.62 & 85.68 & -49.55 & 352.00 & 186.20 & 344.41 & -560.86 & 0.60 \\ & $\checkmark$ & & BSR7 & \cite{BSR} & 0.149 & -16.18 & 231.80 & -19.80 & 37.26 & 99.14 & -16.97 & 198.47 & 214.83 & 178.66 & -603.32 & 0.60 \\ \hline \multirow{1}{*}{V} & $\checkmark$ & & FSUGold2 & \cite{fsugold2} & 0.150 & -16.26 & 237.69 & -149.95 & 37.57 & 112.68 & 25.38 & -166.17 & 263.07 & -316.13 & -579.61 & 0.59 \\ \hline \multirow{4}{*}{VI} & $\checkmark$ & & Q1 & \cite{q1} & 0.148 & -16.10 & 241.86 & 8.70 & 36.44 & 115.71 & 105.65 & 266.72 & 347.51 & 275.43 & -592.77 & 0.60 \\ & $\checkmark$ & & FAMA1 & \cite{fama1} & 0.148 & -16.00 & 200.05 & -303.20 & 38.01 & 120.53 & 113.22 & 403.17 & 313.27 & 99.97 & -427.27 & 0.60 \\ & $\checkmark$ & & NL3* & \cite{nl3s} & 0.150 & -16.31 & 258.25 & -122.04 & 38.68 & 122.63 & 105.56 & 223.95 & 363.81 & 101.92 & -688.19 & 0.59 \\ & $\checkmark$ & & E & \cite{eer} & 0.150 & -16.13 & 221.43 & 20.87 & 38.58 & 124.57 & 132.12 & 381.38 & 353.54 & 402.25 & -627.06 & 0.58 \\ & $\checkmark$ & & ER & \cite{eer} & 0.149 & -16.16 & 220.49 & -24.93 & 39.42 & 126.60 & 127.62 & 377.17 & 348.11 & 352.24 & -617.67 & 0.58 \\ \end{tabular} \end{ruledtabular} 
\label{tab:bulk} \end{table*}
These sets can also be sorted by the values of the QNEPs $L_{\mathrm{sym},2}$ and $K_{\mathrm{sym},2}$ and by the condition that M$_\mathrm{TOV}\geq 2$M$_\odot$, as shown in Table~\ref{tab:bulk}: set I consists of interactions from the group D$_4$ for which $L_{\mathrm{sym},2}\leq 50$~MeV; sets II, III and IV have $50~\mbox{MeV}<L_{\mathrm{sym},2}\leq 100~\mbox{MeV}$, where set II contains the single EoS for which $K_{\mathrm{sym},2}\leq -110$~MeV, while sets III and IV have larger values, $K_{\mathrm{sym},2} > -110$~MeV. The difference between sets III and IV is that all EoSs from set IV satisfy the condition M$_\mathrm{TOV}\geq 2$M$_\odot$, while models in set III do not, except for the SKa parametrization. Finally, sets V and VI are EoSs with large values, $L_{\mathrm{sym},2}\geq 100$~MeV. Set V has a value of $K_{\mathrm{sym},2}$ of about $25$~MeV, while set VI has $K_{\mathrm{sym},2}\geq 100$~MeV.
The analysis of Fig.~\ref{fig:mr1} and Table~\ref{tab:bulk} leads to the following conclusions: the main difference among the sets comes from the value of $L_{\mathrm{sym},2}$: low values (set I), intermediate values (sets II, III and IV) and large values (sets V and VI). The value of $L_{\mathrm{sym},2}$ determines the stiffness of the EoS, as shown in Fig.~\ref{fig:mr1}: the softer the EoS, the lower the radius and the larger the central density for a given mass M. To a certain degree, the stiffness of the EoS is also influenced by the value of $K_{\mathrm{sym},2}$. In order to make this more apparent in Fig.~\ref{fig:mr1}, we have separated set II from sets III and IV according to the value of $K_{\mathrm{sym},2}$. We also indicate in Table~\ref{tab:bulk} the EoSs which belong to the subgroup D$_{4\mathrm{sym}}$. They are the EoSs for which $L_{\mathrm{sym},2}< 90$~MeV. Note that the precise upper value for $L_{\mathrm{sym},2}$ actually depends on the value of $K_{\mathrm{sym},2}$.
In conclusion, we have analysed the dominant role of $L_{\mathrm{sym},2}$ in globally controlling the MR diagram, with some additional contribution from $K_{\mathrm{sym},2}$. In Fig.~\ref{fig:mr1} however, one observes some correlations between $L_{\mathrm{sym},2}$ and the masses/radii at fixed central densities. In the following, we analyse these correlations in more detail.
\subsection{Radius and mass (individual analysis)}
\begin{figure}[tb] \centering \includegraphics[scale=0.35]{rl.pdf} \caption{Neutron star radius at the central densities $n_\mathrm{sat}$ (black symbols), $2n_\mathrm{sat}$ (red symbols), and $3n_\mathrm{sat}$ (blue symbols) as a function of $L_{\mathrm{sym},2}$, obtained from the interactions of the D$_4$ group and restricted to those satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (left panel) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (right panel). The symbols correspond to the sets I to VI as indicated in the legend.} \label{fig:rl} \end{figure}
We represent in Fig.~\ref{fig:rl} the correlation between the NS radius R -- extracted at different central densities ($n_\mathrm{sat}$, $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$) -- and $L_{\mathrm{sym},2}$. Note that the correlation at $n_\mathrm{sat}$ is opposite to that at higher densities: at $n_\mathrm{sat}$ the radius decreases as $L_{\mathrm{sym},2}$ increases, while above, the radius increases as a function of $L_{\mathrm{sym},2}$. The reason is simple: a larger value of $L_{\mathrm{sym},2}$ implies a softer EoS below $n_\mathrm{sat}$.
So the anti-correlation at $n_\mathrm{sat}$ reflects the fact that the EoS is softer at low densities for larger values of $L_{\mathrm{sym},2}$. At higher densities, the situation is different, since the larger the value of $L_{\mathrm{sym},2}$, the stiffer the EoS above $n_\mathrm{sat}$. The EoSs are so stiff that they change the MR relation: above saturation density, the radius is weakly impacted by the mass. Since stiffer EoSs above $n_\mathrm{sat}$ imply larger values for $L_{\mathrm{sym},2}$, the radius is correlated with $L_{\mathrm{sym},2}$ in this region.
\begin{figure}[tb] \centering \includegraphics[scale=0.35]{rp.pdf} \caption{Empirical relation between the pressure (in units of MeV\,fm$^{-3}$) and the radius (in km) obtained from the interactions of the D$_4$ group and restricted to those satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (left panel) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (right panel). The symbols correspond to the sets I to VI as indicated in the legend. The solid lines represent the best values for the product $Rp_c^{-1/4}$, namely, left (right) panel: $5.79$ ($5.55$) and $4.33$ ($4.05$), for the red and blue lines, respectively. All numbers are in units of \mbox{km fm$^{3/4}$ MeV$^{-1/4}$}.} \label{fig:rp} \end{figure}
We now test the empirical relation between R and $p_c$ suggested in Ref.~\cite{Prakash2001}, where $p_c$ is the central pressure of the NS (at $\beta$-equilibrium). We show the quantity $Rp_c^{-1/4}$ as a function of $R$ in Fig.~\ref{fig:rp}. The correlation between R and $p_c$ is better at $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$ than at $n_\mathrm{sat}$, as already noted in Ref.~\cite{Prakash2001}. The product $Rp_c^{-1/4}$ is only weakly correlated with the radius if the pressure is taken at $2n_\mathrm{sat}$ or $3n_\mathrm{sat}$. Since the radius is well correlated with $L_{\mathrm{sym},2}$ at these densities, see Fig.~\ref{fig:rl}, the product $Rp_c^{-1/4}$ is also weakly correlated with $L_{\mathrm{sym},2}$ at $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$. At $n_\mathrm{sat}$ however, the points are much less aligned than at $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$. The dispersion between the points reflects the (dominant) effect of $L_{\mathrm{sym},2}$ as well as that of $K_{\mathrm{sym},2}$. In NSs, the central pressure near saturation density is dominantly given by $L_{\mathrm{sym},2}$~\cite{Lattimer2016}, while other NEPs contribute more as the density increases~\cite{Margueron2018a,Margueron2018b}. The correlation suggested in Ref.~\cite{Prakash2001} at $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$ reflects at least two interesting features: first, it shows the weak influence of the radial distribution of the pressure in NSs, since it is mostly the central value which fixes the NS radius; second, it hides the contribution of the different NEPs to the density dependence of the pressure, since the correlation involves the central pressure $p_c$ directly.
\begin{figure}[tb] \centering \includegraphics[scale=0.36]{ml.pdf} \caption{Neutron star masses corresponding to central densities of $n_\mathrm{sat}$ (black symbols), $2n_\mathrm{sat}$ (red symbols), and $3n_\mathrm{sat}$ (blue symbols) as a function of $L_{\mathrm{sym},2}$, obtained from the interactions of the D$_4$ group and restricted to those satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (left panel) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (right panel).
Full lines: fitting curves with the respective correlation coefficients.} \label{fig:ml} \end{figure}
With regard to the correlation between the NS mass and $L_{\mathrm{sym},2}$, notice that in Fig.~\ref{fig:mr1} one observes a relation between the mass M at fixed central density and the radius, which reflects the influence of $L_{\mathrm{sym},2}$. To make this clearer, we explicitly represent in Fig.~\ref{fig:ml} the correlation between the mass M and $L_{\mathrm{sym},2}$ at different central densities: $n_\mathrm{sat}$, $2n_\mathrm{sat}$ and $3n_\mathrm{sat}$. The correlation is almost perfect at $n_\mathrm{sat}$, with a correlation coefficient of 0.995 (0.998) for M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (M$_\mathrm{TOV}\geq 2.0$M$_\odot$). However, it becomes broader at higher densities. This reflects the role of other empirical parameters governing the density dependence of the EoS, as for instance $K_\mathrm{sym}$ or $Q_\mathrm{sat}$. It is also interesting to observe that the correlations at $n_\mathrm{sat}$ and 2$n_\mathrm{sat}$ are very similar whether conditioned by M$_\mathrm{TOV}\geq 2.0$M$_\odot$ or by M$_\mathrm{TOV}\geq 1.6$M$_\odot$, reflecting the weak impact of M$_\mathrm{TOV}$ on this correlation. The same is not true at 3$n_\mathrm{sat}$, which presents a better correlation when the M$_\mathrm{TOV}\geq 2.0$M$_\odot$ condition is applied.
\section{Other global properties of neutron stars}
In this last section of the paper, we analyse global properties of NSs that have not yet been analysed, namely the moment of inertia and the tidal deformability.
\subsection{Moment of inertia}
In the low spin regime, as suggested by Hartle and Sharp~\cite{inertia1}, the rotation frequency of a NS is much smaller than the Kepler frequency, allowing us to assume that the NS remains spherical. The moment of inertia is then expressed as~\cite{inertia1,inertia2}
\begin{eqnarray} I = \frac{8\pi}{3}\int_0^R dr\, r^4\epsilon\left(1 + \frac{p}{\epsilon}\right)\frac{\bar{\omega}}{\Omega}e^{\lambda-\Phi}, \end{eqnarray}
where $\bar{\omega}$ is the local spin frequency, which represents the correction from general relativity to the asymptotic angular velocity $\Omega$. The angular velocity of the local inertial frame is $\omega=\Omega-\bar{\omega}$. Furthermore, $e^\lambda=[1-2m(r)/r]^{-1/2}$ and $\Phi$ is the gravitational potential, solution of the equation
\begin{eqnarray} \frac{d\Phi(r)}{dr} = \frac{m(r) + 4\pi r^3p(r)}{r^2[1-2m(r)/r]} \, , \label{eq:phi} \end{eqnarray}
with the boundary condition $\Phi(R)=\frac 1 2 \ln (1-2M/R)$.
\begin{figure}[tb] \includegraphics[width=0.55\textwidth,trim=170 45 0 350, clip=true]{inertia.png} \includegraphics[width=0.55\textwidth,trim=170 100 0 350, clip=true]{inertia-band.png} \caption{$I$-$M$ correlation obtained from the D$_0$ group (grey band in all panels). Orange band: regions from the D$_0$ group conditioned by M$_\mathrm{TOV}$. Top panels: the curves are from the subgroups of D$_4$ as given in Table~\ref{tab:bulk} and the legend is the same as in Fig.~\ref{fig:mr1}. Bottom panels: the dark brown contour is constructed from the interactions of the D$_4$ group conditioned by M$_\mathrm{TOV}$ (as in the top panels) and the green contour delimits the predictions based on the D$_{4\mathrm{sym}}$ group.} \label{fig:inertia} \end{figure}
We then investigate how the moment of inertia is influenced by the low energy nuclear experimental constraints.
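A sketch of the metric potential entering the factor $e^{\lambda-\Phi}$ is given below: Eq.~\eqref{eq:phi} is integrated on a precomputed TOV profile (arrays for $r$, $m$ and $p$, with $r>0$), in geometrized units. The helper phi\_profile is hypothetical and is not the code used for Fig.~\ref{fig:inertia}.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def phi_profile(r, m, p):
    # right-hand side of Eq. (eq:phi), on a tabulated TOV profile
    dphi = (m + 4*np.pi*r**3*p) / (r**2 * (1 - 2*m/r))
    phi = cumulative_trapezoid(dphi, r, initial=0.0)
    # shift so that Phi(R) = (1/2) ln(1 - 2M/R) at the surface
    R, M = r[-1], m[-1]
    return phi + 0.5*np.log(1 - 2*M/R) - phi[-1]
\end{verbatim}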
We show in the top panels of Fig.~\ref{fig:inertia} the moment of inertia $I$ as a function of the mass M for the six sets, as indicated in the legend. There is a reasonable ordering of the moment of inertia as a function of the sets: the moment of inertia increases for increasing values of $L_{\mathrm{sym},2}$. We obtain $I^{\rm{mean}}_{1.4}=1.33\pm0.16$ (1.24$\pm0.14$)~$\times 10^{45}$~g\,cm$^2$ for the D$_4$ (D$_{4\mathrm{sym}}$) group for M$_\mathrm{TOV}\geq 1.6$M$_\odot$ and $I^{\rm{mean}}_{1.4}=1.36\pm0.20$ (1.22$\pm0.18$)~$\times 10^{45}$~g\,cm$^2$ for M$_\mathrm{TOV}\geq 2.0$M$_\odot$.
\begin{figure}[tb] \centering \includegraphics[scale=0.35]{stat-I.pdf} \caption{$\Delta I$ (range) and $\sigma_I$ (standard deviation) as a function of the mass M, for the groups D$_i$ (solid lines) and G$_i$ (dashed lines) with $i=1$ (black), 2 (red), 3 (blue), and 4 (orange). The results correspond to those interactions satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (closed symbols) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (open symbols). See text for more details.} \label{fig:statI} \end{figure}
A more systematic investigation of the moment of inertia is shown in Fig.~\ref{fig:statI}, where the effects of the groups D$_1$-D$_4$ and G$_1$-G$_4$ are given in the four panels. As in Fig.~\ref{fig:statr}, we represent the largest uncertainty $\Delta I=I_{\mbox{\tiny max}}-I_{\mbox{\tiny min}}$ and the standard deviation $\sigma_I$, with $\sigma_I^2=(1/n)\sum_i (I_i-\left<I\right>)^2$. The uncertainty measured by these two quantities increases as a function of the mass M. We also confirm our previous conclusions from Fig.~\ref{fig:statr}: there is only a very limited impact from a better description of the low-energy nuclear data. An improvement is seen when using the D$_i$ groups rather than the G$_i$ ones, showing that the condition to describe equally well the $N=Z$ and $N\neq Z$ nuclei also plays a role here. However, the uncertainty in the moment of inertia is mostly generated by the unknown density dependence of the EoS, as we have already discussed in the case of the radius.
\subsection{Tidal deformability}
Finally, we address the question of the tidal deformability, which is probed by coalescing neutron stars through the gravitational waves emitted during the last orbits before merger. We analyse its correlation with the experimental nuclear data.
\begin{figure}[tb] \centering \includegraphics[width=0.55\textwidth,trim=10 170 0 350, clip=true]{lambda.png} \includegraphics[width=0.55\textwidth,trim=10 170 0 350, clip=true]{lambda-band.png} \caption{$\Lambda$-M correlation for the D$_0$ group (grey band in all panels) and for the D$_0$ group conditioned by M$_\mathrm{TOV}$ (orange band). See Fig.~\ref{fig:inertia} for the description of the curves in the top panels and the contours in the bottom panels. The data point for $\Lambda_{1.4}$ shown in the bottom panels is extracted from Ref.~\cite{ligo} of the LIGO/Virgo Collaboration (GW170817 event).} \label{fig:lambda} \end{figure}
The tidal deformability is defined as the ratio of the induced quadrupole moment $Q_{ij}$ to the external tidal field $\varepsilon_{ij}$. In terms of the second Love number $k_2$, it is given by $\lambda=\frac{2}{3} k_{2} R^{5}$. One can also define the dimensionless tidal deformability as $\Lambda = \frac{2}{3} k_{2} (R/M)^5 \equiv \frac{2}{3} k_{2} C^{-5}$, where $C=M/R$ is the compactness.
The Love number $k_2$ is given by \begin{eqnarray} &k_2& =\frac{8C^5}{5}(1-2C)^2[2+2C(y_R-1)-y_R]\nonumber\\ &\times&\Big\{2C [6-3y_R+3C(5y_R-8)] \nonumber\\ &+& 4C^3[13-11y_R+C(3y_R-2) + 2C^2(1+y_R)]\nonumber\\ &+& 3(1-2C)^2[2-y_R+2C(y_R-1)]\ln(1-2C)\Big\}^{-1},\qquad \label{k2} \end{eqnarray} with $y_R\equiv y(R)$ and $y(r)$ being the solution of the differential equation, \begin{align} r\frac{dy}{dr} + y^2 + yF(r) + r^2Q(r) = 0, \label{dydr} \end{align} where the functions $F(r)$ and $Q(r)$ are given by \begin{eqnarray} F(r) &=& \frac{1 - 4\pi r^2[\epsilon(r) - p(r)]}{f(r)}, \\ Q(r)&=&\frac{4\pi}{f(r)}\left[5\epsilon(r) + 9p(r) + \frac{\epsilon(r)+p(r)}{c_s^2(r)}- \frac{6}{4\pi r^2}\right] \nonumber\\ &-& 4\left[ \frac{m(r)+4\pi r^3 p(r)}{r^2f(r)} \right]^2, \label{qr} \end{eqnarray} in which $c_s^2(r)=\partial p(r)/\partial\epsilon(r)$ is the square of the sound speed and $f(r)=1-2m(r)/r$~\cite{tanj10,Prakash,hind08,damour,tayl09,had4}. The dimensionless tidal deformability $\Lambda$ is shown as a function of M in Fig.~\ref{fig:lambda} for the six sets, as indicated in the legend. We can compare the envelope of the best EoSs (group D$_4$) with the prior from group D$_0$ (grey band). The orange band represents the envelope of the group D$_0$ conditioned by the constraint on M$_\mathrm{TOV}$. As discussed previously for the M-R relation, as well as for the I-M one, there is only a small impact of a better reproduction of the low-energy nuclear data, with most of the uncertainty originating from the unknown density dependence of the EoS. We also indicate the point reported by the LIGO/Virgo collaboration for the dimensionless tidal deformability of a canonical star, namely, $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo}. As we see, the softest interactions are more compatible with this specific restriction. We obtain $\Lambda^{\rm{mean}}_{1.4}=669\pm323$ (482$\pm179$) for the D$_4$ (D$_{4\mathrm{sym}}$) group for M$_\mathrm{TOV}\geq 1.6$M$_\odot$ and $\Lambda^{\rm{mean}}_{1.4}=760\pm 400$ (474$\pm 231$) for M$_\mathrm{TOV}\geq 2.0$M$_\odot$. \begin{figure}[tb] \centering \includegraphics[width=0.55\textwidth,trim=30 140 0 300, clip=true]{lambda-c.png} \caption{Dimensionless tidal deformability $\Lambda$ as a function of the compactness $C=M/R$ for the D$_4$ group. The grey band represents the contour of the D$_0$ group (prior). The insets show a zoom of the curve in linear scale and for a small region of $C$.} \label{fig:lambda-c} \end{figure} We investigate in Fig.~\ref{fig:lambda-c} the $\Lambda$-$C$ universal relation suggested in Ref.~\cite{Yagi2014}. The contributions for the six sets are shown in different colors. We confirm the universal relation, and we show that the dispersion in this relation is mainly given by $L_{\mathrm{sym},2}$. In addition, the dispersion is even larger when all EDFs in the D$_0$ group are considered. The small dispersion thus originates from the interaction selection of the G$_i$/D$_i$ groups. We have checked that an accurate description of the low-energy nuclear data by the EDFs is also less important here than an improved determination of the density dependence of the EoS. \begin{figure}[tb] \centering \includegraphics[scale=0.36]{stat-L.pdf} \caption{Uncertainties in $\Lambda$ represented by the largest $\Lambda$ uncertainty $\Delta\Lambda$ and the standard deviation $\sigma_\Lambda$ for the groups G$_i$ and D$_i$ with $i=1$ (black), 2 (red), 3 (blue), and 4 (orange).
The results correspond to those interactions satisfying M$_\mathrm{TOV}\geq 1.6$M$_\odot$ (closed symbols) and M$_\mathrm{TOV}\geq 2.0$M$_\odot$ (open symbols).} \label{statL} \end{figure} In Fig.~\ref{statL} we present the maximal uncertainty and the standard deviation related to the dimensionless tidal deformability, defined similarly to those for the moment of inertia and the radius shown before. We clearly see a reduction of $\Delta\Lambda=\Lambda_{\mbox{\tiny max}}-\Lambda_{\mbox{\tiny min}}$ and $\sigma_\Lambda$ as a function of the neutron star mass, at variance with the analysis of R and I. This is related to the fact that $\Lambda$ is strongly decreasing with M, as shown in Fig.~\ref{fig:lambda}. The correlation of the experimental nuclear data with the tidal deformability is extremely small. Among the quantities we have investigated in this study, the tidal deformability is perhaps the quantity on which the constraints provided by the experimental nuclear data have the smallest impact. \begin{figure}[tb] \centering \includegraphics[scale=0.35]{lambdaxsat.pdf} \caption{Dimensionless tidal deformability $\Lambda_{1.4}$ as a function of $E_{\mathrm{sym},2}$ (left), $L_{\mathrm{sym},2}$ (middle), and $K_\mathrm{sat}$ (right) for the D$_4$ group conditioned by M$_\mathrm{TOV}$. We also present results for the D$_{4\mathrm{sym}}$ subgroup restricted to M$_\mathrm{TOV}\geq 1.6M_\odot$. Full lines: fitting curves with the respective correlation coefficients.} \label{fig:lambdaxsat} \end{figure} Finally, we show in Fig.~\ref{fig:lambdaxsat} the dimensionless tidal deformability of a \mbox{M=$1.4$M$_\odot$} star, namely, $\Lambda_{1.4}$, as a function of the NEPs $E_{\mathrm{sym},2}$, $L_{\mathrm{sym},2}$, and $K_\mathrm{sat}$. This figure is similar to Fig.~5 from Ref.~\cite{Wei2020}, which showed no correlation for a reduced set of nuclear interactions. From this figure, the authors of Ref.~\cite{Wei2020} concluded that there is no correlation between global properties of NS and saturation properties of nuclear matter. We find that a correlation does indeed exist when considering the D$_4$ group conditioned by M$_\mathrm{TOV}$. We find a good correlation for $\Lambda_{1.4}$-$E_{\mathrm{sym},2}$ and $\Lambda_{1.4}$-$L_{\mathrm{sym},2}$ for the D$_4$ group with both constraints on M$_\mathrm{TOV}$. In the case of the D$_{4\mathrm{sym}}$ subgroup, we see that the correlation of $E_{\mathrm{sym},2}$ with $\Lambda_{1.4}$ no longer exists, but for $L_{\mathrm{sym},2}$ there is still a correlation, although smaller than for the D$_4$ group. Concerning $\Lambda_{1.4}$-$K_\mathrm{sat}$, we find the Pearson correlation coefficients to be smaller than $0.6$ in all cases. We thus conclude that correlations do exist between global properties of NSs and saturation properties, especially for the isovector QNEPs, although there is a large dispersion in these correlations, which limits the value of the Pearson correlation coefficient. \section{Conclusions} In this study, we have analysed the link between the constraints on mean field EDFs generated by low-energy nuclear experimental data and their corresponding predictions for NSs. To do so, we have investigated 415 mean field interactions, both relativistic and non-relativistic, for which we have calculated several quantities that can be directly compared to the experimental data.
These quantities are the masses, radii and GMR energies of a number of doubly magic nuclei (chosen to minimize the impact of uncontrolled approximations such as pairing, deformation, etc.). We have defined five groups, from G$_0$ to G$_4$, where G$_0$ is the set of interactions reproducing the experimental nuclear masses with the largest tolerance and G$_2$ the one with the smallest tolerance, while G$_3$ and G$_4$ add successively the constraint on the charge radius and on the giant monopole resonance. In these groups, we have evaluated the reproduction of the experimental data globally. They are contrasted with another set of groups, called D$_0$ to D$_4$, for which a more detailed evaluation is performed by separating the $N=Z$ nuclei from the others: to be well ranked in the groups D$_i$, the interactions must reproduce equally well the $N=Z$ and $N \ne Z$ nuclei. From this first step of our analysis, we find that \begin{enumerate} \item The group D$_4$ exhibits a fairly strong correlation between $E_{\mathrm{sym},2}$ and $L_{\mathrm{sym},2}$. \item By combining the low-energy nuclear data and an analysis of the density dependence of the symmetry energy~\cite{Danielewicz2013}, we have isolated a group D$_{4\mathrm{sym}}$ that further reduces the uncertainty in the symmetry energy. We find $E_{\mathrm{sym},2}=31.8\pm0.7$~MeV and $L_{\mathrm{sym},2}=58.1\pm 9.0$~MeV. \end{enumerate} In a second step, we have confronted the different groups G$_i$ and D$_i$ with global observational quantities related to stable NSs, such as radii, moments of inertia and tidal deformabilities. We have compared the priors, identified as the G$_0$ or D$_0$ groups, which include all viable EoSs, with the best interactions of the groups G$_4$ and D$_4$. From this comparison, we find that \begin{enumerate}[resume] \item The selection of interactions according to their adequacy in reproducing the experimental nuclear data has a weak impact on the reduction of uncertainties of global NS properties with masses around or above the canonical one. This reveals that the density dependence of the EoS is not constrained by precision measurements of low-energy nuclear data. \item The selection of the groups D$_i$ has a greater impact on the results than the selection of the groups G$_i$, showing the importance of having control over the isotopic predictions of the interactions. The charge radius plays an important role in this selection. \item The 1.4M$_\odot$ NS radius lies between 12 and 14~km for the ``better'' nuclear interactions. \item To a large degree, the density dependence of the symmetry energy explains the observed dispersion in NS properties, so that a more detailed knowledge of the symmetry energy should result in a reduction of the uncertainties in NS radii, at least for canonical to low-mass NSs, where there is no phase transition to exotic matter. \end{enumerate} The fourth point is not surprising, since NS matter is an extrapolation of current nuclear interactions towards large isospin asymmetries. The third point, however, is a bit more surprising. It tells us that the constraints of experimental nuclear data near saturation density are only weakly correlated with the behavior of the EoS at several times saturation density. This confirms the conclusions of Ref.~\cite{Margueron2018b}, where the uncertainties in the extrapolation of the nucleonic EoS are found to be fairly uncontrolled above the densities at which experimental data exist.
We therefore emphasize that the reduction of the uncertainties in NS global properties will not originate from better data related to low-energy nuclear physics, since the density or energy region for which constraints are required is outside the reach of standard nuclear physics. The experimental data on the symmetry energy are found to be much more constraining, however. We find that the slope of the symmetry energy $L_{\mathrm{sym},2}$ near saturation density is well correlated with NS radii and masses. We also observe that the experimental data on IAS+$\Delta r_{np}$ (group D$_{4\mathrm{sym}}$) have a large impact on the further selection among our best set of interactions D$_4$. The D$_{4\mathrm{sym}}$ group furnishes, for the most part, values of $L_{\mathrm{sym},2}\leq 90$~MeV, depending on the value of $K_{\mathrm{sym},2}$. It also determines boundaries for a few QNEPs, $L_{\mathrm{sym},2}$, $K_{\mathrm{sym},2}$, and $Q_{\mathrm{sym},2}$, which are tighter than the ones proposed in Ref.~\cite{Margueron2018a}. In conclusion, we have shown in our analysis that the experimental nuclear masses, radii and GMR energies of a set of doubly magic nuclei show little correlation with the properties of nucleonic matter at several times saturation density. The experimental data related to the symmetry energy, however, are somewhat better correlated with these properties. In the future, we plan to perform a complementary analysis including data from heavy-ion collisions exploring densities above $n_\mathrm{sat}$, the saturation density of nuclear matter. This appears to be a necessary condition for making substantial progress on the understanding of the properties of dense nuclear matter. \begin{acknowledgements} This work is supported by the project INCT-FNA proc. No. 464898/2014-5, as well as by the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq) under Grants No. 303131/2021-7 (B.V.C.), 312410/2020-4 (O.L.), and 433369/2018-3 and 308528/2021-2 (M.D.). We also acknowledge support from the Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) under Thematic Project 2017/05660-0 (O.L., M.D., B.V.C) and Project 2020/05238-9 (O.L., M.D.). J.M. is supported by the CNRS/IN2P3 NewMAC project, and benefits from PHAROS COST Action MP16214 and the LABEX Lyon Institute of Origins (ANR-10-LABX-0066) of the \textsl{Universit\'e de Lyon}. The authors acknowledge support from the CNRS International Research Project ``SUBATOMICS'', which enabled a visit by J.M. to ITA, where part of this work was carried out, and we thank LAB-CCAM from ITA for computational support. \end{acknowledgements}
\section{Introduction} \label{introduction} The spin glass theory has been one of the most difficult problems treated by statistical mechanics during the last two decades. Despite its value in the field of solid state physics, its study has also contributed to the development of new techniques which now apply to a wide range of fields, such as optimization problems, neural networks, and other complex systems \cite{Fischer}. The first microscopic approach to spin glasses is due to Edwards and Anderson (EA) \cite{Edwards}, whose model basically consists of an Ising system with random positive and negative exchange couplings. Until now only its mean field version, known as the Sherrington--Kirkpatrick (SK) model \cite{Sherrington}, has been exactly solved, but unfortunately its solution requires the sophisticated replica trick. Under such limitations, Monte Carlo numerical simulations have become one of the most applied techniques in the field. At the same time, it is not clear at the moment whether one should expect that the spin glass phase of the mean field SK model resembles the behavior of the spin glass phase of the low dimension EA model. For many years there has been great controversy on whether the spin glass transition is of thermodynamical or dynamical nature. However, numerical simulations \cite{Ogie} and phenomenological scaling arguments at zero temperature \cite{Bray} strongly suggest the existence of a true thermodynamical phase transition. From a dynamical point of view, a very careful numerical study of the time decay of the auto--correlation function $q(t)$ has shown that the system displays three different dynamical regimes: above the Curie point $T_c$ of the nonrandom Ising model, the auto-correlation decays exponentially; between $T_c$ and the spin glass temperature $T_g$ the auto-correlation function has a stretched exponential behavior with temperature dependent exponents; finally, in the spin glass phase only power law decay is observed at all time scales. Since the SK model can be understood as the infinite dimension version of the EA model, it is desirable to be able to study the effects of dimensionality both in the static and dynamical properties of the system, even if such analysis should be limited to numerical considerations. In 1992 Parisi, Ritort and Rub\'{\i} \cite{pariru} introduced the {\em hypercubic cell model}, which allows a very efficient treatment of high dimensional models, at least when compared with hypercubic lattice models. It consists of a single hypercubic cell of dimension $D$ with an Ising spin variable associated to each of its $2^D$ corners. Despite its simplicity and unrealistic features, one expects that its behavior for dimension $2D$ resembles, at least qualitatively, that observed in a $D$ dimensional hypercubic lattice, since both share the same connectivity $2D$. Even more, for $D\to \infty$ the hypercell model recovers the mean field SK model. This approach has been used in the last few years to analyze both dynamical and statical consequences of dimensionality in the spin glass phase of different models \cite{pariru,Cugliandolo,Marinari,Stariolo}. In this work we apply the damage spreading method to the hypercell Ising spin glass model simulated with a heat bath Monte Carlo dynamics. This technique basically consists in measuring the time evolution of the Hamming distance between two initially different configurations submitted to the same thermal noise, i.e., updated with the same random number sequence.
The dependence of the damage and other related quantities on temperature, time, initial conditions and other relevant parameters leads to a dynamical phase diagram of the model. In general, this phase diagram strongly depends on the Monte Carlo dynamics used in the numerical simulation. In particular, for the two- and three-dimensional Ising ferromagnet one finds that the dynamical transition coincides with the static one when the system is submitted to heat bath dynamics, while the opposite occurs when submitted to Glauber dynamics. When more complex systems are analyzed with heat bath dynamics, more than two dynamical phases are usually found, where only a few of them are correlated with thermodynamical phases (see \cite{Silva} and references therein). In particular, for spin glasses in three and four dimensions \cite{derwei}, three different dynamical regimes were obtained, as occurred when the auto-correlation was analyzed. For low temperatures, the final damage is non null and its value depends on the initial Hamming distance. For intermediate temperatures, the damage still spreads but its final value is independent of the initial damage. Finally, for high temperature the final damage is always zero. While the lower dynamical transition temperature seems to agree with the equilibrium one ($T_g$) separating the spin glass and the paramagnetic phases, the upper transition temperature seems to be consistent with the temperature below which the stretched exponential relaxation emerges ($T_c$). Surprisingly, a similar behavior was reported for the two dimensional spin glass model, which does not present a non zero temperature spin glass phase. Nevertheless, despite some numerical evidence, it is not clear whether these three regimes found with damage spreading are related to those observed through the temporal behavior of the auto--correlation function. On the other hand, when the SK model was studied, only a two-phase structure was found, in good agreement with the thermodynamical diagram. This paper is organized as follows. In section 2 we introduce in more detail the hypercell model and describe the spreading of damage technique. In section 3 we present the results for different dimensions. Finally, in section 4 we discuss the main conclusions of the paper. \section{The model and the method} \label{model} The model consists of a single hypercubic cell in dimension $D$ with an Ising spin variable $S_i = \pm 1$ associated to each of its $2^D$ corners. Each spin interacts with its $D$ nearest neighbours through the Hamiltonian \begin{equation} H= - \sum_{\langle ij \rangle} J_{ij} S_i S_j , \end{equation} where $\langle ij \rangle$ denotes nearest neighbours and the $J_{ij}$ are chosen according to the following probability distribution: \begin{equation} P_J(J_{ij})=\frac{1}{2}\delta(J_{ij}-J)+\frac{1}{2}\delta(J_{ij}+J). \end{equation} Here we have taken $J=1/D^{1/2}$ to normalize extensive quantities. \\ The method consists of simulating the time evolution of the system through a heat bath Monte Carlo process. The spins are sequentially updated with the following rule: \begin{equation} S_i(t+1)= \left\{ \begin{array}{ll} +1 & \qquad \mbox{with probability} \quad \frac{1}{2}[1+\tanh(\beta h_i(t))] \\ & \\ -1 & \qquad \mbox{with probability} \quad \frac{1}{2}[1-\tanh(\beta h_i(t))] \\ \end{array} \right. \end{equation} where $\beta=1/T$ and $h_i(t)= \sum_{j \ne i}^N J_{ij} S_j(t)$ is the local field at site $i$ at time $t$.
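A minimal sketch of one damage-spreading sweep may help to fix the procedure (illustrative names: \texttt{J} is the coupling matrix of the cell with zero diagonal, \texttt{beta}$\,=1/T$, and the returned quantity is the Hamming distance $dh$ defined in the next paragraph; this is not the code used for our simulations):
\begin{verbatim}
import numpy as np

def heat_bath_sweep(SA, SB, J, beta, rng):
    # Update both replicas sequentially with the heat-bath rule, using
    # the SAME uniform random number for each spin (same thermal noise).
    N = len(SA)
    for i in range(N):
        u = rng.random()            # shared random number for spin i
        for S in (SA, SB):
            h = J[i] @ S            # local field h_i = sum_j J_ij S_j
            S[i] = 1 if u < 0.5*(1.0 + np.tanh(beta*h)) else -1
    # Hamming distance dh = (1/4N) sum_i (S_i^A - S_i^B)^2
    return 0.25*np.mean((SA - SB)**2)
\end{verbatim}
Iterating such sweeps and averaging over couplings, initial conditions and random number sequences yields the quantities analysed below.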
For a given disorder configuration $\{ J_{ij} \}$ we choose two different initial states $\{ S_i^A \}$ and $\{ S_i^B \}$ and let both evolve with the same thermal noise, i.e., by using the same random sequence. We then measure the time evolution of the Hamming distance or {\em damage} between them, defined as \begin{equation} dh(t)=\frac{1}{4N} \sum_{i=1}^N (S_i^A(t)-S_i^B(t))^2 . \end{equation} For each temperature we calculate the time average of the damage $\overline{\langle dh \rangle}$ over $10\,000$ Monte Carlo steps (MCS), defined by: \begin{equation} \overline{\langle dh \rangle} = \frac{\sum_{t} \langle dh \rangle(t) P(t)}{\sum_{t} P(t) } \; . \end{equation} Here $P(t)$ is the probability that the two replicas do not become identical at time $t$ \cite{derwei}. Note that, since we use the same random sequence for updating both replicas, if at any time $t$ they become identical (i.e., they meet in the phase space), they will remain identical for all subsequent times. This procedure was repeated $M$ times ($M$ depending on $D$ and $T$) in order to obtain a configurational average $\langle dh \rangle$ of the damage over different coupling constants, initial conditions and random number sequences. In the next section we will study the influence of the dimensionality $D$ and of the initial damage $dh(0)$ between the two replicas on the long time behavior of the Hamming distance. This will allow us to characterize different dynamical behaviors as a function of the temperature of the system and analyze their possible relationships with the thermodynamical phases. \section{Results} We start this section by describing the behavior of the model for dimension $D=8$. In Fig.\ 1 we show the Hamming distance as a function of temperature for three different initial damages, namely, $dh(0)=0.1$, $0.5$ and $1$. Observe that the system displays three different dynamical regimes: \begin{description} \item[a)] for low temperatures ($T < T_1^8$) we observe that $\langle dh \rangle $ is non null and its value depends on the initial damage (it increases as the initial damage increases); \item[b)] for intermediate temperatures ($T_{1}^{8} < T < T_{2}^{8}$) the system is characterized by a single value of $\langle dh \rangle$ independent of the initial damage; \item[c)] for high temperatures ($T > T_2^8$) the Hamming distance $\langle dh \rangle $ is always zero. \end{description} This behavior is similar to the one observed by Derrida and Weisbuch \cite{derwei} in the three-dimensional Edwards-Anderson model and differs from that observed in the Sherrington--Kirkpatrick model, as described in the introduction. As we will show below, the same qualitative behavior was also observed for $D=6$, $10$ and $15$. Next we characterize each phase by the temporal behavior of both $P$ and $\langle dh \rangle$. In Fig.\ 2 we show the behavior of $P(t)$ and $\langle dh \rangle (t)$ for $T=0.53$ (the low temperature phase), with $dh(0)=1$. After an initial exponential decay to a value close to $0.5$ (a similar behavior was observed by de Arcangelis {\em et al.} \cite{arca} for the EA model) the Hamming distance $\langle dh \rangle (t)$ grows slowly, while $P(t)$ decays slowly too. In Fig.\ 3 we show the same results in a double logarithmic plot, from which it follows that, after an initial transient of about 1000 MCS, both quantities vary with a power law behavior, $P(t) \approx t^{-\delta}$ (with $\delta\approx 0.258$) and $\langle dh \rangle(t) \approx t^{\gamma}$ (with $\gamma\approx 0.047$).
We have also found that these exponents depend on the temperature of the system, although a more careful analysis of such dependence should be done with better statistics and for different temperatures in order to confirm these results. This behavior can be understood in terms of the phase space structure of the system. If, as happens in the SK model, the phase space has valleys separated by a wide distribution of Hamming distances, then the replicas that are closer to each other become identical faster than those that are further apart. As time goes on, $\langle dh \rangle$ takes into account only those replicas that are far apart and, as a consequence, it grows. This indicates that bigger energy barriers separate valleys that are further apart. Since we are working with small systems, $N=256$, these barriers can be crossed within times as long as the ones we considered ($t=10\,000$). Next we make a similar study in the intermediate phase. In Fig.\ 4 we plot the curve $\ln{(-\ln{P(t)})}$ vs. $\ln{t}$ for $T=1.41$, which, for a wide range of values of $t$, can be very well fitted by a linear function, indicating a stretched exponential decay of $P(t)$. The Hamming distance presents a different behavior, since it remains constant as time flows and $P(t)$ decays. For long times $\langle dh \rangle$ displays big fluctuations, which appear as a consequence of the poor statistics (note that only a small number of replicas have survived for such long times). These results admit three different interpretations: \begin{itemize} \item the system has a phase space structure with multiple valleys but all of them equidistant; \item the system has only two valleys, like a ferromagnet; \item the phase space is almost flat as a function of the free energy, so the two replicas wander through a phase space (represented by a hypercube with $2^{2^D}$ corners) and do not find each other due to its high dimensionality. \end{itemize} Under the first two hypotheses, the faster decay of $P(t)$ indicates that the valleys are not mutually impenetrable. It is probable then that in this paramagnetic phase the phase space separates into regions (valleys) that are accessible to each other. Finally, in the high temperature phase ($T > T_2^8$) the damage vanishes within a few MCS for all the replicas and both $P(t)$ and $\langle dh \rangle$ decay exponentially. These results are very important since: \begin{itemize} \item $P(t)$ has a temporal behavior similar to the one found in \cite{Ogie}, indicating the possibility of a close relationship between the phases found with spreading of damage and those studied through the auto--correlation function $q(t)$; \item they show that the hypercell model in dimension $D=8$ is similar to the three- and four-dimensional EA model not only in its static properties (as studied by Parisi {\em et al.} \cite{pariru}) {\em but also} in its dynamical behavior. \end{itemize} The same detailed study was performed for $D=6$ and the same qualitative behavior was observed for all the quantities. In dimension $D=10$ a new dynamical behavior emerges. Considering the $\langle dh \rangle$ vs. $T$ plot presented in Fig.\ 5, we see that the system basically displays the same three regimes found in $D=8$. Nevertheless, a more detailed analysis of the time dependence of $P$ and $\langle dh \rangle$, shown in Fig.\ 6, reveals new features.
In both the intermediate and the low temperature phases $P(t)$ now equals 1 for all considered times ($t < 10\,000$) while $\langle dh \rangle$ keeps a constant value (after an initial fast exponential decay). The only difference resides in the dependence on the initial damage shown in Fig.\ 5. The difference between these phases can be better observed in Fig.\ 7, where we present the histograms of Hamming distances at $t=100$ for $T=0.35$ and $T=1.92$ respectively, with initial damage $dh(0)=1$. We verify that the low temperature phase still presents a wide distribution, indicating a complex structure such as the one described by replica symmetry breaking. On the other hand, in the intermediate regime the distribution is narrow, indicating a behavior that corresponds to one of the three hypotheses made for the $D=8$ case. It is worth mentioning that the histograms present the same qualitative behavior in all studied dimensions, indicating a drastic change in the phase space structure at the critical temperature $T_1$. The same analysis has been done for $D=15$ and in Fig.\ 8 we present $\langle dh \rangle$ vs. $T$ with the three usual phases. The temporal analysis displays the same behavior in the different phases. Finally, in Table~\ref{tabla1} we present the values of the critical temperatures $T_{1}^{D}$ and $T_{2}^{D}$ obtained for the different dimensions studied in this paper. Note that as $D$ increases, $T_1$ seems to approach, as expected, the value 1, which corresponds to the critical static temperature of the Sherrington--Kirkpatrick model. Unfortunately, as far as we know, the static critical temperatures of the spin glass--paramagnetic transition for finite $D$ have never been studied, so it is impossible to compare static and dynamical transition temperatures. If, as happens with all Ising-like spin models studied in the literature with heat bath dynamics, these temperatures coincide, we can then conclude that the convergence of this critical temperature $T_1$ is very slow. Concerning $T_2$, it also increases, but higher dimensions should be considered in order to extrapolate the $D\to \infty$ behavior. It is important here to stress that, at least for $D=15$, we have not found a dynamical behavior that resembles the one obtained in the study of the Sherrington--Kirkpatrick model, namely, a two-phase structure with the critical dynamical temperature in good agreement with the static one. In other words, for all the temperatures considered in this paper, we have shown that the system has a dynamical phase diagram similar to the one of the Edwards--Anderson model, i.e., we did not find an {\em upper critical dynamical dimension} above which the system displays a mean field behavior. \section{Conclusions} In this work we have applied the damage spreading technique to the hypercell Ising spin glass model in order to study its dynamical behavior and the influence of dimensionality. As was stressed in the introduction, previous studies had found different dynamical phase diagrams for the EA and the SK model. While the former presented three different regimes (suggesting a correlation with the temporal decay of the autocorrelation function), the latter presented a unique phase transition at a temperature compatible with the spin glass--paramagnet static transition.
Since the SK model is recovered as the $D\to \infty$ version of the EA model, we studied the effect of increasing the dimensionality on the dynamical behavior of the system in the hope of finding some critical dimension above which the system displays the mean field dynamical phase diagram. The phase diagram, for all dimensions studied, presents a three-phase structure similar to that obtained for the EA model with $D=3$ and $D=4$, namely, a low temperature phase that displays dependence on the initial damage, an intermediate phase where the damage spreads but its final value is independent of the initial damage, and a high temperature phase where the damage decays exponentially to zero. While the lower critical dynamical temperature seems to converge to the SK static temperature, for the upper critical temperature we were not able to extrapolate its behavior (we are probably far from an asymptotic regime). This means that, at least for $D=15$, we are still far from the SK regime. Further simulations with higher dimensions would be required, but the computation time needed exceeds our numerical capacity. When one considers the temporal behavior of the quantity $P(t)$ for different dimensions, some interesting conclusions can be extracted: \begin{itemize} \item There is a drastic change in the behavior of $P(t)$ for $D\le 8$ and $D\ge 10$. In the former case, $P(t)$ displays a decay similar to that observed for the auto-correlation function in the EA model \cite{Ogie}, characterizing three different phases: power law decay for $T < T_{1}^{D}$, stretched exponential decay for $T_{1}^{D} < T < T_{2}^{D}$ and exponential decay for $T > T_{2}^{D}$. In the latter case ($D \ge 10$), $P(t)$ is constant and equals 1 in the low and intermediate temperature regimes and decays exponentially in the high temperature phase ($T > T_{2}^{D}$). \item The detailed analysis of the histograms of Hamming distances reveals that the low temperature phase is characterized by a wide distribution, as expected for a multi-valley phase space structure, for all the dimensions considered. This structure resembles, at least qualitatively, the one found in the SK model. On the other hand, in the intermediate phases we always found narrow distributions of the Hamming distances. Note that in this regime the final distance is always non zero independently of the initial damage. This is also true for vanishingly small initial damages, meaning that in this phase the heat bath Monte Carlo dynamics is truly chaotic. \end{itemize} \acknowledgements We gratefully acknowledge D.A. Stariolo for fruitful discussions. \begin{figure} \label{fig1} \caption{$\langle dh \rangle$ vs Temperature for $D=8$ and for three different initial damages: $dh(0)=1$ (triangles), $dh(0)=0.5$ (squares) and $dh(0)=0.1$ (circles).} \end{figure} \begin{figure} \label{fig2} \caption{Temporal behavior of $P(t)$ and $\langle dh \rangle(t)$ for $D=8$ and $dh(0)=1$.
The average was calculated over 1000 different samples.} \end{figure} \begin{figure} \label{fig3} \caption{$\langle dh \rangle$ and $P(t)$ as a function of $t$ in a double logarithmic scale for $D=8$ and $T=0.52$ (in the low temperature phase).} \end{figure} \begin{figure} \label{fig4} \caption{$\ln{(-\ln{\langle dh\rangle})}$ and $\ln{(-\ln{P(t)})}$ for $D=8$ and $T=1.41$ (in the intermediate phase).} \end{figure} \begin{figure} \label{fig5} \caption{$\langle dh \rangle$ vs Temperature for $D=10$ and for three different initial damages: $dh(0)=1$ (triangles), $dh(0)=0.5$ (squares) and $dh(0)=0.1$ (circles).} \end{figure} \begin{figure} \label{fig6} \caption{$\langle dh \rangle$ vs. $t$ for $D=10$, $dh(0)=1$ and for $T=0.35$ (low temperature phase) and $T=1.92$ (intermediate phase).} \end{figure} \begin{figure} \label{fig7} \caption{Histogram of Hamming distances at $t=100$ with $dh(0)=1$ for $D=10$ and a) $T=0.35$ (in the low temperature phase) and b) $T=1.92$ (in the intermediate phase).} \end{figure} \begin{figure} \label{fig8} \caption{$\langle dh \rangle$ vs Temperature for $D=15$ and for two different initial damages: $dh(0)=1$ (circles), $dh(0)=0.5$ (squares).} \end{figure} \begin{table} \label{tabla1} \begin{center} \begin{tabular}{|l|l|l|} \hline $D$ & $T_{1}^{D}$ & $T_{2}^{D}$ \\ \hline $6$ & $0.65 \pm 0.04$ & $1.8 \pm 0.2$ \\ \hline $8$ & $0.66 \pm 0.04$ & $1.7 \pm 0.1$ \\ \hline $10$& $0.74 \pm 0.08$ & $2.0 \pm 0.1$ \\ \hline $15$& $0.79 \pm 0.01$ & $3.25 \pm 0.05$\\ \hline \end{tabular} \end{center} \end{table}
\section{Introduction} The Sherrington-Kirkpatrick (SK-) model \cite{SK} has been an extraordinarily stimulating source for analytical and numerical studies in the physics of frustrated magnetism and in numerous interdisciplinary applications. Many relevant properties of the SK-model have been explored since its discovery \cite{mezard,parisi,thomsenthorpe,talagrand,sommers}. Near the zero temperature limit and at $T=0$, the model displays rather simple features but also analytically still unresolved behavior, and therefore remains challenging from our point of view. Along a particular line of theoretical analysis, on which the present article builds, we searched for explicit analytical solutions deep in the ordered phase and particularly at zero temperature. The limit of zero temperature was recently described to accommodate two critical points related to the criticality of the hierarchical scheme of replica symmetry breaking (RSB) hosted by the SK-model \cite{prl2005,prl2007,mjs-ro-pre,ro-mjs-pre}. In order to reveal this type of criticality, the finite-step discrete RSB was utilized. Extremely high order calculations up to $200$ orders and Pad\'e-approximants allowed us to analyze the limit towards infinite breaking by means of high-precision numerical work, renormalization group ideas, and scaling theory \cite{prl2005,prl2007,mjs-ro-pre,ro-mjs-pre}. The scaling theory allowed us to resolve the puzzle of the noncommuting zero temperature limit and infinite-step limit of replica symmetry breaking \cite{ro-mjs-pre}. In this article we present new numerical calculations which led us to a surprisingly simple modeling of the distribution of internal magnetic fields. Pioneering work on this local magnetic field distribution by Thomsen et al.\ \cite{thomsenthorpe} presented various analytical limits and numerical studies (TAP theory, simulations, etcetera) for vector spin glasses (including SK of course) and for the relevant temperature range from zero to above the glass transition. By virtue of an exact mapping as derived by Perez-Castillo and Sherrington \cite{perez}, we can, for our convenience, evaluate the spectral function (or density of states) for the fermionic version of the SK-model (see below for definition) and directly employ its functional form in one-to-one correspondence as the field distribution of the standard SK-model. In the following section we recall a few necessary details of the fermionic SK-model (sometimes called ``ISG$_f$'' or ``fSK'') and of the fermionized SK-model as an exact representation of the standard SK-model. The formalism on how to deal with the Parisi scheme in the Grassmann field theories of these models has been presented in numerous earlier papers at length. Our main goal here is to report an astonishingly simple representation of the internal magnetic field distribution by the functional form of a single eigenstate of a weakly anharmonic quantum oscillator. This idea was guided by the known simple split-Gaussian replica-symmetric solution for the density of states $\rho(E)$ at $T=0$. The broad energy gap of this lowest approximation is filled by replica symmetry breaking. The resulting pseudogap reflects the criticality of RSB at $T=0$. Its power law must be compared with the Efros-Shklovskii pseudogap, and indeed we will add below remarks on the Coulomb glass pseudogap as derived by M\"uller and Pankov \cite{mp}.
Their intricate self-consistent scheme \cite{mp} is used to unravel the special role of the Coulomb interaction and of RSB for the pseudogap-creation at very low temperatures. The representation of the spectral density by means of 1D quantum oscillator wavefunctions was also motivated by the advantage that one can integrate analytically the Lehmann representation of the single fermion Greens function (term by term) for the fermionic SK-model. The paper is organized as follows: The main results on the functional mapping between the field-distribution $P(h)$, the density of states $\rho(E)$, and the first excited state of an unconventional quantum oscillator with nonanalytic shift and very weak anharmonicity are addressed in all sections. The numerical data, on which the partially-analytical proposal is based, are presented and analyzed in section 4. The analytical formulas, which were evaluated numerically, are presented in section \ref{dos-section}. The theoretical basis which led to these analytical equations was published earlier \cite{ro-amg} and employs the Grassmann-field and generating functionals so that the density of states can be extracted from the single-fermion propagator. Section 5 finally contains the comparison with the Coulomb glass. \section{The models under consideration, and their relationships} \subsection{The spin glass models} \label{models} Let us begin with the SK-model, its exact fermionic representation (fermionized SK-model), and the generic fermionic SK-model, sometimes called ISG$_f$ or $f$SK-model. These models agree for half-filling ($\mu=0$) and at $T=0$, and hence can be chosen for technical convenience. We took advantage of the grand-canonical apparatus of generating functionals in Grassmann field theory, which has been elaborated for the fermionized version. The fermionic SK-model, as the extreme localized limit of an itinerant spin glass model, contains certain aspects which can be compared with the Coulomb glass. The SK-model together with its grand-canonical fermionic representation (fermionization) and its fermionic variant, listed by the spin Hamiltonian ${\cal H}_{SK}$ and the fermionic partners ${\cal H}_{fSK}$ respectively, are given by \begin{eqnarray} {\cal H}_{SK}&=&-\sum\limits_{i,j}J_{ij}S_i\, S_j-\sum\limits_{i=1}^N H_i\, S_i\quad\quad{\rm with}\quad S_i=\pm1\\ {\cal H}_{fSK}&=&-\sum\limits_{i,j}J_{ij}\sigma_i\sigma_j-\sum\limits_{i=1}^N (H_i\,\sigma_i-\mu\, n_i)\quad\quad{\rm with}\quad \sigma_i=n_{_{i\uparrow}}-n_{_{i\downarrow}} \end{eqnarray} where $\sigma_i\equiv n_{i\uparrow}-n_{i\downarrow}$, $n_{i\lambda}\equiv a_{i\lambda}^{\dagger}a_{i\lambda}$ in terms of the fermionic operators obeying $\{a_{\alpha},a_{\gamma}^{\dagger}\}=\delta_{\alpha,\gamma}$. The spin interaction $J_{ij}$ is assumed to be range-free and Gaussian-distributed with zero mean (for simplicity) and variance $J^2/N$. The chemical potential $\mu$, assumed as nonrandom and homogeneous in this case, is chosen for the grand-canonical model Hamiltonian (2) in two ways as given by \begin{eqnarray} (2a) &&\mu=i\frac{\pi}{2}T \quad \text{ fermionized SK-model}\nonumber\\ (2b) &&\mu =0 \quad \text{ half-filled fermionic SK-model}\nonumber \end{eqnarray} The ingenious trick introduced by Popov and Fedotov \cite{popov-fedotov}, which allows fermionization for $\mu=\frac{i}{2}\pi T$, has been used before in many of our preceding papers (including its generalization to higher spin quantum numbers).
This mapping is exact in the sense that the thermodynamics of the original model and of its fermionized version are identical. The second model is also classical, since all operators commute; quantum time-dependence can only be observed when correlations of odd numbers of fermionic operators at different times are considered. The chemical potential signals that a grand-canonical ensemble is employed. For half-filling at $T=0$, realized by $\mu=0$, both models agree. The distribution of internal magnetic fields $P(h)$ \cite{thomsenthorpe} is defined for the SK-model by \begin{equation} P(h)=\frac{1}{N}\sum_i\langle \delta(h-\sum_j J_{ij} S_j)\rangle \end{equation} while the fermionic density of states of the fermionic SK-model is defined via the disorder-averaged single fermion Greens function by \begin{equation} \rho(E)=\sum_{\alpha} \langle\delta(E-E_{\alpha})\rangle= -\frac{1}{\pi}Im\,\langle\,G_{ii}^R(E)\,\rangle \end{equation} where $G^R$ represents the retarded and real-space-local fermionic Greens function. The bracket refers to the statistical- and disorder-average over the Gaussian-distributed $J_{ij}$. Standard many body formalism tells us how to obtain $G^R$ from the real-time-ordered $T=0$ Greens function $G_{ij}(t-t')=-\langle T_t \{a_i(t) a_j^{\dagger}(t')\} \rangle$. Their (Lehmann) spectral weight representations $G_{ii}^R(E)=\int\limits_{-\infty}^{\infty} du\, \frac{\rho(u)}{E+i\delta-u}$ and $G_{ii}(E)=\int\limits_{-\infty}^{\infty} du\, \frac{\rho(u)}{E+i\,sign(E)\,\delta-u}$ yield either a symmetric imaginary part $\rho(E)$ as spectral weight or the antisymmetric function $\rho^{(a)}(E)\equiv sign(E)\,\rho(E) =-\frac{1}{\pi}Im\{G(E)\}$ with $\rho^{(a)}(E)=-\rho^{(a)}(-E)$. We want to expand the antisymmetric function $\rho^{(a)}(E)$ in the complete orthonormal set of eigenfunctions of a 1D harmonic oscillator. We shall describe below in detail how many states are necessary to describe with increasing accuracy the deviation of the numerically evaluated anti-symmetrized DOS $\rho^{(a)}(E)$ from the (functional form of the) first excited state of the harmonic oscillator, $\psi_1(x)$. We add one subsection in order to motivate the analogies between the variables used in the DOS or $P(h)$ and in the oscillator models' wavefunctions. Finally we shall compare the density of states of the fermionic SK-model with that of the Coulomb glass. Pioneering work by Davies, Lee, and Rice \cite{davies} and by Gr\"unewald, Pohlmann, and W\"urtz \cite{wuertz} provided strong numerical evidence for the connection between the Efros-Shklovskii pseudogap and glassy order in Coulomb glasses, which are intended to model the deep localized regime of disordered electrons interacting by the Coulomb interaction. In these models, random chemical potentials were modeled by a constant probability distribution of finite width, while later on M\"uller and Pankov \cite{mp} used a Gaussian distribution to obtain an effective pseudo-spin model and analyzed the ordered phase in great detail. Our objective is to display both the similarities and the differences between the density of states of the fermionic SK-model and that of the Coulomb glass. We report a new calculation in the replica symmetric approximation including the unstable temperature-regime and $T=0$. We are then in a position to appreciate the different types of pseudogaps created by RSB. In 3D M\"uller and Pankov found that the DOS vanishes quadratically at the pseudogap-center, while in the SK-case it behaves linearly for all dimensions.
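As a concrete illustration of this expansion, the sketch below (illustrative names: \texttt{x} and \texttt{rho\_a} are assumed arrays holding the antisymmetrized numerical DOS on a grid; this is not the code used for our evaluation) computes the first overlap coefficients with the oscillator eigenfunctions by simple quadrature:
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite, factorial

def overlap_coefficients(x, rho_a, n_max=22):
    # c_n = int dx psi_n(x) rho_a(x), trapezoidal rule on the data grid
    c = np.empty(n_max)
    for n in range(n_max):
        norm = 1.0/np.sqrt(np.sqrt(np.pi)*factorial(n)*2.0**n)
        psi_n = norm*eval_hermite(n, x)*np.exp(-x**2/2.0)
        c[n] = np.trapz(psi_n*rho_a, x)
    return c
\end{verbatim}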
\subsection{The oscillator models} The one-dimensional harmonic oscillator is one of the most elementary models of quantum mechanics. Its eigenfunctions are usually written in terms of Hermite polynomials and Gaussian functions (while the second linearly independent solution of the differential equation, which is a confluent hypergeometric function, is excluded by boundary conditions). Using this complete orthonormal set of basis functions for representations of other models' solutions does not imply a priori any physical relationship with the harmonic oscillator. Hence in general one cannot expect to gain much from such a representation. The apparent resemblance between $P(h)$ of the $T=0$ SK-model (and hence of the density of states $\rho(E)$ of its half-filled fermionic model extension \cite{perez}) on one hand, and the modulus of the first excited state wavefunction $|\psi_1(x)|$ of a harmonic oscillator on the other hand, motivated us to look for the number of basis states needed to describe well the deviations. Another faint suspicion was fed by the replica-symmetric spin glass solution at $T=0$, which has a split Gaussian shape and is (partially) reminiscent of the oscillator ground state. Analyzing spin glass data in terms of such an expansion, we first found that fewer basis functions are needed than expected. Furthermore, for an excellent approximation (see below) it turned out that an unconventional shift was needed. This model can be described as \begin{equation} \label{osc-model} {\cal H}_{osc}=\frac{\hat{p}^2}{2m}+\frac12 m\omega^2 \left(|x|- x_0\right)^2+c_3 |x|^3+c_5\,|x|^5+... \end{equation} where $\hat{p}\equiv -i\hbar \frac{d}{dx}$. The introduction of the nonanalytic shift made it possible to match the DOS-data accurately by the first excited state alone. For simplicity we set again $\hbar=1$ and $m=1$. The energy of the oscillator in the $n$-th excited state is given by $$E_{osc}^{(n)}=\int\, dx\, \psi_n^*(x){\cal H}_{osc}\,\psi_n(x),$$ while the SK-energy $E_{SK}$ at $T=0$ is given by \begin{equation} \label{SK-energy} E_{SK}=F_{SK}(T=0)=\int\limits_{-\infty}^{0} dh\,h\, P(h) =\int\limits_{-\infty}^{\mu=0} dE\,E\,\rho(E) \end{equation} and using the results for $P(h)$ or $\rho(E)$ we confirm the recently derived high-precision value of the SK-energy at $T=0$ correctly up to $O(10^{-8})$. Anticipating the result, let us represent the SK-energy $E_{SK}$ as an integral over the quantum oscillator wavefunction by \begin{equation} E_{SK}=\int\limits_{-\infty}^0 dx\, x\,|\psi_1(x)| \end{equation} which is the direct translation of Eq.(\ref{SK-energy}). The SK-energy $E_{SK}$ has little to do with the oscillator energy in the first excited state mentioned above. We do not introduce the special quantum oscillator as a simpler 'replacement'-model for the SK spin glass, but for the time being rather focus on the mere demonstration that its first excited state reproduces the functional form of $P(h)$ or $\rho(E)$, which are two important quantities of the spin glass order at $T=0$. \subsection{Distinguishing differential equations of the $T=0$ SK-model from the Burgers equation} In order to motivate the corresponding sets of variables needed in the functional mapping of the DOS and $P(h)$ on one hand and the oscillator eigenfunctions on the other, we wish to return to the differential equations of the spin glasses at zero temperature.
It was observed that the Parisi scheme led to a recursive relation for the so-called exponent-correction-function, denoted by $expC$, in arbitrary finite order of RSB (discrete Parisi scheme). The recursion relation turned into a partial differential equation \cite{mjs-ro-pre,diss_mjs} in the so-called continuum limit. Here, for the purpose of comparing it with a Burgers equation, we apply a different simple transformation. It is sufficient to start from the original equation as given in Ref.~\cite{mjs-ro-pre} for fields $0<h\leq\infty$ (see also Ref.~\cite{diss_mjs} which includes $h=0$) \begin{equation} \frac{-1}{q'(a)}\partial_a expC(a,h)+\frac12 \left[\left(\partial_h^2+2\,a\,\frac{h}{|h|}\partial_h\right)expC(a,h) +a\left(\partial_h expC(a,h)\right)^2\right]=0, \end{equation} where the order function $q(a)$ is known to be monotonic and its derivative $q'(a)$ vanishes only in the limit $a\rightarrow\infty$. Applying the transformation \begin{equation} \label{Yeqofa} Y(a,h):=expC(a,h)+|h|-\frac12 \int\limits_0^a d\tilde{a}\,\tilde{a}\,q'(\tilde{a}) \end{equation} simplifies the PDE further. We had given a numerically satisfying analytic model function $q(a)$ for $T=0$ in Ref.~\cite{ro-mjs-pre}, which differs everywhere within $0\leq a\leq\infty$ by less than $O(10^{-3})$ from the exact solution. This function is monotonic and we may switch to the unique inverse function $a(q)$ and, even better, choose $\tau\equiv 1-q$ as a new independent variable, in order to remove the inconvenient sign. We call this second variable pseudo-time, since, by means of the transformation (\ref{Yeqofa}), the PDE now matches a diffusion equation with diffusion constant $\frac12$ and one nonlinear term. Apart from the inverse order function $a(\tau)$ it resembles a Burgers equation as discussed just below. \begin{equation} \label{Y-pde} \frac{\partial Y(\tau,h)}{\partial\,\tau}=\frac12 \left[\frac{\partial^2}{\partial h^2} \, Y(\tau,h)+a(\tau)\,\left(\frac{\partial\, Y(\tau,h)}{\partial h}\right)^2\right]\quad,\quad \tau\equiv 1-q \end{equation} Apparently we obtain a modified Burgers equation with a coefficient $a(\tau)$ which diverges for small argument like the inverse square root for $q\rightarrow 1$, hence $a(\tau)\sim \tau^{-1/2}$. In recent papers we concluded that $\{a=0,T=0\}$ and $\{a=\infty,T=0\}$ are two different critical points of RSB hosted by the SK-model. The latter limit belongs to $q\rightarrow1$, while the small-$a$ limit corresponds to $q\sim a\rightarrow0$ and $a(\tau=1)=0$. In order to complete our discussion above, we recall that the Burgers equation $\partial_t u(t,x)=D\, \partial_x^2 u(t,x)-u(t,x)\partial_x u(t,x)$ turns into $$\partial_t\, \phi(t,x)=D\,\partial^2_x\,\phi(t,x)-\lambda\,\left(\partial_x\phi(t,x)\right)^2$$ by means of $u=\partial_x \phi$ and $\partial_t\partial_x \phi=\partial_x\partial_t\phi$. It is well-known that the Burgers equation has a redundant nonlinearity, and that it can be transformed into a diffusion equation by means of the Cole-Hopf transformation \cite{debnath}. In the differential equation for the SK-model, Eq.(\ref{Y-pde}), however, the coupling function $a(\tau)$ of the nonlinear term prevents a linearization by means of the Cole-Hopf transformation. This may indicate that an exact analytical solution of the SK-model at $T=0$ is much harder than the exact KPZ-solution in one dimension \cite{calabrese,sasamoto}. This argument does not yet refer to the particular initial condition of the SK-model at $T=0$.
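To make the obstruction explicit, we add a short check in our notation (the Cole-Hopf-type ansatz below is chosen for illustration). For constant $a$, the substitution $Y(\tau,h)=\frac{1}{a}\ln w(\tau,h)$ reduces Eq.(\ref{Y-pde}) to the heat equation $\partial_\tau w=\frac12\partial_h^2 w$, since the nonlinearity cancels against the $(\partial_h w/w)^2$ part of the diffusion term. For $\tau$-dependent $a(\tau)$ the same ansatz $Y=\frac{1}{a(\tau)}\ln w$ instead yields
\begin{equation*}
\partial_\tau w=\frac12\,\partial_h^2 w+\frac{a'(\tau)}{a(\tau)}\, w\ln w\,,
\end{equation*}
so that a residual nonanalytic term $w\ln w$ survives and the equation is not linearized.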
\subsection{Randomly stirred SK-model differential equation versus KPZ-equation} It is natural to consider also a (non-thermal) noise perturbation of the $T=0$ SK differential equation. In the case of the pure Burgers equation this perturbation is called random stirring and results in the KPZ-equation, which describes the random growth of a surface height above a substrate during particle deposition. Let us perturb the SK-energy by a non-thermal $\delta$-correlated noise function $\eta(\tau,h)$ such that the characteristic differential equation (\ref{Y-pde}) takes a form comparable with the KPZ-equation \begin{eqnarray} &&\frac{\partial u(t,x)}{\partial t}=D \,\nabla_x^2 \, u(t,x)+\lambda\left(\nabla_x u(t,x)\right)^2+\eta(t,x)\\ &&\frac{\partial Y(\tau,h)}{\partial\,\tau}=\frac12 \left[\frac{\partial^2}{\partial h^2} \, Y(\tau,h)+a(\tau)\,\left(\frac{\partial\, Y(\tau,h)}{\partial h}\right)^2\right]+\eta(\tau,h) \end{eqnarray} The renormalization of the one-dimensional KPZ-equation yields the critical exponents exactly \cite{barabasi} and it is known that simple rescaling did not lead to those exponents, except for the Edwards-Wilkinson behaviour near the trivial fixed point. This fixed point is however unstable for one space dimension. A simple power counting analysis which compared different terms of the differential equation had no access to the behavior near the stable finite fixed point for the effective coupling \cite{barabasi}. An exact solution of the 1+1 dimensional KPZ-equation with flat initial conditions was presented by Calabrese and Le Doussal \cite{calabrese}. Let us reconsider the question whether the modified PDE, which describes the SK-model with randomly perturbed energy, may however allow power counting to be successful. Its apparent disadvantage, the divergent coefficient function $a(\tau)\sim 1/\sqrt{\tau}$ of the nonlinear term, may in fact render this possible. \subsubsection{Scaling regimes in the randomly-stirred SK-model differential equation} We have a good analytical model for the $T=0$ order function $q(a)$ for all $a$ and almost exact knowledge of the asymptotic behaviour close to the critical points at $a=0$ and $a=\infty$, as derived from a scaling-theoretical \cite{ro-mjs-pre} explanation of high-precision numerical data \cite{mjs-ro-pre}. This can be used to estimate the role played by the different parts of the differential equation. The diffusive part dominates the small-$a$ limit and the nonlinear term the large-$a$ limit, respectively. Under the assumption that the noise term does not essentially change the large-$a$ property $1-q\sim 1/a^2$ and hence the coefficient function $a(\tau)$, we can estimate by power counting the competition between terms of the differential equation and, in analogy with the treatment of the KPZ-equation \cite{barabasi}, eventually obtain power laws and even critical exponents.
Rescaling the pseudotime by $\tau\rightarrow b^z \tau$, the analog of the real-space variable by $h\rightarrow b\, h$, and the field $Y$ by $Y\rightarrow b^{\alpha} Y$ one gets for the small $\tau$-regime (large $a$-regime or $q(a)\sim 1-0.41/a^2$ in terms of the spin glass order function $q(a)$) $$b^{\alpha-z}\frac{\partial Y(\tau,h)}{\partial\,\tau}\cong b^{\alpha-2}\frac12 \frac{\partial^2}{\partial h^2} \, Y(\tau,h)+b^{2\alpha-z/2-2} \frac{const}{\sqrt{\tau}}\,\left(\frac{\partial\, Y(\tau,h)}{\partial h}\right)^2 +b^{-(z+1)/2}\eta(\tau,h)$$ In the large $a$-regime or small $\tau$-regime the nonlinear term dominates over the diffusive one and one gets \begin{equation} \alpha=\frac34\, ,\quad z=2\alpha+1=\frac52.\nonumber \end{equation} This simple power counting result does not replace a complete renormalization group calculation. Note that in this language of dimensionless pseudotimes $\tau=1-q$ the above scaling behaviour refers to a critical short-time limit. It describes the behaviour of the function $Y(q)$ or $expC(q)$ for $q$-values close to the diagonal of the Parisi-matrix. For the long $\tau$-regime, which refers to small $a$ in the original equation, one has $q(a)\sim a$ behaviour instead and the diffusive term dominates over the nonlinear one. For this regime one is left with a randomly stirred diffusive equation in leading order and the behaviour is the equivalent of Edwards-Wilkinson (EW) behaviour in the KPZ growth-model for one space dimension (and one time dimension). In this case one gets simply $z=2$ and $\alpha=\frac12$ as for the EW-limit of the KPZ-equation. \section{Evaluation of the zero temperature density of states} \label{dos-section} We consider the spectral function of the fermionic SK model, which has the virtue of being mappable to the internal field distribution $P(h)$ of the standard SK-model \cite{perez}. Thus we can make use of the functional identity $\rho(x)=P(x)$. The density of states function $\rho_{\kappa}(u)$ at a fixed order $\kappa$ of replica-symmetry breaking steps can be described by the hierarchical integrals below. They form the basis for the present numerical evaluation of $\rho_{\kappa}(E)$ for $\kappa=0,1,2,...,100$ and on a dense grid of energies $E$. An RSB solution for small orders $\kappa \leq 4$ can be found in \cite{ro-ds-prb}, which also considered arbitrary filling and an additional Hubbard interaction. The following formula for the DOS holds for arbitrary order $\kappa$. Its evaluation allows one to understand precisely the RSB-flow through the quasi-continuous regime at large $\kappa=O(10^2)$ towards $\kappa=\infty$. \begin{equation} \rho_{\kappa}(E,\chi_1) = \frac{1}{2 \pi \sqrt{q_1 - q_2}} \exp(a_1 (|E|-\chi_1)) I_{\kappa}(|E|,0)\theta(|E|-\chi_1) \end{equation} which involves $\kappa$ nested integrals $I_n(E,h)$ given by the recursion relation \begin{equation} I_{n}(E,h) = \int\limits_{-\infty}^{\infty} dh' \, g(h-h',q_{n+1} - q_{n+2}) \left(D_{n-1}(h')\right)^{\frac{a_{n+1}}{a_{n}} - 1} I_{n - 1}(E, h'). \label{rho} \end{equation} The initial condition is given by the free-propagator-like form \begin{equation} I_0(E,h) = \exp\left(-\frac{(E - \chi_1 - h)^2}{2(q_1 - q_2)}\right) \end{equation} with \begin{equation} g(x,y) = \frac{1}{\sqrt{2 y}} \exp\left(-\frac{1}{2} \frac{x^2}{y}\right). \end{equation} A second recursive structure is furthermore needed in Eq.~(\ref{rho}), which now concerns the nested integrals of type $D_{n}(x)$.
Their recursive structure is given by \begin{equation} D_{n}(x) = \int\limits_{-\infty}^{\infty}dh\, \frac{\exp\left(-\frac{(h-x)^2}{2(q_{n+1} - q_{n+2})}\right)}{\sqrt{q_{n+1}-q_{n+2}}} \left( D_{n-1}(h) \right)^{\frac{a_{n+1}}{a_{n}}} \end{equation} with the initial condition \begin{eqnarray} D_0(x) &=& \exp\left(-a_1 x + \frac{1}{2}a_1^2(q_1 - q_2)\right) \frac{1}{2}\left(1 - erf\left(\frac{x-a_1(q_1 - q_2)}{\sqrt{2(q_1 - q_2)}}\right)\right)\nonumber \\ &&\quad + \exp\left(a_1 x + \frac{1}{2}a_1^2(q_1 - q_2)\right) \frac{1}{2}\left(1 - erf\left(\frac{-x-a_1(q_1 - q_2)}{\sqrt{2(q_1 - q_2)}}\right)\right). \end{eqnarray} The parameter sets $\{a_i\}$, $\{q_i\}$, and $\chi_1$, which are required in this recursive set of equations, depend on the order $\kappa$. They are inferred from a previous calculation, which reported their values \cite{mjs-ro-pre} for all $\kappa$ up to the maximum order $200$. They were obtained by extremization of the SK free-energy, and their $T=0$ subset can be used here since the fermionic SK-model and the standard SK-model coincide for $T=0$ and half-filling ($\mu = 0$), as mentioned in the model section \ref{models}. In particular the non-equilibrium susceptibility $\chi_1$ vanishes for $\kappa\rightarrow\infty$ \cite{mjs-ro-pre}, which gives rise to the perfect pseudogap. \begin{figure}[h] \centering \resizebox{1.\textwidth}{!}{\includegraphics{Insert_Plot.eps}\hspace{-0.cm}\includegraphics{gap_fill.eps}} \caption{(Left panel:) The plot shows the numerical result $\rho_{100}(E)$ for the DOS at $\kappa = 100$ (blue, symmetric curve) compared with its partial sum $\sum_{n = 0}^{21} c_{100}^{(n)} \psi_{n}(E)$ of the harmonic oscillator decomposition and the first excited state $\tilde{\psi}_1(E)$ of the approximate Hamiltonian $H_{approx}$. In the lower right corner the difference $\Delta \rho(E) = \rho_{100}(E) - \rho_{approx}(E)$ is plotted for the two approximations. Note that both approximations are correct up to $\mathcal{O}(10^{-3})$ at each point. \\ (Right panel:) This example displays, for selected energies in the pseudogap regime, the flow of the density of states of the fermionic SK-model under increasing RSB-order $\kappa$. The approach of the fixed point DOS $\rho(E)\equiv \lim_{\kappa\rightarrow\infty}\rho_{\kappa}(E)$ with slope $0.3$ \cite{thomsenthorpe} is obvious (even from a subset of the available $\kappa$).} \end{figure} \section{Analysis of the numerical high order RSB data} Numerical results for the $T=0$ density of states have been evaluated for all orders $\kappa=1,2,\ldots,100$ of replica symmetry breaking. One observes a remarkably fast convergence, such that beyond $\kappa = 10$ any high-$\kappa$ numerical solution can effectively be treated as the $\kappa = \infty$ result. For the following calculations the data for $\kappa=100$ are used. The quality has been checked by comparing $\rho_{100}(E)$ with the fixed point function $\lim\limits_{\kappa\rightarrow\infty}\rho_{\kappa}(E)$ evaluated by means of Pad\'e series. The difference is negligibly small, rendering either choice equally good. Finding an analytic form of the density of states function is not only interesting but also useful for further calculations (e.g. the Green's function of the model). In the course of finding a good fit to the data, it turned out to be quite practical to antisymmetrize the data by hand, thus getting rid of the non-analytical point at the origin. Looking at this new data set, the similarity to the first excited state of the harmonic oscillator becomes rather striking.
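Before analyzing this similarity further, we note for reference that the hierarchical recursion of Section~\ref{dos-section} lends itself to a direct numerical implementation. The following minimal sketch (Python/NumPy) illustrates the nesting only; the small order $\kappa$, the quadrature grid, and in particular the Parisi data $\{a_i\}$, $\{q_i\}$, $\chi_1$ are placeholder assumptions, not the values of \cite{mjs-ro-pre} actually used for our figures, and the overall normalization is not tracked.

\begin{verbatim}
import numpy as np
from scipy.special import erf

# PLACEHOLDER Parisi data for a small illustrative order kappa.
# The true inputs for each kappa come from the free-energy
# extremization of [mjs-ro-pre]; they are NOT reproduced here.
kappa = 2
a = np.array([0.5, 1.0, 2.0])       # stands in for a_1, ..., a_{kappa+1}
q = np.array([1.0, 0.8, 0.5, 0.0])  # stands in for q_1, ..., q_{kappa+2}
chi1 = 0.05                         # stands in for chi_1

h = np.linspace(-8.0, 8.0, 1601)    # common grid for all h-integrations
dh = h[1] - h[0]

def D(n, x):
    """Nested integrals D_n evaluated on the array x (0-based indices)."""
    if n == 0:
        s = q[0] - q[1]
        return (np.exp(-a[0]*x + 0.5*a[0]**2*s)
                * 0.5*(1 - erf((x - a[0]*s)/np.sqrt(2*s)))
                + np.exp(a[0]*x + 0.5*a[0]**2*s)
                * 0.5*(1 - erf((-x - a[0]*s)/np.sqrt(2*s))))
    y = q[n] - q[n+1]               # corresponds to q_{n+1} - q_{n+2}
    kern = np.exp(-0.5*(x[:, None] - h[None, :])**2/y)/np.sqrt(y)
    return dh * kern.dot(D(n-1, h)**(a[n]/a[n-1]))

def I(n, E, x):
    """Nested integrals I_n(E, .) evaluated on the array x."""
    if n == 0:
        return np.exp(-(E - chi1 - x)**2/(2*(q[0] - q[1])))
    y = q[n] - q[n+1]
    g = np.exp(-0.5*(x[:, None] - h[None, :])**2/y)/np.sqrt(2*y)
    return dh * g.dot(D(n-1, h)**(a[n]/a[n-1] - 1)*I(n-1, E, h))

def rho(E):
    """DOS at order kappa (up to overall normalization)."""
    if abs(E) <= chi1:
        return 0.0                  # the factor theta(|E| - chi_1)
    pref = np.exp(a[0]*(abs(E) - chi1))/(2*np.pi*np.sqrt(q[0] - q[1]))
    return pref*I(kappa, abs(E), np.array([0.0]))[0]

print([rho(E) for E in (0.2, 0.5, 1.0)])
\end{verbatim}

For production runs up to $\kappa=100$ one would of course tabulate $D_n$ and $I_n$ level by level on a much finer grid instead of recomputing them recursively, but the structure is the same.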
However, the detailed analysis of this similarity shows that some corrections are necessary, as described in the next section. \subsection{Calculating the overlap of $\rho(E)$ with harmonic oscillator eigenfunctions} The eigenfunctions of the harmonic quantum oscillator, described by the Hamiltonian $\hat{H}_0 = \frac12 \hat{x}^2 + \frac12 \hat{p}^2$ in dimensionless variables and given by \begin{equation*} \psi_n(x) = \frac{1}{\sqrt{\sqrt{\pi}n! 2^n}} H_n(x) \exp\left(-\frac{x^2}{2}\right) \end{equation*} with the Hermite polynomials $H_n(x)$, form the basis for our calculations. Calculating the overlap coefficients \begin{equation*} c_{\kappa}^{(n)} = \int dx \, \psi_n(x) \rho_{\kappa}^{(a)}(x) \end{equation*} quantifies the deviation of the antisymmetrized function $\rho_{\kappa}^{(a)}(x)$ from $\psi_1(x)$. \subsection{Construction of an anharmonic Hamiltonian and functional mapping} It turns out that it is possible to perturb the harmonic oscillator in such a way that the functional form of the oscillator's first excited state coincides with the DOS (here $\rho_{\kappa=100}^{\text{(a)}}$). The perturbation used in our case is rather unusual, since it involves the absolute value $\vert\hat{x} \vert$ of the position operator. Since the overlap coefficients $c_{\kappa}^{(n)}$ go to zero rather fast, and since the computation of the change of all eigenfunctions due to the perturbation is not feasible, we restrict ourselves to the subset of the first $22$ eigenfunctions. This renders an algebraic treatment of the Hamiltonian as a $22\times 22$ matrix possible. We use the fact that eigenstates and eigenvalues of an anharmonic oscillator can be well approximated by diagonalization of the non-diagonal matrix representation $\langle \psi_k | \hat{H}_{p} | \psi_l \rangle$ in a subset of eigenstates $|\psi\rangle$ of the unperturbed harmonic oscillator. \begin{eqnarray} &&(\hat{H}_0)_{mn} = (n + \frac{1}{2}) \delta_{mn} \\ &&\vert \hat{x} \vert_{mn} = \int dx \, \psi_m(x) \vert x \vert \psi_n(x) \\ &&\hat{H}_{p} = \hat{H}_0 + \alpha \vert \hat{x} \vert + \beta \vert \hat{x} \vert^3 + \gamma \vert \hat{x} \vert^5 \end{eqnarray} In order to find the correct variational parameters $\alpha, \beta, \gamma$ and the associated eigenvalue $\lambda$ solving $\hat{H}_p \vec{c}=\lambda\vec{c}$, it is useful to define the function \begin{equation} l(\alpha,\beta,\gamma,\lambda) = \Vert \hat{H}_p[\alpha,\beta,\gamma] \vec{c} - \lambda \,\vec{c} \Vert \end{equation} which controls the numerical accuracy. The function is constructed in such a way that the correct parameters minimize it. \subsubsection{Anharmonic quantum oscillator eigenstate modeling the spectral function} The results for the leading parameters $\alpha,\,\beta,\,\gamma$ of the oscillator potential and for the energy eigenvalue of the first excited state are given by \begin{equation} \framebox{$\alpha = -0.899165,\quad \beta = -0.003268,\quad \gamma = 0.0000405,\quad \lambda = 0.364335$} \end{equation} with a residual of $l = 9.9\times 10^{-7}$ at the minimum. The minimum was found by employing the Minimize function implemented in Mathematica.
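For illustration, the matrix construction can be reproduced independently of Mathematica. The following minimal sketch (Python/NumPy; the quadrature grid is our own choice) assembles $\hat{H}_p$ in the truncated oscillator basis for the quoted parameters and diagonalizes it, skipping the variational search:

\begin{verbatim}
import math
import numpy as np
from numpy.polynomial.hermite import hermval

N = 22                               # number of retained basis states
x = np.linspace(-12, 12, 4001)       # quadrature grid (our choice)
dx = x[1] - x[0]

def psi(n):
    """n-th normalized eigenfunction of H_0 = (x^2 + p^2)/2 on the grid."""
    c = np.zeros(n + 1); c[n] = 1.0  # selects the Hermite polynomial H_n
    norm = 1.0/np.sqrt(np.sqrt(np.pi)*math.factorial(n)*2.0**n)
    return norm*hermval(x, c)*np.exp(-0.5*x**2)

basis = np.array([psi(n) for n in range(N)])

def matrix_of(f):
    """Matrix elements <psi_m| f(x) |psi_n> by quadrature."""
    return dx*(basis*f(x)) @ basis.T

H0 = np.diag(np.arange(N) + 0.5)
alpha, beta, gamma = -0.899165, -0.003268, 0.0000405
Hp = (H0 + alpha*matrix_of(np.abs)
         + beta*matrix_of(lambda t: np.abs(t)**3)
         + gamma*matrix_of(lambda t: np.abs(t)**5))

evals = np.linalg.eigvalsh(Hp)
# The first excited (odd-parity) level should come out close to the
# quoted lambda = 0.364335; evals[0] is the even-parity ground state.
print(evals[:3])
\end{verbatim}

Wrapping this construction in a numerical minimization of $l(\alpha,\beta,\gamma,\lambda)$ over the four parameters would reproduce the full variational procedure described above.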
While the focus is on the first excited state, Fig.~\ref{fig:pot} includes the three lowest states for the given potential parameters. \begin{figure}[h] \centering \resizebox{.7\textwidth}{!}{\includegraphics{pot.eps}} \caption{The three lowest eigenstates of the nonanalytic potential $V(x)=\frac12 x^2+\alpha |x|+ \beta |x|^3+ \gamma |x|^5$ are displayed together. The position of their base-line indicates the corresponding energy eigenvalue. The first excited eigenstate (odd parity) reproduces the density of states $\rho(x)\,sign(x)$ of the fermionic SK-model, and hence $P(h)\,sign(h)$ of the standard SK-model too, with full Parisi symmetry breaking at zero temperature.} \label{fig:pot} \end{figure} Even the anharmonic terms can be neglected: the perturbed Hamiltonian still produces a very good fit of the calculated DOS with slightly modified parameters, \begin{equation} \framebox{$\alpha = -0.9225,\quad\beta=\gamma=0,\quad \lambda = 0.3444$} \end{equation} and $l = 7.3\times 10^{-6}$. Thus, keeping only the linear and quadratic terms, one may rewrite the oscillator model with an unconventional quadratic potential shifted by a constant, but using $\vert \hat{x} \vert$ instead of the normal space operator $\hat{x}$. Alternatively, one may use the standard real-space operator together with an unconventional shift, which flips its sign at $x=0$. The potential function as included in Fig.~\ref{fig:pot} shows a double-well feature induced by the unusual shift; the difference between the anharmonic and the quadratic approximation is invisible on the given scale. The resulting approximate Hamiltonian assumes the form \begin{equation} \hat{H}_{\text{approx}} = \frac{1}{2} \left[ \hat{p}^2 + (\vert \hat{x} \vert - x_0)^2 \right]=\,-\frac{1}{2}\partial_x^2 + \frac12(x - x_0\, sign(x))^2 \end{equation} In order to control the finite cutoff imposed on the number of harmonic oscillator basis functions, the effective model with nonanalytic shift and small anharmonic terms, as derived above, can be plugged into a Schr\"odinger equation, which is then solved numerically. Neglecting the anharmonic terms of $O(|x|^3)$ because of their smallness but keeping the nonanalytic shift (hence the $|x|$-term), the approximate model Hamiltonian $\hat{H}_{\text{approx}}$ leads to the differential equation \begin{equation} - \partial_x^2 \psi(x) + (x^2 + 2 \alpha \lvert x \rvert - 2 \lambda) \psi(x) = 0 \end{equation} The numerical solution of this equation confirms the results of our large but finite matrix diagonalization approach. This means that not only the desired first excited state $\psi_1$ but also the other stationary states are well approximated. We used this as an independent control of our procedure. \section{A comparison with the mean field solution of the Coulomb glass} Since we focus in the present article on the characterization and classification of replica symmetry breaking at lowest temperatures including $T=0$, we wish to compare its role in two different host models by means of pseudogap features. There seems to be enough evidence that the host model which undergoes this type of symmetry breaking can have an influence. While the universal critical behavior of replica symmetry breaking at $T=0$ is primarily caused by its hierarchical structure, it can yet belong to different universality classes depending on the host model. Recent work on the so-called Coulomb glass (CbGl) offers an interesting comparison with the fermionic SK-type of RSB.
Both models can be viewed as describing the insulating phase of disordered itinerant fermionic systems, the source of glassiness being either a random chemical potential (CbGl) or a frustrated spin interaction, respectively. A selfconsistent structure was derived by M\"uller and Pankov for the Coulomb glass model \cite{mp}. Their starting point was the transformation to an effective spin glass model, justifying a mean field description by means of the long-range nature of the bare Coulomb interaction. The resulting model appeared as a type of SK-model with an effective $1/r$-dependent pseudo-spin interaction. This spatial dependence introduces a dependence on the dimension. An intriguing feature of the M\"uller-Pankov selfconsistent theory is the large depletion of the fermionic density of states already far above the glass transition, identified by the Almeida-Thouless temperature. The dip of $\rho(E)$ apparently becomes very pronounced already above $T_g\equiv T_{AT}$, and the authors have shown the changeover into the final Coulomb pseudogap by taking the full Parisi RSB into account. The similarity with the Efros-Shklovskii result was demonstrated. The evaluation close to $T=0$ indicated a quadratic decay of the DOS in 3D, in agreement with the ES-law. The authors also reported some results for the two-dimensional case. In order to see the genuine quantitative effect of replica symmetry breaking in the CbGl-model, we evaluated their self-consistent coupled equations {\it without} replica symmetry breaking also below $T_g$. As initially suspected, the difference between solutions with broken and unbroken symmetry becomes larger as the temperature decreases towards zero. At first sight, the pseudogap seemed to be shaped already above $T_g$ (closely resembling the curves of the fermionic SK-model in a certain range below its $T_c=T_g$, see Fig.\ref{RS-gap}), with only small corrections to be expected from symmetry breaking as $T$ decreases further. But we found that, within the replica-symmetric self-consistent structure, the depletion develops into a hard gap, which, under certain assumptions, can even be obtained analytically. The numerical results down to $T=10^{-3}$ and for moderate disorder are shown in Fig.\ref{RS-gap}. Using the same notation and units as \cite{mp}, the hard-gap half-width equals $h_0$, which is evaluated to be $h_0\cong 0.894306684$. This value could be given even more accurately, which is not needed here; the achievable precision reflects the fact that parts of the calculation can be carried out analytically rather than numerically. Thus, one might say that the pseudogap-like form, in part developed already above $T_g$, would be destroyed without RSB as $T\rightarrow 0$. Hence RSB is again responsible for the existence and shape of the pseudogap. This is one of the crucial points for a comparison with the fermionic SK-model: the replica symmetry breaking is fully responsible for the pseudogap in the low temperature limit. \begin{figure}[h] \resizebox{.45\textwidth}{!}{\includegraphics{hardgap-fSK.eps}} \resizebox{.45\textwidth}{!}{\includegraphics{cbgl-hardgap.eps}} \caption{Left panel: density of states (DOS) of the fermionic SK-model for temperatures above $T_c$ (red) and below (blue), including the exact analytical result at $T=0$ (dashed black), all without replica symmetry breaking. Right panel: for comparison, the analogous replica-symmetric (RS) re-calculation on the basis of the M\"uller-Pankov Coulomb glass (CbGl) equations is shown.
The depletion of the DOS almost looks like a pseudogap already at temperatures far above the Almeida-Thouless $T_g$ but, as the temperature decreases further towards zero, a hard gap is formed in the RS-approximation of the CbGl, as in the fermionic SK-model. The displayed curves are shown for temperatures above $T_g$ (red), at $T_g$ (green), and below (blue), down to $T=10^{-3}$. The true pseudogaps of the fermionic SK-model (see above) and of the Coulomb glass \cite{mp} obey different power laws and differ strongly from the RS-solutions shown.} \label{RS-gap} \end{figure} The second point is that the CbGl-pseudogap shows, according to M\"uller and Pankov \cite{mp}, a $d$-dependent power law, in particular a quadratic decay near the center of the pseudogap in 3D, while the SK-model shows linear behavior (for all $d$). Thus the host of the replica symmetry breaking can have an effect on its critical behavior, since, from our point of view, the small energy deviation from the gap-center belongs to the critical point $a=\infty$ of the order function $q(a)$, as derived recently in Ref.\cite{ro-mjs-pre}, and hence to small pseudo-times $\tau=1-q$. \section{Conclusions and remarks} The main purpose of the present article is to show that the internal magnetic field distribution and the density of states, as central quantities of SK spin glass physics, can be described by the functional form of the first excited state of a weakly anharmonic quantum oscillator with nonanalytic shift. This means that the harmonic part of the potential is of the form $(x-x_0\, sign(x))^2=(|x|-x_0)^2$ instead of $x^2$. Puzzled by the unexpected relationship between a spin glass and an oscillator with a non-analytical shift, we searched for other singular anomalous oscillator models. We became aware of recent work by Ritort on wedge-shaped oscillator potentials \cite{ritort}. He reported glassy behavior and aging effects in so-called generalized oscillator models (GOM), including the case of a pure and positive $|x|$-potential. In our independent study the $|x|$-term has the opposite sign and requires (at least) an $x^2$-term for stability. The nonanalytic linear term was required as an optimized shift to reproduce correctly the $\rho(E)$-data (and by analogy $P(h)$ as well) with a minimal number of oscillator basis functions. Let us also mention recent numerical work by Boettcher et al.\ \cite{boettcher} on small- and moderate-sized spin glass models (including SK) for the magnetic field distribution at $T=0$. The field distributions of the 3D Coulomb glass, and also those of XY- or Heisenberg spin glass models, show rather quadratic small-field behavior. This raises the question whether they too can be represented by oscillators, though not by one single excited state. For continuous dimensions $D$ one may also wonder how a $\rho(E)\sim (E-E_F)^{D-1}$ can be obtained. Perhaps fractional derivatives can generalize the differential equations such that these power laws can also be represented by oscillator models. These questions appear quite open and might lead to more insight into the relationship between spin glasses and oscillators. \section{Acknowledgements} We wish to thank the DFG for support under Op28/7-2. One of us (R.O.) is indebted to A. Crisanti, C. De Dominicis, and T. Sarlat for useful remarks, and for the hospitality extended to him at the CEA Saclay, where part of this work was initiated. We are also indebted to David Sherrington for his long-term interest in our research on low-$T$ and $T=0$ RSB, and for his constant emphasis on the importance of $P(h)$.
\section{Introduction} In this note, we are interested in the following quantitative unique continuation problem at infinity for some higher order elliptic operators with constant coefficients. Assume $u$ satisfies \begin{equation}\label{1.1} P(D)u+Vu=0, \quad\text{in}~~ \mathbb{R}^{n}, \end{equation} and \begin{align}\label{1.2} |V|\leq C, ~~|u|\leq C,~~ u(0)=1. \end{align} For large $R$, one can define $$ M(R)=\inf_{|x_0|=R}\sup_{B(x_0,1)}|u(x)| $$ to measure the precise decay of the solution at infinity; a natural question is then how small $M(R)$ can be. We first briefly recall the second order case, where a related problem was originally studied by Landis in the 1960's \cite{KL}. He conjectured that if \eqref{1.1} and \eqref{1.2} are satisfied for $P=\Delta$, and $|u(x)|\leq C \exp\{ -C|x|^{1+\epsilon}\}$ for some $\epsilon>0$, then $u$ is identically zero. This conjecture was disproved by Meshkov \cite{M}, who constructed non-trivial bounded, complex-valued functions $u,V$ satisfying \eqref{1.1} and $|u(x)|\lesssim e^{-C|x|^{\frac43}}$. In 2005, Bourgain and Kenig \cite{BK} derived a quantitative version of Meshkov's result in their resolution of Anderson localization for the Bernoulli model. More precisely, they showed that if \eqref{1.1} and \eqref{1.2} are satisfied for $P=\Delta$, then $M(R)\gtrsim \exp\{-CR^{\frac43}\log{R}\}$. This lower bound is sharp in view of Meshkov's example. Later this result was extended by Davey \cite{D} to the following general case \begin{equation}\nonumber \begin{gathered} -\Delta u+W\cdot\nabla u+Vu=\lambda u,\\ |V|\leq C\langle x\rangle^{-N},~~|W(x)|\leq C\langle x\rangle^{-P},~~ |u|\leq C, \end{gathered} \end{equation} for some $P, N>0$, where $\langle x\rangle=\sqrt{1+|x|^2}$. See \cite{LW} for generalizations to more general second order elliptic equations. Now we turn to the higher order case. Weak and strong unique continuation properties for higher order elliptic equations have been studied by many authors, see e.g. \cite{W}, \cite{CG}, \cite{CGT}, \cite{L} and references therein. However, it seems that quantitative results for higher order operators are quite scarce. In a recent paper, Zhu \cite{Z} obtained the vanishing order of solutions of polyharmonic equations by using the monotonicity property of a variant of the frequency function, whose application to strong unique continuation problems was first observed by Garofalo and Lin \cite{GL}. As a corollary, it was shown that for $P=(-\Delta)^m$, if $u$ is a solution to \eqref{1.1} with $n\ge 4m$, then $$ M(R)\gtrsim \exp\{-CR^{2m}\log{R}\}. $$ We shall show that the condition $n\ge 4m$ is not necessary and that the same exponent $\frac43$ is still valid for powers of the Laplacian. Instead of using the frequency function and Sobolev estimates, we improve this bound by noticing that an iteration of the Carleman estimates used in \cite{BK} allows us to follow Bourgain and Kenig's approach. Our first result is \begin{theorem}\label{thm1.1} Let $P=(-\Delta)^m$, and let $u$ satisfy \eqref{1.1} and \eqref{1.2}. Then $$ M(R)\gtrsim \exp\{-CR^{\frac43}\log{R}\}. $$ \end{theorem} Currently, we do not know whether the bound $\frac43$ here is also optimal (up to the logarithmic loss) for $(-\Delta)^m$, $m>1$. Nevertheless, in dimension 2, we are able to show that for any $\epsilon>0$ there exists a fourth order elliptic operator for which the lower bound can be improved to $\frac87+\epsilon$. Furthermore, we shall prove that this bound is essentially sharp (up to the $\epsilon$-power loss) by constructing a Meshkov type example.
\begin{theorem}\label{thm1.2} For any $\epsilon>0$, let $P=P_1P_2$ in $\mathbb{R}^2$, where $P_1=\Delta_{\mathbb{R}^2}$, $P_2=\partial_{x_1}^2+(1+\frac{\epsilon}{2})\partial_{x_2}^2$. Assume $u$ satisfies \eqref{1.1} and \eqref{1.2}. Then $$ M(R)\gtrsim \exp\{-CR^{\frac87+\epsilon}\log{R}\}. $$ Furthermore, there exist nontrivial bounded functions $u, V$ satisfying \eqref{1.1} and $$ |u(x)|\lesssim e^{-C|x|^{\frac87}}. $$ \end{theorem} \begin{remark}\label{rmk1.3} Although the operator $P$ above can be viewed as an ``$\epsilon$-perturbation'' of $\Delta^2$ in dimension 2, it seems that the order $\frac87$ cannot be derived for $\Delta^2$ itself in this way, since we shall see in Section \ref{sec3} (see Example \ref{exa3.2}) that no weight function satisfies the strong pseudoconvexity condition with respect to $\Delta^2$. \end{remark} The paper is organized as follows. In Section \ref{sec2}, we prove Theorem \ref{thm1.1}; in addition to the Carleman estimates, we also need an interior regularity lemma to deal with the lower order terms. Section \ref{sec3} is devoted to the proof of Theorem~\ref{thm1.2}, using a method similar to \cite{CGT}, which concerns pseudo-convex weight functions (with respect to $P$). Throughout the paper, $C$ and $C_j$ denote absolute positive constants whose dependence will be specified whenever necessary. The value of $C$ may vary from line to line. \section{Proof of Theorem \ref{thm1.1}}\label{sec2} We start with the following Carleman type inequality. \begin{lemma}\label{lem2.1} There are constants $C_1, C_2, C_3$ and an increasing function $\omega=\omega(r)$ for $0<r<10$, such that $$ \frac{1}{C_{1}}\leq \frac{\omega(r)}{r}\leq C_1, $$ and for all $f\in C_{0}^{\infty}(B(0,10)\setminus\{0\})$, $\tau>C_2$, we have \begin{equation}\label{equ2.1} \tau^{3m}\int{\omega(|x|)^{-1-2\tau}|f|^2\, dx}\leq C_3\int{\omega(|x|)^{3m-1-2\tau}|\Delta^m f|^2\, dx}. \end{equation} \end{lemma} \begin{proof} In the case $m=1$, the result is due to Lemma 3.15 in \cite{BK}, while the general case can be deduced by applying this estimate $m$ times and noting that $\tau^{3m}\lesssim \prod_{j=0}^{m-1}(\tau-\frac32 j)^3$. \end{proof} In order to prove Theorem \ref{thm1.1}, we shall also need the following interior regularity property of elliptic operators, which can be thought of as the $L^{\infty}$ version of Theorem 17.1.3 in \cite{H}. \begin{lemma}\label{lem2.2} Assume $P(D)$ is homogeneous and elliptic of order $2m$. Let $X$ be an open set containing $0$, and denote by $d(x)$ the distance from $x\in X$ to $\complement{X}$, the complement of $X$. If $P(D)u\in L^{\infty}$ and $u\in L^{\infty}$, then it follows that for $|\alpha|<2m$, \begin{equation}\label{equ2.2} \|d(x)^{|\alpha|}D^{\alpha}u\|_{L^{\infty}(X)}\leq C(\|d(x)^{2m}P(D)u\|_{L^{\infty}(X)}+\|u\|_{L^{\infty}(X)}). \end{equation} \end{lemma} \begin{proof} The proof is essentially similar to that of Theorem 17.1.3 in \cite{H}; we sketch it here for the sake of completeness. First, we claim that for any $A>0$ and $|\alpha|<2m$, $\frac{A^{|\alpha|}\xi^{\alpha}}{1+P(A\xi)}$ is an $L^1$ multiplier with bound independent of $A$, hence an $L^{\infty}$ multiplier by duality. In fact, by scaling it suffices to consider $A=1$; furthermore, we note that for $|\alpha|<2m$, $$ |D^{\beta}(\frac{\xi^{\alpha}}{1+P(\xi)})|\leq C_\beta (1+|\xi|)^{-1-|\beta|}, $$ and thus the claim follows from Bernstein's theorem (see e.g.\cite{G}).
So we have the estimate $$ \sum_{|\alpha|<2m}{A^{2m-|\alpha|}\|D^{\alpha}u\|_{L^{\infty}}}\leq C(\|P(D)u\|_{L^{\infty}}+A^{2m}\|u\|_{L^{\infty}}), \quad A>0 . $$ Then we can proceed as in Theorem 17.1.3 of \cite{H} with minor changes. Applying the above estimate to $v=u\cdot\chi(\frac{x-y}{R})$, where $y\in X$ with $d(y)\ge 2R$ and $\chi\in C_0^{\infty}(B(0,1))$ equals 1 on $B(0,\frac12)$, and expanding $P(D)v$ by Leibniz' formula with $A=M/R$, gives \begin{align*} \sum_{|\alpha|<2m}{M^{2m-|\alpha|}}R^{|\alpha|}\sup_{B(y, \frac{R}{2})}|D^{\alpha}u|&\leq C(R^{2m}\sup_{B(y, R)}|P(D)u|+\sum_{|\alpha|<2m}R^{|\alpha|}\sup_{B(y, R)}|D^{\alpha}u|\\ &+M^{2m}\sup_{B(y, R)}|u|), \end{align*} where $M$ is some large constant. With $R_0$ to be chosen later, we define $$ R(y)=\min\{R_0, \frac{d(y)}{2}\}. $$ Since $|R(x)-R(y)|\leq\frac{|x-y|}{2}$, we obtain, with a new constant independent of $R_0$, \begin{align}\label{equ2.22} \sum_{|\alpha|<2m}{M^{2m-|\alpha|}}&\sup_{B(y, \frac{R(y)}{2})}R(x)^{|\alpha|}|D^{\alpha}u|\leq C(\sup_{B(y, R(y))}R(x)^{2m}|P(D)u|\nonumber \\ &+\sum_{|\alpha|<2m}\sup_{B(y, R(y))}R(x)^{|\alpha|}|D^{\alpha}u|+M^{2m}\sup_{B(y, R(y))}|u|). \end{align} Now we take the sup norm with respect to $y\in X$ and absorb the second term on the right-hand side of \eqref{equ2.22} into the left-hand side, which gives \eqref{equ2.2}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.1}] Define $u_1(x)=u(ARx+x_0)$ with some small but fixed constant $A$ to be specified later. Since $u$ satisfies \eqref{1.1}, we have \begin{equation}\label{equ2.3} |u_1|\leq C, \quad |\Delta^{m}u_1|\leq C(AR)^{2m}|u_1|. \end{equation} Assume, as we may, that $\max_{|x|=\frac1A}|u_1(x)|=1$. First, we have for $|\alpha|<2m$, \begin{equation}\label{equ2.4} |D^{\alpha}u_1|\leq C(AR)^{|\alpha|}. \end{equation} Now choose a bump function $\zeta\in C_0^{\infty}(\frac{1}{4R}<|x|<4)$, with $\zeta=1$ if $\frac{1}{3R}\leq|x|\leq 3$, such that the following estimates hold: \begin{equation}\label{equ2.5} \begin{cases} |D^{\alpha}\zeta|\leq CR^{|\alpha|}, \quad \text{if}\quad |x|\leq \frac{1}{3R},\\[4pt] |D^{\alpha}\zeta|\leq C_{\alpha}, \quad \text{if} \quad |x|\ge 3. \end{cases} \end{equation} Applying \eqref{equ2.1} to $f=u_1\zeta$ gives \begin{equation}\label{equ2.6} \begin{aligned} \tau^{3m}\int{\omega^{-1-2\tau}\zeta^2|u_1|^2}&\leq C\int{\omega^{3m-1-2\tau}\zeta^2|\Delta^m u_1|^2}\\ &+\{\int_{\frac{1}{4R}\leq |x|\leq\frac{1}{3R}} +\int_{3\leq |x|\leq 4}\}\omega^{3m-1-2\tau}\sum_{|\alpha|<2m}{|D^{2m-\alpha}\zeta|^2|D^{\alpha}u_1|^2}\\ &\triangleq I_1+I_2. \end{aligned} \end{equation} By \eqref{equ2.3}, we have $$ I_1\leq C(AR)^{4m}\int{\omega^{-1-2\tau}\zeta^2|u_1|^2}. $$ Choosing $$ \tau\sim R^{\frac43}, $$ so that $\tau^{3m}\geq 2C(AR)^{4m}$, we can absorb the term $I_1$ into the left hand side of \eqref{equ2.6}. To deal with the term $I_2$, we note that by \eqref{equ2.4} and \eqref{equ2.5}, one has $$ \int_{\frac{1}{4R}\leq |x|\leq\frac{1}{3R}}{\omega^{3m-1-2\tau}\sum_{|\alpha|<2m}{|D^{2m-\alpha}\zeta|^2|D^{\alpha}u_1|^2}}\leq CR^{2\tau+m+1}\max_{|x|\leq \frac{1}{3R}}\sum_{|\alpha|<2m}{|D^{\alpha}u_1|^2}, $$ and $$ \int_{3\leq |x|\leq 4}{\omega^{3m-1-2\tau}\sum_{|\alpha|<2m}{|D^{2m-\alpha}\zeta|^2|D^{\alpha}u_1|^2}}\leq C(AR)^{4m-2}\omega(3)^{3m-1-2\tau}.
$$ Therefore \begin{equation}\label{equ2.7} \begin{aligned} \frac{\tau^{3m}}{2}\int{\omega^{-1-2\tau}\zeta^2|u_1|^2}&\leq C\{R^{2\tau+m+1}\max_{|x|\leq \frac{1}{3R}}\sum_{|\alpha|<2m}{|D^{\alpha}u_1|^2}\\ &+(AR)^{4m-2}\omega(3)^{3m-1-2\tau}\}. \end{aligned} \end{equation} Let now $u_1(a)=1$ for some $a\in \mathbb{R}^n$ with $|a|=\frac1A$; thanks to \eqref{equ2.4}, one has \begin{align}\nonumber |u_1(x)|\ge \frac12, \quad \text{if}\quad |x-a|\leq \frac{1}{CAR}. \end{align} As in \cite{BK}, we can choose $A$ such that the last term in \eqref{equ2.7} can also be absorbed into the left hand side of \eqref{equ2.7}. Now, applying Lemma \ref{lem2.2} with $X=B(0,\frac{1}{R})$ and using \eqref{equ2.3}, we obtain \begin{align}\nonumber R^{-CR^{\frac43}} &\leq \max_{|x|\leq \frac{1}{3R}}\sum_{|\alpha|<2m}{|D^{\alpha}u_1|^2}\\ &\leq C\Big(\sum_{|\alpha|<2m}\big(R^{|\alpha|-2m}\max_{|x|\leq \frac{1}{R}}|\Delta^m u_1|+R^{|\alpha|}\max_{|x|\leq \frac{1}{R}}|u_1|\big)\Big)^2\nonumber\\ &\leq C\big(R^{2m-1}\max_{|x|\leq \frac{1}{R}}|u_1|\big)^2,\nonumber \end{align} which proves the theorem. \end{proof} \begin{remark}\label{rmk2.3} In \cite{M}, Meshkov showed that if $u\in H_{2}^{loc}(\Omega_{\rho})$, where $\Omega_{\rho}=\mathbb{R}^n\setminus B(0, \rho)$, satisfies $\Delta u-Vu=0$ for some bounded potential $V$, and if for any $\tau>0$, \begin{equation}\label{equ2.9} \int_{\Omega_{\rho}}|u|^2\exp\{2\tau|x|^{\frac43}\}<\infty, \end{equation} then $u\equiv 0$. We note that this result can also be generalized to the case $(-\Delta)^m$ by assuming $u\in H_{2m}^{loc}(\Omega_{\rho})$ and the above growth condition \eqref{equ2.9}. Indeed, on the one hand, the Carleman estimates $$ \tau^{3m}\int{|v|^2r\exp\{2\tau r^{\frac43}\}\, drd\omega}\leq C\int{|\Delta^m v|^2r\exp\{2\tau r^{\frac43}\}\, drd\omega} $$ can easily be deduced from Lemma 1 in \cite{M}\footnote[1]{We thank Jiuyi Zhu for pointing this out to us.}; on the other hand, the condition \eqref{equ2.9} allows us to obtain weighted interior $L^2$ regularity estimates in each annulus, since the weight $e^{2\tau|x|^{\frac43}}$ is bounded both from below and above in such an annulus, and we can sum over the annuli to get a global estimate (with a different $\tau$). \end{remark} \begin{remark}\label{rmk2.4} It seems that the example constructed in \cite{M} is not enough to show sharpness for powers of the Laplacian, though the construction indicates that in dimension 2 there exists a nontrivial solution $u$ of the equation $\Delta^2 u + Vu=0$ with some bounded $V$ such that $|u(x)|\leq C \exp\{-c|x|^{\frac87}\}$; see Section \ref{sec3} below for the case of ``perturbations'' of $\Delta^2$. \end{remark} \section{Proof of Theorem \ref{thm1.2}}\label{sec3} First, we recall the following notion of pseudo-convex weight functions. \begin{definition}\label{def3.1} Let $P$ be principally normal in $X\subset\mathbb{R}^n$, with principal symbol $p$. A $C^2$ function $\varphi$ is called strongly pseudo-convex with respect to $P$ at $x_0$ if \begin{align*} \Re\{\bar{p}, \{p, \varphi\}\}(x_0, \xi)>0,\quad \text{whenever} \, p(x_0,\xi)=0, \, \xi\in \mathbb{R}^n\setminus \{0\}, \end{align*} and \begin{align*} \{\bar{p}(x,\xi-i\tau\nabla\varphi), p(x,\xi+i\tau\nabla\varphi)\}/2i\tau >0\quad \text{on} \, \{ p(x,\xi+i\tau\nabla\varphi)=0,\, \tau>0,\\ (\xi, \tau)\neq 0\}, \end{align*} where $\{p, q \}=\sum{(\frac{\partial p}{\partial \xi_j}\frac{\partial q}{\partial x_j}-\frac{\partial q}{\partial \xi_j}\frac{\partial p}{\partial x_j})}$ is the Poisson bracket of $p$ and $q$.
\end{definition} In particular, if $P$ is elliptic, then $\varphi$ is strongly pseudo-convex with respect to $P$ if \begin{equation}\label{equ3.1} \{\Re{p(x,\xi+i\tau\nabla\varphi)}, \Im{p(x,\xi+i\tau\nabla\varphi)}\}>0 \quad \text{on}~~ p(x,\xi+i\tau\nabla\varphi)=0. \end{equation} \begin{example}\label{exa3.2} (i) Consider $P=-\Delta$. It is easy to see that $\varphi$ is strongly pseudo-convex with respect to $-\Delta$ if and only if $$ (\xi, H(\varphi)\xi)+\tau^2(\nabla\varphi, H(\varphi)\nabla\varphi)>0 $$ on the set defined by \begin{align}\label{equ3.2} \begin{cases} |\xi|^2=\tau^2|\nabla\varphi|^2\\[4pt] \xi\cdot\nabla\varphi=0 \end{cases} \end{align} where $(\cdot,\cdot)$ is the standard inner product in Euclidean space, and $H(\varphi)$ is the Hessian of $\varphi$. Let $\varphi_1=-\ln{|x|}-\int_{0}^{|x|}{\frac{e^{-t}-1}{t} dt}$ and assume $0\notin X$; this is the weight (singular at the origin) used in Section \ref{sec2} (see \cite{BK}). In this case $$ H(\varphi_1)=-\frac{e^{-|x|}}{|x|^2}Id+ \left(\frac{e^{-|x|}}{|x|^3}+\frac{2e^{-|x|}}{|x|^4}\right)x\cdot x^t, $$ where $Id$ is the identity matrix; thus on the set defined by \eqref{equ3.2}, one has \begin{align*} (\xi, H(\varphi_1)\xi)+\tau^2(\nabla\varphi_1, H(\varphi_1)\nabla\varphi_1)=\tau^2\frac{e^{-3|x|}}{|x|^3}>0, \end{align*} which implies that $\varphi_1$ is strongly pseudo-convex with respect to $-\Delta$ in $X$. We note also that other strongly pseudo-convex (singular) weight functions include $\varphi_2(x)=(\ln{|x|})^2$ and $\varphi_3(x)=-\ln(|x|+\lambda|x|^2)$, where $\lambda>1$. These weight functions are very useful in obtaining strong unique continuation theorems for second order elliptic operators with principal part $\Delta$, see e.g. \cite{H83}, \cite{R}, \cite{L}. (ii) $P=(-\Delta)^m$, $m>1$. This is quite different from the case $m=1$: no function satisfies the convexity condition \eqref{equ3.1}. In fact, denote $p_m=|\xi|^{2m}$. If such a function existed, then \begin{align*} &\{\bar{p}_m(\xi-i\tau\nabla\varphi), p_m(\xi+i\tau\nabla\varphi)\}/2i\tau\\ &=\{(|\xi|^2-\tau^2|\nabla\varphi|^2-2i\tau\xi\cdot\nabla\varphi)^m, (|\xi|^2-\tau^2|\nabla\varphi|^2+2i\tau\xi\cdot\nabla\varphi)^m\}\\ &=m^2[(|\xi|^2-\tau^2|\nabla\varphi|^2)^2+4\tau^2(\xi\cdot\nabla\varphi)^2]^{m-1}\{\bar{p}_1(\xi-i\tau\nabla\varphi), p_1(\xi+i\tau\nabla\varphi)\}/2i\tau\\ &\equiv 0 \end{align*} on $\{p_m(\xi+i\tau\nabla\varphi)=0\}$, i.e., on the set defined by \eqref{equ3.2}, contradicting the strict inequality in \eqref{equ3.1}. \end{example} Now we are in a position to prove Theorem \ref{thm1.2}; the key point is the following Carleman estimate. \begin{lemma}\label{lemma3.3} Let $\varphi(r)=r^{-\alpha}$, $r=|x|$, $P=P_1P_2$, where $P_1=\Delta_{\mathbb{R}^2}$, $P_2=\partial_{x_1}^2+b\partial_{x_2}^2$, $b>0, b\neq 1$. Further suppose $$ \alpha>\max\{\frac{1}{b}-1, b-1\}. $$ Then we have \begin{equation}\label{equ3.3} \tau^{-1}\|(r|\nabla\varphi|)^{-\frac12}e^{\tau\varphi(r)}u\|_{4, \tau}^2\leq C\|e^{\tau\varphi(r)}Pu\|_{L^2}^2, \quad u\in C_0^{\infty}(B(0, 10)\setminus \{0\}), \end{equation} where $\|v\|_{4, \tau}^2=\|\partial^4v\|_{L^2}^2+\||\tau\nabla\varphi|^4v\|_{L^2}^2$, and $C$ is some positive constant that does not depend on $\tau$. \end{lemma} \begin{proof} We first note that it suffices to establish \eqref{equ3.3} for $u\in C_0^{\infty}(\frac12<|x|<1)$; if this is done, the same scaling arguments as in \cite{CGT} imply \eqref{equ3.3}.
To this end, we shall prove that $\varphi$ satisfies the following form of the strong pseudo-convexity condition: \begin{equation}\label{equ3.4} \{\Re{p_{\tau}}, \Im{p_{\tau}}\}\geq \frac{C}{r}(|\xi|+\tau|\nabla\varphi|)^7 \quad \text{on} ~~ p_{\tau}=0, \end{equation} where $p(\xi)=(\xi_1^2+\xi_2^2)(\xi_1^2+b\xi_2^2)$ and $p_{\tau}=p(\xi+i\tau\nabla\varphi)$. In fact, we notice that the following identity holds \begin{align*} \{\Re{p_{\tau}}, \Im{p_{\tau}}\}=\{\Re{p_{1, \tau}}, \Im{p_{1, \tau}}\}|p_{2, \tau}|^2+\{\Re{p_{2, \tau}}, \Im{p_{2, \tau}}\}|p_{1, \tau}|^2, \quad \text{on } ~~p_{\tau}=0. \end{align*} The condition $b\neq 1$ implies that $Char P_{1, \tau}\bigcap Char P_{2, \tau}=\emptyset$. We note that by homogeneity one can set $\tau=1$. Without loss of generality, we may assume $p_{2, \tau}=0$, that is, \begin{equation}\label{equ3.5} \begin{gathered} \xi_1^2+b\xi_2^2=(\partial_{x_1}\varphi)^2+b(\partial_{x_2}\varphi)^2,\\ \xi_1\partial_{x_1}\varphi+b\xi_2\partial_{x_2}\varphi=0, \end{gathered} \end{equation} which implies that $$ \xi_1^2+(\partial_{x_1}\varphi)^2=b(\xi_2^2+(\partial_{x_2}\varphi)^2), $$ so \begin{align}\label{equ3.6} |p_{1, \tau}|^{2} &=(|\xi|^2-|\nabla\varphi|^2)^2+4(\xi\cdot \nabla\varphi)^2\nonumber \\ &=(b-1)^2(\xi_2^2+(\partial_{x_2}\varphi)^2)^2 \nonumber\\ &\geq C_b(|\xi|+|\nabla\varphi|)^4. \end{align} On the other hand, if we denote by $$ \xi_b=(\xi_1,~ b\cdot\xi_2)^{T}, \quad \nabla\varphi_b=(\partial_{x_1}\varphi,~ b\cdot\partial_{x_2}\varphi)^{T}, $$ one has \begin{align*} \{\Re{p_{2, \tau}}, \Im{p_{2, \tau}}\}&=-2\alpha|x|^{-\alpha-2}(|\xi_b|^2+|\nabla\varphi_b|^2)\\ &+2\alpha(\alpha+2)|x|^{-\alpha-4}[(x\cdot\xi_b)^2+(x\cdot\nabla\varphi_b)^2]. \end{align*} Thanks to \eqref{equ3.5}, we have $x\cdot\xi_b=0$ and $|\xi_b|^2=b\alpha^2|x|^{-2\alpha-2}$; thus we obtain \begin{align*} \{\Re{p_{2, \tau}}, \Im{p_{2, \tau}}\}&=2\alpha^3|x|^{-3\alpha-4}[(\alpha+2)(\frac{x_1^2+bx_2^2}{|x|^2})^2-b-\frac{x_1^2+b^2x_2^2}{|x|^2}], \end{align*} and it follows from our assumption on $\alpha$ that there exists a positive constant $c$ (depending on $b$) such that $$ (\alpha+2)(\frac{x_1^2+bx_2^2}{|x|^2})^2-b-\frac{x_1^2+b^2x_2^2}{|x|^2}>c>0. $$ Note also that on $p_{2, \tau}=0$ one has the relation $|\xi|\sim |\nabla\varphi|$; hence \begin{equation}\label{equ3.65} \{\Re{p_{2, \tau}}, \Im{p_{2, \tau}}\}\geq C\frac{(|\xi|+|\nabla\varphi|)^3 }{|x|}. \end{equation} Combining \eqref{equ3.6} and \eqref{equ3.65}, we obtain \eqref{equ3.4}, and the desired Carleman estimate follows by standard arguments (see e.g. \cite{CGT}, \cite{H}). \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.2}] For any $0<\epsilon<1$, set $b=1+\frac{\epsilon}{2}$, $\varphi(r)=r^{-\epsilon}$. Applying Lemma \ref{lemma3.3} to $P=(\partial_{x_1}^2+b\partial_{x_2}^2)\Delta$, one has $$ \tau^7\int{|\nabla\varphi|^7|x|^{-1}e^{2\tau\varphi}|u|^2}\leq C\int{e^{2\tau\varphi}|Pu|^2}, \quad u\in C_0^{\infty}(B(0, 10)\setminus \{0\}), $$ which plays the same role as Lemma \ref{lem2.1} in the proof of Theorem \ref{thm1.1}; here, since $P$ has order 4, absorbing the main term after rescaling requires $\tau^7\gtrsim R^{8}$, i.e. $\tau\sim R^{\frac87}$, and the weight $e^{\tau\varphi}$ is of size $e^{C\tau R^{\epsilon}}$ at radius $\sim 1/R$, which accounts for the extra $\epsilon$ in the exponent. Following the same lines as in the previous section, we prove that $$ M(R)\gtrsim \exp\{-CR^{\frac87+\epsilon}\log{R}\}, $$ which finishes the first part of Theorem \ref{thm1.2}. We end the proof by constructing a Meshkov type example showing that the bound $\frac87$ is optimal. For simplicity, we shall assume $P=\Delta_{\mathbb{R}^2}(\partial_{x_1}^2+2\partial_{x_2}^2)$.
The key point is the following observation. \begin{proposition}\label{pro3.4} Suppose $\rho>0$ is large enough, and choose $n,k\in \mathbb{Z}$ such that $|n-\rho^{\frac87}|\leq 1$, $|k-8\rho^{\frac47}|\leq 6$. Then in the annulus $\rho\leq |x|\leq \rho+7\rho^{\frac37}$, there exists a solution $u$ satisfying \eqref{1.1} and \eqref{1.2} such that the following properties hold.\\ (i) If $r\in [\rho, \rho+0.1\rho^{\frac37}]$, then $u(r,\varphi)=r^{-n}e^{-in\varphi}$. If $r\in [\rho+6.9\rho^{\frac37}, \rho+7\rho^{\frac37}]$, then $u(r,\varphi)=ar^{-n-k}e^{-i(n+k)\varphi}$, where $a$ is some positive constant.\\ (ii) Let $m(r)=\max\{|u(r, \varphi)|:\ 0\leq \varphi\leq 2\pi\}$. There exists an absolute constant $C$, which does not depend on $\rho, n, k$, such that for $r\in [\rho, \rho+7\rho^{\frac37}]$, \begin{equation}\label{equ3.7} \ln{m(r)}-\ln{m(\rho)}\leq -C\int_{\rho}^{r}{t^{\frac17}\, dt}+C. \end{equation} \end{proposition} \begin{proof} We remark here that the constants $C, C_j$ appearing in the proof below can all be chosen independently of $\rho, n, k$. Step 1 (where $\rho\leq |x|\leq \rho+2\rho^{\frac37}$). The solution $u_1=r^{-n}e^{-in\varphi}$ is rearranged into $u_2=-br^{-n+2k}e^{-iF(\varphi)}$, where $b=(\rho+\rho^{\frac37})^{-2k}$ and $F(\varphi)=(n+2k)\varphi+\Phi(\varphi)$. Moreover, $\Phi(\varphi)$ satisfies \begin{align} |\Phi^{(j)}(\varphi)|\leq C\rho^{\frac87j-\frac47}, \quad j=0,1,2,\ldots,\label{equ3.8}\\ \Phi(\varphi)=-4k\varphi+b_m, \quad \text{in} \quad |\varphi-\varphi_m|\leq \frac T5,\label{equ3.9} \end{align} where $b_m\in \mathbb{R}$, $T=\frac{\pi}{n+k}$, $\varphi_m=mT$, $m=0,1,\ldots, 2(n+k)-1$. For the existence of such a $\Phi$, see \cite{M} for the details. Next, there exist $C^{\infty}$ functions $\psi_1, \psi_2$ with $\psi_1(r)=1$ if $r\leq \rho+\frac{13}{7}\rho^{\frac37}$, $\psi_1(r)=0$ if $r\ge \rho+1.9\rho^{\frac37}$, and $\psi_2(r)=1$ if $r\ge \rho+\frac{1}{7}\rho^{\frac37}$, $\psi_2(r)=0$ if $r\leq \rho+0.1\rho^{\frac37}$, such that the estimates \begin{equation}\label{equ3.10} |\psi_{k}^{(j)}(r)|\leq C \rho^{-\frac37 j}, \quad k=1,2, \quad j=0, 1, 2,\ldots \end{equation} hold. Now set $u=\psi_1u_1+\psi_2u_2$; by \eqref{equ3.9}, we have $\Delta u=0$ in the region $$ \{\rho+\frac{1}{7}\rho^{\frac37}\leq r\leq \rho+\frac{13}{7}\rho^{\frac37}, |\varphi-\varphi_m|\leq \frac T5\},\quad m=0, 1,\ldots, 2(n+k)-1. $$ Since $|\frac{u_2}{u_1}|=(\frac{r}{\rho+\rho^{\frac37}})^{2k}$, it follows that there exists some positive constant $C$ (say $C=10$) such that \begin{align} |\frac{u_2}{u_1}|&\leq e^{-C}, \quad r\in[\rho, \rho+\frac{1}{7}\rho^{\frac37}], \label{equ3.11}\\ |\frac{u_2}{u_1}|&\ge e^{C}, \quad r\in[\rho+\frac{13}{7}\rho^{\frac37}, \rho+2\rho^{\frac37}]. \label{equ3.12} \end{align} First we consider the annulus $\rho\leq |x|\leq \rho+\frac17 \rho^{\frac37}$; it is easy to see from \eqref{equ3.11} that \begin{equation}\label{equ3.13} |u|\ge e^C|u_2| \quad \text{for some} \quad C>0. \end{equation} Now we estimate $(\partial_{x_1}^2+2\partial_{x_2}^2)\Delta u$ in this region. Note that in polar coordinates one has \begin{align*} \Delta_{\mathbb{R}^2}&=\frac{\partial^2}{\partial r^2}+\frac1r\frac{\partial}{\partial r}+\frac{1}{r^2}\frac{\partial^2}{\partial \varphi^2}, \end{align*} \begin{align*} \partial_{x_1}^2+2\partial_{x_2}^2 &=(1+\sin^2\varphi)\frac{\partial^2}{\partial r^2}+(\frac{1+\cos^2\varphi}{r}+\frac{\sin{2\varphi}}{r}\frac{\partial}{\partial \varphi})\frac{\partial}{\partial r}\\&-\frac{\sin{2\varphi}}{r^2}\frac{\partial}{\partial \varphi}+\frac{1+\cos^2\varphi}{r^2}\frac{\partial^2}{\partial \varphi^2}.
\nonumber \end{align*} Then we write (recall that in this region $\psi_1\equiv 1$ and $u_1$ is harmonic) $$\Delta u=\Delta(\psi_2u_2)=\psi_2\Delta u_2+2\frac{\partial \psi_2}{\partial r}\frac{\partial u_2}{\partial r}+u_2\Delta\psi_2,$$ and \begin{equation} \Delta u_2=\frac{g_1(\varphi)}{r^2}\cdot u_2, \nonumber \end{equation} where $g_1(\varphi)=(n-2k)^2-(n+2k+\Phi^{'})^2-i\Phi^{''}$; hence, it follows from \eqref{equ3.8} that $$ |\frac{ \Delta u_2}{u_2}|\leq \frac{Cnk}{r^2}\leq C\rho^{-\frac27}. $$ A further computation shows that \begin{align*} |\frac{\partial^2}{\partial r^2}(\frac{g_1(\varphi)}{r^2} u_2)|&=|\frac{g_1(\varphi)(n-2k)(n-2k+1)}{r^4}u_2|\\ &\leq C\frac{n^3k}{r^4}|u_2| \leq C|u_2|, \end{align*} \begin{align*} |\frac{\partial^2}{\partial r^2}(\frac{\partial \psi_2}{\partial r} \frac{\partial u_2}{\partial r})|&=|\frac{\partial^3 \psi_2}{\partial r^3} \frac{\partial u_2}{\partial r}+2\frac{\partial^2 \psi_2}{\partial r^2} \frac{\partial^2 u_2}{\partial r^2}+\frac{\partial \psi_2}{\partial r} \frac{\partial^3 u_2}{\partial r^3}|\\ &\leq C(\frac{n}{r}\cdot\rho^{-\frac97}+\frac{n^2}{r^2}\cdot\rho^{-\frac67}+\frac{n^3}{r^3}\cdot\rho^{-\frac37})|u_2|\\ & \leq C|u_2|, \end{align*} \begin{align*} |\frac 1r\frac{\partial}{\partial \varphi}\frac{\partial}{\partial r}(\frac{g_1(\varphi)}{r^2} u_2)|&=|\frac{-2g_1'(\varphi)}{r^4} u_2+\frac{g_1(\varphi)}{r^4} \frac{\partial u_2}{\partial r}+\frac{g_1'(\varphi)}{r^3} \frac{\partial u_2}{\partial r}+\frac{g_1(\varphi)}{r^3} \frac{\partial^2 u_2}{\partial r\partial \varphi}|\\ &\leq C\rho^{-\frac37}\frac{n^3}{r^3}|u_2|\leq C|u_2|, \end{align*} \begin{align*} |\frac 1r\frac{\partial}{\partial \varphi}\frac{\partial}{\partial r}(\frac{\partial \psi_2}{\partial r}\frac{\partial u_2}{\partial r})|&\leq C\rho^{-\frac37}\frac{n^3}{r^3}|u_2|\leq C|u_2|, \end{align*} and \begin{align*} |\frac{1}{r^2}\frac{\partial^2}{\partial \varphi^2}(\frac{g_1(\varphi)}{r^2} u_2)|& =|-\frac{((F')^2+iF^{''})g_1-iF'g_1'+g_1^{''}}{r^4} u_2| \\ &\leq C\frac{n^3 k+n\Phi^{'''}+\Phi^{''}\Phi^{'}+n\rho^{\frac{20}{7}}+\rho^4}{r^4}|u_2|\\ &\leq C|u_2|, \end{align*} where we have used the fact that $|g_1'(\varphi)|\leq C \rho^{\frac{20}{7}}$ and $|g_1''(\varphi)|\leq C \rho^4$. After a direct computation of the other terms, we obtain \begin{equation}\label{equ3.14} |\frac{(\partial_{x_1}^2+2\partial_{x_2}^2)\Delta u_2}{u_2}|\leq C, \quad \rho\leq |x|\leq \rho+2\rho^{\frac37}. \end{equation} Combining \eqref{equ3.13} and \eqref{equ3.14} yields \begin{equation}\label{equ3.15} |\frac{(\partial_{x_1}^2+2\partial_{x_2}^2)\Delta u}{u}|\leq C,\quad \rho\leq |x|\leq \rho+\frac17 \rho^{\frac37}. \end{equation} Similarly, by \eqref{equ3.12} and the arguments above, it follows that \eqref{equ3.15} is valid when $\rho+\frac{13}{7}\rho^{\frac37}\leq |x|\leq \rho+2 \rho^{\frac37}$. In the remaining annulus sectors \begin{align*} P_m&=\{(r, \varphi):\ \rho+\frac{1}{7}\rho^{\frac37}\leq r\leq \rho+\frac{13}{7}\rho^{\frac37},\ \varphi_m+\frac T5\leq \varphi\leq \varphi_m+\frac {4T}5\}, \\ m&=0, 1,\ldots, 2(n+k)-1, \end{align*} one argues as in \cite{M}: in this region we have $$ |u|\ge C|u_2|. $$ Using the fact that $u_1$ is harmonic in $P_m$ and that $u_2$ satisfies \eqref{equ3.14}, we conclude that \eqref{equ3.15} is also valid in each $P_m$. Step 2 (where $\rho+2\rho^{\frac37}\leq |x|\leq \rho+3\rho^{\frac37}$). The solution $u_2=-br^{-n+2k}e^{-iF(\varphi)}$ is rearranged into $u_3=-br^{-n+2k}e^{i(n+2k)\varphi}$.
Let $\psi\in C^{\infty}$ with $\psi(r)=1$ if $r\leq \rho+\frac{15}{7}\rho^{\frac37}$ and $\psi(r)=0$ if $r\ge \rho+\frac{20}{7}\rho^{\frac37}$, such that $$ |\psi^{(j)}(r)|\leq C_j, \quad r\in (0, \infty), \quad j=0, 1, 2,\ldots $$ Now set $$u=-br^{-n+2k}\exp[i(\psi\Phi(\varphi)+(n+2k)\varphi)];$$ we have $$ \Delta u =g_2(r, \varphi)u, $$ where \begin{align*} g_2&=\frac{(-8nk-2(n+k)\psi\Phi'+(\psi\Phi')^2+i\psi\Phi'')}{r^2}\\ &+i(-2n+4k+1)\frac{\psi'\Phi}{r}+i\Phi(\frac{\psi'}{r}+\psi'')-(\psi'\Phi)^2. \end{align*} It then follows that $$ |g_2(r,\varphi)|\leq C\frac{nk}{r^2}\leq C\rho^{-\frac27}. $$ To estimate $\frac{(\partial_{x_1}^2+2\partial_{x_2}^2)(g_2u)}{u}$ in this region, we note that \begin{align*} |\frac{\partial^2 }{\partial r^2}(g_2 u)| \leq C\rho^{-\frac27}\frac{n^2}{r^2} |u|\leq C|u|, \end{align*} \begin{align*} |\frac 1r \frac{\partial}{\partial \varphi}\frac{\partial}{\partial r}(g_2 u)|&=|\frac{1}{r}(\frac{\partial^2 g_2}{\partial \varphi\partial r}u+g_2\frac{\partial^2 u}{\partial \varphi\partial r}+\frac{\partial g_2}{\partial \varphi}\frac{\partial u}{\partial r}+\frac{\partial g_2}{\partial r}\frac{\partial u}{\partial\varphi})|\\ &\leq C(\frac{n^2}{r^2}\rho^{-\frac27}+\frac{n}{r^2}\rho^{-\frac67}) |u|\leq C|u|, \end{align*} and \begin{align*} |\frac{1}{r^2}\frac{\partial^2}{\partial \varphi^2}(g_2 u)|&=|\frac{1}{r^2}(\frac{\partial^2 g_2}{\partial \varphi^2}u+\frac{\partial^2 u}{\partial \varphi^2}g_2+2\frac{\partial g_2}{\partial \varphi}\frac{\partial u}{\partial \varphi})| \\ &\leq C(\frac{\rho^2}{r^2}+\frac{n^2}{r^2}\rho^{-\frac27}+\frac{n}{r^2}\rho^{\frac67})|u|\\ &\leq C|u|, \end{align*} where we have used the fact that $|\frac{\partial^jg_2}{\partial r^j}|\leq C \rho^{-\frac27}$ and $|\frac{\partial^jg_2}{\partial \varphi^j}|\leq C \rho^{-\frac27+\frac87 j}$ for $j=0, 1, 2$. Another direct computation shows that the other terms are also controlled by $C|u|$, which implies that \eqref{equ3.15} is valid in $\rho+2\rho^{\frac37}\leq |x|\leq \rho+3\rho^{\frac37}$. Step 3 (where $\rho+3\rho^{\frac37}\leq |x|\leq \rho+4\rho^{\frac37}$). The solution $u_3=-br^{-n+2k}e^{i(n+2k)\varphi}$ is rearranged into $u_4=-b_1r^{-n-2k}e^{i(n+2k)\varphi}$. First choose $\psi_3\in C^{\infty}$ with $\psi_3(r)=1$ if $r\leq \rho+\frac{22}{7}\rho^{\frac37}$, $\psi_3(r)=0$ if $r\ge \rho+\frac{27}{7}\rho^{\frac37}$, and $|\psi_3^{(j)}(r)|\leq C_j$ for $r\in (0, \infty)$, $j=0, 1, 2,\ldots$. Let $g_3(r)=(\frac{\rho+3\rho^{\frac37}}{r})^{4k}$; then it follows that in $\rho+3\rho^{\frac37}\leq |x|\leq \rho+4\rho^{\frac37}$, $$ |g_3^{(j)}(r)|\leq C\left(\frac{k}{r}\right)^j,\quad j=0, 1, 2,\ldots. $$ Next, we set $h(r)=\psi_3+(1-\psi_3)g_3(r)$ and define $u=u_3\cdot h$, so that $$ \Delta u=h \Delta u_3+2\frac{\partial h}{\partial r}\frac{\partial u_3}{\partial r}+u_3\Delta h $$ and $$ |\frac{\Delta u_3}{u_3}|=\frac{8nk}{r^2}\leq C \rho^{-\frac27}. $$ The remaining computation is similar to Step 2, so we omit it. Step 4 (where $\rho+4\rho^{\frac37}\leq |x|\leq \rho+7\rho^{\frac37}$). The solution $u_4=-b_1r^{-n-2k}e^{i(n+2k)\varphi}$ is rearranged into $u_5=-a_1r^{-n-2k}e^{i(n+k)\varphi}$ for some constant $a_1$. This step is similar to Step 1. Choose $\psi_4, \psi_5$ with $\psi_4(r)=1$ if $r\leq \rho+6\frac{6}{7}\rho^{\frac37}$, $\psi_4(r)=0$ if $r\ge \rho+6.9\rho^{\frac37}$, and $\psi_5(r)=1$ if $r\ge \rho+4\frac{1}{7}\rho^{\frac37}$, $\psi_5(r)=0$ if $r\leq \rho+4.1\rho^{\frac37}$; we also require that $\psi_4, \psi_5$ satisfy the condition \eqref{equ3.10}.
Now define $u=\psi_4u_4+\psi_5u_5$; as in Step 1, it is not hard to see that \eqref{equ3.15} is valid in this region. Proceeding as in \cite{M}, we see that the solution $u$ satisfies (ii). \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.2} (continued)] In order to use Proposition \ref{pro3.4} inductively, we note that if we choose $\rho_1$ sufficiently large and define $\rho_{j+1}=\rho_j+7\rho_j^{\frac37}$, $n_j=[\rho_j^{\frac87}]$, and $k_j=n_{j+1}-n_j$, we have \begin{align*} k_j&\leq \rho_{j+1}^{\frac87}-\rho_{j}^{\frac87}+2= \rho_{j}^{\frac87}\left((1+7\rho_{j}^{-\frac47})^{\frac87}-1\right)+2\\ &\leq 8\rho_{j}^{\frac47}+6. \end{align*} Therefore, one has $|k_j-8\rho_j^{\frac47}|\leq 6$, as required in Proposition \ref{pro3.4}. By using Proposition \ref{pro3.4} repeatedly, we obtain \begin{align*} \ln{m(r)}&\leq C-C\int_{\rho_1}^{r}{t^{\frac17}}\, dt+\ln{m(\rho_1)}\\ &\leq C-Cr^{\frac87}, \end{align*} which completes the proof. \end{proof} \section*{Acknowledgements} The first author is very grateful to Jiuyi Zhu for many helpful discussions and encouragement during his visit at Johns Hopkins University.
\section{Introduction} \label{sec:intro} For more than 50 years, a significant research effort in theoretical computer science was made to solve the membership problem for regular languages. This problem asks, for a fixed class of regular languages, whether there is an algorithm taking as input a regular language and outputting `yes' if the language belongs to the investigated class, and `no' otherwise. Many results were obtained in a long and fruitful line of research. The most prominent one is certainly Schützenberger's theorem~\cite{sfo}, which gives such an algorithm for the class of star-free languages. For most interesting classes, we also know precisely the computational cost of the membership problem. As can be expected, this cost depends on the way the input language is given. Indeed, there are several ways to input a regular language. For instance, it can be given by a nondeterministic finite automaton (\ensuremath{\mathsf{NFA}}\xspace), or, alternatively, by a morphism into a finite monoid. While obtaining an \ensuremath{\mathsf{NFA}}\xspace representation from a morphism into a monoid has only a linear cost, the converse direction is much more expensive: from an \ensuremath{\mathsf{NFA}}\xspace with $n$ states, the smallest monoid recognizing the same language may have an exponential number of elements (the standard construction yields $2^{n^2}$ elements). This explains why the complexity of the membership problem depends on the representation of the input. For instance, for the class of star-free languages, it is \ensuremath{\mathsf{PSpace}}\xspace-complete if one starts from \ensuremath{\mathsf{NFAs}}\xspace (and actually, even from \ensuremath{\mathsf{DFAs}}\xspace~\cite{chofo}) while it is \ensuremath{\mathsf{NL}}\xspace when starting from monoid morphisms. Recently, another problem, called separation, has replaced membership as the cornerstone in the investigation of regular languages. It takes as input \emph{two} regular languages instead of one, and asks whether there exists a third language from the class under investigation including the first input language and having empty intersection with the second one. This problem has served recently as a major ingredient in the resolution of difficult membership problems, such as the so-called dot-depth two problem~\cite{pz:qalt:2014}, which remained open for 40 years (see~\cite{pztale,PZ:generic_csr_tocs:18,jep-dd45} for recent surveys on the topic). Dot-depth two is a class belonging to a famous \emph{concatenation hierarchy} which stratifies the star-free languages: the dot-depth~\cite{BrzoDot}. A specific concatenation hierarchy is built in a generic way. One starts from a base class (level 0 of the hierarchy) and builds increasingly larger classes (called levels and denoted by 1/2, 1, 3/2, 2, $\dots$) by alternating two standard closure operations: polynomial and Boolean closure. Concatenation hierarchies account for a significant part of the open questions in this research area. The state of the art regarding separation is captured by only three results~\cite{pzbpol,pbp}: in finitely based concatenation hierarchies (i.e. those whose basis is a finite class), levels 1/2, 1 and 3/2 have decidable separation. Moreover, using specific transfer results~\cite{pzsuccfull}, this can be pushed to levels 3/2 and 2 for the two most famous finitely based hierarchies: the dot-depth~\cite{BrzoDot} and the Straubing-Thérien hierarchy~\cite{StrauConcat,TheConcat}.
Unlike the situation for membership, and despite these recent decidability results for separation in concatenation hierarchies, the complexity of the problems and of the corresponding algorithms has not been investigated so far (except for the class of piecewise testable languages~\cite{martens,pvzmfcs13,Masopust18}, which is level 1 in the Straubing-Thérien hierarchy). The aim of this paper is to establish such complexity results. Our contributions are the following: \begin{itemize} \item We present a \textbf{generic} reduction, which shows that for many natural classes, the way the input is given (by \ensuremath{\mathsf{NFAs}}\xspace or finite monoids) has \textbf{no impact} on the complexity of the separation problem. This is proved using two \ensuremath{\mathsf{LogSpace}}\xspace reductions from one problem to the other. This situation is surprising and opposite to that of the membership problem, where an exponential blow-up is unavoidable when going from \ensuremath{\mathsf{NFAs}}\xspace to monoids. \item Building on the results of~\cite{pzbpol}, we show that when the alphabet is fixed, there are polynomial time algorithms for levels 1/2 and 1 in any finitely based hierarchy. \item We investigate levels 3/2 and 2 of the famous Straubing-Thérien hierarchy, and we show that separation is \ensuremath{\mathsf{PSpace}}\xspace-complete for level 3/2 and between \ensuremath{\mathsf{PSpace}}\xspace-hard and \ensuremath{\mathsf{EXPTime}}\xspace for level 2. The upper bounds are based on the results of~\cite{pzbpol}, while the lower bounds are based on independent reductions. \end{itemize} \noindent {\bf Organization.} In Section~\ref{sec:prelims}, we give preliminary terminology on the objects investigated in the paper. Sections~\ref{sec:nfatomono}, \ref{sec:fixalph} and~\ref{sec:classic} are then devoted to the three above points. Due to space limitations, many proofs are postponed to the appendix. \section{Preliminaries} \label{sec:prelims} In this section, we present the key objects of this paper. We define words and regular languages, classes of languages, the separation problem and, finally, concatenation hierarchies. \subsection{Words and regular languages} An alphabet is a \emph{finite} set $A$ of symbols, called \emph{letters}. Given some alphabet $A$, we denote by $A^+$ the set of all nonempty finite words and by $A^{*}$ the set of all finite words over $A$ (\emph{i.e.}, $A^* = A^+ \cup \{\varepsilon\}$). If $u \in A^*$ and $v \in A^*$, we write $u \cdot v \in A^*$ or $uv \in A^*$ for the concatenation of $u$ and~$v$. A \emph{language} over an alphabet $A$ is a subset of $A^*$. Abusing terminology, if $u \in A^*$ is some word, we denote by $u$ the singleton language~$\{u\}$. It is standard to extend concatenation to languages: given $K,L \subseteq A^*$, we write~$KL = \{uv \mid u \in K \text{ and } v \in L\}$. Moreover, we also consider marked concatenation, which is less standard. Given $K,L \subseteq A^*$, \emph{a marked concatenation} of~$K$ with $L$ is a language of the form $KaL$, for some $a \in A$. We consider \emph{regular languages}, which can be equivalently defined by \emph{regular expressions}, \emph{nondeterministic finite automata}~(\ensuremath{\mathsf{NFAs}}\xspace), \emph{finite monoids} or \emph{monadic second-order logic} (\ensuremath{\textup{MSO}}\xspace). In the paper, we investigate the separation problem, which takes regular languages as input. Since we are focused on complexity, how we represent these languages in our inputs matters.
We shall consider two kinds of representations: \ensuremath{\mathsf{NFAs}}\xspace and monoids. Let us briefly recall these objects and fix the terminology (we refer the reader to~\cite{pingoodref} for details). \medskip \noindent {\bf NFAs.} An \ensuremath{\mathsf{NFA}}\xspace is a tuple $\ensuremath{\mathcal{A}}\xspace = (A,Q,\delta,I,F)$ where $A$ is an alphabet, $Q$ a finite set of states, $\delta \subseteq Q \times A \times Q$ a set of transitions, $I \subseteq Q$ a set of initial states and $F \subseteq Q$ a set of final states. The language $L(\ensuremath{\mathcal{A}}\xspace) \subseteq A^*$ consists of all words labeling a run from an initial state to a final state. The regular languages are exactly those which are recognized by an \ensuremath{\mathsf{NFA}}\xspace. Finally, we write ``\ensuremath{\mathsf{DFA}}\xspace'' for \emph{deterministic} finite automata, which are defined in the standard way. \medskip \noindent {\bf Monoids.} We turn to the algebraic definition of regular languages. A \emph{monoid} is a set $M$ endowed with an associative multiplication $(s,t) \mapsto s\cdot t$ (also denoted by~$st$) having a neutral element $1_M$, \emph{i.e.}, such that ${1_M}\cdot s=s\cdot {1_M}=s$ for every $s \in M$. An \emph{idempotent} of a monoid $M$ is an element $e \in M$ such that $ee = e$. Observe that $A^{*}$ is a monoid whose multiplication is concatenation (the neutral element is $\varepsilon$). Thus, we may consider monoid morphisms $\alpha: A^* \to M$ where $M$ is an arbitrary monoid. Given such a morphism, we say that a language $L\subseteq A^*$ is \emph{recognized} by~$\alpha$ when there exists a set $F \subseteq M$ such that $L = \alpha^{-1}(F)$. It is well-known that the regular languages are also those which are recognized by a morphism into a \emph{finite} monoid. When representing a regular language $L$ by a morphism into a finite monoid, one needs to give both the morphism $\alpha: A^* \to M$ (\emph{i.e.}, the image of each letter) and the set $F \subseteq M$ such that $L = \alpha^{-1}(F)$. \subsection{Classes of languages and separation} A class of languages \ensuremath{\mathcal{C}}\xspace is a correspondence $A \mapsto \ensuremath{\mathcal{C}}\xspace(A)$ which, to an alphabet $A$, associates a set of languages $\ensuremath{\mathcal{C}}\xspace(A)$ over $A$. \begin{remark} When two alphabets $A,B$ satisfy $A \subseteq B$, the definition of classes does not require $\ensuremath{\mathcal{C}}\xspace(A)$ and $\ensuremath{\mathcal{C}}\xspace(B)$ to be comparable. In fact, it may happen that a particular language $L \subseteq A^* \subseteq B^*$ belongs to $\ensuremath{\mathcal{C}}\xspace(A)$ but not to $\ensuremath{\mathcal{C}}\xspace(B)$ (or the opposite). For example, we may consider the class \ensuremath{\mathcal{C}}\xspace defined by $\ensuremath{\mathcal{C}}\xspace(A) = \{\emptyset,A^*\}$ for every alphabet $A$. When $A \subsetneq B$, we have $A^* \in \ensuremath{\mathcal{C}}\xspace(A)$ while $A^* \not\in \ensuremath{\mathcal{C}}\xspace(B)$. \end{remark} We say that \ensuremath{\mathcal{C}}\xspace is a \emph{lattice} when for every alphabet $A$, we have $\emptyset,A^* \in \ensuremath{\mathcal{C}}\xspace(A)$ and $\ensuremath{\mathcal{C}}\xspace(A)$ is closed under finite union and finite intersection: for any $K,L \in \ensuremath{\mathcal{C}}\xspace(A)$, we have $K \cup L \in \ensuremath{\mathcal{C}}\xspace(A)$ and $K \cap L \in \ensuremath{\mathcal{C}}\xspace(A)$. 
Moreover, a \emph{Boolean algebra} is a lattice \ensuremath{\mathcal{C}}\xspace which is additionally closed under complement: for any $L \in \ensuremath{\mathcal{C}}\xspace(A)$, we have $A^* \setminus L \in \ensuremath{\mathcal{C}}\xspace(A)$. Finally, a class \ensuremath{\mathcal{C}}\xspace is \emph{quotienting} if it is closed under quotients. That is, for every alphabet $A$, $L \in \ensuremath{\mathcal{C}}\xspace(A)$ and word $u \in A^*$, the following properties~hold: \[ u^{-1}L \stackrel{\text{def}}{=}\{w\in A^*\mid uw\in L\} \text{\quad and\quad} Lu^{-1} \stackrel{\text{def}}{=}\{w\in A^*\mid wu\in L\}\text{\quad both belong to $\ensuremath{\mathcal{C}}\xspace(A)$}. \] All classes that we consider in the paper are (at least) quotienting lattices\xspace consisting of \emph{regular languages}. Moreover, some of them satisfy an additional property called \emph{closure under inverse image}. Recall that $A^*$ is a monoid for any alphabet $A$. We say that a class \ensuremath{\mathcal{C}}\xspace is \emph{closed under inverse image} if for every two alphabets $A,B$, every monoid morphism $\alpha: A^* \to B^*$ and every language $L \in \ensuremath{\mathcal{C}}\xspace(B)$, we have $\alpha^{-1} (L) \in \ensuremath{\mathcal{C}}\xspace(A)$. A quotienting lattice\xspace (resp. quotienting Boolean algebra\xspace) closed under inverse image is called a \emph{positive variety\xspace} (resp. \emph{variety\xspace}). \medskip \noindent {\bf Separation.} Consider a class of languages \ensuremath{\mathcal{C}}\xspace. Given an alphabet $A$ and two languages $L_1,L_2 \subseteq A^*$, we say that $L_1$ is \ensuremath{\mathcal{C}}\xspace-separable from $L_2$ when there exists a third language $K \in \ensuremath{\mathcal{C}}\xspace(A)$ such that $L_1 \subseteq K$ and $L_2 \cap K = \emptyset$. In particular, $K$ is called a \emph{separator} in \ensuremath{\mathcal{C}}\xspace. The \ensuremath{\mathcal{C}}\xspace-separation problem is now defined as follows: \begin{tabular}{ll} {\bf Input:} & An alphabet $A$ and two regular languages $L_1,L_2 \subseteq A^*$. \\ {\bf Output:} & Is $L_1$ \ensuremath{\mathcal{C}}\xspace-separable from $L_2$? \end{tabular} \begin{remark} Separation generalizes the simpler \emph{membership problem}, which asks whether a single regular language belongs to \ensuremath{\mathcal{C}}\xspace. Indeed, $L \in \ensuremath{\mathcal{C}}\xspace(A)$ if and only if $L$ is \ensuremath{\mathcal{C}}\xspace-separable from $A^* \setminus L$ (which is also regular and computable from $L$). \end{remark} Most papers on separation are mainly concerned with decidability. Hence, they do not go beyond the above presentation of the problem (see~\cite{martens,pz:qalt:2014,pzfo,pzbpol} for example). However, this paper specifically investigates complexity. Consequently, we shall need to be more precise and take additional parameters into account. First, it will be important to specify whether the alphabet over which the input languages are defined is part of the input (as above) or a constant. When considering separation for some fixed alphabet $A$, we shall speak of ``$\ensuremath{\mathcal{C}}\xspace(A)$-separation''. When the alphabet is part of the input, we simply speak of ``\ensuremath{\mathcal{C}}\xspace-separation''. Another important parameter is how the two input languages are represented. We shall consider \ensuremath{\mathsf{NFAs}}\xspace and monoids. We speak of \emph{separation for \ensuremath{\mathsf{NFAs}}\xspace and separation for monoids}. Note that one may efficiently reduce the latter to the former.
Indeed, given a language $L \subseteq A^*$ recognized by some morphism $\alpha: A^* \to M$, it is simple to efficiently compute a \ensuremath{\mathsf{NFA}}\xspace with $|M|$ states recognizing $L$ (see~\cite{pingoodref} for example). Hence, we have the following lemma. \begin{lemma} \label{lem:easyreduc} For any class \ensuremath{\mathcal{C}}\xspace, there is a \ensuremath{\mathsf{LogSpace}}\xspace reduction from \ensuremath{\mathcal{C}}\xspace-separation for monoids to \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace. \end{lemma} Getting an efficient reduction for the converse direction is much more difficult since going from \ensuremath{\mathsf{NFAs}}\xspace (or even \ensuremath{\mathsf{DFAs}}\xspace) to monoids usually involves an exponential blow-up. However, we shall see in Section~\ref{sec:nfatomono} that for many natural classes \ensuremath{\mathcal{C}}\xspace, this is actually possible. \subsection{Concatenation hierarchies} We now briefly recall the definition of concatenation hierarchies. We refer the reader to~\cite{PZ:generic_csr_tocs:18} for a more detailed presentation. A particular concatenation hierarchy is built from a starting class of languages \ensuremath{\mathcal{C}}\xspace, which is called its \emph{basis}. In order to get robust properties, we restrict~\ensuremath{\mathcal{C}}\xspace to be a quotienting Boolean algebra\xspace of regular languages. The basis is the only parameter in the construction. Once fixed, the construction is generic: each new level is built from the previous one by applying generic operators: either Boolean closure, or polynomial closure. Let us first define these two operators. \medskip \noindent {\bf Definition.} Consider a class \ensuremath{\mathcal{C}}\xspace. We denote by \bool{\ensuremath{\mathcal{C}}\xspace} the \emph{Boolean closure} of \ensuremath{\mathcal{C}}\xspace: for every alphabet $A$, $\bool{\ensuremath{\mathcal{C}}\xspace}(A)$ is the least set containing $\ensuremath{\mathcal{C}}\xspace(A)$ and closed under Boolean operations. Moreover, we denote by \pol{\ensuremath{\mathcal{C}}\xspace} the \emph{polynomial closure} of \ensuremath{\mathcal{C}}\xspace: for every alphabet $A$, $\pol{\ensuremath{\mathcal{C}}\xspace}(A)$ is the least set containing $\ensuremath{\mathcal{C}}\xspace(A)$ and closed under union and marked concatenation (if $K,L \in \pol{\ensuremath{\mathcal{C}}\xspace}(A)$ and $a \in A$, then $K \cup L,KaL \in \pol{\ensuremath{\mathcal{C}}\xspace}(A)$). Consider a quotienting Boolean algebra\xspace of regular languages \ensuremath{\mathcal{C}}\xspace. The concatenation hierarchy of basis \ensuremath{\mathcal{C}}\xspace is defined as follows. Languages are classified into levels of two kinds: full levels (denoted by 0, 1, 2,$\dots$) and half levels (denoted by 1/2, 3/2, 5/2,$\dots$). Level $0$ is the basis (\emph{i.e.}, \ensuremath{\mathcal{C}}\xspace) and for every $n \in \ensuremath{\mathbb{N}}\xspace$, \begin{itemize} \item The \emph{half level} $n+1/2$ is the \emph{polynomial closure} of the previous full level, \emph{i.e.}, of level $n$. \item The \emph{full level} $n+1$ is the \emph{Boolean closure} of the previous half level, \emph{i.e.}, of level $n+1/2$. 
\end{itemize} \begin{center} \begin{tikzpicture}[scale=.9] \node[anchor=east] (l00) at (0.0,0.0) {{\large $0$}}; \node[anchor=east] (l12) at (2.0,0.0) {\large $1/2$}; \node[anchor=east] (l11) at (4.0,0.0) {\large $1$}; \node[anchor=east] (l32) at (6.0,0.0) {\large $3/2$}; \node[anchor=east] (l22) at (8.0,0.0) {\large $2$}; \node[anchor=east] (l52) at (10.0,0.0) {\large $5/2$}; \draw[very thick,->] (l00) to node[above] {$Pol$} (l12); \draw[very thick,->] (l12) to node[below] {$Bool$} (l11); \draw[very thick,->] (l11) to node[above] {$Pol$} (l32); \draw[very thick,->] (l32) to node[below] {$Bool$} (l22); \draw[very thick,->] (l22) to node[above] {$Pol$} (l52); \draw[very thick,dotted] (l52) to ($(l52)+(1.0,0.0)$); \end{tikzpicture} \end{center} We write $\frac 12 \ensuremath{\mathbb{N}}\xspace = \{0,1/2,1,3/2,2,5/2,\dots\}$ for the set of all possible levels in a concatenation hierarchy. Moreover, for any basis \ensuremath{\mathcal{C}}\xspace and $n \in \frac12 \ensuremath{\mathbb{N}}\xspace$, we write $\ensuremath{\mathcal{C}}\xspace[n]$ for level $n$ in the concatenation hierarchy of basis \ensuremath{\mathcal{C}}\xspace. It is known that every half-level is a quotienting lattice\xspace and every full level is a quotienting Boolean algebra\xspace (see~\cite{PZ:generic_csr_tocs:18} for a recent proof). We are interested in finitely based concatenation hierarchies: if \ensuremath{\mathcal{C}}\xspace is the basis, then $\ensuremath{\mathcal{C}}\xspace(A)$ is finite for every alphabet $A$. Indeed, it was shown in~\cite{pzbpol} that for such hierarchies separation is always decidable for the levels 1/2 and 1 (in fact, while we do not discuss this in the paper, this is also true for level 3/2, see~\cite{pbp} for a preliminary version). In Section~\ref{sec:fixalph}, we build on the results of~\cite{pzbpol} and show that when the alphabet is fixed, this can be achieved in polynomial time for both levels 1/2 and 1. Moreover, we shall also investigate the famous \emph{Straubing-Th\'erien} hierarchy in Section~\ref{sec:classic}. Our motivation for investigating this hierarchy in particular is that the results of~\cite{pzbpol} can be pushed to levels 3/2 and 2 in this special case. \section{Handling \ensuremath{\mathsf{NFAs}}\xspace} \label{sec:nfatomono} In this section, we investigate how the representation of input languages impacts the complexity of separation. We prove that for many natural classes \ensuremath{\mathcal{C}}\xspace (including most of those considered in the paper), \ensuremath{\mathcal{C}}\xspace-separation has the same complexity for \ensuremath{\mathsf{NFAs}}\xspace as for monoids. Because of these results, we shall be able to restrict ourselves to monoids in later sections. \begin{remark} This result highlights a striking difference between separation and the simpler membership problem. For most classes \ensuremath{\mathcal{C}}\xspace, \ensuremath{\mathcal{C}}\xspace-membership is strictly harder for \ensuremath{\mathsf{NFAs}}\xspace than for monoids. This is because when starting from a \ensuremath{\mathsf{NFA}}\xspace \ensuremath{\mathcal{A}}\xspace, typical membership algorithms require either determinizing \ensuremath{\mathcal{A}}\xspace or computing a monoid morphism recognizing $L(\ensuremath{\mathcal{A}}\xspace)$, which involves an exponential blow-up in both cases. Our results show that the situation differs for separation.
\end{remark} We already have a generic efficient reduction from \ensuremath{\mathcal{C}}\xspace-separation for monoids to \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace (see Lemma~\ref{lem:easyreduc}). Here, we investigate the opposite direction: given some class \ensuremath{\mathcal{C}}\xspace, is it possible to \emph{efficiently} reduce \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace to \ensuremath{\mathcal{C}}\xspace-separation for monoids? As far as we know, there exists no such reduction which is generic to all classes \ensuremath{\mathcal{C}}\xspace. \begin{remark} There exists an \emph{inefficient} generic reduction from separation for \ensuremath{\mathsf{NFAs}}\xspace to separation for monoids. Given as input two \ensuremath{\mathsf{NFAs}}\xspace $\ensuremath{\mathcal{A}}\xspace_1,\ensuremath{\mathcal{A}}\xspace_2$, one may compute monoid morphisms recognizing $L(\ensuremath{\mathcal{A}}\xspace_1)$ and $L(\ensuremath{\mathcal{A}}\xspace_2)$. This approach is not satisfying as it involves an exponential blow-up: we end up with monoids $M_i$ of size $2^{|Q_i|^2}$ where $Q_i$ is the set of states of $\ensuremath{\mathcal{A}}\xspace_i$. \end{remark} Here, we present a set of conditions applying to a pair of classes $(\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{D}}\xspace)$. When they are satisfied, there exists an efficient reduction from \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace to \ensuremath{\mathcal{D}}\xspace-separation for monoids. By themselves, these conditions are abstract. However, we highlight two concrete applications. First, for every positive variety\xspace \ensuremath{\mathcal{C}}\xspace, the pair $(\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{C}}\xspace)$ satisfies the conditions. Second, for every finitely based concatenation hierarchy of basis \ensuremath{\mathcal{C}}\xspace, there exists another finite basis \ensuremath{\mathcal{D}}\xspace such that for every $n \in \frac12 \ensuremath{\mathbb{N}}\xspace$, the pair $(\ensuremath{\mathcal{C}}\xspace[n],\ensuremath{\mathcal{D}}\xspace[n])$ satisfies the conditions. We first introduce the notions we need to present the reduction and the conditions required to apply it. Then, we state the reduction itself and its applications. \subsection{Generic theorem} We fix a special two letter alphabet $\ensuremath{\mathbbm{E}}\xspace = \{0,1\}$. For the sake of improved readability, we abuse terminology and assume that when considering an arbitrary alphabet $A$, it always has empty intersection with \ensuremath{\mathbbm{E}}\xspace. This is harmless as we may work up to bijective renaming. We exhibit conditions applying to a pair of classes $(\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{D}}\xspace)$. Then, we prove that they imply the existence of an efficient reduction from \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace to \ensuremath{\mathcal{D}}\xspace-separation for monoids. This reduction is based on a construction which takes as input a \ensuremath{\mathsf{NFA}}\xspace \ensuremath{\mathcal{A}}\xspace (over some arbitrary alphabet $A$) and builds a modified version of the language $L(\ensuremath{\mathcal{A}}\xspace)$ (over $A \cup \ensuremath{\mathbbm{E}}\xspace$) which is recognized by a ``small'' monoid.
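Before stating these conditions, let us make the blow-up mentioned in the above remark concrete. The following sketch (in Python; representing a \ensuremath{\mathsf{NFA}}\xspace by its set of transition triples and the function name are our own illustrative choices) computes the classical transition monoid underlying the inefficient reduction: each word acts on the set of states as a binary relation, $L(\ensuremath{\mathcal{A}}\xspace)$ is recognized by the morphism mapping a word to its relation (with accepting set the relations intersecting $I \times F$), and the closure may indeed contain up to $2^{|Q|^2}$ relations.
\begin{verbatim}
def transition_monoid(states, delta):
    # delta: set of transitions (p, a, q). Each word w acts on the states as
    # the relation {(p, q) : some run from p to q is labeled by w}; these
    # relations form a monoid which may contain up to 2^(|Q|^2) elements.
    def compose(r1, r2):
        return frozenset((p, q) for (p, m1) in r1 for (m2, q) in r2 if m1 == m2)

    letters = {a for (_, a, _) in delta}
    image = {a: frozenset((p, q) for (p, b, q) in delta if b == a)
             for a in letters}
    monoid = {frozenset((p, p) for p in states)} | set(image.values())
    while True:  # close under composition; this is where the blow-up happens
        new = {compose(r1, r2) for r1 in monoid for r2 in monoid} - monoid
        if not new:
            return monoid, image  # 'image' extends to the recognizing morphism
        monoid |= new
\end{verbatim}
It is precisely this exponential cost that the construction presented below avoids.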
Our conditions involve two kinds of hypotheses: \begin{enumerate} \item First, we need properties related to inverse image: ``\ensuremath{\mathcal{D}}\xspace must be an extension of \ensuremath{\mathcal{C}}\xspace''. \item The construction is parametrized by an object called ``tagging''. We need an algorithm which builds special taggings (with respect to \ensuremath{\mathcal{D}}\xspace) efficiently. \end{enumerate} We now make these two notions more precise. Let us start with extension. \medskip \noindent {\bf Extensions.} Consider two classes \ensuremath{\mathcal{C}}\xspace and \ensuremath{\mathcal{D}}\xspace. We say that \ensuremath{\mathcal{D}}\xspace is an extension of \ensuremath{\mathcal{C}}\xspace when for every alphabet $A$, the two following conditions hold: \begin{itemize} \item If $\gamma: (A \cup \ensuremath{\mathbbm{E}}\xspace)^* \to A^*$ is the morphism defined by $\gamma(a) = a$ for $a \in A$ and $\gamma(b) = \varepsilon$ for $b \in \ensuremath{\mathbbm{E}}\xspace$, then for every $K \in \ensuremath{\mathcal{C}}\xspace(A)$, we have $\gamma^{-1}(K) \in \ensuremath{\mathcal{D}}\xspace(A \cup \ensuremath{\mathbbm{E}}\xspace)$. \item For every $u \in \ensuremath{\mathbbm{E}}\xspace^*$, if $\lambda_u: A^* \to (A \cup \ensuremath{\mathbbm{E}}\xspace)^*$ is the morphism defined by $\lambda_u(a) = au$ for $a \in A$, then for every $K \in \ensuremath{\mathcal{D}}\xspace(A \cup \ensuremath{\mathbbm{E}}\xspace)$, we have $\lambda_u^{-1}(K) \in \ensuremath{\mathcal{C}}\xspace(A)$. \end{itemize} Positive varieties\xspace give an important example of extension. Since they are closed under inverse image, it is immediate that for every positive variety\xspace \ensuremath{\mathcal{C}}\xspace, \ensuremath{\mathcal{C}}\xspace is an extension of itself. \medskip \noindent {\bf Taggings.} A \emph{tagging} is a pair $P = (\tau: \ensuremath{\mathbbm{E}}\xspace^* \to T,G)$ where $\tau$ is a morphism into a finite monoid and $G \subseteq T$. We call $|G|$ the \emph{rank} of $P$ and $|T|$ its size. Moreover, given some \ensuremath{\mathsf{NFA}}\xspace $\ensuremath{\mathcal{A}}\xspace = (A,Q,\delta,I,F)$, $P$ is \emph{compatible with \ensuremath{\mathcal{A}}\xspace} when the rank $|G|$ is at least $|\delta|$. For our reduction, we shall require special taggings. Consider a class \ensuremath{\mathcal{D}}\xspace and a tagging $P = (\tau: \ensuremath{\mathbbm{E}}\xspace^* \to T,G)$. We say that $P$ \emph{fools} \ensuremath{\mathcal{D}}\xspace when, for every alphabet $A$ and every morphism $\alpha: (A \cup \ensuremath{\mathbbm{E}}\xspace)^* \to M$ into a finite monoid $M$, if all languages recognized by $\alpha$ belong to $\bool{\ensuremath{\mathcal{D}}\xspace}(A \cup \ensuremath{\mathbbm{E}}\xspace)$, then there exists $s \in M$ such that for every $t \in G$, there is a word $w_t \in \ensuremath{\mathbbm{E}}\xspace^*$ which satisfies $\alpha(w_t) = s$ and $\tau(w_t) = t$. Our reduction requires an efficient algorithm for computing taggings which fool the output class \ensuremath{\mathcal{D}}\xspace. Specifically, we say that a class \ensuremath{\mathcal{D}}\xspace is \emph{smooth} when, given as input $k \in \ensuremath{\mathbb{N}}\xspace$, one may compute in \ensuremath{\mathsf{LogSpace}}\xspace (with respect to $k$) a tagging of rank at least $k$ which fools \ensuremath{\mathcal{D}}\xspace. \medskip \noindent {\bf Main theorem.} We may now state our generic reduction theorem. The statement has two variants depending on whether the alphabet is fixed or not.
\begin{theorem} \label{thm:autoreduc} Let $\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{D}}\xspace$ be quotienting lattices\xspace such that \ensuremath{\mathcal{D}}\xspace is smooth and extends \ensuremath{\mathcal{C}}\xspace. Then the two following properties hold: \begin{itemize} \item There is a \ensuremath{\mathsf{LogSpace}}\xspace reduction from \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace to \ensuremath{\mathcal{D}}\xspace-separation for monoids. \item For every fixed alphabet $A$, there is a \ensuremath{\mathsf{LogSpace}}\xspace reduction from $\ensuremath{\mathcal{C}}\xspace(A)$-separation for \ensuremath{\mathsf{NFAs}}\xspace to $\ensuremath{\mathcal{D}}\xspace(A \cup \ensuremath{\mathbbm{E}}\xspace)$-separation for monoids. \end{itemize} \end{theorem} We have two main applications of Theorem~\ref{thm:autoreduc} which we present at the end of the section. Let us first describe the reduction. As we explained, we use a construction building a language recognized by a ``small'' monoid out of an input \ensuremath{\mathsf{NFA}}\xspace and a compatible tagging. \medskip Consider a \ensuremath{\mathsf{NFA}}\xspace $\ensuremath{\mathcal{A}}\xspace = (A,Q,\delta,I,F)$ and let $P = (\tau: \ensuremath{\mathbbm{E}}\xspace^* \to T,G)$ be a compatible tagging (i.e. $|\delta| \leq |G|$). We associate a new language $L[\ensuremath{\mathcal{A}}\xspace,P]$ over the alphabet $A \cup \ensuremath{\mathbbm{E}}\xspace$ and show that one may efficiently compute a recognizing monoid whose size is polynomial with respect to $|Q|$ and the rank of $P$ (i.e., $|G|$). The construction involves two steps. We first define an intermediary language $K[\ensuremath{\mathcal{A}}\xspace,P]$ over the alphabet $A \times T$ and then define $L[\ensuremath{\mathcal{A}}\xspace,P]$ from it. We define $K[\ensuremath{\mathcal{A}}\xspace,P] \subseteq (A \times T)^*$ as the language recognized by a new \ensuremath{\mathsf{NFA}}\xspace $\ensuremath{\mathcal{A}}\xspace[P]$ which is built by relabeling the transitions of \ensuremath{\mathcal{A}}\xspace. Note that the definition of $\ensuremath{\mathcal{A}}\xspace[P]$ depends on arbitrary linear orders on $G$ and $\delta$. We let $\ensuremath{\mathcal{A}}\xspace[P] = (A \times T,Q,\delta[P],I,F)$ where $\delta[P]$ is obtained by relabeling the transitions of \ensuremath{\mathcal{A}}\xspace as follows. Given $i \leq |\delta|$, if $(q_i,a_i,r_i) \in \delta$ is the $i$-th transition of \ensuremath{\mathcal{A}}\xspace, we replace it with the transition $(q_i,(a_i,t_i),r_i) \in \delta[P]$ where $t_i \in G$ is the $i$-th element of $G$ (recall that $|\delta| \leq |G|$ by hypothesis). \begin{remark} A key property of $\ensuremath{\mathcal{A}}\xspace[P]$ is that, by definition, all transitions are labeled by distinct letters in $A \times T$. This implies that $K[\ensuremath{\mathcal{A}}\xspace,P] = L(\ensuremath{\mathcal{A}}\xspace[P])$ is recognized by a monoid of size at most $|Q|^2 + 2$. \end{remark} We may now define the language $L[\ensuremath{\mathcal{A}}\xspace,P] \subseteq (A \cup \ensuremath{\mathbbm{E}}\xspace)^*$. Observe that we have a natural map $\mu: (A\ensuremath{\mathbbm{E}}\xspace^*)^* \to (A \times T)^*$. Indeed, consider $w \in (A\ensuremath{\mathbbm{E}}\xspace^*)^*$.
Since $A \cap \ensuremath{\mathbbm{E}}\xspace = \emptyset$ (recall that this is a global assumption), it is immediate that $w$ admits a \emph{unique} decomposition $w = a_1w_1 \cdots a_n w_n$ with $a_1,\dots,a_n \in A$ and $w_1,\dots,w_n \in \ensuremath{\mathbbm{E}}\xspace^*$. Hence, we may define $\mu(w) = (a_1,\tau(w_1)) \cdots (a_n,\tau(w_n)) \in (A \times T)^*$. Finally, we define \[ L[\ensuremath{\mathcal{A}}\xspace,P] = \ensuremath{\mathbbm{E}}\xspace^* \cdot \mu^{-1}(K[\ensuremath{\mathcal{A}}\xspace,P]) \subseteq (A \cup \ensuremath{\mathbbm{E}}\xspace)^* \] We may now state the two key properties of $L[\ensuremath{\mathcal{A}}\xspace,P]$ upon which Theorem~\ref{thm:autoreduc} is based. It is recognized by a small monoid, and the construction is connected to the separation problem. \begin{proposition} \label{prop:variautored1} Given a \ensuremath{\mathsf{NFA}}\xspace $\ensuremath{\mathcal{A}}\xspace = (A,Q,\delta,I,F)$ and a compatible tagging $P$ of rank $n$, one may compute in \ensuremath{\mathsf{LogSpace}}\xspace a monoid morphism $\alpha: (A \cup \ensuremath{\mathbbm{E}}\xspace)^* \to M$ recognizing $L[\ensuremath{\mathcal{A}}\xspace,P]$ and such that $|M| \leq n + |A| \times n^2 \times (|Q|^2+2)$. \end{proposition} \begin{proposition} \label{prop:variautored2} Let $\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{D}}\xspace$ be quotienting lattices\xspace such that \ensuremath{\mathcal{D}}\xspace extends \ensuremath{\mathcal{C}}\xspace. Consider two \ensuremath{\mathsf{NFAs}}\xspace $\ensuremath{\mathcal{A}}\xspace_1$ and $\ensuremath{\mathcal{A}}\xspace_2$ over some alphabet $A$ and let $P$ be a compatible tagging that fools \ensuremath{\mathcal{D}}\xspace. Then, $L(\ensuremath{\mathcal{A}}\xspace_1)$ is $\ensuremath{\mathcal{C}}\xspace(A)$-separable from $L(\ensuremath{\mathcal{A}}\xspace_2)$ if and only if $L[\ensuremath{\mathcal{A}}\xspace_1,P]$ is $\ensuremath{\mathcal{D}}\xspace(A \cup \ensuremath{\mathbbm{E}}\xspace)$-separable from $L[\ensuremath{\mathcal{A}}\xspace_2,P]$. \end{proposition} Let us explain why these two propositions imply Theorem~\ref{thm:autoreduc}. Let $\ensuremath{\mathcal{C}}\xspace,\ensuremath{\mathcal{D}}\xspace$ be quotienting lattices\xspace such that \ensuremath{\mathcal{D}}\xspace is smooth and extends \ensuremath{\mathcal{C}}\xspace. We show that the second assertion in the theorem holds (the first one is proved similarly). Consider two \ensuremath{\mathsf{NFAs}}\xspace $\ensuremath{\mathcal{A}}\xspace_j = (A,Q_j,\delta_j,I_j,F_j)$ for $j = 1,2$. We let $k = \max(|\delta_1|,|\delta_2|)$. Since \ensuremath{\mathcal{D}}\xspace is smooth, we may compute (in \ensuremath{\mathsf{LogSpace}}\xspace) a tagging $P = (\tau: \ensuremath{\mathbbm{E}}\xspace^* \to T,G)$ of rank $|G| \geq k$ which fools \ensuremath{\mathcal{D}}\xspace. Then, we may use Proposition~\ref{prop:variautored1} to compute (in \ensuremath{\mathsf{LogSpace}}\xspace) monoid morphisms recognizing $L[\ensuremath{\mathcal{A}}\xspace_1,P]$ and $L[\ensuremath{\mathcal{A}}\xspace_2,P]$. Finally, by Proposition~\ref{prop:variautored2}, $L(\ensuremath{\mathcal{A}}\xspace_1)$ is $\ensuremath{\mathcal{C}}\xspace(A)$-separable from $L(\ensuremath{\mathcal{A}}\xspace_2)$ if and only if $L[\ensuremath{\mathcal{A}}\xspace_1,P]$ is $\ensuremath{\mathcal{D}}\xspace(A \cup \ensuremath{\mathbbm{E}}\xspace)$-separable from $L[\ensuremath{\mathcal{A}}\xspace_2,P]$. Altogether, this construction is a \ensuremath{\mathsf{LogSpace}}\xspace reduction to \ensuremath{\mathcal{D}}\xspace-separation for monoids which concludes the proof.
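To make the construction concrete, here is a minimal sketch of the relabeling step producing $\ensuremath{\mathcal{A}}\xspace[P]$ (in Python; representing a \ensuremath{\mathsf{NFA}}\xspace as a tuple of its components is our own illustrative choice). It fixes arbitrary linear orders on $\delta$ and $G$ and pairs the $i$-th transition with the $i$-th tag; since the tags are pairwise distinct, so are the new letters, which is the key property noted in the remark above. The computation of the recognizing monoid of size at most $|Q|^2+2$ and the passage to $L[\ensuremath{\mathcal{A}}\xspace,P]$ are not shown.
\begin{verbatim}
def relabel(nfa, G):
    # Build A[P] = (A x T, Q, delta[P], I, F): the i-th transition
    # (q_i, a_i, r_i) is relabeled into (q_i, (a_i, t_i), r_i) where t_i is
    # the i-th element of G. Compatibility (|delta| <= |G|) is required, and
    # since the tags t_i are pairwise distinct, so are the new letters.
    A, Q, delta, I, F = nfa
    delta, tags = sorted(delta), sorted(G)   # arbitrary linear orders
    assert len(delta) <= len(tags)
    new_delta = {(q, (a, t), r) for ((q, a, r), t) in zip(delta, tags)}
    letters = {(a, t) for a in A for t in tags}  # letters from A x T used here
    return (letters, Q, new_delta, I, F)
\end{verbatim}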
\subsection{Applications} We now present the two main applications of Theorem~\ref{thm:autoreduc}. We start with the simplest one: positive varieties\xspace. Indeed, we have the following lemma. \begin{lemma} \label{lem:extendeasy} Let \ensuremath{\mathcal{C}}\xspace be a positive variety\xspace. Then, \ensuremath{\mathcal{C}}\xspace is an extension of itself. Moreover, if $\bool{\ensuremath{\mathcal{C}}\xspace} \neq \ensuremath{\textup{REG}}\xspace$, then \ensuremath{\mathcal{C}}\xspace is smooth. \end{lemma} That a positive variety\xspace is an extension of itself is immediate (one uses closure under inverse image). The difficulty is to prove smoothness. We may now combine Theorem~\ref{thm:autoreduc} with Lemma~\ref{lem:extendeasy} to get the following corollary. \begin{corollary} \label{cor:autoreducvari} Let \ensuremath{\mathcal{C}}\xspace be a positive variety\xspace such that $\bool{\ensuremath{\mathcal{C}}\xspace} \neq \ensuremath{\textup{REG}}\xspace$. There exists a \ensuremath{\mathsf{LogSpace}}\xspace reduction from \ensuremath{\mathcal{C}}\xspace-separation for \ensuremath{\mathsf{NFAs}}\xspace to \ensuremath{\mathcal{C}}\xspace-separation for monoids. \end{corollary} Corollary~\ref{cor:autoreducvari} implies that for any positive variety\xspace \ensuremath{\mathcal{C}}\xspace, the complexity of \ensuremath{\mathcal{C}}\xspace-separation is the same for monoids and \ensuremath{\mathsf{NFAs}}\xspace. We illustrate this with an example: the \emph{star-free languages}. \begin{example} Consider the star-free languages (\ensuremath{\textup{SF}}\xspace): for every alphabet $A$, $\ensuremath{\textup{SF}}\xspace(A)$ is the least set of languages containing all singletons $\{a\}$ for $a\in A$ and closed under Boolean operations and concatenation. It is folklore and simple to verify that \ensuremath{\textup{SF}}\xspace is a variety\xspace. It is known that \ensuremath{\textup{SF}}\xspace-membership is in \ensuremath{\mathsf{NL}}\xspace for monoids (this is immediate from Sch\"utzenberger's theorem~\cite{sfo}). On the other hand, \ensuremath{\textup{SF}}\xspace-membership is \ensuremath{\mathsf{PSpace}}\xspace-complete for \ensuremath{\mathsf{NFAs}}\xspace. In fact, it is shown in~\cite{chofo} that \ensuremath{\mathsf{PSpace}}\xspace-completeness still holds for \emph{deterministic} finite automata (\ensuremath{\mathsf{DFAs}}\xspace). For \ensuremath{\textup{SF}}\xspace-separation, we may combine Corollary~\ref{cor:autoreducvari} with existing results to obtain that the problem is in \ensuremath{\mathsf{EXPTime}}\xspace and \ensuremath{\mathsf{PSpace}}\xspace-hard for both \ensuremath{\mathsf{NFAs}}\xspace and monoids. Indeed, the \ensuremath{\mathsf{EXPTime}}\xspace upper bound is proved in~\cite{pzfoj} for monoids and we may lift it to \ensuremath{\mathsf{NFAs}}\xspace with Corollary~\ref{cor:autoreducvari}. Finally, the \ensuremath{\mathsf{PSpace}}\xspace lower bound follows from~\cite{chofo}: \ensuremath{\textup{SF}}\xspace-membership is \ensuremath{\mathsf{PSpace}}\xspace-hard for \ensuremath{\mathsf{DFAs}}\xspace. This yields that \ensuremath{\textup{SF}}\xspace-separation is \ensuremath{\mathsf{PSpace}}\xspace-hard for both \ensuremath{\mathsf{DFAs}}\xspace and \ensuremath{\mathsf{NFAs}}\xspace (by reduction from membership to separation which is easily achieved in \ensuremath{\mathsf{LogSpace}}\xspace when starting from a \ensuremath{\mathsf{DFA}}\xspace).
Using Corollary~\ref{cor:autoreducvari} again, we get that \ensuremath{\textup{SF}}\xspace-separation is \ensuremath{\mathsf{PSpace}}\xspace-hard for monoids as well. \qed \end{example} We turn to our second application: finitely based concatenation hierarchies. Consider a finite quotienting Boolean algebra\xspace \ensuremath{\mathcal{C}}\xspace. We associate another finite quotienting Boolean algebra\xspace $\ensuremath{\mathcal{C}}\xspace_\ensuremath{\mathbbm{E}}\xspace$ which we only define for alphabets of the form $A \cup \ensuremath{\mathbbm{E}}\xspace$ (this is harmless: $\ensuremath{\mathcal{C}}\xspace_\ensuremath{\mathbbm{E}}\xspace$ is used as the output class of our reduction). Let $A$ be an alphabet and consider the morphism $\gamma: (A \cup \ensuremath{\mathbbm{E}}\xspace)^* \to A^*$ defined by $\gamma(a) = a$ for $a \in A$ and $\gamma(0) = \gamma(1) = \varepsilon$. We define, \[ \ensuremath{\mathcal{C}}\xspace_{\ensuremath{\mathbbm{E}}\xspace}(A \cup \ensuremath{\mathbbm{E}}\xspace) = \{\gamma^{-1}(L) \mid L \in \ensuremath{\mathcal{C}}\xspace(A)\} \] It is straightforward to verify that $\ensuremath{\mathcal{C}}\xspace_{\ensuremath{\mathbbm{E}}\xspace}$ remains a finite quotienting Boolean algebra\xspace. Moreover, we have the following lemma. \begin{lemma} \label{lem:extendeasy2} Let \ensuremath{\mathcal{C}}\xspace be a finite quotienting Boolean algebra\xspace. For every $n \in \frac12 \ensuremath{\mathbb{N}}\xspace$, $\ensuremath{\mathcal{C}}\xspace_{\ensuremath{\mathbbm{E}}\xspace}[n]$ is smooth and an extension of $\ensuremath{\mathcal{C}}\xspace[n]$. \end{lemma} In view of Theorem~\ref{thm:autoreduc}, we get the following corollary which provides a generic reduction for levels within finitely based hierarchies. \begin{corollary} \label{cor:autoreduc} Let \ensuremath{\mathcal{C}}\xspace be a finite basis and $n \in \frac12 \ensuremath{\mathbb{N}}\xspace$. There exists a \ensuremath{\mathsf{LogSpace}}\xspace reduction from $\ensuremath{\mathcal{C}}\xspace[n]$-separation for \ensuremath{\mathsf{NFAs}}\xspace to $\ensuremath{\mathcal{C}}\xspace_{\ensuremath{\mathbbm{E}}\xspace}[n]$-separation for monoids. \end{corollary} \section{Generic upper bounds for low levels in finitely based hierarchies} \label{sec:fixalph} In this section, we present generic complexity results for the fixed alphabet separation problem associated to the lower levels in finitely based concatenation hierarchies. More precisely, we show that for every finite basis \ensuremath{\mathcal{C}}\xspace and every alphabet $A$, $\ensuremath{\mathcal{C}}\xspace[1/2](A)$- and $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation are respectively in \ensuremath{\mathsf{NL}}\xspace and in \ensuremath{\mathsf{P}}\xspace. These upper bounds hold for both monoids and \ensuremath{\mathsf{NFAs}}\xspace: we prove them for monoids and lift the results to \ensuremath{\mathsf{NFAs}}\xspace using the reduction of Corollary~\ref{cor:autoreduc}. \begin{remark} We do \textbf{not} present new proofs for the decidability of $\ensuremath{\mathcal{C}}\xspace[1/2]$- and $\ensuremath{\mathcal{C}}\xspace[1]$-separation when \ensuremath{\mathcal{C}}\xspace is a finite quotienting Boolean algebra\xspace. These are difficult results which are proved in~\cite{pzbpol}. Instead, we recall the (inefficient) procedures which were originally presented in~\cite{pzbpol} and carefully analyze and optimize them in order to get the above upper bounds. 
\end{remark} For the sake of avoiding clutter, we fix an arbitrary finite quotienting Boolean algebra\xspace \ensuremath{\mathcal{C}}\xspace and an alphabet $A$ for the section. \subsection{Key sub-procedure} The algorithms for $\ensuremath{\mathcal{C}}\xspace[1/2](A)$- and $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation presented in~\cite{pzbpol} are based on a common sub-procedure. This remains true for the improved algorithms which we present in the paper. In fact, this sub-procedure is exactly what we improve to get the announced upper complexity bounds. We detail this point here. Note that the algorithms require considering special monoid morphisms (called ``\ensuremath{\mathcal{C}}\xspace-compatible'') as input. We first define this notion. \medskip \noindent {\bf \ensuremath{\mathcal{C}}\xspace-compatible morphisms.} Since \ensuremath{\mathcal{C}}\xspace is finite, one associates with it a classical equivalence $\sim_\ensuremath{\mathcal{C}}\xspace$ defined on $A^*$. Given $u,v \in A^*$, we write $u \sim_\ensuremath{\mathcal{C}}\xspace v$ if and only if $u \in L \ \Leftrightarrow\ v \in L$ for all $L \in \ensuremath{\mathcal{C}}\xspace(A)$. Given $w \in A^*$, we write $\ctype{w} \subseteq A^*$ for its $\sim_\ensuremath{\mathcal{C}}\xspace$-class. Since \ensuremath{\mathcal{C}}\xspace is a finite quotienting Boolean algebra\xspace, $\sim_\ensuremath{\mathcal{C}}\xspace$ is a congruence of finite index for concatenation (see~\cite{PZ:generic_csr_tocs:18} for a proof). Hence, the quotient ${A^*}/{\sim_\ensuremath{\mathcal{C}}\xspace}$ is a monoid and the map $w \mapsto \ctype{w}$ a morphism. Consider a morphism $\alpha: A^* \to M$ into a finite monoid $M$. We say that $\alpha$ is \ensuremath{\mathcal{C}}\xspace-compatible when there exists a \emph{monoid morphism} $s \mapsto \ctype{s}$ from $M$ to ${A^*}/{\sim_\ensuremath{\mathcal{C}}\xspace}$ such that for every $w \in A^*$, we have $\ctype{w} = \ctype{\alpha(w)}$. Intuitively, the definition means that $\alpha$ ``computes'' the $\sim_\ensuremath{\mathcal{C}}\xspace$-classes of words in $A^*$. The following lemma is used to compute \ensuremath{\mathcal{C}}\xspace-compatible morphisms (note that the \ensuremath{\mathsf{LogSpace}}\xspace bound holds because \ensuremath{\mathcal{C}}\xspace and $A$ are fixed). \begin{lemma} \label{lem:compat} Given two morphisms recognizing regular languages $L_1,L_2 \subseteq A^*$ as input, one may compute in \ensuremath{\mathsf{LogSpace}}\xspace a \ensuremath{\mathcal{C}}\xspace-compatible morphism which recognizes both $L_1$ and $L_2$. \end{lemma} In view of Lemma~\ref{lem:compat}, we shall assume in this section without loss of generality that our input in separation for monoids is a single \ensuremath{\mathcal{C}}\xspace-compatible morphism recognizing the two languages that need to be separated. \medskip \noindent {\bf Sub-procedure.} Consider two \ensuremath{\mathcal{C}}\xspace-compatible morphisms $\alpha: A^* \to M$ and $\beta: A^* \to N$. We say that a subset of $N$ is \emph{good} (for $\beta$) when it contains $\beta(A^*)$ and is closed under multiplication. With every good subset $S$ of $N$, we associate a subset of $M \times 2^N$. We then consider the problem of deciding whether specific elements belong to it (this is the sub-procedure used in the separation algorithms). \begin{remark} The set $M \times 2^N$ is clearly a monoid for the componentwise multiplication. Hence we may multiply its elements and speak of idempotents in $M \times 2^N$.
\end{remark} An \emph{$(\alpha,\beta,S)$-tree} is an unranked ordered tree. Each node $x$ must carry a label $lab(x) \in M \times 2^N$ and there are three possible kinds of nodes: \begin{itemize} \item {\bf Leaves}: $x$ has no children and $lab(x) = (\alpha(w),\{\beta(w)\})$ for some $w \in A^*$. \item {\bf Binary}: $x$ has exactly two children $x_1$ and $x_2$. Moreover, if $(s_1,T_1) = lab(x_1)$ and $(s_2,T_2) = lab(x_2)$, then $lab(x) = (s_1s_2,T)$ with $T \subseteq T_1T_2$. \item {\bf $S$-Operation}: $x$ has a unique child $y$. Moreover, the following must be satisfied: \begin{enumerate} \item The label $lab(y)$ is an idempotent $(e,E) \in M \times 2^N$. \item $lab(x) = (e,T)$ with $T \subseteq E \cdot \{t \in S \mid \ctype{t} = \ctype{e}\} \cdot E$. \end{enumerate} \end{itemize} We are interested in deciding whether elements in $M \times 2^N$ are the root label of some computation tree. Observe that computing all such elements is easily achieved with a least fixpoint procedure: one starts from the set of leaf labels and saturates this set with operations corresponding to the two kinds of inner nodes. This is the approach used in~\cite{pzbpol} (actually, the set of all root labels is directly defined as a least fixpoint and $(\alpha,\beta,S)$-trees are not considered). However, this is costly since the computed set may have exponential size with respect to $|N|$. Hence, this approach is not suitable for getting efficient algorithms. Fortunately, solving $\ensuremath{\mathcal{C}}\xspace[1/2](A)$- and $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation does not require having the whole set of possible root labels in hand. Instead, we shall only need to consider the elements $(s,T) \in M \times 2^N$ which are the root label of some tree \textbf{and} such that $T$ is a \textbf{singleton set}. It turns out that these specific elements can be computed efficiently. We state this in the next theorem which is the key technical result and main contribution of this section. \begin{theorem} \label{thm:efficient} Consider two \ensuremath{\mathcal{C}}\xspace-compatible morphisms $\alpha: A^* \to M$ and $\beta: A^* \to N$ and a good subset $S \subseteq N$. Given $s \in M$ and $t \in N$, one may test in \ensuremath{\mathsf{NL}}\xspace with respect to $|M|$ and $|N|$ whether there exists an $(\alpha,\beta,S)$-tree with root label $(s,\{t\})$. \end{theorem} Theorem~\ref{thm:efficient} is proved in the appendix. We only present a brief outline which highlights two propositions about $(\alpha,\beta,S)$-trees upon which the theorem is based. We first define a complexity measure for $(\alpha,\beta,S)$-trees. Consider two \ensuremath{\mathcal{C}}\xspace-compatible morphisms $\alpha: A^* \to M$ and $\beta: A^* \to N$ as well as a good subset $S \subseteq N$. Given an $(\alpha,\beta,S)$-tree \ensuremath{\mathbbm{T}}\xspace, we define the \emph{operational height of \ensuremath{\mathbbm{T}}\xspace} as the greatest number $h \in \ensuremath{\mathbb{N}}\xspace$ such that \ensuremath{\mathbbm{T}}\xspace contains a branch with $h$ $S$-operation nodes. Our first result is a weaker version of Theorem~\ref{thm:efficient}. It considers the special case when we restrict ourselves to $(\alpha,\beta,S)$-trees whose operational heights are bounded by a constant. \begin{proposition} \label{prop:computimp} Let $h \in \ensuremath{\mathbb{N}}\xspace$ be a constant and consider two \ensuremath{\mathcal{C}}\xspace-compatible morphisms $\alpha: A^* \to M$ and $\beta: A^* \to N$ and a good subset $S \subseteq N$.
Given $s \in M$ and $t \in N$, one may test in \ensuremath{\mathsf{NL}}\xspace with respect to $|M|$ and $|N|$ whether there exists an $(\alpha,\beta,S)$-tree of operational height at most $h$ and with root label $(s,\{t\})$. \end{proposition} Our second result complements the first one: in Theorem~\ref{thm:efficient}, it suffices to consider $(\alpha,\beta,S)$-trees whose operational heights are bounded by a constant (depending only on the class \ensuremath{\mathcal{C}}\xspace and the alphabet $A$ which are fixed here). Let us first define this constant. Given a finite monoid $M$, we define the \ensuremath{\mathcal{J}}\xspace-depth of $M$ as the greatest number $h \in \ensuremath{\mathbb{N}}\xspace$ such that one may find $h$ pairwise distinct elements $s_1,\dots,s_h \in M$ such that for every $i < h$, $s_{i+1} = xs_iy$ for some $x,y \in M$. \begin{remark} The term ``\ensuremath{\mathcal{J}}\xspace-depth'' comes from Green's relations, which are defined on any monoid~\cite{green}. We do not discuss this point here. \end{remark} Recall that the quotient set ${A^*}/{\sim_\ensuremath{\mathcal{C}}\xspace}$ is a monoid. Consequently, it has a \ensuremath{\mathcal{J}}\xspace-depth. Our second result is as follows. \begin{proposition} \label{prop:opbound} Let $h \in \ensuremath{\mathbb{N}}\xspace$ be the \ensuremath{\mathcal{J}}\xspace-depth of ${A^*}/{\sim_\ensuremath{\mathcal{C}}\xspace}$. Consider two \ensuremath{\mathcal{C}}\xspace-compatible morphisms $\alpha: A^* \to M$ and $\beta: A^* \to N$, and a good subset $S \subseteq N$. Then, for every $(s,T) \in M \times 2^N$, the following properties are equivalent: \begin{enumerate} \item $(s,T)$ is the root label of some $(\alpha,\beta,S)$-tree. \item $(s,T)$ is the root label of some $(\alpha,\beta,S)$-tree whose operational height is at most $h$. \end{enumerate} \end{proposition} In view of Proposition~\ref{prop:opbound}, Theorem~\ref{thm:efficient} is an immediate consequence of Proposition~\ref{prop:computimp} applied in the special case when $h$ is the $\ensuremath{\mathcal{J}}\xspace$-depth of ${A^*}/{\sim_\ensuremath{\mathcal{C}}\xspace}$. \subsection{Applications} We now combine Theorem~\ref{thm:efficient} with the results of~\cite{pzbpol} to get the upper complexity bounds for $\ensuremath{\mathcal{C}}\xspace[1/2](A)$- and $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation that we announced at the beginning of the section. \medskip \noindent {\bf Application to $\ensuremath{\mathcal{C}}\xspace[1/2]$.} Let us first recall the connection between $\ensuremath{\mathcal{C}}\xspace[1/2]$-separation and $(\alpha,\beta,S)$-trees. The result is taken from~\cite{pzbpol}. \begin{theorem}[\cite{pzbpol}] \label{thm:poltheo} Let $\alpha: A^* \to M$ be a \ensuremath{\mathcal{C}}\xspace-compatible morphism and $F_0,F_1 \subseteq M$. Moreover, let $S = \alpha(A^*) \subseteq M$. The two following properties are equivalent: \begin{itemize} \item $\alpha^{-1}(F_0)$ is $\ensuremath{\mathcal{C}}\xspace[1/2]$-separable from $\alpha^{-1}(F_1)$. \item for every $s_0 \in F_0$ and $s_1 \in F_1$, there exists no $(\alpha,\alpha,S)$-tree with root label $(s_0,\{s_1\})$. \end{itemize} \end{theorem} By Theorem~\ref{thm:efficient} and the Immerman–Szelepcsényi theorem (which states that $\ensuremath{\mathsf{NL}}\xspace = co\text{-}\ensuremath{\mathsf{NL}}\xspace$), it is straightforward to verify that checking whether the second assertion in Theorem~\ref{thm:poltheo} holds can be done in \ensuremath{\mathsf{NL}}\xspace with respect to $|M|$.
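For intuition, the following brute-force sketch (in Python; the multiplication table as a dictionary and the map assigning its $\sim_\ensuremath{\mathcal{C}}\xspace$-class to each element of $M$ are our own illustrative representation) implements the least fixpoint computation of root labels described above, instantiated with $\beta = \alpha$ and $S = \alpha(A^*)$ as in Theorem~\ref{thm:poltheo}. Unlike the \ensuremath{\mathsf{NL}}\xspace procedure of Theorem~\ref{thm:efficient}, it enumerates subsets and is therefore exponential in $|M|$; it is only meant as an executable rendering of the definitions.
\begin{verbatim}
from itertools import combinations

def nonempty_subsets(T):
    # All nonempty subsets of T (elements are assumed sortable, e.g. integers;
    # empty labels are useless for the separability criterion below).
    T = sorted(T)
    return [frozenset(c) for r in range(1, len(T) + 1)
            for c in combinations(T, r)]

def root_labels(mult, unit, alpha, ctype, S):
    # Least fixpoint of all root labels (s, T) of (alpha, alpha, S)-trees.
    # mult: dict (x, y) -> xy, unit: neutral element of M, alpha: dict
    # letter -> M (assumed C-compatible), ctype: dict M -> ~C-class,
    # S: good subset of M. Exponential-time reference procedure.
    labels = {(unit, frozenset({unit}))}
    labels |= {(alpha[a], frozenset({alpha[a]})) for a in alpha}
    # Leaves (alpha(w), {alpha(w)}) for longer words arise via binary nodes.
    while True:
        new = set()
        for (s1, T1) in labels:          # binary nodes: (s1 s2, T), T <= T1 T2
            for (s2, T2) in labels:
                prod = frozenset(mult[(t1, t2)] for t1 in T1 for t2 in T2)
                new |= {(mult[(s1, s2)], T) for T in nonempty_subsets(prod)}
        for (e, E) in labels:            # S-operation nodes on idempotents
            EE = frozenset(mult[(x, y)] for x in E for y in E)
            if mult[(e, e)] == e and EE == E:
                mid = {t for t in S if ctype[t] == ctype[e]}
                ops = frozenset(mult[(mult[(x, t)], y)]
                                for x in E for t in mid for y in E)
                new |= {(e, T) for T in nonempty_subsets(ops)}
        if new <= labels:
            return labels
        labels |= new

def half_level_separable(mult, unit, alpha, ctype, F0, F1):
    # Criterion of the theorem above: alpha^{-1}(F0) is C[1/2](A)-separable
    # from alpha^{-1}(F1) iff no tree has root label (s0, {s1}) with s0 in F0
    # and s1 in F1, where S = alpha(A*).
    S = {unit} | set(alpha.values())     # close the letter images: alpha(A*)
    while True:
        more = {mult[(x, y)] for x in S for y in S} - S
        if not more:
            break
        S |= more
    labels = root_labels(mult, unit, alpha, ctype, S)
    return all((s0, frozenset({s1})) not in labels
               for s0 in F0 for s1 in F1)
\end{verbatim}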
Therefore, Theorem~\ref{thm:poltheo} implies that $\ensuremath{\mathcal{C}}\xspace[1/2](A)$-separation for monoids is in \ensuremath{\mathsf{NL}}\xspace. This is lifted to \ensuremath{\mathsf{NFAs}}\xspace using Corollary~\ref{cor:autoreduc}. \begin{corollary} \label{cor:poltheo} For every finite basis \ensuremath{\mathcal{C}}\xspace and alphabet $A$, $\ensuremath{\mathcal{C}}\xspace[1/2](A)$-separation is in \ensuremath{\mathsf{NL}}\xspace for both \ensuremath{\mathsf{NFAs}}\xspace and monoids. \end{corollary} \medskip \noindent {\bf Application to $\ensuremath{\mathcal{C}}\xspace[1]$.} We start by recalling the $\ensuremath{\mathcal{C}}\xspace[1]$-separation algorithm which is again taken from~\cite{pzbpol}. In this case, we consider an auxiliary sub-procedure which relies on $(\alpha,\beta,S)$-trees. Consider a \ensuremath{\mathcal{C}}\xspace-compatible morphism $\alpha: A^* \to M$. Observe that $M^2$ is a monoid for the componentwise multiplication. We let $\beta: A^* \to M^2$ be the morphism defined by $\beta(w) = (\alpha(w),\alpha(w))$ for every $w \in A^*$. Clearly, $\beta$ is \ensuremath{\mathcal{C}}\xspace-compatible: given $(s,t) \in M^2$, it suffices to define $\ctype{(s,t)} = \ctype{s}$. Using $(\alpha,\beta,S)$-trees, we define a procedure $S \mapsto Red(\alpha,S)$ which takes as input a good subset $S \subseteq M^2$ (for $\beta$) and outputs a subset $Red(\alpha,S) \subseteq S$. \[ Red(\alpha,S) = \{(s,t) \in S \mid \text{$(s,\{(t,s)\}) \in M \times 2^{M^2}$ is the root label of an $(\alpha,\beta,S)$-tree}\} \subseteq S \] It is straightforward to verify that $Red(\alpha,S)$ remains a good subset of $M^2$. We now have the following theorem which is taken from~\cite{pzbpol}. \begin{theorem}[\cite{pzbpol}] \label{thm:bpoltheo} Let $\alpha: A^* \to M$ be a \ensuremath{\mathcal{C}}\xspace-compatible morphism into a finite monoid and $F_0,F_1 \subseteq M$. Moreover, let $S \subseteq M^2$ be the greatest subset of $\alpha(A^*) \times \alpha(A^*)$ such that $Red(\alpha,S) = S$. Then, the two following properties are equivalent: \begin{itemize} \item $\alpha^{-1}(F_0)$ is \bool{\pol{\ensuremath{\mathcal{C}}\xspace}}-separable from $\alpha^{-1}(F_1)$. \item for every $s_0 \in F_0$ and $s_1 \in F_1$, $(s_0,s_1) \not\in S$. \end{itemize} \end{theorem} Observe that Theorem~\ref{thm:efficient} implies that, given an arbitrary good subset $S$ of $\alpha(A^*) \times \alpha(A^*)$, one may compute $Red(\alpha,S) \subseteq S$ in \ensuremath{\mathsf{P}}\xspace with respect to $|M|$. Therefore, the greatest subset $S$ of $\alpha(A^*) \times \alpha(A^*)$ such that $Red(\alpha,S) = S$ can be computed in \ensuremath{\mathsf{P}}\xspace using a greatest fixpoint algorithm. Consequently, Theorem~\ref{thm:bpoltheo} yields that $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation for monoids is in \ensuremath{\mathsf{P}}\xspace. Again, this is lifted to \ensuremath{\mathsf{NFAs}}\xspace using Corollary~\ref{cor:autoreduc}. \begin{corollary} \label{cor:bpoltheo} For every finite basis \ensuremath{\mathcal{C}}\xspace and alphabet $A$, $\ensuremath{\mathcal{C}}\xspace[1](A)$-separation is in \ensuremath{\mathsf{P}}\xspace for both \ensuremath{\mathsf{NFAs}}\xspace and monoids. \end{corollary} \section{The Straubing-Thérien hierarchy} \label{sec:classic} In this final section, we consider one of the most famous concatenation hierarchies: the Straubing-Thérien hierarchy~\cite{StrauConcat,TheConcat}. We investigate the complexity of separation for the levels 3/2 and 2. \begin{remark} Here, the alphabet is part of the input.
For fixed alphabets, these levels can be handled with the generic results presented in the previous section (see Theorem~\ref{thm:alphatrick} below). \end{remark} The basis of the Straubing-Thérien hierarchy is the trivial variety\xspace \sttp{0} defined by $\sttp{0}(A) = \{\emptyset,A^*\}$ for every alphabet $A$. It is known and simple to verify (using induction) that all half levels are positive varieties\xspace and all full levels are varieties\xspace. The complexity of separation for the level one (\sttp{1}) has already been given a lot of attention. Indeed, this level corresponds to a famous class which was introduced independently from concatenation hierarchies: the piecewise testable languages~\cite{simon75}. It was shown independently in~\cite{martens} and~\cite{pvzmfcs13} that \sttp{1}-separation is in \ensuremath{\mathsf{P}}\xspace for \ensuremath{\mathsf{NFAs}}\xspace (and therefore for \ensuremath{\mathsf{DFAs}}\xspace and monoids as well). Moreover, it was also shown in~\cite{Masopust18} that the problem is actually \ensuremath{\mathsf{P}}\xspace-complete for \ensuremath{\mathsf{NFAs}}\xspace and \ensuremath{\mathsf{DFAs}}\xspace\footnote{Since \sttp{1} is a variety\xspace, \ensuremath{\mathsf{P}}\xspace-completeness for \sttp{1}-separation can also be lifted to monoids using Corollary~\ref{cor:autoreducvari}.}. Additionally, it is shown in~\cite{martens} that \sttp{{1}/{2}}-separation is in \ensuremath{\mathsf{NL}}\xspace. In the paper, we are mainly interested in the levels \sttp{{3}/{2}} and \sttp{2}. Indeed, the Straubing-Thérien hierarchy has a unique property: the generic separation results of~\cite{pzbpol} apply to these two levels as well. This is because they are also the levels 1/2 and 1 in another finitely based hierarchy. Consider the class \ensuremath{\textup{AT}}\xspace of \emph{alphabet testable languages}. For every alphabet $A$, $\ensuremath{\textup{AT}}\xspace(A)$ is the set of all Boolean combinations of languages $A^*a A^*$ for $a \in A$. One may verify that \ensuremath{\textup{AT}}\xspace is a variety\xspace and that $\ensuremath{\textup{AT}}\xspace(A)$ is finite for every alphabet $A$. Moreover, we have the following theorem which is due to Pin and Straubing~\cite{pin-straubing:upper} (see~\cite{PZ:generic_csr_tocs:18} for a modern proof). \begin{theorem}[\cite{pin-straubing:upper}] \label{thm:alphatrick} For every $n \in \frac12 \ensuremath{\mathbb{N}}\xspace$, we have $\ensuremath{\textup{AT}}\xspace[n] = \sttp{n+1}$. \end{theorem} The theorem implies that $\sttp{{3}/{2}} = \ensuremath{\textup{AT}}\xspace[1/2]$ and $\sttp{2} = \ensuremath{\textup{AT}}\xspace[1]$. Therefore, the results of~\cite{pzbpol} yield the decidability of separation for both \sttp{{3}/{2}} and \sttp{2} (the latter is the main result of~\cite{pzbpol}). As expected, this section investigates complexity for these two problems. \subsection{The level 3/2} We have the following tight complexity bound for \sttp{{3}/{2}}-separation. \begin{theorem} \label{thm:sth} \sttp{{3}/{2}}-separation is \ensuremath{\mathsf{PSpace}}\xspace-complete for both \ensuremath{\mathsf{NFAs}}\xspace and monoids. \end{theorem} The \ensuremath{\mathsf{PSpace}}\xspace upper bound is proved by building on the techniques introduced in the previous section for handling the level 1/2 of an arbitrary finitely based hierarchy. Indeed, we have $\sttp{{3}/{2}} = \ensuremath{\textup{AT}}\xspace[1/2]$ by Theorem~\ref{thm:alphatrick}.
However, let us point out that obtaining this upper bound requires some additional work: the results of Section~\ref{sec:fixalph} apply to the setting in which the alphabet is fixed, which is not the case here. In particular, this is why we end up with a \ensuremath{\mathsf{PSpace}}\xspace upper bound instead of the generic \ensuremath{\mathsf{NL}}\xspace upper bound presented in Corollary~\ref{cor:poltheo}. The detailed proof is postponed to the appendix. \medskip In this abstract, we focus on proving that \sttp{{3}/{2}}-separation is \ensuremath{\mathsf{PSpace}}\xspace-hard. The proof is presented for \ensuremath{\mathsf{NFAs}}\xspace: the result can then be lifted to monoids with Corollary~\ref{cor:autoreducvari} since \sttp{{3}/{2}} is a positive variety\xspace. We use a \ensuremath{\mathsf{LogSpace}}\xspace reduction from the quantified Boolean formula problem (QBF) which is among the most famous \ensuremath{\mathsf{PSpace}}\xspace-complete problems. We first describe the reduction. For every quantified Boolean formula $\Psi$, we explain how to construct two languages $L_\Psi$ and $L'_\Psi$. It will be immediate from the presentation that given $\Psi$ as input, one may compute \ensuremath{\mathsf{NFAs}}\xspace for $L_\Psi$ and $L'_\Psi$ in \ensuremath{\mathsf{LogSpace}}\xspace. Then, we show that this construction is the desired reduction: $\Psi$ is true if and only if $L_\Psi$ is not \sttp{{3}/{2}}-separable from $L'_\Psi$. \medskip Consider a quantified Boolean formula $\Psi$ and let $n$ be the number of variables it involves. We assume without loss of generality that $\Psi$ is in prenex normal form and that the quantifier-free part of $\Psi$ is in conjunctive normal form (QBF remains \ensuremath{\mathsf{PSpace}}\xspace-complete when restricted to such formulas). That is, \[ \Psi = Q_n\ x_n \cdots Q_1\ x_1\ \varphi \] where $x_1 \dots x_n$ are the variables of $\Psi$, $Q_1,\dots,Q_n \in \{\exists,\forall\}$ are quantifiers and $\varphi$ is a quantifier-free Boolean formula involving the variables $x_1 \dots x_n$ which is in conjunctive normal form. We describe the two regular languages $L_\Psi,L'_\Psi$ by providing regular expressions recognizing them. Let us first specify the alphabet over which these languages are defined. For each variable $x_i$ occurring in $\Psi$, we create two letters that we write $x_i$ and $\overline{x_i}$. Moreover, we let, \[ X = \{x_1,\dots,x_n\} \quad \text{and} \quad \overline{X} = \{\overline{x_1},\dots,\overline{x_n}\} \] Additionally, our alphabet also contains the following letters: $\#_1,\dots,\#_n,\$$. For $0 \leq i \leq n$, we define an alphabet $B_i$. We have: \[ B_0 = X \cup \overline{X} \quad \text{and} \quad B_i = X \cup \overline{X} \cup \{\#_1,\dots,\#_i,\$\} \] Our languages are defined over the alphabet $B_n$: $L_\Psi,L_\Psi' \subseteq B_n^*$. They are built by induction: for $0 \leq i \leq n$ we describe two languages $L_i,L'_i \subseteq B_i^*$ (starting with the case $i = 0$). The languages $L_\Psi,L_\Psi'$ are then defined as $L_n,L'_n$. \medskip \noindent {\bf Construction of $L_0,L'_0$.} The language $L_0$ is defined as $L_0 = (B_0)^*$. The language $L'_0$ is defined from the quantifier-free Boolean formula $\varphi$. Recall that by hypothesis $\varphi$ is in conjunctive normal form: $\varphi = \bigwedge_{j \leq k} \varphi_j$ where $\varphi_j$ is a disjunction of literals.
For all $j \leq k$, we let $C_j \subseteq B_0 = X \cup \overline{X}$ be the following alphabet: \begin{itemize} \item Given $x \in X$, we have $x \in C_j$ if and only if $x$ is a literal in the disjunction $\varphi_j$. \item Given $\overline{x} \in \overline{X}$, we have $\overline{x} \in C_j$ if and only if $\neg x$ is a literal in the disjunction $\varphi_j$. \end{itemize} Finally, we define $L'_0 = C_1C_2 \cdots C_k$. \medskip \noindent {\bf Construction of $L_i,L'_i$ for $i \geq 1$.} We assume that $L_{i-1},L'_{i-1}$ are defined and describe $L_i$ and $L'_i$. We shall use the two following languages in the construction: \[ T_i = (\#_ix_i (B_{i-1} \setminus \{\overline{x_i}\})^*\$x_i)^* \quad \text{and} \quad \overline{T_i} = (\#_i \overline{x_i} (B_{i-1} \setminus \{x_i\})^*\$\overline{x_i})^* \] The definition of $L_i,L'_i$ from $L_{i-1},L'_{i-1}$ now depends on whether the quantifier $Q_i$ is existential or universal. \begin{itemize} \item If $Q_i$ is an existential quantifier (i.e. $Q_i = \exists$): \[ \begin{array}{lll} L_i & = & (\#_i(x_i + \overline{x_i})L_{i-1}\$(x_i + \overline{x_i}))^*\#_i \\ L'_i & = &(\#_i (x_i + \overline{x_i})L'_{i-1}\$(x_i + \overline{x_i}))^* \#_i\$ \left(T_i\#_i + \overline{T_i}\#_i\right) \end{array} \] \item If $Q_i$ is a universal quantifier (i.e. $Q_i = \forall$): \[ \begin{array}{lll} L_i & = & (\#_i(x_i + \overline{x_i})L_{i-1}\$(x_i + \overline{x_i}))^*\#_i \\ L'_i & = & \overline{T_i}\#_i\$(\#_i (x_i + \overline{x_i})L'_{i-1}\$(x_i + \overline{x_i}))^* \#_i\$ T_i\#_i \end{array} \] \end{itemize} Finally, $L_\Psi,L_\Psi'$ are defined as the languages $L_n,L'_n \subseteq (B_n)^*$. It is straightforward to verify from the definition that, given $\Psi$ as input, one may compute \ensuremath{\mathsf{NFAs}}\xspace for $L_\Psi$ and $L_\Psi'$ in \ensuremath{\mathsf{LogSpace}}\xspace. Consequently, it remains to prove that this construction is the desired reduction. We do so in the following proposition. \begin{proposition} \label{prop:reducgoal} For every quantified Boolean formula $\Psi$, $\Psi$ is true if and only if $L_\Psi$ is not \sttp{{3}/{2}}-separable from $L'_\Psi$. \end{proposition} Proposition~\ref{prop:reducgoal} is proved by considering a stronger result which states properties of all the languages $L_i,L'_i$ used in the construction of $L_\Psi,L_\Psi'$ (the argument is an induction on $i$). While we postpone the detailed proof to the appendix, let us provide a sketch which presents this stronger result. \begin{proof}[Proof of Proposition~\ref{prop:reducgoal} (sketch)] Consider a quantified Boolean formula $\Psi$. Moreover, let $B_0,\dots,B_n$ and $L_i,L'_i \subseteq (B_i)^*$ be the alphabets and languages defined above. The key idea is to prove a property which makes sense for all languages $L_i,L'_i$. In the special case when $i = n$, this property implies Proposition~\ref{prop:reducgoal}. Consider $0 \leq i \leq n$. We write $\Psi_i$ for the sub-formula $\Psi_i := Q_i\ x_i \cdots Q_1\ x_1\ \varphi$ (with the free variables $x_{i+1},\dots,x_n$). In particular, $\Psi_0 := \varphi$ and $\Psi_n := \Psi$. Moreover, we call ``\emph{$i$-valuation}'' a sub-alphabet $V \subseteq B_i$ such that, \begin{enumerate} \item $\#_1,\dots,\#_i,\$ \in V$ and $x_1,\overline{x_1},\dots,x_i,\overline{x_i} \in V$, and, \item for every $j$ such that $i < j \leq n$, one of the two following properties holds: \begin{itemize} \item $x_j \in V$ and $\overline{x_j} \not\in V$, or, \item $x_j \not\in V$ and $\overline{x_j} \in V$.
\end{itemize} \end{enumerate} Clearly, an $i$-valuation corresponds to a truth assignment for all variables $x_j$ such that $j > i$ (i.e. those that are free in $\Psi_i$): when the first (resp. second) assertion in Item~2 holds, $x_j$ is assigned to $\top$ (resp. $\bot$). Hence, abusing terminology, we shall say that an $i$-valuation $V$ \emph{satisfies} $\Psi_i$ if $\Psi_i$ is true when replacing its free variables by the truth values provided by $V$. Finally, for $0 \leq i \leq n$, if $V \subseteq B_i$ is an $i$-valuation, we let $[V] \subseteq V^*$ be the following language. Given $w \in V^*$, we have $w \in [V]$ if and only if for every $j > i$ either $x_j \in \cont{w}$ or $\overline{x_j} \in \cont{w}$ (by definition of $i$-valuations, exactly one of these two properties must hold). Proposition~\ref{prop:reducgoal} is now a consequence of the following lemma. \begin{lemma} \label{lem:reduclem} Consider $0 \leq i \leq n$. Then given an $i$-valuation $V$, the two following properties are equivalent: \begin{enumerate} \item $\Psi_i$ is satisfied by $V$. \item $L_i \cap [V]$ is not \sttp{{3}/{2}}-separable from $L'_i \cap [V]$. \end{enumerate} \end{lemma} Lemma~\ref{lem:reduclem} is proved by induction on $i$ using standard properties of the polynomial closure operation (see~\cite{PZ:generic_csr_tocs:18} for example). The proof is postponed to the appendix. Let us explain why the lemma implies Proposition~\ref{prop:reducgoal}. Consider the special case of Lemma~\ref{lem:reduclem} when $i = n$. Observe that $V = B_n$ is an $n$-valuation (the second assertion in the definition of $n$-valuations is trivially true since there are no $j$ such that $n < j \leq n$). Hence, since $\Psi = \Psi_n$ and $L_\Psi,L'_\Psi = L_n,L'_n$, the lemma yields that the two following properties are equivalent: \begin{enumerate} \item $\Psi$ is satisfied by $V$ (i.e. $\Psi$ is true). \item $L_\Psi \cap [V]$ is not \sttp{{3}/{2}}-separable from $L'_\Psi \cap [V]$. \end{enumerate} Moreover, we have $[V] = (B_n)^*$ by definition. Hence, we obtain that $\Psi$ is true if and only if $L_\Psi$ is not \sttp{{3}/{2}}-separable from $L'_\Psi$ which concludes the proof of Proposition~\ref{prop:reducgoal}. \end{proof} \subsection{The level two} For the level two, there is a gap between the lower and upper bound that we are able to prove. Specifically, we have the following theorem. \begin{theorem} \label{thm:st} \sttp{2}-separation is in \ensuremath{\mathsf{EXPTime}}\xspace and \ensuremath{\mathsf{PSpace}}\xspace-hard for both \ensuremath{\mathsf{NFAs}}\xspace and monoids. \end{theorem} Similarly to what happened with \sttp{{3}/{2}}, the \ensuremath{\mathsf{EXPTime}}\xspace upper bound is obtained by building on the techniques used in the previous section. Proving \ensuremath{\mathsf{PSpace}}\xspace-hardness is achieved using a reduction from \sttp{{3}/{2}}-separation (which is \ensuremath{\mathsf{PSpace}}\xspace-hard by Theorem~\ref{thm:sth}). The reduction is much simpler than what we presented for \sttp{{3}/{2}} above. It is summarized by the following proposition. \begin{proposition} \label{prop:bpolred} Consider an alphabet $A$ and $H,H' \subseteq A^*$. Let $B = A \cup \{\#,\$\}$ with $\#,\$ \not\in A$, $L = \#(H'\#(A^*\$\#)^*)^*H\#(A^*\$\#)^* \subseteq B^*$ and $L' = \#(H'\#(A^*\$\#)^*)^* \subseteq B^*$. The two following properties are equivalent: \begin{enumerate} \item $H$ is \sttp{{3}/{2}}-separable from $H'$. \item $L$ is \sttp{2}-separable from $L'$.
\end{enumerate}
\end{proposition}

Proposition~\ref{prop:bpolred} is proved using standard properties of the polynomial and Boolean closure operations. The argument is postponed to the appendix. It is clear that, given as input \ensuremath{\mathsf{NFAs}}\xspace for two languages $H,H'$, one may compute \ensuremath{\mathsf{NFAs}}\xspace for the languages $L,L'$ defined in Proposition~\ref{prop:bpolred} in \ensuremath{\mathsf{LogSpace}}\xspace. Consequently, the proposition yields the desired \ensuremath{\mathsf{LogSpace}}\xspace reduction from \sttp{{3}/{2}}-separation for \ensuremath{\mathsf{NFAs}}\xspace to \sttp{2}-separation for \ensuremath{\mathsf{NFAs}}\xspace. This proves that \sttp{2}-separation is \ensuremath{\mathsf{PSpace}}\xspace-hard for \ensuremath{\mathsf{NFAs}}\xspace (the result can then be lifted to monoids using Corollary~\ref{cor:autoreducvari}, since \sttp{2} is a variety\xspace).

\section{Conclusion}
\label{sec:conc}

We showed several results, all of them raising new questions. First we proved that for many important classes of languages (including all positive varieties\xspace), the complexity of separation does not depend on how the input languages are represented. A natural question is whether the technique can be adapted to encompass more classes. In particular, one may define more permissive notions of positive varieties\xspace by replacing closure under inverse image by weaker notions. For example, many natural classes are \emph{length increasing positive varieties\xspace}: closure under inverse image only has to hold for length increasing morphisms (\emph{i.e.}, morphisms $\alpha: A^* \to B^*$ such that $|\alpha(w)| \geq |w|$ for every $w \in A^*$). For example, the levels of another famous concatenation hierarchy, the dot-depth~\cite{BrzoDot} (whose basis is $\{\emptyset,\{\varepsilon\},A^+,A^*\}$), are length increasing positive varieties\xspace. Can our techniques be adapted for such classes? Let us point out that there exists no example of a natural class \ensuremath{\mathcal{C}}\xspace for which separation is decidable and strictly harder for \ensuremath{\mathsf{NFAs}}\xspace than for monoids. However, there are classes \ensuremath{\mathcal{C}}\xspace for which the question is open (see for example the class of locally testable languages in~\cite{pvzltt}).

We also investigated the complexity of separation for levels 1/2 and 1 in finitely based concatenation hierarchies. We showed that when the alphabet is fixed, the problems are respectively in \ensuremath{\mathsf{NL}}\xspace and \ensuremath{\mathsf{P}}\xspace for any such hierarchy. An interesting follow-up question would be to push these results to level 3/2, for which separation is also known to be decidable in any finitely based concatenation hierarchy~\cite{pbp}. A rough analysis of the techniques used in~\cite{pbp} suggests that this requires moving above \ensuremath{\mathsf{P}}\xspace.

Finally, we showed that in the famous Straubing-Thérien hierarchy, \sttp{{3}/{2}}-separation is \ensuremath{\mathsf{PSpace}}\xspace-complete and \sttp{2}-separation is in \ensuremath{\mathsf{EXPTime}}\xspace and \ensuremath{\mathsf{PSpace}}\xspace-hard. Again, a natural question is to analyze \sttp{{5}/{2}}-separation, whose decidability is established in~\cite{pbp}.
\section{Introduction}
\subsection{The Curie-Weiss-Potts model}
The Curie-Weiss-Potts model is a mean field approximation of the well-known Potts model, a famous model in equilibrium statistical mechanics. It is defined in terms of a mean interaction averaged over all sites in the model, more precisely, by sequences of probability measures of $n$ spin random variables that may occupy one of $q$ different states. For $q=2$ the model reduces to the simpler Curie-Weiss model. Two ways in which the Curie-Weiss-Potts model approximates the Potts model are discussed in \cite{Kesten/Schonmann:1989} and \cite{Pearce/Griffiths:1980}. Probability limit theorems for the Curie-Weiss-Potts model were first proved in \cite{Ellis/Wang:1990}. One reason for interest in this model is that it explicitly exhibits a number of properties of real substances, such as multiple phase transitions and metastable states. In comparison to the Curie-Weiss model it has a more intricate phase transition structure: for example, at the critical inverse temperature it undergoes a first-order phase transition rather than the second-order transition of the Curie-Weiss model. In order to carry out the analysis of the model, detailed information about the structure of the set of canonical equilibrium macro-states is required.

The probability of observing a configuration $\sigma \in \{1,\ldots,q\}^{n}$ in an external field $h$ equals
\begin{eqnarray}
P_{ \beta,h,n}( \sigma)=\frac{1}{Z_{ \beta,h,n}}\exp\left(\frac{ \beta}{2n}\sum\limits_{1\leq i\leq j\leq n} \delta_{ \sigma_i,\sigma_j}+h\sum\limits_{i=1}^n\delta_{ \sigma_i,1}\right)\label{PBH}
\end{eqnarray}
where $\delta$ is the Kronecker symbol, $\beta:=T^{-1}$ is the inverse temperature and $Z_{ \beta,h,n}$ is the normalization constant known as the partition function. More precisely:
\begin{center}
$Z_{ \beta,h,n}=\sum\limits_{\sigma \in \{1,\ldots,q\}^{n}}\exp\left(\frac{ \beta}{2n}\sum \limits_{1\leq i\leq j\leq n}\delta_{ \sigma_i,\sigma_j}+h\sum\limits_{i=1}^n\delta_{ \sigma_i,1}\right).$
\end{center}
For $\beta$ small, the spin random variables are weakly dependent, while for $\beta$ large they are strongly dependent. It was shown in \cite{Wu:1982} that at $h=0$ the model undergoes a phase transition at the critical inverse temperature
\begin{eqnarray} \label{criticaltemp}
\beta_c =\begin{cases} q & ~,~\text{if~~} q \leq 2 \\ 2\, \frac{q-1}{q-2}\log(q-1) & ~,~\text{if~~} q> 2; \end{cases}
\end{eqnarray}
and that this transition is first order if $q>2$. Our interest is in the limit distribution of the empirical vector of the spin variables
\begin{equation} \label{N}
N=(N_1,\ldots, N_q)=\left(\sum\limits_{i=1}^n\delta_{\sigma_i,1},\ldots,\sum\limits_{i=1}^n\delta_{\sigma_i,q}\right)
\end{equation}
which counts the number of spins of each color for a configuration $\sigma$. Note that the normalized empirical vector $L_n := N/n$ belongs to the set of probability vectors
\begin{center}
$\mathcal{H}=\{x\in\mathbb{R}^q:x_1+\cdots+x_q=1\text{ and } x_i\geq 0, \forall i\}$.
\end{center}
For $q >2$ and $\beta < \beta_c$, $L_n$ satisfies the law of large numbers $P_{\beta,0,n} \bigl( L_n \in d\nu \bigr) \Rightarrow \delta_{\nu_0} (d \nu)$ as $n \to \infty$, where $\nu_0 = (1/q, \ldots, 1/q) \in \R^q$.
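To make these finite-$n$ objects concrete, the following small Python sketch (our own illustration, not part of the analysis below; all function names are ours) computes the Gibbs weights \eqref{PBH} by brute-force enumeration for small $n$ and $q$ and tabulates the resulting law of $L_n$; for $\beta$ well below $\beta_c$ the mass concentrates near $\nu_0$, while for $\beta$ above $\beta_c$ it spreads over asymmetric vectors.
\begin{verbatim}
import itertools, math
from collections import Counter

def gibbs_distribution(n, q, beta, h):
    """Brute-force the Curie-Weiss-Potts measure P_{beta,h,n} for small n, q."""
    weights = {}
    for sigma in itertools.product(range(1, q + 1), repeat=n):
        # (beta/2n) sum_{1<=i<=j<=n} delta(sigma_i,sigma_j) + h * #{i: sigma_i = 1}
        pairs = sum(1 for i in range(n) for j in range(i, n)
                    if sigma[i] == sigma[j])
        weights[sigma] = math.exp(beta / (2 * n) * pairs + h * sigma.count(1))
    Z = sum(weights.values())          # partition function Z_{beta,h,n}
    return {s: w / Z for s, w in weights.items()}

def law_of_Ln(n, q, beta, h):
    """Push P_{beta,h,n} forward to the normalized empirical vector L_n = N/n."""
    law = Counter()
    for sigma, p in gibbs_distribution(n, q, beta, h).items():
        law[tuple(sigma.count(c) / n for c in range(1, q + 1))] += p
    return law

if __name__ == "__main__":
    q, n = 3, 9
    beta_c = 2 * (q - 1) / (q - 2) * math.log(q - 1)   # = 4 log 2 for q = 3
    for beta in (0.5 * beta_c, 1.5 * beta_c):
        top = sorted(law_of_Ln(n, q, beta, 0.0).items(),
                     key=lambda kv: -kv[1])[:3]
        print(f"beta = {beta:.2f}: most likely values of L_n:", top)
\end{verbatim}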
For $\beta > \beta_c$ the law of large numbers breaks down and is replaced by the limit
$$
P_{\beta,0,n} \bigl( L_n \in d \nu \bigr) \Rightarrow \frac 1q \sum_{i=1}^q \delta_{\nu_i(\beta)} (d \nu),
$$
where $\{\nu_i(\beta), i =1, \ldots, q\}$ are $q$ distinct probability vectors in $\R^q$, distinct from $\nu_0$. The {\it first order phase transition} is the fact that for $i=1, \ldots, q$
$$
\lim_{\beta \to \beta_c^+} \nu_i(\beta) \not= \nu_0,
$$
see \cite{Ellis/Wang:1990}. The case of {\it non-zero} external field $h \not= 0$ was considered in \cite{Biskup:2006}, and it turned out that the first-order transition remains on a critical line. The line was computed explicitly in \cite{Blanchard:2008}, see \eqref{criticalline}.

In the present work we will obtain certain known probabilistic limit theorems for the Curie-Weiss-Potts model, especially for the empirical vector of the spin variables $N$, {\it but at the same time} we present rates of convergence for all the limit theorems. We consider the fluctuations of the empirical vector $N$ around its typical value outside the critical line and we describe the fluctuations and rates of convergence at an extremity of the critical line. This extends previous results on the Curie-Weiss-Potts model with no external field \cite{Ellis/Wang:1990} as well as with external field \cite{Gandolfo:2010}. The method of proof will be an application of Stein's method of so-called exchangeable pairs in the case of multivariate normal approximation as well as the application of Stein's method in the case of non-Gaussian approximation. Stein's method will be explained later.

We turn to the description of the set of canonical equilibrium macro-states of the Curie-Weiss-Potts model. These states are solutions of an unconstrained minimization problem involving probability vectors on $\R^q$. The macro-states describe equilibrium configurations of the model in the thermodynamic limit $n \to \infty$. For each $i$, the $i$-th component of an equilibrium macro-state gives the asymptotic relative frequency of spins taking the spin value $i$ with $i \in \{1, \ldots, q\}$. We appeal to the theory of large deviations to define the canonical equilibrium macro-states. Sanov's theorem states that with respect to the product measures $P_n(\omega) = 1/ q^n$ for $\omega \in \{1, \ldots, q \}^n$ the empirical vectors $L_n$ satisfy a large deviations principle (LDP) on $\mathcal H$ with speed $n$ and rate function given by the relative entropy
$$
I(x) = \sum_{i=1}^q x_i \log (q x_i), \,\, x \in \mathcal H.
$$
We use the formal notation $P_n(L_n \in d x) \approx \exp ( -n I(x))$ (precise definition: \cite{Dembo/Zeitouni:LargeDeviations2ed}). The LDP for $L_n$ with respect to $P_{\beta,h,n}$ can be proved as in \cite{EllisHaven:2000}. Let
$$
f_{\beta,h} (x) = \sum_{i=1}^q x_i \log (q x_i) - \frac{\beta}{2} \sum_{i=1}^q x_i^2 - h x_1, \,\, x \in {\mathcal H}.
$$
Then $P_{\beta,h,n} (L_n \in d x) \approx \exp ( -n J_{\beta,h}(x))$ with
$$
J_{\beta,h}(x) :=f_{\beta,h} (x) - \inf_{x \in \mathcal H} f_{\beta,h} (x),
$$
see also \cite{CosteniucEllis:2005}. Now if $J_{\beta,h}(\nu) >0$, then $\nu$ has an exponentially small probability of being observed. Hence the corresponding set of canonical equilibrium macro-states is naturally defined by
$$
{\mathcal E}_{\beta,h} := \big\{ \nu \in \mathcal H :\,\, \nu \,\, \text{minimizes} \,\, f_{\beta,h}(\nu) \bigr\}.
$$
Remark that the specific Gibbs free energy for the Curie-Weiss-Potts model is the quantity $\psi(\beta,h)$ defined by the limit
$$
-\beta \psi(\beta,h) = \lim_{n \to \infty} \frac 1n \log Z_{\beta,h,n}.
$$
From the large deviations result it follows that
$$
-\beta \psi(\beta,h) = - \inf_{x \in \mathcal H} f_{\beta,h}(x).
$$
In the case $h=0$ and $q >2$, it is known since \cite{Wu:1982} (for detailed proofs see \cite[Theorem 3.1]{CosteniucEllis:2005}) that ${\mathcal E}_{\beta,0}$ consists of one element for any $0 < \beta < \beta_c$, where $\beta_c$ is the critical inverse temperature given in \eqref{criticaltemp}. For any $\beta > \beta_c$, the set consists of $q$ elements and at $\beta_c$ it consists of $q+1$ elements.

In the case with an external field $h \geq 0$ the global minimizers of $f_{\beta,h}$ can be described as follows. In \cite{Blanchard:2008} the following {\it critical line} was computed:
\begin{equation} \label{criticalline}
h_T :=\biggl\{ (\beta,h): 0 \leq h <h_0 \,\, \text{ and} \,\, h= \log(q-1)-\beta\frac{q-2}{2(q-1)} \biggr\}
\end{equation}
with extremities $(\beta_c,0)$ and $(\beta_0,h_0)$, where
$$
\beta_0 =4 \frac{q-1}{q} \quad \text{and} \quad h_0=\log(q-1)-2\frac{q-2}{q}
$$
($(\beta_0, h_0)$ were already determined in \cite{Biskup:2006}). Now consider the parametrization
$$
x_z := \left(\frac{1+z}{2},\frac{1-z}{2(q-1)},\ldots,\frac{1-z}{2(q-1)}\right), \quad z\in [-1,1].
$$
Depending on the parameters $(\beta, h)$ the function $f_{\beta,h}$ has one or several global minimizers. The following statement summarizes the results of \cite{Wu:1982}, \cite{CosteniucEllis:2005} in the case $h=0$ and of \cite{Blanchard:2008} for $h>0$.

\begin{theorem} \label{Minima}
Let $\beta,h\geq 0$.
\begin{enumerate}
\item If $h>0$ and $(\beta,h) \notin h_T$, the function $ f_{\beta,h}$ has a unique global minimum point in $\mathcal{H}$. This minimizer is analytic in $\beta$ and $h$ outside of $h_T\cup \{(\beta_0,h_0)\}$.
\item If $h>0$ and $(\beta,h)\in h_T$, the function $f_{\beta,h}$ has two global minimum points in $\mathcal{H}$. More precisely, for any $z\in (0,(q-2)/q)$, the two global minimum points of $f_{\beta_z,h_z}$ at
$$
\beta_z=2\frac{q-1}{zq}\log\left(\frac{1+z}{1-z}\right) \quad \text{and} \quad h_z=\log(q-1)-\frac{q-2}{2(q-1)}\beta_z
$$
are the points $x_{\pm z}$.
\item If $h=0$ and $\beta<\beta_c$, the unique global minimum point of $f_{\beta,0}$ is $(1/q,\ldots,1/q)$.
\item If $h=0$ and $\beta>\beta_c$, there are $q$ global minimum points of $f_{\beta,0}$, which all equal $x_z$ up to a permutation of the coordinates for some $z\in ((q-2)/q,1)$.
\item If $h=0$ and $\beta=\beta_c$, there are $q+1$ global minimum points of $f_{\beta,0}$: the symmetric one $(1/q,\ldots,1/q)$ together with the permutations of
$$
\left(\frac{q-1}{q},\frac{1}{q(q-1)},\ldots,\frac{1}{q(q-1)}\right).
$$
\end{enumerate}
\end{theorem}

Interestingly enough, the very first results on probabilistic limit theorems (\cite{Ellis/Newman:1978} for the Curie-Weiss model and \cite{Ellis/Wang:1990} for the Curie-Weiss-Potts model) used the structure of the global minimum points of another function $G_{\beta,h}$. For the Curie-Weiss-Potts model with $h=0$ this function is given by
$$
G_{\beta,0}(u) := \frac 12 \beta \langle u, u \rangle - \log \sum_{i=1}^q e^{\beta u_i}, \quad u \in \R^q.
$$
With convex duality one obtains the alternative representation of the specific Gibbs free energy:
$$
\beta \psi(\beta,0) = \min_{u \in \R^q} G_{\beta,0} (u) + \log q.
$$
Actually $f_{\beta,0}$ and $G_{\beta,0}$ have the same global minimum points, see \cite{Kesten/Schonmann:1989} for $\beta \not= \beta_c$. A proof of this result for any $\beta>0$ can be found in \cite[Theorem 3.1]{CosteniucEllis:2005}. The main reason to use $G_{\beta,0}$ instead of $f_{\beta,0}$ is the usefulness of a representation of the distribution of $L_n$ in terms of $G_{\beta,h}$, called the {\it Hubbard-Stratonovich transform} (see \cite[Lemma 3.2]{Ellis/Wang:1990} and the proof of Lemma \ref{EnMo} in this paper). This has been a famous tool since the work of Ellis and Newman \cite{Ellis/Newman:1978}. For $\beta>0$ and $h$ real the global minimum points of $f_{\beta,h}$ coincide with the global minimum points of the function
\begin{equation} \label{Gfunktion}
G_{\beta,h}(u) := \frac 12 \beta \langle u, u \rangle - \log \bigl( \sum_{i=1}^q \exp ( \beta u_i + h \delta_{i,1}) \bigr), \quad u \in \R^q
\end{equation}
(for a proof see \cite[Theorem B.1]{Ellis/Wang:1992}; or apply a general result on minimum points of certain functions related by convex duality, \cite[Theorem A.1]{CosteniucEllis:2005}, see also \cite{Wang:1994}). Hence we know that all statements of Theorem \ref{Minima} hold true for $G_{\beta, h}$.

\begin{cor} \label{Mi}
The statements in Theorem \ref{Minima} for the global minimum points of $f_{\beta,h}$ hold true one to one for $G_{\beta,h}$, defined in \eqref{Gfunktion}.
\end{cor}

The reason for this {\it detour}, first describing the canonical equilibrium macro-states of the Curie-Weiss-Potts model using large deviation theory and second using convex duality, is the following. Applying Stein's method we will {\it automatically meet} the function $G_{\beta,h}$, and the limit theorems and the proof of certain rates of convergence will depend on the location of the global minimum points of $G_{\beta,h}$ (as in \cite{Ellis/Newman:1978}, \cite{Ellis/Wang:1990} and \cite{Ellis/Wang:1992} and many other papers). But for $h>0$ only $f_{\beta, h}$ and its minimizers were completely characterized in the literature, see Theorem \ref{Minima}. So we had to argue that we also know the phase diagram of $G_{\beta,h}$.

\subsection{Stein's method of exchangeable pairs}
Stein's method was first published in \cite{Stein:1972} (1972), starting with a bound on the distance between the distribution of a univariate random variable and the normal distribution. Being particularly powerful in the presence of both local dependence and weak global dependence, his method proved to be very successful. In \cite{Stein:1986} Stein introduced his exchangeable pair approach. At the heart of the method is a coupling of a random variable $W$ with another random variable $W'$ such that $(W,W')$ is {\it exchangeable}, i.e. their joint distribution is symmetric. Central in his approach is the fact that for all antisymmetric measurable functions $F(x,y)$ we have $\mathbb{E}\left[F(W,W')\right]=0$ if the expectation exists. Stein further proved that a measure of proximity of $W$ to normality may be provided by the exchangeable pair if $W'-W$ is sufficiently small. He assumed the property that there is a number $\lambda>0$ such that the conditional expectation of $W'-W$ given $W$ satisfies
$$
\mathbb{E}[W'-W\mid W]=-\lambda W.
$$
Heuristically, this condition can be understood as a linear regression condition: if $(W,W')$ were bivariate normal with correlation $\varrho$, then $\E (W' |W) = \varrho \, W$ and the condition would be satisfied with $\lambda = 1 - \varrho$.
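As a quick numerical illustration of this regression condition (a sketch of our own, not used in the arguments below), consider the classical pair in which $W$ is the standardized sum of $n$ i.i.d. Rademacher variables and $W'$ is obtained by redrawing one uniformly chosen summand; since the redrawn variable is centered, $\mathbb{E}[W'-W\mid W]=-W/n$, so that $\lambda=1/n$, and the least-squares slope of $W'-W$ on $W$ recovers $-\lambda$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 200_000

# W = (X_1 + ... + X_n)/sqrt(n) with i.i.d. Rademacher X_i;
# W' redraws one uniformly chosen coordinate independently.
X = rng.choice([-1.0, 1.0], size=(reps, n))
W = X.sum(axis=1) / np.sqrt(n)
I = rng.integers(n, size=reps)
X_new = rng.choice([-1.0, 1.0], size=reps)
W_prime = W + (X_new - X[np.arange(reps), I]) / np.sqrt(n)

# The regression slope of W' - W on W estimates -lambda = -1/n.
slope = np.polyfit(W, W_prime - W, 1)[0]
print(slope, -1.0 / n)
\end{verbatim}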
Stein proved that for any uniformly Lipschitz function $h$, $| \E h(W) - \E h(Z) | \leq \delta \|h'\|$ with $Z$ denoting a standard normally distributed random variable and
\begin{equation} \label{del1}
\delta = 4 \E \bigg| 1 - \frac{1}{2 \lambda} \E \bigl( (W'-W)^2 | W \bigr) \bigg| + \frac{1}{2 \lambda} \E |W-W'|^3.
\end{equation}
Stein's approach has been successfully applied in many models, see e.g. \cite{Stein:1986} or \cite{DiaconisStein:2004} and references therein. In \cite{Rinott/Rotar:1997}, the range of application was extended by replacing the linear regression property by a weaker condition, assuming that there is also a random variable $R=R(W)$ such that
$$
\mathbb{E}[W'-W\mid W]=-\lambda W+R.
$$
While the approach has proved successful also in non-normal contexts (see \cite{ChatterjeeDiaconis:2005}, \cite{Chatterjee/Shao:2009} and \cite{Eichelsbacher/Loewe:2010}), it remained restricted to the one-dimensional setting for a long time. The problem was that it was not obvious how to transfer the linearity condition into the multivariate case. However, in \cite{ChatterjeeMeckes:2008} this issue was finally addressed. They extended the linearity condition to the multivariate setting such that, for all $i\in \{1,\ldots,d\}$, $\mathbb{E}[W'_i-W_i\mid W]=-\lambda W_i$ for a fixed number $\lambda$, where now $W=(W_1,\ldots,W_d)$ and $W'=(W'_1,\ldots,W'_d)$ are identically distributed $d$-vectors with uncorrelated components. As in the univariate case an extension to the additional remainder term $R$ would be straightforward. The purpose of this coupling is to estimate the distance to the standard multivariate normal distribution. Applying the linear regression heuristic in the multivariate case leads to a new condition due to \cite{ReinertRoellin:2009}:
\begin{equation} \label{regressioncond}
\mathbb{E}[W'-W\mid W]=-\Lambda W+R
\end{equation}
for an invertible $d\times d$ matrix $\Lambda$ and a remainder term $R=R(W)$. This linearity condition is more natural than the one of \cite{ChatterjeeMeckes:2008}. Different exchangeable pairs, obviously, will yield different $\Lambda$ and $R$. Interestingly enough, the Curie-Weiss-Potts model will serve as an example demonstrating the power of the approach in \cite{ReinertRoellin:2009}.

Constructing an exchangeable pair in the Curie-Weiss-Potts model to obtain an approximate linear regression property \eqref{regressioncond} leads us to the function $G_{\beta, h}$. This will be sketched now. Let $q >2$, $h=0$ and $\beta < \beta_c$, and let $x_0$ denote the unique global minimum point of $G_{\beta, 0}$, see Theorem \ref{Minima}. We consider
$$
W := \sqrt{n} \biggl( \frac{N}{n} - x_0 \biggr) = \sqrt{n} \bigl(L_n - x_0 \bigr).
$$
We produce a spin collection $\sigma'=(\sigma'_i)_{i\geq 1}$ via a {\it Gibbs sampling procedure}: Let $I$ be uniformly distributed over $\{1,\ldots,n\}$ and independent from all other random variables involved. We now replace the spin $\sigma_I$ by $\sigma'_I$ drawn from the conditional distribution of the $I$-th coordinate given $(\sigma_j)_{j\neq I}$, independently of $\sigma_I$. We define
$$
Y_i:=(Y_{i,1},\ldots,Y_{i,q})^t := (\delta_{\sigma_i,1},\ldots,\delta_{\sigma_i,q})^t
$$
and consider
\begin{equation} \label{WStr}
W' := W-\frac{Y_I}{\sqrt{n}}+\frac{Y'_I}{\sqrt{n}}.
\end{equation}
Hence it is not hard to see that $(W,W')$ is an exchangeable pair. This construction will be used in all the proofs of this paper. Let $\mathcal{F}:=\sigma(\sigma_1,\ldots,\sigma_n)$.
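For concreteness, the following minimal Python sketch (our own illustration; function and variable names are ours) implements the single resampling step of this Gibbs sampling procedure, with the conditional law taken from Lemma \ref{ConD}; for $(W,W')$ to be exactly exchangeable, the configuration $\sigma$ must itself be drawn from $P_{\beta,h,n}$, e.g. by running the Gibbs chain to stationarity first.
\begin{verbatim}
import numpy as np

def resample_one_spin(sigma, q, beta, h, rng):
    """Redraw one uniformly chosen spin from its conditional law given the
    others (colors are 0,...,q-1; the external field acts on color 0)."""
    n = len(sigma)
    j = rng.integers(n)
    counts = np.bincount(np.delete(sigma, j), minlength=q)  # n * m_{i,j}(sigma)
    logits = beta * counts / n
    logits[0] += h
    p = np.exp(logits - logits.max())
    p /= p.sum()
    sigma_new = sigma.copy()
    sigma_new[j] = rng.choice(q, p=p)
    return sigma_new

def W_of(sigma, q, x0):
    n = len(sigma)
    return np.sqrt(n) * (np.bincount(sigma, minlength=q) / n - x0)

rng = np.random.default_rng(1)
q, n, beta, h = 3, 200, 1.0, 0.0
x0 = np.full(q, 1.0 / q)            # unique minimizer for h = 0, beta < beta_c
sigma = rng.integers(q, size=n)     # rough burn-in towards P_{beta,0,n}
for _ in range(50 * n):
    sigma = resample_one_spin(sigma, q, beta, h, rng)
sigma_prime = resample_one_spin(sigma, q, beta, h, rng)  # the coupled copy
print(W_of(sigma, q, x0), W_of(sigma_prime, q, x0))
\end{verbatim}
We now return to the computation of the conditional expectation of $W'-W$.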
We obtain
\begin{align}
\mathbb{E}\left[W'_i-W_i\mid \mathcal{F}\right] &=\frac{1}{\sqrt{n}}\mathbb{E}\left[Y'_{I,i}-Y_{I,i}\mid \mathcal{F}\right]\nonumber\\
&=\frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n\mathbb{E}\left[Y'_{j,i}-Y_{j,i}\mid \sigma_1,\ldots,\sigma_n\right]\nonumber\\
&=-\frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^nY_{j,i}+ \frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n\mathbb{E}\left[\delta_{\sigma'_j,i}\mid \sigma_1,\ldots,\sigma_n\right]. \nonumber
\end{align}
Using our construction we obtain
$$
\E \left[\delta_{\sigma'_j,i}\mid \sigma_1,\ldots,\sigma_n\right] =\mathbb{E}\left[\delta_{\sigma_j,i}\mid (\sigma_t)_{t\neq j}\right] =P_{\beta,0,n}\left(\sigma_j=i\mid (\sigma_t)_{t\neq j}\right).
$$
By a straightforward calculation (see Lemma \ref{ConD}) we get that
$$
P_{\beta,0,n}\left(\sigma_j=i\mid (\sigma_t)_{t\neq j}\right) = \frac{\exp\left(\beta m_{i,j}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)\right)}
$$
with $m_{i,j}(\sigma) := \frac 1n \sum_{l \not= j}^n \delta_{\sigma_l,i}$. Using the notation $m_{i}(\sigma) := \frac 1n \sum_{l=1}^n \delta_{\sigma_l, i}$ one obtains
\begin{align}
\frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n\frac{\exp\left(\beta m_{i,j}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)\right)} = \frac{1}{\sqrt{n}}\frac{\exp\left(\beta m_{i}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k}(\sigma)\right)} +R_n(i).
\end{align}
Moreover by definition of $G_{\beta,h}$ (see Lemma \ref{expGD}) we obtain
$$
\frac{ \exp( \beta u_m)}{\sum_{k=1}^q \exp( \beta u_k)} = u_m - \frac{1}{\beta} \frac{\partial}{\partial u_m} G_{\beta,0} (u).
$$
Summarizing, we obtain
\begin{equation} \label{motivation}
\E [ W_i' - W_i | \mathcal{F} ] = - \frac 1n W_i - \frac{x_{0,i}}{\sqrt{n}} + R_n(i) + \frac{1}{\sqrt{n}} \biggl( m_i(\sigma) - \frac{1}{\beta} \frac{\partial}{\partial u_i} G_{\beta,0} (m(\sigma)) \biggr)
\end{equation}
with $m(\sigma) = (m_1(\sigma), \ldots, m_q(\sigma))$. Hence, using all the information on $G_{\beta,h}$ (Taylor expansion and the results on the global minimum points, Theorem \ref{Minima}, Corollary \ref{Mi}), it seems to be possible to calculate $\Lambda$ and $R$ in the regression condition \eqref{regressioncond}. Indeed we are able to prove that this condition is satisfied for any $(\beta,h)$.

In Section 2 of the present paper, the limit theorems and the rates of convergence are stated. They include a central limit theorem and a bound on the distance to a multivariate normal distribution for $L_n$ outside the critical line. When $G_{\beta,h}$ has several global minimizers, that is when $(\beta,h) \in h_T$ or $\beta \geq \beta_c$ and $h=0$, the empirical vector $L_n$ is close to either one or the other of the minimizers. In this case we determine a central limit theorem with conditioning for $L_n$. Next we describe the fluctuations at the extremity $(\beta_0, h_0)$ of the critical line, again combined with a rate of convergence. In Section 3 we state an abstract nonsingular multivariate normal approximation theorem for smooth test functions from \cite{ReinertRoellin:2009}. Moreover we present a Kolmogorov bound (nonsmooth test functions) for bounded random vectors $W'-W$ under exchangeability. Finally we state an abstract non-Gaussian univariate approximation theorem for the Kolmogorov-distance from \cite{Eichelsbacher/Loewe:2010}. Section 4 contains some auxiliary results which will be necessary for the proofs given in Section 5.

\section{Statement of results}
Let us fix some notation.
From now on we will write random vectors in $\mathbb{R}^d$ in the form $w=(w_1,\ldots,w_d)^t$, where $w_i$ are $\mathbb{R}$-valued variables for $i=1,\ldots,d$. If a matrix $\Sigma$ is symmetric and nonnegative definite, we denote by $\Sigma^{1/2}$ the unique symmetric, nonnegative definite square root of $\Sigma$. $Id$ denotes the identity matrix and from now on $Z$ will denote a random vector having the standard multivariate normal distribution. The expectation with respect to the measure $P_{\beta,h,n}$ will be denoted by $\mathbb{E}:=\mathbb{E}_{P_{\beta,h,n}}$.

Let $q>2$. We first consider the issue of the fluctuations of the empirical vector $N$ defined in \eqref{N} around its typical value. The case of the Curie-Weiss model ($q=2$) was considered in \cite{Griffiths/Simon:1973} and \cite{Ellis/Newman:1978}. A Berry-Esseen bound was proved in \cite{Eichelsbacher/Loewe:2010} (and independently in \cite{Chatterjee/Shao:2009}). The Curie-Weiss-Potts model was treated in \cite{Ellis/Wang:1990} and for non-zero external field in \cite[Theorem 3.1]{Gandolfo:2010}. To the best of our knowledge no Kolmogorov bounds are known.

\begin{theorem} \label{THUM}
Let $\beta>0$ and $h\geq 0$ with $(\beta,h) \neq (\beta_0,h_0)$. Assume that there is a unique minimizer $x_0$ of $G_{\beta,h}$. Let $W$ be the following random vector:
$$
W:=\sqrt{n}\left(\frac{N}{n}-x_0\right).
$$
If $Z$ has the $q$-dimensional standard normal distribution, we have for every three times differentiable function $g$,
$$
\big| \mathbb{E} g(W) - \mathbb{E} g \left( \Sigma^{-1/2} Z\right) \big| \leq C \cdot n^{-1/2},
$$
for a constant $C$ and $\Sigma := \mathbb{E}\left[W \, W^t \right]$. Indeed we obtain that $C=\mathcal{O}\left(q^6\right)$.
\end{theorem}

Remark that we compare the distribution of the rescaled vector $N$ with a multivariate normal distribution with covariance matrix $\E [ W \, W^t ]$. This is an advantage of Stein's method: for any fixed number of particles/spins $n$, we are able to compare the distribution of $W$ with a distribution with the same $n$-dependent covariance structure.\\
In order to state our next result we introduce conditions on the function classes $\mathcal{G}$ we consider. Following \cite{Rinott/Rotar:1996}, let $\Phi$ denote the standard normal distribution in $\mathbb{R}^q$. We define for $g:\mathbb{R}^q\rightarrow \mathbb{R}$
\begin{align}
g_{\delta}^+(x)&=\sup\bigl\{g(x+y):|y|\leq\delta\bigr\},\label{g+}\\
g_{\delta}^-(x)&=\inf\bigl\{g(x+y):|y|\leq\delta\bigr\},\label{g-}\\
\tilde g(x,\delta)&=g_{\delta}^+(x)-g_{\delta}^-(x)\label{gschl}.
\end{align}
Let $\mathcal{G}$ be a class of real measurable functions on $\mathbb{R}^q$ such that
\begin{enumerate}
\item The functions $g\in\mathcal{G}$ are uniformly bounded in absolute value by a constant, which we take to be 1 without loss of generality.
\item For any $q\times q$ matrix $A$ and any vector $b\in\mathbb{R}^q$, $g\bigl(Ax+b\bigr)\in\mathcal{G}$.
\item For any $\delta>0$ and any $g \in {\mathcal G}$, $g_{\delta}^+(x)$ and $g_{\delta}^-(x)$ are in $\mathcal{G}$.
\item For some constant $a=a(\mathcal{G},q)$, $\sup\limits_{g\in\mathcal{G}}\left\{\int\limits_{\mathbb{R}^q}\tilde g(x,\delta)\Phi(dx)\right\}\leq a\delta$. Obviously we may assume $a\geq 1$.
\end{enumerate}
Considering the one-dimensional case, we notice that the collection of indicators of all half-lines and the indicators of all intervals form classes in $\mathcal{G}$ that satisfy these conditions with $a=\sqrt{2/\pi}$ and $a=2\sqrt{2/\pi}$ respectively.
This was shown for example in \cite{Rinott/Rotar:1996}. In dimension $q \geq 1$ the class of indicators of convex sets is known to be such a class. Using this notation we are able to present an analogue of Theorem \ref{THUM} for our function classes $\mathcal{G}$.

\begin{theorem} \label{THUM2}
Let $\beta>0$ and $h\geq 0$ with $(\beta,h) \neq (\beta_0,h_0)$. Assume that there is a unique minimizer $x_0$ of $G_{\beta,h}$. Let $W$ and $Z$ be as in Theorem \ref{THUM}. Then, for all $g\in \mathcal{G}$ with $|g|\leq 1$, we have
$$
\big| \mathbb{E} g(W) - \mathbb{E} g \left( \Sigma^{-1/2} Z\right) \big| \leq C \log(n)\cdot n^{-1/2},
$$
for a constant $C$ and $\Sigma := \mathbb{E}\left[W \, W^t \right]$.
\end{theorem}

Letting ${\mathcal G}$ be the collection of indicators of lower quadrants, the distance above specializes to the Kolmogorov distance. When the function $G_{\beta,h}$ has several global minimizers, the empirical vector $N/n$ is close to either one or the other of these minima. We determine the conditional fluctuations and a rate of convergence:

\begin{theorem} \label{THMM}
Assume that $\beta,h \geq 0$ and that $G_{\beta,h}$ has multiple global minimum points $x_1,\ldots, x_l$ with $l\in\{2,q,q+1\}$ and let $\epsilon>0$ be smaller than the distance between any two global minimizers of $G_{\beta,h}$. Furthermore, let $W^{(i)}:=\sqrt{n}\left(\frac{N}{n}-x_i\right)$. Then, if $Z$ has the $q$-dimensional standard normal distribution, under the conditional measure
\begin{center}
$P_{\beta,h,n}\left(\cdot\mid \frac{N}{n}\in B(x_i,\epsilon)\right)$,
\end{center}
we have for every three times differentiable function $g$,
\begin{center}
$\big| \mathbb{E} g(W^{(i)}) - \mathbb{E} g\left(\Sigma^{-1/2} Z\right) \big| \leq C\cdot n^{-1/2},$
\end{center}
for a constant $C$ and $\Sigma:= \mathbb{E} \left[ W^{(i)} \, (W^{(i)})^t \right]$. $B(x_i, \epsilon)$ denotes the open ball of radius $\epsilon$ around $x_i$.
\end{theorem}

We note that we get a result similar to Theorem \ref{THUM2} for the function classes $\mathcal{G}$ in the case of several global minimizers. Finally we will take a look at the extremity $(\beta_0,h_0)$ of the critical line $h_T$. Given a vector $u\in\mathbb{R}^q$, we denote by $u^{\bot}$ the vector space made of all vectors orthogonal to $u$ in the Euclidean space $\mathbb{R}^q$. Consider the hyperplane
\begin{equation} \label{hyperM}
\mathcal{M}:=\bigg\{ x \in \mathbb{R}^q: \sum\limits_{i=1}^q x_i=0 \bigg\},
\end{equation}
which is parallel to $\mathcal H$. The fluctuations belong to $\mathcal M$, since all global minimizers are in $\mathcal H$. The following result extends \cite[Theorem 3.9]{Ellis/Newman:1978}, which applies to the case of the Curie-Weiss model at the critical inverse temperature. We recall that at $(\beta_0, h_0)$ the function $G_{\beta_0,h_0}$ has the unique minimizer $x=(1/2, 1/2(q-1), \ldots, 1/2(q-1)) \in \R^q$. Now we take $u=(1-q,1,\ldots, 1)\in {\mathcal M} \subset \R^q$ and define a real valued random variable $T$ and a random vector $V\in\mathcal{M}\cap u^{\bot}$ such that
\begin{equation} \label{defTV}
N= n\, x+ n^{3/4} \, T\, u +n^{1/2} \,V.
\end{equation}
Since $N- n \, x \in {\mathcal M}$, the implicit definition of $T$ and $V$ yields a decomposition into a vector in the subspace spanned by $u$ and a vector in $u^{\bot}$. The main interest is the limiting behaviour of $T$.
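Since $V\in\mathcal{M}\cap u^{\bot}$ forces $V_1=0$ (a fact used again in the proof of Lemma \ref{EnMo2} below), the first coordinate of \eqref{defTV} determines $T$, and $V$ is then obtained by subtraction. The following small Python sketch (our own illustration) makes this decomposition explicit:
\begin{verbatim}
import numpy as np

def T_and_V(N, q):
    """Solve N = n*x + n^(3/4)*T*u + n^(1/2)*V with x = (1/2, 1/(2(q-1)), ...)
    and u = (1-q, 1, ..., 1); V_1 = 0 pins down T via the first coordinate."""
    n = N.sum()
    x = np.full(q, 1.0 / (2 * (q - 1)))
    x[0] = 0.5
    u = np.ones(q)
    u[0] = 1.0 - q
    T = (N[0] - n * x[0]) / ((1.0 - q) * n ** 0.75)
    V = (N - n * x - n ** 0.75 * T * u) / n ** 0.5
    return T, V

# toy check for q = 3, n = 100: V has V[0] = 0 and sums to 0
T, V = T_and_V(np.array([53.0, 25.0, 22.0]), 3)
print(T, V, V.sum())
\end{verbatim}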
The new scaling of $W$ is given by
$$
\frac{N_j - n \frac{1}{2 (q-1)}}{n^{3/4}} = T + V_j/n^{1/4}, \quad j=2, \ldots, q,
$$
and the limit we observe is reminiscent of \cite{Ellis/Newman:1978}, see also \cite{CosteniucEllis:2007}. The following theorem gives a Kolmogorov bound for Theorem 3.7 in \cite{Gandolfo:2010}.

\begin{theorem} \label{TT}
For $(\beta,h)=(\beta_0,h_0)$ let $x=(1/2,1/2(q-1),\ldots,1/2(q-1))$ be the unique minimizer of $G_{\beta_0,h_0}$ and $u=(1-q,1,\ldots,1)$. Furthermore, let $Z_{q,T}$ be a random variable distributed according to the probability measure on $\mathbb{R}$ with the density
$$
f_{q,T}(t):= f_{q,T,n}(t) := C \cdot \exp \left( - \frac{1}{4 \, \E(T^4)} t^4\right),
$$
where $T$ is defined in \eqref{defTV}. Then we obtain for any uniformly Lipschitz function $g:\mathbb{R}\to \mathbb{R}$ that
$$
\big| \mathbb{E} g(T) - \mathbb{E} g\left(Z_{q,T}\right) \big| \leq C \cdot n^{-1/4}.
$$
Moreover we obtain
$$
\sup_{t \in \R} \big| \P \bigl( T \leq t \bigr) - F_{q,T}(t) \big| \leq C \cdot n^{-1/4} \quad \text{(bound for the Kolmogorov-distance)},
$$
where $F_{q,T}$ denotes the distribution function of $f_{q,T}$.
\end{theorem}

\begin{remark}
As we will see in the proof of Theorem \ref{TT}, the density $f_{q,T}$ {\it originally} has the form
$$
\exp \left( - \frac{4(q-1)^4}{3 \, \E(T \psi(T))} t^4 \right)
$$
(up to a constant) with a function $\psi$ such that $\E(T \psi(T))= \frac{16(q-1)^4}{3} \E(T^4)$. From \cite[Theorem 3.7]{Gandolfo:2010} we know that $T$ converges in distribution to the probability measure on $\R$ proportional to
$$
g_{q}(t) = \exp \left( - \frac{4(q-1)^4}{3} t^4\right).
$$
Hence we conclude that $\lim_{n \to \infty} f_{q,T,n} = g_q$ point-wise and therefore $\frac{16(q-1)^4}{3} \, \E\left[T^4\right] \to 1$. Remark that the rate of convergence of Theorem \ref{TT} also holds true when we compare the distribution of $T$ with the law on $\R$ with density $g_q$.
\end{remark}

Additionally we get a theorem for the random vector $V$, improving Theorem 3.7 in \cite{Gandolfo:2010}.

\begin{theorem} \label{TV}
Let $V$ be defined as in \eqref{defTV}. For $(\beta,h)=(\beta_0,h_0)$ and $\Sigma:= \mathbb{E}[V \, V^t]$ we have that for every three times differentiable function $g$,
$$
\big| \mathbb{E} g(V) - \mathbb{E} g\left(\Sigma^{-1/2} Z\right) \big| \leq C \cdot n^{-1/4},
$$
where $C$ is an absolute constant.
\end{theorem}

\begin{remark}
The proofs of Theorem \ref{TT} and Theorem \ref{TV} employ a fourth-order Taylor expansion of $G_{\beta_0,h_0}$, see \eqref{finaleins} and \eqref{finalzwei} in the Appendix. Without a doubt, the first and the third term in \eqref{finaleins}, as well as in \eqref{finalzwei}, give the order ${\mathcal O}(n^{-1/4})$.
\end{remark}

\section{Stein's method}
Let us fix some more notation. The transpose of the inverse of a matrix will be written in the form $A^{-t}:=(A^{-1})^t$. Furthermore we will need the supremum norm, denoted by $\parallel \cdot\parallel$ for both functions and matrices. For derivatives of smooth functions $f: \R^d \to \R$, we use the notation $\nabla$ for the gradient operator. For a function $f:\mathbb{R}^d\to \mathbb{R}$, we abbreviate
\begin{equation} \label{speznorm}
|f|_1 := \sup\limits_{i}\parallel\frac{\partial}{\partial x_i}f\parallel, \quad |f|_2:= \sup\limits_{i,j}\parallel\frac{\partial^2}{\partial x_i\partial x_j}f\parallel,
\end{equation}
and so on, if these derivatives exist.
Stein's method is based on the following characterization of the normal distribution: $Y \in \mathbb{R}^d$, $d\in \mathbb{N}$, is a centered multivariate normal vector with covariance matrix $\Sigma$ if and only if
\begin{equation} \label{Steinchar}
\mathbb{E}\left[ \nabla^t \Sigma \nabla f(Y)- Y^t \nabla f(Y) \right]=0 \quad \text{for all smooth} \quad f:\mathbb{R}^d \to \mathbb{R}.
\end{equation}
It is well known, see \cite{Barbour:1990} and \cite{Goetze:1991}, that for any differentiable $g:\mathbb{R}^d\to\mathbb{R}$ with bounded first derivatives, if $\Sigma\in\mathbb{R}^{d\times d}$ is symmetric and positive definite, there is a solution $f:\mathbb{R}^d \to \mathbb{R}$ to the equation
\begin{equation} \label{Steinequation}
\nabla^t \Sigma \nabla f(w) - w^t \nabla f(w)= g(w)- \mathbb{E} g \left(\Sigma^{-1/2} Z\right),
\end{equation}
which holds for every $w\in\mathbb{R}^d$. If, in addition, $g$ is $n$ times differentiable, there is a solution $f$ which is also $n$ times differentiable and one has for every $k=1, \ldots, n$ the bound
\begin{equation} \label{Steinbound}
\bigg| \frac{\partial^k f(w)}{\prod_{j=1}^k \partial w_{i_j}} \bigg| \leq \frac 1k \bigg| \frac{\partial^k g(w)}{\prod_{j=1}^k \partial w_{i_j}} \bigg|
\end{equation}
for every $w \in \R^d$. We will apply Theorem 2.1 in \cite{ReinertRoellin:2009}:

\begin{theorem} \label{RR}(Reinert, R\"ollin: 2009)\\
Assume that $(W,W')$ is an exchangeable pair of $\mathbb{R}^d$-valued random vectors such that
$$
\mathbb{E}[W]=0, \quad \mathbb{E} [W \, W^t] =\Sigma,
$$
with $\Sigma \in \mathbb{R}^{d\times d}$ symmetric and positive definite. If $(W,W')$ satisfies \eqref{regressioncond} for an invertible matrix $\Lambda$ and a $\sigma(W)$-measurable random vector $R$ and if $Z$ has $d$-dimensional standard normal distribution, we have for every three times differentiable function $g$,
\begin{equation} \label{mainboundRR}
\big| \mathbb{E} g(W) - \mathbb{E} g\left(\Sigma^{-1/2} Z\right) \big| \leq \frac{|g|_2}{4}A+\frac{|g|_3}{12}B+\left(|g|_1+\frac{1}{2}d\parallel\Sigma\parallel^{1/2} |g|_2\right)C,
\end{equation}
where, with $\lambda^{(i)}:= \sum\limits_{m=1}^d \big| (\Lambda^{-1})_{m,i} \big|$,
\begin{eqnarray}
A &= &\sum\limits_{i,j=1}^d \lambda^{(i)} \sqrt{\mathbb{V} \left[\mathbb{E}[(W'_i-W_i)(W'_j-W_j)\mid W]\right]}, \nonumber \\
B&=&\sum\limits_{i,j,k=1}^d\lambda^{(i)}\mathbb{E} | (W'_i-W_i)(W'_j-W_j)(W'_k-W_k) |,\\
C&=&\sum\limits_{i=1}^d\lambda^{(i)}\sqrt{\mathbb{V}\left[R_i\right]}. \nonumber
\end{eqnarray}
\end{theorem}

The advantage of Stein's method is that the bounds to a multivariate normal distribution reduce to the computation of, or bounds on, low order moments, here bounds on the absolute third moments, on a conditional variance and on the variance of the remainder term. Such variance computations may be difficult, but we will get rates of convergence at the same time. In the same context as in \cite{ReinertRoellin:2009} we can show the following theorem, presenting bounds in Kolmogorov distance. Our development differs from Reinert and R\"ollin, as we use the relationship to the bounds in \cite{Rinott/Rotar:1997}. We obtain a bound of order $\log(n)\cdot n^{-1/2}$ assuming some boundedness, improving Corollary 3.1 in \cite{ReinertRoellin:2009}.

\begin{theorem}\label{Kolmogorov}
Let $(W,W')$ be an exchangeable pair with $\mathbb{E}[WW^t]=\Sigma$.
Again we assume that $(W,W')$ satisfies \eqref{regressioncond} for an invertible matrix $\Lambda$ and a $\sigma(W)$-measurable random vector $R$ and additionally $|W'-W|\leq A$. Then,
\begin{eqnarray*}
\sup_{g\in\mathcal{G}} \big|\mathbb{E}g(W)-\mathbb{E}g(Z)\big| &\leq& C \bigl[ \log(n)A_1+\left(\log(n)\|\Sigma\|^{1/2}+1\right)A_2\\
& &+\,\bigl(1+\log(n)\sum\limits_{i=1}^q\mathbb{E}\bigl|W_i\bigr|+a\bigr)A^3A_3+aA^3\bigl(\frac{1}{A^2}+A_3\bigr) \bigr]
\end{eqnarray*}
where, with $\lambda^{(i)}:=\sum\limits_{m=1}^q\big|(\Lambda^{-1})_{m,i}\big|$,
\begin{eqnarray*}
A_1 &= & \sum\limits_{i,j=1}^q\lambda^{(i)}\sqrt{\mathbb{V}\left[\mathbb{E}[(W'_i-W_i)(W'_j-W_j)\mid W]\right]},\\
A_2&=&\sum\limits_{i=1}^q\lambda^{(i)}\sqrt{\mathbb{E}\left[R_i^2\right]},\\
A_3&=&\sum\limits_{i=1}^q\lambda^{(i)},
\end{eqnarray*}
$C$ denotes a constant and $a>1$ is taken from the conditions on $\mathcal{G}$.
\end{theorem}

\begin{proof}
To prove the theorem, first we assume that $\Sigma={\rm Id}$. Throughout the proof we write $C$ for universal constants, not necessarily the same at each occurrence. First we consider the multivariate Stein equation deduced from \eqref{Steinequation} with $\Sigma={\rm Id}$ given by
\begin{align}\label{MVStein}
\nabla^t \nabla f(w)-w^t\cdot \nabla f(w)&=g(w)-\mathbb{E}[g(Z)].
\end{align}
For $g \in {\mathcal G}$ define the following smoothing
$$
g_t(x)=\int\limits_{\mathbb{R}^q}g\left(\sqrt{t}z+\sqrt{1-t}x\right)\Phi(z)dz.
$$
For $g_t$, \eqref{MVStein} is solved by the function
$$
f_t(x)=-\frac{1}{2}\int\limits_{t}^{1} \bigl( g_s(x)-\mathbb{E}[g(Z)] \bigr) \frac{ds}{1-s},
$$
see \cite{Goetze:1991}. Again by \cite{Goetze:1991}, we have that for $|g|\leq 1$, there exists a constant $C$, depending only on the dimension $q$, such that in the notation \eqref{speznorm}
\begin{align}\label{GoeAb}
|f_t|_1 &\leq C\\\label{GoeAb2}
|f_t|_2 & \leq C\log(t^{-1}).
\end{align}
According to \cite{Goetze:1991} (see also \cite{ReinertRoellin:2009}, Lemma A.1) there is also a constant $C>0$, depending on $q$, such that for all $t\in(0,1)$:
\begin{align}\label{Abnonsm}
\sup\bigl\{\left|\mathbb{E}g(W)-\mathbb{E}g(Z)\right|:g\in\mathcal{G}\bigr\}&\leq C\cdot \delta_t+a\sqrt{t},
\end{align}
where $a>1$ is the constant of $\mathcal{G}$ and
\begin{align}\nonumber
\delta_t:=\sup\bigl\{\left|\mathbb{E}g_t(W)-\mathbb{E}g_t(Z)\right|:g\in\mathcal{G}\bigr\}.
\end{align}
Thus, it remains to estimate $\delta_t$. Using exchangeability and the linearity condition we have
\begin{align}\nonumber
0&=\frac{1}{2}\mathbb{E}\bigl[(W'-W)^t\Lambda^{-t}(\nabla f_t(W')+\nabla f_t(W))\bigr]\\\nonumber
&=\mathbb{E}\bigl[(W'-W)^t\Lambda^{-t}\nabla f_t(W)\bigr]+\frac{1}{2}\mathbb{E}\bigl[(W'-W)^t\Lambda^{-t}(\nabla f_t(W')-\nabla f_t(W))\bigr]\\\nonumber
&=\mathbb{E}\bigl[R^t\Lambda^{-t}\nabla f_t(W)\bigr]-\mathbb{E}\bigl[W^t\nabla f_t(W)\bigr]+\frac{1}{2}\mathbb{E}\bigl[(W'-W)^t\Lambda^{-t}(\nabla f_t(W')-\nabla f_t(W))\bigr].
\end{align}
Abbreviating $f^{(1)}_j:=\frac{\partial}{\partial x_j}f_t$ and $f^{(2)}_{i,j}:=\frac{\partial^2}{\partial x_j\partial x_i}f_t$, etc.
for a function $f_t$, we obtain \begin{align}\nonumber \mathbb{E}\bigl[W^t\nabla f_t(W)\bigr]&=\frac{1}{2}\mathbb{E}\bigl[(W'-W)^t\Lambda^{-t}(\nabla f_t(W')-\nabla f_t(W))\bigr]+\mathbb{E}\bigl[R^t\Lambda^{-t}\nabla f_t(W)\bigr]\\\nonumber &=\frac{1}{2}\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\bigl[(W_i'-W_i)(f^{(1)}_j(W')-f^{(1)}_j(W))\bigr]\\\nonumber &~~+\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\bigl[R_if^{(1)}_j(W)\bigr]. \end{align} Hence,\\ $\mathbb{E}g_t(W)-\mathbb{E}g_t(Z)$ \begin{align}\nonumber &=\sum\limits_{i=1}^q\mathbb{E}\bigl[f^{(2)}_{i,i}(W)\bigr]-\frac{1}{2}\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\bigl[(W_i'-W_i)(f^{(1)}_j(W')-f^{(1)}_j(W))\bigr]\\\nonumber &~~-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\bigl[R_if^{(1)}_j(W)\bigr]\\\nonumber &=\frac{1}{2}\mathbb{E}\bigl[2\sum\limits_{i=1}^qf^{(2)}_{i,i}(W)-2\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_iW_jf^{(2)}_{j,j}(W)\\\nonumber &~~-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}(W'_i-W_i)(W'_j-W_j)f^{(2)}_{j,j}(W)\bigr]\\\nonumber &~~+\mathbb{E}\bigl[\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_iW_jf^{(2)}_{j,j}(W)-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_if^{(1)}_j(W)\bigr]\\\nonumber &~~-\frac{1}{2}\mathbb{E}\bigl[\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}(W_i'-W_i)(f^{(1)}_j(W')-f^{(1)}_j(W))\\\nonumber &~~-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}(W'_i-W_i)(W'_j-W_j)f^{(2)}_{j,j}(W)\bigr]\\\nonumber &=:J_1+J_2+J_3. \end{align} Using \eqref{GoeAb2} and the fact that $\mathbb{E}\bigl[(W'-W)(W'-W)^t\bigr]=2\Lambda^t-2\mathbb{E}\bigl[WR^t\bigr]$ we get \begin{align}\nonumber |J_1|&=\bigg|\frac{1}{2}\mathbb{E}\bigl[2\sum\limits_{i=1}^qf^{(2)}_{i,i}(W)-2\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_iW_jf^{(2)}_{j,j}(W)\\\nonumber &~~-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}(W'_i-W_i)(W'_j-W_j)f^{(2)}_{j,j}(W)\bigl]\bigg|\\\nonumber &\leq C\log(t^{-1})\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\bigl|2(\Lambda)_{m,i}-2R_iW_j-\mathbb{E}\bigl[(W'_i-W_i)(W'_j-W_j)| W\bigr]\bigr|\\\nonumber &\leq C\log(t^{-1})\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\sqrt{\mathbb{E}\bigl[\left(2(\Lambda)_{m,i}-2R_iW_j-\mathbb{E}\bigl[(W'_i-W_i)(W'_j-W_j)| W\bigr]\right)^2\bigr]}\\\nonumber &=C\log(t^{-1})\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\sqrt{\mathbb{V}\left[\mathbb{E}[(W'_i-W_i)(W'_j-W_j)| W]\right]}. \end{align} Additionally, again using \eqref{GoeAb} and \eqref{GoeAb2} and the fact that $\mathbb{E}\bigl[WW^t\bigr]=\Sigma$, we have \begin{align}\nonumber |J_2|&=\bigg|\mathbb{E}\bigl[\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_iW_jf^{(2)}_{j,j}(W)-\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}R_if^{(1)}_j(W)\bigr]\bigg|\\\nonumber &\leq C\log(t^{-1})\sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\mathbb{E}\left|R_iW_j\right|-C\sum\limits_{m,i,j}(\Lambda^{-1})_{m,i}\mathbb{E}\left|R_i\right|\\\nonumber &\leq C\log(t^{-1})\sum\limits_{m,i=1}^q(\Lambda^{-1})_{m,i}\sqrt{\mathbb{E}\left|R_i\right|^2}. \end{align} The estimation of $J_3$ is a bit more involved. 
We have \begin{align}\nonumber |2J_3|&=\bigg|\sum\limits_{m,i,j=1}^q\mathbb{E}\bigl[(\Lambda^{-1})_{m,i}(W_i'-W_i)(f^{(1)}_j(W')-f^{(1)}_j(W))\\\nonumber &~~-(\Lambda^{-1})_{m,i}(W'_i-W_i)(W'_j-W_j)f^{(2)}_{j,j}(W)\bigr]\bigg|\\\nonumber &\leq \sum\limits_{m,i,j=1}^q(\Lambda^{-1})_{m,i}\bigl|\mathbb{E}\bigl[(W_i'-W_i)(f^{(1)}_j(W')-f^{(1)}_j(W)-(W'_j-W_j)f^{(2)}_{j,j}(W))\bigr]\bigr|\\\nonumber &\leq \sum\limits_{m,i=1}^q(\Lambda^{-1})_{m,i} \bigg|\mathbb{E}\bigl[A^3\sum\limits_{j,k=1}^q\int\limits_{0}^1(1-\tau)f^{(3)}_{j,j,k}(W'-\tau(W'-W))d\tau\bigr] \bigg|. \end{align} We abbreviate $M:=W'-\tau(W'-W)$. It is important to notice that we can use \eqref{MVStein} to obtain for $w\in\mathbb{R}^q$ \begin{center} $\sum\limits_{j,k=1}^qf^{(3)}_{j,j,k}(w)+\sum\limits_{j,k=1}^qw_jf^{(2)}_{j,k}(w)+\sum\limits_{k=1}^qf^{(1)}_{k}(w)=\sum\limits_{k=1}^qg^{(1)}_{k}(w)$. \end{center} Additionally we notice that $$ \int\limits_0^1(1-\tau)d\tau =\frac{1}{2}\quad \text{and} \quad \bigg| \int\limits_0^1(1-\tau)M_id\tau \bigg| \leq|W'_i|+|W_i|. $$ Thus, \begin{align}\nonumber |2J_3|&\leq \sum\limits_{m,i=1}^q(\Lambda^{-1})_{m,i}A^3\bigg| \mathbb{E}\bigl[\int\limits_0^1(1-\tau)[-\sum\limits_{k=1}^qf^{(1)}_{k}(M)-\sum\limits_{j,k=1}^qM_jf^{(2)}_{j,k}(M) +\sum\limits_{k=1}^qg^{(1)}_{k}(M)]d\tau\bigr]\bigg|\\\nonumber &\leq \sum\limits_{m,i=1}^q(\Lambda^{-1})_{m,i}\bigg( A^3C+A^3\sum\limits_{j=1}^qC\log(t^{-1})\mathbb{E}\bigl[|W'_j|+|W_j|\bigr]\\\nonumber &~~+\underbrace{ \bigg|A^3\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_0^1(1-\tau)g^{(1)}_{k}(M)d\tau\bigr]\bigg|}_{=:T}\bigg)\\\nonumber &\leq \sum\limits_{m,i=1}^q(\Lambda^{-1})_{m,i}\left[A^3C\bigl(1+\log(t^{-1})\sum\limits_{j=1}^q\mathbb{E}\bigl|W_j\bigr|\bigr)+T\right]. \end{align} By partial integration we obtain $$ \sum\limits_{k=1}^qg^{(1)}_{k}(w)=\sum\limits_{k=1}^q\frac{\sqrt{1-t}}{\sqrt{t}}\int\limits_{\mathbb{R}^q}g\left(\sqrt{t}z+\sqrt{1-t}w\right)\Phi^{(1)}_{k}(z)dz. $$ Keeping in mind that $\sum\limits_{k=1}^q\int\limits_{\mathbb{R}^q}\Phi^{(1)}_{k}(z)dz=0$ and using the definitions \eqref{g+}, \eqref{g-} and \eqref{gschl} we obtain \begin{align}\nonumber T&\leq \frac{\sqrt{1-t}}{\sqrt{t}}A^3\bigg|\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_0^1\int\limits_{\mathbb{R}^q}(1-\tau)g(\sqrt{t}z+\sqrt{1-t}M)\Phi^{(1)}_{k}(z)dzd\tau\bigr]\bigg|\\\nonumber &=\frac{\sqrt{1-t}}{\sqrt{t}}A^3\bigg|\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_0^1\int\limits_{\mathbb{R}^q}(1-\tau)[g(\sqrt{t}z+\sqrt{1-t}M)-g(\sqrt{1-t}M)]\Phi^{(1)}_{k}(z)dzd\tau\bigr]\bigg|\\\nonumber &\leq \frac{\sqrt{1-t}}{2\sqrt{t}}A^3\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_{\mathbb{R}^q}[g_{\sqrt{1-t}A+\sqrt{t}|z|}^+(\sqrt{1-t}W)-g_{\sqrt{1-t}A+\sqrt{t}|z|}^-(\sqrt{1-t}W)]\bigl|\Phi^{(1)}_{k}(z)\bigr|dz\bigr]\\\nonumber &\leq \frac{\sqrt{1-t}}{2\sqrt{t}}A^3\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_{\mathbb{R}^q}\underbrace{\tilde g(\sqrt{1-t}W;\sqrt{1-t}A+\sqrt{t}|z|)}_{=:\tilde g(W,A,t,z)}\bigl|\Phi^{(1)}_{k}(z)\bigr|dz\bigr]\\\nonumber &\leq \frac{\sqrt{1-t}}{2\sqrt{t}}A^3\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_{\mathbb{R}^q}[\tilde g(W,A,t,z)-\tilde g(Z,A,t,z)]\bigl|\Phi^{(1)}_{k}(z)\bigr|dz\bigr]\\\nonumber &~~+\frac{\sqrt{1-t}}{2\sqrt{t}}A^3\sum\limits_{k=1}^q\mathbb{E}\bigl[\int\limits_{\mathbb{R}^q}\tilde g(Z,A,t,z)\bigl|\Phi^{(1)}_{k}(z)\bigr|dz\bigr]\\\nonumber &=:B_1+B_2. 
\end{align}
With $\delta:=\sup\bigl\{\left|\mathbb{E}[g(W)]-\mathbb{E}[g(Z)]\right|:g\in\mathcal{G}\bigr\}$ we obtain
$$
B_1\leq \frac{\sqrt{1-t}}{2\sqrt{t}}A^3\cdot \delta\leq \frac{C\delta}{\sqrt{t}}A^3.
$$
Furthermore, by using the conditions established for the function classes $\mathcal{G}$,
\begin{align}\nonumber
B_2&\leq\frac{\sqrt{1-t}}{2\sqrt{t}}A^3\cdot a\left(\sqrt{1-t}A+\sum\limits_{k=1}^q\int\limits_{\mathbb{R}^q}\sqrt{t}\bigl|z\bigr|\bigl|\Phi^{(1)}_{k}(z)\bigr|dz\right)\\\nonumber
&=a\frac{\sqrt{1-t}}{2\sqrt{t}}A^4+aC\frac{\sqrt{1-t}}{2}A^3\\\nonumber
&\leq aCA^3\left(\frac{A}{\sqrt{t}}+1\right).
\end{align}
Thus, combining the estimates of $J_1,J_2$ and $J_3$ with \eqref{Abnonsm} we have
\begin{align}\nonumber
\delta&\leq C\bigl[\log(t^{-1})A_1+\log(t^{-1})A_2+A^3A_3\bigl(1+\log(t^{-1})\sum\limits_{i=1}^q\mathbb{E}\bigl|W_j\bigr|\bigr)\\\nonumber
&~~+\frac{\delta}{\sqrt{t}}A^3A_3+aA^3A_3\bigl(\frac{A}{\sqrt{t}}+1\bigr)\bigr]+a\sqrt{t}.
\end{align}
Setting $\sqrt{t}=2CA^3A_3$, provided it is less than 1, simple manipulations yield the result for $\Sigma=Id$. If $t>1$ for the choice above, then by enlarging $C$ as necessary, the theorem is trivial.\\
For general $\Sigma$ we can standardize $Y=\Sigma^{-1/2}W$. With the conditions of $\mathcal{G}$ we have that $g\bigl(\Sigma^{-1/2}x\bigr)\in\mathcal{G}$. Hence the bounds \eqref{GoeAb} and \eqref{GoeAb2} can be applied. The proof now continues as for the $\Sigma=Id$ case, but with the standardized variables.
\end{proof}

In \cite{Eichelsbacher/Loewe:2010}, as well as in \cite{Chatterjee/Shao:2009}, Stein's method of exchangeable pairs was introduced for non-normal distributional approximations. These papers consider a class of densities on $\R$ which was originally introduced in \cite{DiaconisStein:2004}. Let $p$ be a regular, strictly positive density on the interval $I=[a,b]$. We suppose that this density has a derivative $p'$ that is also regular on $I$ with countably many sign changes. Furthermore $p'$ should be continuous at the sign changes and $\int_I p(x) | \log(p(x)) | dx<\infty$. Additionally we assume that
\begin{equation} \label{psi}
\psi(x):=\frac{p'(x)}{p(x)}
\end{equation}
is regular. A density $p$ fulfilling these conditions will be called {\it nice}. Now a random variable $Z$ is distributed according to $p$ if and only if
$$
\mathbb{E} \left[f'(Z)+\psi(Z) f(Z)\right] = 0
$$
for a suitable class of functions. The corresponding Stein identity is
\begin{equation} \label{Steineqp}
f'(x)+\psi(x) f(x)=g(x)-P(g),
\end{equation}
where $g$ is a measurable function for which $\int\limits_I |g(x)| p(x) \, dx < \infty$, $P(x):=\int\limits_{-\infty}^x p(y)\,dy$ and $P(g):=\int\limits_I g(y)p(y) \, dy$. For the proof of Theorem \ref{TT} we will apply Theorem 2.4 and Theorem 2.5 in \cite{Eichelsbacher/Loewe:2010}:

\begin{theorem} \label{EL}
Let $p$ be a {\it nice} density. Let $p_W$ be a probability distribution such that a random variable $Z_W$ is distributed according to $p_W$ if and only if $\E \bigl( \E[W \psi(W)] f'(Z_W) + \psi(Z_W) f(Z_W) \bigr)=0$ for a suitably chosen class of functions and with $\psi$ as in \eqref{psi}. Let $(W,W')$ be an exchangeable pair of real-valued random variables such that
\begin{equation} \label{pStein}
\mathbb{E} \left[W'-W\mid W\right]= \lambda\psi(W)-R(W)
\end{equation}
for some random variable $R=R(W)$, $0<\lambda<1$ and $\psi$ as in \eqref{psi}.
{\bf (1)}: Let us assume that for any absolutely continuous function $g$ the solution $f_g$ of \eqref{Steineqp} satisfies
$$
\parallel f_g \parallel \leq c_1 \parallel g'\parallel, \quad \parallel f_g'\parallel \leq c_2 \parallel g'\parallel \quad \text{and} \quad \parallel f_g''\parallel\leq c_3\parallel g'\parallel.
$$
Then for any uniformly Lipschitz function $g$, we obtain $| \mathbb{E} \left[g(W)\right]-\mathbb{E}\left[g(Z_W)\right]| \leq \delta \parallel g'\parallel$ with
\begin{equation} \label{boundp}
\delta: = \frac{c_2}{2 \lambda} \bigl( \V \bigl( \E[(W-W')^2|W] \bigr) \bigr)^{1/2} + \frac{c_3}{4 \lambda} \E |W-W'|^3 + \frac{c_1+c_2 \sqrt{\E(W^2)}}{\lambda} \sqrt{ \E (R^2)}.
\end{equation}
{\bf (2)}: Let us assume that for any function $g(x) := 1_{\{x \leq z\}}(x)$, $z \in \R$, the solution $f_z$ of \eqref{Steineqp} satisfies
$$
|f_z(x)| \leq d_1, \quad |f_z'(x)| \leq d_2 \quad \text{and} \quad |f_z'(x)-f_z'(y) | \leq d_3
$$
and
\begin{equation} \label{addcond}
|(\psi(x) \, f_z(x))'| = \bigl| ( \frac{p'(x)}{p(x)} \, f_z(x))' \bigr| \leq d_4
\end{equation}
for all real $x$ and $y$, where $d_1, d_2, d_3$ and $d_4$ are constants. Then we obtain for any $A >0$
\begin{eqnarray} \label{kolall}
\sup_{t \in \R} \big| P(W \leq t) - \int_{-\infty}^t p_W(t) \, dt \big| & \leq & \frac{d_2}{2 \lambda} \bigl( \V \bigl( \E[(W'-W)^2 |W] \bigr) \bigr)^{1/2} \nonumber \\
& & \hspace{-2cm} + \bigl( d_1 + d_2 \sqrt{\E(W^2)} + \frac 32 A \bigr) \frac{\sqrt{\E(R^2)}}{\lambda} + \frac{1}{\lambda} \bigl( \frac{d_4 A^3}{4} \bigr) \\
& & \hspace{-2cm} + \frac{3A}{2} \E (|\psi(W)|) + \frac{d_3}{2 \lambda} \E \bigl( (W-W')^2 1_{\{|W-W'| \geq A\}} \bigr). \nonumber
\end{eqnarray}
\end{theorem}

\section{Auxiliary results}
Let us fix the convention that for $k,t \in\{1,\ldots,q\}$ we will always write $\sum\limits_{k\neq m}$ instead of $\sum\limits_{\stackrel{k=1}{k\neq m}}^q$, $\sum\limits_{k,t\neq m}$ instead of $\sum\limits_{\stackrel{k,t=1}{k\neq m, t \neq m}}^q$ and so on. For the proof of the multivariate normal approximations we will apply Theorem \ref{RR} and Theorem \ref{Kolmogorov}, respectively. In the introduction we already presented the construction of the exchangeable pair for $W$, the rescaled empirical spin vector of the Curie-Weiss-Potts model. We have already seen in \eqref{motivation} that this construction will lead us to $G_{\beta,h}$, defined in \eqref{Gfunktion}. We collect some further results on this function. First we state a result on the structure of the minimizers of $G_{\beta,h}$, determined in several papers and collected in \cite{Gandolfo:2010}:

\begin{prop} \label{ChMi}
Let $\beta,h\geq 0$ and let $x$ be a global minimum point of $ G_{\beta,h}$. Then:
\begin{enumerate}
\item The vector $x$ has the coordinate $\min(x_i)$ repeated $q-1$ times at least.
\item If $h>0$, then $x_1>x_i$, for all $i\in \{2,\ldots,q\}$.
\item The inequality $\min(x_i)>0$ holds.
\item For any $q\geq 3$ and any $(\beta,h)$, or $q=2$ and $(\beta,h)\neq (\beta_c,0)$, where $\beta_c$ denotes the critical inverse temperature, one has $\min(x_i)<1/\beta$.
\end{enumerate}
\end{prop}

An important identity is the following simple statement:

\begin{lemma} \label{expGD}
For $u\in\mathbb{R}^q$, we obtain
$$
\frac{\exp\left(\beta u_m+h\delta_{m,1}\right)}{\sum\limits_{k=1}^q\exp\left(\beta u_k+h\delta_{k,1}\right)}= u_m-\frac{1}{\beta}\frac{\partial}{\partial u_m}G_{\beta,h}(u).
$$ \end{lemma} \begin{proof} Direct calculation yields: $$ \frac{\partial}{\partial u_m}G_{\beta,h}(u)=\frac{\partial}{\partial u_m}\left(\frac{\beta}{2}\langle u,u\rangle - \log\left(\sum\limits_{k=1}^q\exp\left(\beta u_k+h\delta_{k,1}\right)\right)\right) =\beta u_m-\beta \frac{\exp\left(\beta u_m+h\delta_{m,1}\right)}{\sum\limits_{k=1}^q\exp\left(\beta u_k+h\delta_{k,1}\right)}. $$ Rearranging the equality gives the result. \end{proof} Using the notation $m(\sigma)=(m_1(\sigma),\ldots,m_q(\sigma))$ with \begin{equation} \label{mit} m_i(\sigma):=\frac{1}{n}\sum_{j=1}^n \delta_{\sigma_j,i} \quad \text{and} \quad m_{i,t}(\sigma):=\frac{1}{n}\sum_{j \neq t}^n \delta_{\sigma_j,i} \end{equation} we obtain: \begin{lemma} \label{ConD} For arbitrary $i\in \{1,\ldots,q\}$ we have \begin{center} $P_{\beta,h,n}\left(\sigma_j=i\mid (\sigma_t)_{t\neq j}\right)= \begin{cases} \frac{\exp\left(\beta m_{i,j}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)+h\delta_{k,1}\right)}, \quad i \in \{2,\ldots,q\}; \\ \frac{\exp\left(\beta m_{i,j}(\sigma)+h\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)+h\delta_{k,1}\right)}, \quad i=1. \end{cases}$ \end{center} \end{lemma} \begin{proof} For $x_1,\ldots,x_n \in \{1,\ldots,q\}$ we have: $$ P_{\beta,h,n}\left(\sigma_j=i\mid (\sigma_t)_{t\neq j}\right) = \frac{P_{\beta,h,n}\left(\{\sigma_j=i\}\cap \{(\sigma_t)_{t\neq j}\}\right)}{P_{\beta,h,n}\left(\{(\sigma_t)_{t\neq j}\}\right)}. $$ For any fixed $x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n$ we obtain \begin{eqnarray*} & & \frac{P_{\beta,h,n}\left(\sigma_1=x_1,\ldots,\sigma_j=i,\ldots,\sigma_n=x_n\right)}{P_{\beta,h,n} \left(\sigma_1=x_1,\ldots,\sigma_{j-1}=x_{j-1},\sigma_{j+1}=x_{j+1},\ldots,\sigma_n=x_n\right)}\\ &=& \frac{Z_{\beta,h,n}^{-1}\exp\left(\frac{\beta}{2n}\sum\limits_{l,t\neq j}^n\delta_{x_l,x_t}+\frac{\beta}{2n}+ \frac{\beta}{n}\sum\limits_{l\neq j}^n\delta_{x_l,i}+h\sum\limits_{l\neq j}^n \delta_{x_l,1}+h\delta_{i,1}\right)}{\sum\limits_{k=1}^qZ_{\beta,h,n}^{-1}\exp\left(\frac{\beta}{2n} \sum\limits_{l,t\neq j}^n\delta_{x_l,x_t}+\frac{\beta}{2n}+\frac{\beta}{n} \sum\limits_{l\neq j}^n\delta_{x_l,k}+h\sum\limits_{l\neq j}^n\delta_{x_l,1}+h\delta_{k,1}\right)}. \end{eqnarray*} Cancelling equivalent terms in numerator and denominator and finally distinguishing between $i=1$ and $i\neq 1$ yields the result. \end{proof} In the case $h=0$ in \cite[Proposition 2.2]{Ellis/Wang:1990} it is proved that the Hessian $D^2 G_{\beta,0}(x_0)$ of $G_{\beta,0}$ is positive definite if $x_0$ is a global minimum point, and hence invertible. In \cite[Lemma 2]{Wang:1994} it is stated that $D^2 G_{\beta,h}(x_0)$ is positive definite for any $\beta > 0$ and $h \geq 0$, if $x_0$ is a global minimum point. However this result is not correct. The non-degeneracy of $G_{\beta,h}$ at its minimum points for any $(\beta,h) \not= (\beta_0, h_0)$ is stated next and will be proved in the Appendix. \begin{lemma} \label{wangimprovement} For all $q>2$ let $x_s\in\mathbb{R}^q$ denote a global minimum point of $G_{\beta,h}$. Then, if $(\beta,h)\neq (\beta_0,h_0)$, $\beta$ never takes one of the values $\{\frac{1}{x_{s,q}},\frac{1}{qx_{s,1}x_{s,q}}\}$ (implying that $D^2 G_{\beta,h}(x_s)$ is positive definite for any $(\beta,h)\neq (\beta_0,h_0)$). 
\end{lemma}
For the rescaled empirical spin vector of the Curie-Weiss-Potts model, appearing in Theorems \ref{THUM}, \ref{THUM2} and \ref{THMM}, we can bound higher order moments as follows:
\begin{lemma} \label{EnMo}
For $W=(W_1,\ldots,W_q)$ as in Theorems \ref{THUM}, \ref{THUM2} and \ref{THMM} (the $W^{(i)}$) we obtain that for any $l\in \mathbb{N}$ and $j\in\{1,\ldots,q\}$
$$
\mathbb{E} \big| W_j^l \big| \leq \text{const.} (l), \quad \mathbb{E} \big| \bigl( W_j^{(i)} \bigr)^l \big| \leq \text{const.} (l).
$$
\end{lemma}
\begin{proof}
We consider a well known transformation, sometimes called the {\it Hubbard-Stratonovich transformation}, expressing the distribution of $L_n$ in the Curie-Weiss-Potts model in terms of $G_{\beta,h}$. For $\beta >0$ we pick a random vector $Y$ such that $\mathcal{L}(Y)$ is the law of a $q$-dimensional centered Gaussian vector with covariance matrix $\beta^{-1} \text{Id}$, and $Y$ is chosen to be independent of $N$. Here Id denotes the $q\times q$ identity matrix. According to a simple adaptation of Lemma 3.2 in \cite{Ellis/Wang:1990}, for any point $m\in\mathbb{R}^q$ and $\gamma\in\mathbb{R}$ and any $n\in\mathbb{N}$ we have
$$
\mathcal{L}\left(\frac{Y}{n^{1/2-\gamma}}+\frac{n(N/n-m)}{n^{1-\gamma}}\right) = \exp\left[-n G_{\beta,h}\left(m+\frac{y}{n^{\gamma}}\right)\right]dy \, \left(\int\limits_{\mathbb{R}^q}\exp\left[-n G_{\beta,h} \left(m+\frac{y}{n^{\gamma}}\right)\right]dy\right)^{-1}.
$$
Lemma 3.2 in \cite{Ellis/Wang:1990} presented this identity only for $h=0$. The calculations for any $h \not= 0$ are omitted. We apply this lemma with $\gamma=\frac{1}{2}$ and $m=x_0$ (or any other minimum point of $G_{\beta,h}$); adding the independent Gaussian vector $Y$ does not change the finiteness of any of the moments of the $W_i$. Thus, the new measure has the density
$$
\exp\left[-n G_{\beta,h}\left(x_0+\frac{y}{n^{1/2}}\right)\right]dy\left(\int\limits_{\mathbb{R}^q}\exp\left[-n G_{\beta,h}\left(x_0+\frac{y}{n^{1/2}}\right)\right]dy\right)^{-1}.
$$
Using second order multivariate Taylor expansion of $G_{\beta,h}$ and the fact that $x_0$ is a global minimum point of $G_{\beta,h}$ we see that the density of the new measure with respect to Lebesgue measure is given by $\text{const.} \exp\left[- \frac 12 \langle y,D^2 G_{\beta,h}(x_0) \, y \rangle \right]$ (up to negligible terms). With Lemma \ref{wangimprovement} we know that for any $(\beta,h) \not= (\beta_0,h_0)$ the Hessian is positive definite, if $x_0$ is a global minimum point. This fact combined with the transformation of integrals yields that a measure with this density has moments of any finite order.
\end{proof}
For the random variables $T$ and $V$ in Theorems \ref{TT} and \ref{TV}, we can bound higher order moments as well:
\begin{lemma} \label{EnMo2}
Consider the extremity $(\beta,h)=(\beta_0,h_0)$. For $T$ and $V$ as in \eqref{defTV} we obtain that for any $l\in \mathbb{N}$ and $j\in\{1,\ldots,q\}$
$$
\mathbb{E} \big| V_j^l \big| \leq \text{const.} (l), \quad \mathbb{E} | T^l | \leq \text{const.} (l).
$$
\end{lemma}
\begin{proof}
Remark that with $V \in {\mathcal M} \cap u^{\bot}$ we obtain $V_1=0$. Hence $W_1 = (1-q) n^{1/4} T$. Therefore
$$
V = \frac{1}{n^{1/2}} \bigl( N-nx-n^{3/4}Tu \bigr) = W- n^{1/4} Tu = (0, W_2 + \frac{1}{(q-1)} W_1, \ldots, W_q + \frac{1}{(q-1)} W_1).
$$
Since $\bar{W} :=(W_1, \ldots, W_q) \in {\mathcal M}$, we have $W_1 = - \sum_{k=2}^q W_k$. We need to check that $\bar{V} := (V_2,\ldots, V_q)$ has finite moments. Thus it suffices to check that $(W_2, \ldots, W_q)$ has finite moments.
Now we define $G_{q-1}(x)$, $x \in \R^{q-1}$, to be the restriction of $G_{\beta_0,h_0}$ to the last $q-1$ coordinates (the first coordinate will be fixed to $1/2$ in the sequel). Again we apply the Hubbard-Stratonovich transformation introduced in the proof of Lemma \ref{EnMo}. We choose a $q-1$ dimensional Gaussian vector $Y$ with covariance matrix $\beta_0^{-1} \text{Id}_{q-1}$ and independent of $(W_2, \ldots, W_q)$. With $x=( 1/2(q-1), \ldots, 1/2(q-1)) \in \R^{q-1}$ we have that the law of $Y + (W_2, \ldots, W_q)$ has the density
$$
\exp\left[-n G_{q-1} \left( x+\frac{y}{n^{1/2}} \right) \right] dy \, \left( \int\limits_{\mathbb{R}^{q-1}} \exp\left[-n G_{q-1} \left( x+\frac{y}{n^{1/2}} \right) \right] dy \right)^{-1}.
$$
Using second order multivariate Taylor expansion of $G_{q-1}$ and the fact that $(\nabla G_{q-1})(x)=0$ (since $(1/2,x) \in \R^q$ is a global minimum point of $G_{\beta_0,h_0}$), we see that the density of the new measure with respect to Lebesgue measure is given by $\text{const.} \exp\left[- \frac 12 \langle y,D^2 G_{q-1}(x) \, y \rangle \right]$ (up to negligible terms). Using the formulas for the second partial derivatives of $G_{\beta_0,h_0}$, see Remark \ref{extrem} in the Appendix, we obtain that
$$
D^2 G_{q-1}(x) = \frac{4}{q^2} \bigl( \mathbf{1}_{q-1} + (q-1)(q-2)\, \text{Id}_{q-1} \bigr),
$$
where $\mathbf{1}_{q-1}$ denotes the $(q-1) \times (q-1)$ matrix with all entries equal to 1. It is an immediate computation that
$$
\det(D^2 G_{q-1}(x)) = \biggl( \frac{4(q-1)(q-2)}{q^2} \biggr)^{q-2} \biggl( \frac{4(q-1)(q-2)}{q^2} + \frac{4(q-1)}{q^2} \biggr),
$$
which shows the invertibility of $D^2 G_{q-1}(x)$ for any $q \geq 3$. Thus $D^2 G_{q-1}(x)$ is positive definite. This fact combined with the transformation of integrals yields that a measure with this density has moments of any finite order. For $(1-q)T= n^{-1/4} W_1$ we apply the Hubbard-Stratonovich transform with $\gamma = 1/4$. Take a Gaussian random variable $Y$ with expectation zero and variance $\beta_0^{-1}$, independent of $W_1$. The distribution of $n^{-1/4} Y + T$ has a density proportional to $\exp\bigl( -n G_1 ( c_q \cdot 1/2 + y/n^{1/4} ) \bigr)$ with some constant $c_q$ only depending on $q$ and $G_1$ being the restriction of $G_{\beta_0,h_0}$ to the first component. A fourth order Taylor expansion similar to \eqref{huebsch} gives $G_1(x+t) = G_1(x) + \frac{1}{24} G_1^{(4)} (x + \alpha t) t^4$ for some $\alpha \in (0,1)$. Hence we end up with a measure whose Lebesgue density is $\text{const.} \exp( -y^4)$, and such a measure has moments of any finite order. We omit the details.
\end{proof}
\section{Proofs of the theorems}
\begin{proof}[Proof of Theorem \ref{THUM}]
Our goal is to apply Theorem \ref{RR}. First, given $W$, we construct a coupling $W'$ and calculate $\Lambda$ and $R$ to get the approximative regression identity \eqref{regressioncond}. We first of all deal with the case $h =0$. Hence by Theorem \ref{Minima} we have $\beta < \beta_c$, and $x_0=( 1/q, \ldots, 1/q)$ is the unique minimum point of $G_{\beta,0}$.
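As a quick consistency check, which we spell out here for convenience, Lemma \ref{expGD} confirms that $x_0$ is a critical point of $G_{\beta,0}$: evaluating the identity at $u=x_0$ and $h=0$ gives
$$
\frac{\exp(\beta/q)}{\sum\limits_{k=1}^q \exp(\beta/q)} = \frac{1}{q} = x_{0,m}-\frac{1}{\beta}\frac{\partial}{\partial u_m}G_{\beta,0}(x_0), \quad \text{hence} \quad \frac{\partial}{\partial u_m}G_{\beta,0}(x_0)=0 \quad \text{for all } m\in\{1,\ldots,q\}.
$$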
By the construction given in the introduction, applying Lemma \ref{ConD} and Lemma \ref{expGD} we obtain for any $i=1, \ldots, q$:
\begin{eqnarray} \label{motivation2}
\E [ W_i' - W_i | \mathcal{F} ] & = & - \frac 1n W_i - \frac{x_{0,i}}{\sqrt{n}} + R_n^{(1)}(i) + \frac{1}{\sqrt{n}} \biggl( m_i(\sigma) - \frac{1}{\beta} \frac{\partial}{\partial u_i} G_{\beta,0} (m(\sigma)) \biggr) \nonumber \\
& = & -\frac{1}{\sqrt{n}} \frac{1}{\beta} \frac{\partial}{\partial u_i} G_{\beta,0} (m(\sigma)) + R_n^{(1)}(i)
\end{eqnarray}
with
\begin{equation} \label{R1i}
R_n^{(1)}(i) := \frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n \left[\frac{\exp\left(\beta m_{i,j}(\sigma)\right)}{\sum\limits_{k=1}^q \exp\left(\beta m_{k,j}(\sigma)\right)}-\frac{\exp\left(\beta m_{i}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k}(\sigma)\right)}\right],
\end{equation}
where $m_i(\sigma)$ and $m_{i,j}(\sigma)$ are defined as in \eqref{mit}. We have used
\begin{equation} \label{easy}
m_i(\sigma) - x_{0,i} = \frac{W_i}{\sqrt{n}}.
\end{equation}
Now we apply \eqref{taylorsecond} (see Appendix) to the first summand in \eqref{motivation2}. Since $x_0$ is a global minimum of $ G_{\beta,0}$ we have $ \bigl( \frac{\partial}{\partial u_i} G_{\beta,0} \bigr)(x_0)=0$. Hence the first summand in \eqref{motivation2} is equal to
$$
- \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial^2 u_i} G_{\beta,0} \right) (x_0) \, W_i - \frac{1}{\beta \, n} \sum \limits_{k\neq i} \left( \frac{\partial^2}{\partial u_i\partial u_k} G_{\beta,0} \right) (x_0) \, W_k + R_n^{(2)}(i)
$$
with
\begin{equation} \label{R2i}
R_n^{(2)}(i) : =\mathcal{O}\left(\frac{1}{\sqrt{n}}\left(\frac{W_i}{\sqrt{n}}\right)^2\right)-\sum\limits_{k\neq i}\mathcal{O} \left(\frac{1}{\sqrt{n}}\frac{W_i}{\sqrt{n}}\frac{W_k}{\sqrt{n}}\right)-\sum\limits_{k,t\neq i}\mathcal{O} \left(\frac{1}{\sqrt{n}}\frac{W_k}{\sqrt{n}}\frac{W_t}{\sqrt{n}}\right).
\end{equation}
Summarizing with $R(i) := R_n^{(1)}(i) + R_n^{(2)}(i)$ we have
\begin{eqnarray*}
\mathbb{E}\left[W'_i-W_i\mid \mathcal{F}\right] & = & - \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial^2 u_i} G_{\beta,0} \right) (x_0) \, W_i - \frac{1}{\beta \, n} \sum \limits_{k\neq i} \left( \frac{\partial^2}{\partial u_i\partial u_k} G_{\beta,0} \right) (x_0) \, W_k + R(i) \nonumber \\
& = & - \frac{1}{\beta \, n} \langle \bigl[ D^2 G_{\beta,0} (x_0) \bigr]_i , W \rangle + R(i),
\end{eqnarray*}
where $\langle \cdot, \cdot \rangle$ denotes the Euclidean scalar-product and $\bigl[D^2 G_{\beta,0}(x_0)\bigr]_i$ the $i$-th row of the matrix $D^2 G_{\beta,0}(x_0)$. We obtain
\begin{equation} \label{hzero}
\mathbb{E}\left[W'-W\mid \mathcal{F}\right] = - \frac{1}{\beta \, n} \bigl[ D^2 G_{\beta,0} (x_0) \bigr] W + R(W)
\end{equation}
with $R(W) = (R(1), \ldots, R(q))$. We define $\Lambda= \frac{1}{\beta \, n} \bigl[ D^2 G_{\beta,0} (x_0) \bigr]$. With \cite[Proposition 2.2]{Ellis/Wang:1990}, $D^2 G_{\beta,0}(\nu)$ is positive definite for any $\beta >0$ and any global minimum point $\nu$ and therefore $\Lambda$ is invertible (alternatively, one easily sees that $\Lambda$ is a matrix of the form \eqref{Matrix} with entries given by Lemma \ref{hessiancal}, and by Lemma \ref{DetLe} the determinant is $\frac{1}{n^q} (1- \beta/q)^{q-2} (1 - \beta/q)$, which is non-zero because $\beta < \beta_c <q$ with $\beta_c$ given in \eqref{criticaltemp}, and therefore $\beta \not= q$). Hence \eqref{regressioncond} is fulfilled and we are able to apply Theorem \ref{RR}.
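For completeness, we record the short computation behind the determinant just quoted; it uses Lemma \ref{hessiancal} with $x_{0,1}=x_{0,q}=1/q$ and Lemma \ref{DetLe}. In the notation of \eqref{Matrix} the entries of $\Lambda$ are
$$
a = d = \frac{1}{n}\Bigl(1-\frac{\beta(q-1)}{q^2}\Bigr), \qquad b = c = \frac{\beta}{q^2\, n},
$$
so $d-c = \frac{1}{n}\bigl(1-\frac{\beta}{q}\bigr)$ and
$$
a\bigl(d+(q-2)c\bigr)-(q-1)b^2 = \frac{1}{n^2}\Bigl[\Bigl(1-\frac{\beta(q-1)}{q^2}\Bigr)\Bigl(1-\frac{\beta}{q^2}\Bigr)-\frac{(q-1)\beta^2}{q^4}\Bigr] = \frac{1}{n^2}\Bigl(1-\frac{\beta}{q}\Bigr).
$$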
In order to calculate the bound given there we need to estimate $\lambda^{(i)}$ as well as the order of the terms $A,B$ and $C$. Note that often in an application of Theorem \ref{RR} it might be tedious to calculate $\Lambda$ (and $\Sigma$) and it is not clear whether the calculations have been carried out correctly. In Remark \ref{heuristic} we will point out that there is a nice heuristic in the Curie-Weiss-Potts model which predicts $\Lambda$ as it comes out. Obviously we have $\lambda^{(i)}=\mathcal{O}(n)$. We continue by estimating $C$ in Theorem \ref{RR}. First we consider $R_n^{(1)}(i)$ defined in \eqref{R1i}:
\begin{eqnarray*}
| R_n^{(1)}(i) | &\leq& \frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n \bigg| \frac{\exp\left(\beta m_{i,j}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)\right)}- \frac{\exp\left(\beta m_{i}(\sigma)\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k}(\sigma)\right)} \bigg| \\
&\leq& \frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n\sum\limits_{k\neq i} \big| \exp\left(\beta m_{i,j}(\sigma)+\beta m_{k}(\sigma)\right)- \exp\left(\beta m_{i}(\sigma)+\beta m_{k,j}(\sigma)\right) \big|.
\end{eqnarray*}
Using the inequality
\begin{center}
$ \big| \exp(\alpha x)-\exp(\alpha y) \big| \leq \frac{|\alpha|}{2}(\exp(\alpha x)+\exp(\alpha y))\, |x-y|$, for all $\alpha,x,y\in\mathbb{R}$,
\end{center}
we obtain
$$
| R_n^{(1)}(i) | \leq \frac{1}{\sqrt{n}}\frac{1}{n}\beta e^{2\beta}\sum\limits_{j=1}^n\sum\limits_{k\neq i} \big| m_{i,j}(\sigma)+ m_{k}(\sigma)- m_{i}(\sigma) - m_{k,j}(\sigma) \big|.
$$
Consider the first summand $j=1$. In case $\sigma_1=i$, we have for all $k\neq i$ that $m_{k,1}(\sigma)=m_{k}(\sigma)$, and therefore
\begin{center}
$\sum\limits_{k\neq i} | m_{i,1}(\sigma)+ m_{k}(\sigma)- m_{i}(\sigma)- m_{k,1}(\sigma) | = (q-1) \frac{\delta_{\sigma_1,i}}{n}.$
\end{center}
If $\sigma_1\neq i$, then there is a $t\neq i$ with $m_{t,1}(\sigma)\neq m_t(\sigma)$ and for all $k\neq t$: $m_{k,1}(\sigma)=m_k(\sigma)$. By a similar observation we have
\begin{center}
$\sum\limits_{k\neq i} |m_{i,1}(\sigma)+ m_{k}(\sigma)- m_{i}(\sigma)- m_{k,1}(\sigma) | \leq (q-1) \frac{\delta_{\sigma_1,t}}{n}.$
\end{center}
The same observation can be made for any other $j\in\{1,\ldots,n\}$. With $ | \delta_{\sigma_j,t} | \leq 1$ we get
$$
| R_n^{(1)}(i)| \leq \frac{1}{\sqrt{n}} \frac{1}{n} (q-1) \beta e^{2\beta} =\mathcal{O}(n^{-3/2}).
$$
Since $W \in {\mathcal M}$, see \eqref{hyperM}, we get $\sum_{k \not= i} W_k = - W_i$. By Lemma \ref{EnMo} we know that $ \E |W_i^2| \leq \text{const.} (2)$ and therefore we obtain that $\mathbb{E} |R_n^{(2)}(i)|$ in \eqref{R2i} is $\mathcal{O}(n^{-3/2})$. Thus the Cauchy--Schwarz inequality yields $\mathbb{E}[R(i)^2]=\mathcal{O}(n^{-3})$ for all $i\in \{1, \ldots, q\}$. We have
\begin{center}
$C=\sum\limits_{i=1}^q\lambda^{(i)}\sqrt{\mathbb{E}\left[R(i)^2\right]}=\mathcal{O}(n^{-1/2}).$
\end{center}
The next thing we notice is that $|W_i'-W_i| =\frac{1}{\sqrt{n}} \big| Y_{I,i}'-Y_{I,i} \big| \leq \frac{1}{\sqrt{n}}$ for all $i$. Thus we easily obtain the bound $B=\mathcal{O}(n^{-1/2})$. It remains to calculate and to estimate the conditional variance in $A$. This is a bit more involved. We have:
\begin{eqnarray} \label{ais}
\mathbb{E}[(W_i'-W_i)(W_j'-W_j) \mid \mathcal{F}]&=& \frac{1}{n^3}\sum\limits_{t,k=1}^nY_{k,i}Y_{t,j}+\frac{1}{n^3} \sum\limits_{t,k=1}^n\mathbb{E}[Y'_{k,i}Y'_{t,j}\mid \mathcal{F}]\nonumber\\
& -& \frac{2}{n^3}\sum\limits_{t,k=1}^nY_{k,i}\mathbb{E}[Y'_{t,j}\mid \mathcal{F}]=:A_1+A_2+A_3.
\nonumber
\end{eqnarray}
Hence we have to bound the variances of these terms. By definition $\mathbb{V}[A_1]=\frac{1}{n^2}\mathbb{V}[m_i(\sigma)\, m_j(\sigma)]$. Now
\begin{eqnarray*}
\mathbb{V}[m_i(\sigma)\, m_j(\sigma)] & =& \mathbb{V} \bigl( \frac{W_i \, W_j}{n} + \frac{W_i}{\sqrt{n}} x_{0,j} + \frac{W_j}{\sqrt{n}} x_{0,i} \bigr) \\
& \leq & \text{const.} \max \bigl( \frac{1}{n^2} \V(W_i \, W_j), \frac 1n \V(W_i) \bigr) \leq \frac{\text{const.}}{n^2} \bigl( \E [ W_i^2 W_j^2] + n \E [W_i^2] \bigr).
\end{eqnarray*}
We make use of Lemma \ref{EnMo} to obtain $\mathbb{V}[m_i(\sigma)\, m_j(\sigma)] = {\mathcal O}(1/n)$ and hence $\V[A_1]= {\mathcal O}(n^{-3})$. Using a conditional version of Jensen's inequality we have
$$
\mathbb{V}[A_2] = \mathbb{V} \Bigl( \mathbb{E} \Bigl[ \frac{1}{n^3} \sum\limits_{t,k=1}^n Y'_{k,i} Y'_{t,j} \,\Big|\, \mathcal{F} \Bigr] \Bigr) \leq \mathbb{V} \Bigl( \frac{1}{n^3} \sum\limits_{t,k=1}^n Y'_{k,i} Y'_{t,j} \Bigr) = \mathbb{V} \Bigl( \frac{1}{n^3}\sum\limits_{t,k=1}^nY_{k,i}Y_{t,j} \Bigr),
$$
where the last equality holds since $(Y')$ has the same distribution as $(Y)$. Hence $\V[A_2] = \mathcal{O}(n^{-3})$. With Lemma \ref{ConD} we get
\begin{eqnarray*}
- A_3/2 & = & \frac{1}{n^3} \sum\limits_{t,k=1}^n Y_{k,i}\, \mathbb{E}[Y'_{t,j}\mid \mathcal{F}] =\frac{1}{n^3} \sum\limits_{t,k=1}^nY_{k,i} \frac{\exp\left(\beta m_{j,t}(\sigma)\right)}{\sum\limits_{l=1}^q\exp\left(\beta m_{l,t}(\sigma)\right)} \\
&=& \frac{1}{n^3} \sum \limits_{t,k=1}^nY_{k,i} \biggl( \frac{\exp\left(\beta m_{j,t}(\sigma)\right)}{\sum\limits_{l=1}^q\exp\left(\beta m_{l,t}(\sigma)\right)} -\frac{\exp\left(\beta m_{j}(\sigma)\right)}{\sum\limits_{l=1}^q\exp\left(\beta m_{l}(\sigma)\right)} \biggr) +\frac{1}{n^2}\sum\limits_{k=1}^n Y_{k,i}\frac{\exp\left(\beta m_{j}(\sigma)\right)}{\sum\limits_{l=1}^q\exp\left(\beta m_{l}(\sigma)\right)}\\
& =: & M_1+M_2.
\end{eqnarray*}
By using the same estimations as for $R_n^{(1)}(i)$ we obtain
$$
M_1 \leq \frac{1}{n^3} \sum \limits_{t,k=1}^n Y_{k,i} \, (q-1) \beta e^{2\beta} = \frac{1}{n} \, (q-1) \beta e^{2 \beta} \bigl(\frac{W_i}{\sqrt{n}}+x_{0,i}\bigr).
$$
Hence $\V(M_1)= {\mathcal O}(n^{-3})$ by Lemma \ref{EnMo}. We obtain
\begin{eqnarray*}
M_2 & = & \frac 1n m_i(\sigma) \bigl( m_j(\sigma) - \frac{1}{\beta} \frac{\partial}{\partial u_j} G_{\beta,0} (m(\sigma)) \bigr) \\
& = & \frac 1n m_i(\sigma) \, m_j(\sigma) - \frac{1}{\beta n} m_i(\sigma) \biggl( \bigl( \frac{\partial^2}{\partial^2 u_j} G_{\beta,0}\bigr)(x_0) (m_j(\sigma) - x_{0,j})\\
& + & \sum_{k \not= j} \bigl( \frac{\partial^2}{\partial u_j u_k} G_{\beta,0} \bigr)(x_0) (m_k(\sigma) - x_{0,k}) + \sqrt{n} R_n^{(2)}(j) \biggr),
\end{eqnarray*}
where the first equality follows from Lemma \ref{expGD}, the second from \eqref{taylorsecond} and the definition of $R_n^{(2)}(j)$ in \eqref{R2i}. Hence
$$
M_2 = {\mathcal O} \biggl( \frac 1n m_i(\sigma) \, m_j(\sigma) \biggr) + {\mathcal O} \biggl( \frac 1n m_i(\sigma) \frac{W_j}{\sqrt{n}} \biggr) + {\mathcal O} \bigl(n^{-1/2} \, R_n^{(2)}(j) \bigr).
$$
The first two summands are of order ${\mathcal O} \bigl( W_j / n^{3/2} \bigr)$ and the last term is of order ${\mathcal O}(n^{-2})$. Applying Lemma \ref{EnMo}, it follows that the maximal variance of all the sums in the representation of $M_2$ is of order ${\mathcal O}(n^{-3})$ and therefore $\mathbb{V}(A_3)=\mathcal{O}(n^{-3})$. Thus the variance in $A$ of Theorem \ref{RR} can be bounded by 9 times the maximum of the variances of $A_1, A_2, A_3$, which is a constant times $n^{-3}$.
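To spell out this elementary step: the conditional expectation above equals $A_1+A_2+A_3$, so by the Minkowski inequality in $L^2$,
$$
\mathbb{V}(A_1+A_2+A_3) \leq \Bigl( \sqrt{\mathbb{V}(A_1)}+\sqrt{\mathbb{V}(A_2)}+\sqrt{\mathbb{V}(A_3)} \Bigr)^2 \leq 9 \max_{1 \leq i \leq 3} \mathbb{V}(A_i).
$$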
Thus we obtain
\begin{center}
$A=\sum\limits_{i,j=1}^q\lambda^{(i)}\sqrt{\mathbb{V}\left[\mathbb{E}[(W'_i-W_i)(W'_j-W_j)\mid W]\right]}=\mathcal{O}(n^{-1/2})$.
\end{center}
This completes the proof for $h=0$. Remark that we have used the fact that the fourth moment of $W_i$ is bounded. We did not need the finiteness of any higher moment. We have proved a {\it fourth-moment} Theorem together with a rate of convergence of order $\mathcal{O}(n^{-1/2})$. If $h \not=0$ we will slightly change the proof. Here are the details. By Theorem \ref{Minima} we know that for $h >0$ and $(\beta,h) \notin h_T$, the function $G_{\beta,h}$ has a unique global minimum point. Let $x_0$ be this unique global minimum point. Analogously to the first part of our proof we obtain
\begin{eqnarray} \label{hnonzero}
\mathbb{E}\left[W'_i-W_i\mid \mathcal{F}\right] & = & - \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial^2 u_i} G_{\beta,h} \right) (x_0) \, W_i - \frac{1}{\beta \, n} \sum \limits_{k\neq i} \left( \frac{\partial^2}{\partial u_i\partial u_k} G_{\beta,h} \right) (x_0) \, W_k + R(i,h) \nonumber \\
& = & - \frac{1}{\beta \, n} \langle \bigl[ D^2 G_{\beta,h} (x_0) \bigr]_i , W \rangle + R(i,h)
\end{eqnarray}
where $R(i,h) := R_n^{(1)}(i,h) + R_n^{(2)}(i)$, with the new
$$
R_n^{(1)}(i,h) := \frac{1}{\sqrt{n}}\frac{1}{n}\sum\limits_{j=1}^n\left[\frac{\exp\left(\beta m_{i,j}(\sigma)+h\delta_{i,1}\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k,j}(\sigma)+h\delta_{k,1}\right)}-\frac{\exp\left(\beta m_{i}(\sigma)+h\delta_{i,1}\right)}{\sum\limits_{k=1}^q\exp\left(\beta m_{k}(\sigma)+h\delta_{k,1}\right)}\right]
$$
and the same $R_n^{(2)}(i)$ given in \eqref{R2i}. Again $\Lambda = \frac{1}{\beta n} \bigl[ D^2 G_{\beta,h}(x_0) \bigr]$. This matrix has a simple structure. With Lemma \ref{hessiancal} we obtain
$$
a = \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial^2 u_1} G_{\beta,h} \right) (x_0) = \frac{1-\beta(q-1)x_{0,1}x_{0,q}}{n}, \, \, b = \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial u_1\partial u_q} G_{\beta,h} \right) (x_0) =\frac{\beta x_{0,1}x_{0,q}}{n}.
$$
Moreover
$$
d= \frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial^2 u_q} G_{\beta,h} \right) (x_0) = \frac{1-\beta(x_{0,1}x_{0,q}+(q-2)x_{0,q}^2)}{n}, \, \, c=\frac{1}{\beta \, n} \left( \frac{\partial^2}{\partial u_2 \partial u_q}G_{\beta,h}\right) (x_0)=\frac{\beta x_{0,q}^2}{n}.
$$
Hence $\Lambda$ has the form \eqref{Matrix} and according to Lemma \ref{DetLe} we have
$$
\det(\Lambda) = \frac{1}{n^q}\left(1-\beta x_{0,q}\right)^{q-2}(1-q\beta x_{0,1}x_{0,q}).
$$
So if $\beta \notin \{\frac{1}{x_{0,q}},\frac{1}{qx_{0,1}x_{0,q}}\}$, the matrix $\Lambda$ is invertible. With Lemma \ref{wangimprovement} we get that $\Lambda$ is invertible for all $(\beta,h) \neq (\beta_0,h_0)$ and hence we are able to apply Theorem \ref{RR}. The bound of $R_n^{(1)}(i,h)$ is $e^h$ times the bound of $R_n^{(1)}(i)$, implying the same order of $C$. The proof of bounding $B$ is unchanged. Bounding $A$ needs once more the bound $R_n^{(1)}(i,h)$ and hence the proof is almost the same as in the case $h=0$. For the $q$-dependence we note that $\mathbb{E}\bigl[|W'_i-W_i||W'_j-W_j||W'_k-W_k|\bigr]$ is independent of $q$ and $\lambda^{(i)}=\mathcal{O}(q)$. Thus $B=\mathcal{O}\bigl(q^4\bigr)$, by summing three times over $q$ terms. Previous estimates show that $R_n^{(1)}=\mathcal{O}(q)$, and because we have two sums from $1$ to $q$ for $W_k$ and $W_t$, we have $R_n^{(2)}=\mathcal{O}\bigl(q^2\bigr)$. Thus $\mathbb{E}\bigl[R(i)^2\bigr]=\mathcal{O}\bigl(q^4\bigr)$.
Hence $C=\mathcal{O}\bigl(q^4\bigr)$ by summing over $q$ terms. Next we consider $A_1,A_2$ and $A_3$ taken from the proof. While $A_1$ and $A_2$ are independent of $q$, $A_3$ depends on $q$ via $R_n^{(1)}$ and $R_n^{(2)}$. Summing twice over $q$ terms, we obtain $A=\mathcal{O}\bigl(q^5\bigr)$. Since the brackets in the bound of Reinert and R\"ollin still contain the parameter $q$ and since $\|\Sigma\|^{1/2}=\mathcal{O}(q)$, the constant $C$ of our theorem satisfies $C=\mathcal{O}\bigl(q^6\bigr)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{THUM2}]
Since the first part of the proof follows the lines of the proof of Theorem \ref{THUM}, we notice that Theorem \ref{Kolmogorov} can be applied. Thus it remains to estimate the bound given there. For the first expression in the bound we notice that $A_1$ is the same expression as the $A$-term we just calculated for the proof of Theorem \ref{THUM}. Hence, $\log(n)A_1=\mathcal{O}\bigl(\log(n)n^{-1/2}\bigr)$. With Lemma \ref{EnMo} and the estimation for the $C$-term in Theorem \ref{THUM} we obtain that the second expression is $\mathcal{O}\bigl(\log(n)n^{-1/2}\bigr)$. For the third expression we notice that $a>1$ is a constant and that $A_3=\mathcal{O}(n)$. Using again Lemma \ref{EnMo} combined with the fact that $A=\frac{1}{\sqrt{n}}$ yields that the third expression is also $\mathcal{O}\bigl(\log(n)n^{-1/2}\bigr)$. Likewise we obtain that the fourth expression is $\mathcal{O}\bigl(n^{-1/2}\bigr)$. Combining these estimates yields the result.
\end{proof}
\begin{remark}[Heuristics] \label{heuristic}
By definition of $G_{\beta,h}$, \eqref{Gfunktion}, the Hessian of $G_{\beta,h}$ fulfills $D^2 G_{\beta,h}(x) = \beta \text{Id} - \beta^2 D^2 \Phi (x)$, where $\Phi$ is the $\log$-moment generating function of the single-spin distribution in the Curie-Weiss-Potts model and $x$ is any minimum point. Hence $D^2 \Phi$ is the covariance structure of the single-spins, which is
\begin{equation} \label{heusingle}
D^2 \Phi(x) = - \frac{1}{\beta^2} \bigl(D^2 G_{\beta,h}(x) - \beta \text{Id} \bigr).
\end{equation}
We know from Stein's method that if $(W,W')$ is exchangeable and \eqref{regressioncond} is satisfied with $R=0$ we have
$$
\frac 12 \E[(W'-W)(W'-W)^t] = \Sigma \, \Lambda^t.
$$
On the one hand in the Curie-Weiss-Potts model we have $\Sigma = [D^2 G_{\beta,h}(x)]^{-1} - \beta^{-1} \text{Id}$. On the other hand the left hand side describes the empirical covariance structure of the single-spins:
$$
\frac 12 \E[(W_i'-W_i)(W_j'-W_j)] = \frac{1}{2n} \E \bigl(Y_{I,i}'-Y_{I,i} \bigr) \bigl(Y_{I,j}' - Y_{I,j} \bigr).
$$
Therefore with \eqref{heusingle}, heuristically
$$
\frac 12 \E[(W'-W)(W'-W)^t] \approx \frac 1n \frac{1}{\beta^2} \bigl(- D^2 G_{\beta,h}(x) + \beta \text{Id} \bigr) = \bigl( [D^2 G_{\beta,h}(x)]^{-1} - \beta^{-1} \text{Id} \bigr) \, \Lambda^t.
$$
If we now choose $\Lambda = \Lambda^t = \frac{1}{\beta \, n} D^2G_{\beta,h}(x)$, the right hand identity is fulfilled.
\end{remark}
\begin{proof}[Proof of Theorem \ref{THMM}]
The proof uses the fact that the conditional joint distribution of the $(\sigma_i)_i$, conditioned on the event $\bigl\{ \frac{N}{n}\in B(x_i,\epsilon) \bigr\}$, is given by
$$P_{ \beta,h,n,\epsilon}( \sigma)=\frac{1}{Z_{ \beta,h,n,\epsilon}}\exp\left(\frac{ \beta}{2n}\sum\limits_{1\leq i\leq j\leq n}\delta_{ \sigma_i,\sigma_j}+ h\sum\limits_{i=1}^n\delta_{ \sigma_i,1}\right)\mathbf{1}_{B(x_i,\epsilon)}(N/n),
$$
where $Z_{ \beta,h,n,\epsilon}$ denotes a normalization. Thus we are able to start with any minimum point $x_0$ and follow the lines of the proof of Theorem \ref{THUM}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{TT}]
We will apply Theorem \ref{EL}. Obviously the density $p$ is nice. Note that the logarithmic derivative is $\psi(t) = \frac{p'(t)}{p(t)} = - \frac{16(q-1)^4}{3} \, t^3$. The solutions $f_g$ of the corresponding Stein equations \eqref{Steineqp} -- with respect to absolutely continuous test functions $g$ and with respect to $g(x)=1_{\{x \leq z\}}(x)$, $z \in \R$, respectively -- fulfill all boundedness assumptions of Theorem \ref{EL}. This was proved in \cite[Lemma 2.2]{Eichelsbacher/Loewe:2010}. By definition of $T$, see \eqref{defTV}, we have
$$
T = \frac{1}{(1-q) n^{3/4}} \bigl( N_1 - n x_1 - \sqrt{n} V_1 \bigl) = \frac{1}{(1-q) n^{3/4}} \biggl( \sum_{i=1}^n Y_{i,1} - n x_1 - \sqrt{n} V_1 \biggr).
$$
We make use of the choice $V \in \mathcal{M}\cap u^{\bot}$. With $V \in \mathcal{M}$ we have $\sum_{i=1}^q V_i=0$ and with $\langle V, u \rangle = V_1 (1-q) + \sum_{i=2}^q V_i=0$ we obtain
$$
\sum\limits_{i=2}^q V_i = -V_1 =0.
$$
Constructing an exchangeable pair $(T,T')$ is just the same as in the introduction: $T'$ is a random variable being the same as $T$ except that we pick an index $I$ uniformly and exchange $Y_{I,1}$ with $Y_{I,1}'$ (for $I=i$ distributed according to the conditional distribution of $Y_{i,1}$ given $(Y_{j,1})_{j \not= i}$, independently of $Y_{i,1}$). Now we calculate $\E [T'-T | {\mathcal F}]$ with $\mathcal{F}=\sigma(\sigma_1,\ldots,\sigma_n)$.
\begin{eqnarray*}
\mathbb{E}[T'-T \mid \mathcal{F}] &=& \frac{1}{(1-q) \, n^{7/4}} \sum\limits_{i=1}^n \mathbb{E}\left[ Y_{i,1}'-Y_{i,1} \mid \mathcal{F}\right] \\
& = & \frac{1}{(1-q) \, n^{7/4}} \sum_{i=1}^n \E[ Y_{i,1}' | {\mathcal F}] - \frac 1n T - \frac{x_1}{(1-q) n^{3/4}}.
\end{eqnarray*}
With Lemma \ref{expGD} we obtain
$$
\frac{1}{(1-q) \, n^{7/4}} \sum_{i=1}^n \E[ Y_{i,1}' | {\mathcal F}] = \frac{1}{(1-q) n^{3/4}} \bigl( m_1(\sigma) - \frac{1}{\beta_0} \frac{\partial}{\partial x_1} G_{\beta_0, h_0} (m(\sigma)) \bigr) + \frac{1}{(1-q) n^{1/4}} R_n^{(1)}(i,h_0).
$$
Hence using $m_1(\sigma) = x_1 + \frac{(1-q) T}{n^{1/4}}$ and defining $\widetilde{R} :=\frac{1}{(q-1)n^{1/4}}R_n^{(1)}(i,h_0)$ we have
\begin{equation*}
\mathbb{E}[T'-T\mid \mathcal{F}] = - \frac{1}{(1-q)\beta_0 n^{3/4}} \frac{\partial}{\partial x_1 }G_{\beta_0,h_0}\left(x+\frac{Tu}{n^{1/4}}+\frac{V}{n^{1/2}}\right) -\widetilde R.
\end{equation*}
A quite tedious {\it fourth-order} Taylor expansion of $G_{\beta_0,h_0}$ at $x+\frac{Tu}{n^{1/4}}+\frac{V}{n^{1/2}}$ is carried out in the Appendix, see \eqref{finaleins}, which leads to
\begin{eqnarray} \label{FIN1}
\frac{\partial}{\partial x_1 }G_{\beta_0,h_0}\left(x+\frac{Tu}{n^{1/4}}+\frac{V}{n^{1/2}}\right) & = & -\frac{16(q-1)^4}{3 q n^{3/4}} T^3 \\
& & \hspace{-6cm} + \mathcal{O} \left(\sum_{j=2}^q \frac{V_j^2}{n}\right) + \mathcal{O} \left(\frac{T^4}{n}\right) + {\mathcal O} \biggl( f \bigl( V/\sqrt{n}, T/n^{1/4} \bigr) \biggr) \nonumber
\end{eqnarray}
with $f(v,t)$ given in \eqref{arbeit2}. Hence we obtain $\mathbb{E}[T'-T\mid \mathcal{F}] = \lambda \psi(T) - R$ with
$$
\lambda: = \frac{1}{q(q-1)\beta_0 n^{3/2}}
$$
and
$$
-R:= \mathcal{O} \left(\sum_{j=2}^q \frac{V_j^2}{n^{7/4}}\right) +\mathcal{O}\left(\frac{T^4}{n^{7/4}}\right) + {\mathcal O} \biggl( \frac{1}{n^{3/4}} f \bigl( V/\sqrt{n}, T/n^{1/4} \bigr) \biggr) -\widetilde R.
$$
Now $0<\lambda<1$ for all $n \in \N$ and thus we can apply Theorem \ref{EL}. The moments of $T$ and $V$ are finite, see Lemma \ref{EnMo2}. With $V_1=0$ we get $T =\frac{1}{(1-q) n^{1/4}} \, W_1$.
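As a consistency check for this value of $\lambda$, which we spell out here, inserting the leading term of \eqref{FIN1} into the previous identity gives
$$
-\frac{1}{(1-q)\beta_0 n^{3/4}} \cdot \Bigl(-\frac{16(q-1)^4}{3 q\, n^{3/4}}\Bigr) T^3 = -\frac{16(q-1)^3}{3 q \beta_0\, n^{3/2}}\, T^3 = \lambda \, \psi(T).
$$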
Now we are able to compute the expressions of the bound in Theorem \ref{EL}. We have
$$
\E [ (T'-T)^2 | T] = \frac{1}{(1-q)^2 n^{1/2}} \E [ (W_1'-W_1)^2 | W].
$$
Reproducing the proof of Theorem \ref{THUM} we get $\mathbb{V}\left(\mathbb{E}[(W_1'-W_1)^2\mid \mathcal{F}]\right)=\mathcal{O}(n^{-7/2})$, using $\E|W_1^l| = (q-1)^l\, n^{l/4} \E|T^l| = {\mathcal O}(n^{l/4})$, $l \in \N$. Thus
\begin{center}
$\frac{c_2}{2\lambda}\left(\mathbb{V}\left(\mathbb{E}[(T'-T)^2\mid T]\right)\right)^{1/2}=\mathcal{O}(n^{-1/4}).$
\end{center}
Moreover $\E|T'-T|^3 =\frac{1}{(q-1)^3 n^{3/4}} \frac{1}{n^{3/2}} \E|Y_{I,1}'-Y_{I,1}|^3 = \mathcal{O}(n^{-9/4})$ and therefore $\frac{c_3}{4 \lambda} \E|T'-T|^3 = \mathcal{O}(n^{-3/4})$. From the proof of Theorem \ref{THUM} we know that $|R_n^{(1)}(i,h_0)|= \mathcal{O}(n^{-3/2})$, so $\widetilde{R} = \mathcal{O}(n^{-7/4})$. Remark that by \eqref{arbeit2} we see that the expectation of ${\mathcal O} \biggl( \frac{1}{n^{3/4}} f \bigl( V/\sqrt{n}, T/n^{1/4} \bigr) \biggr)$ is of order ${\mathcal O} (n^{-2})$. Summarizing we obtain $\sqrt{\mathbb{E}[R^2]}=\mathcal{O}(n^{-7/4})$, hence
\begin{center}
$\frac{c_1+c_2\sqrt{\mathbb{E}[T^2]}}{\lambda}\sqrt{\mathbb{E}[R^2]}=\mathcal{O}(n^{-1/4})$.
\end{center}
Hence the $\delta$ in \eqref{boundp} is of order $\mathcal{O}(n^{-1/4})$. We obtain the same rate of convergence in the Kolmogorov distance, using $|T'-T| \leq \frac{\text{const.}}{n^{3/4}} =: A$. The order of the first two summands in \eqref{kolall} is $\mathcal{O}(n^{-1/4})$. The third term in \eqref{kolall} is of order $\mathcal{O}(n^{-3/4})$ and finally
$$
\frac{3A}{2} \E |\psi(T)| \leq \frac{\text{const.}}{n^{3/4}} \E|T^3| = \mathcal{O}(n^{-3/4}),
$$
which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{TV}]
Since $V_1=0$, by the continuous mapping theorem it suffices to show that the random vector $\bar{V}:= (V_2,\ldots,V_q)$ converges towards the $(q-1)$-dimensional centered Gaussian vector with covariance matrix
\begin{center}
$\frac{q}{2(q-1)^2(q-2)}\begin{pmatrix} q-2 & -1 & ... & ... & ...&-1\\ -1 & q-2 & -1 & ... & ...&-1 \\ ... &... & ... & ...&...&...\\ -1 & ... & ... & ... & -1&q-2 \end{pmatrix}$.
\end{center}
We will apply Theorem \ref{RR}. We see for any $i \geq 2$
$$
\E [V_i'-V_i \mid \mathcal{F}] = \E[W_i' - W_i \mid \mathcal{F}] = -\frac{1}{n\sqrt{n}}\sum\limits_{j=1}^n Y_{ji}+\frac{1}{n\sqrt{n}}\sum\limits_{j=1}^n\mathbb{E}[Y_{ji}'\mid \mathcal{F}].
$$
With Lemma \ref{expGD} and $R_n^{(1)}(i, h_0)$ defined as in \eqref{R1i} we obtain
\begin{eqnarray*}
\mathbb{E}[V_i'-V_i\mid \mathcal{F}]&=& -\frac{1}{n\sqrt{n}}\sum\limits_{j=1}^nY_{ji}+\frac{1}{\sqrt{n}} m_i(\sigma) -\frac{1}{\beta_0\sqrt{n}} \frac{\partial}{\partial x_i}G_{\beta_0,h_0}(m(\sigma))+R_n^{(1)}(i, h_0) \\
&=& - \frac{1}{\beta_0\sqrt{n}}\frac{\partial}{\partial x_i}G_{\beta_0,h_0}\left(x+\frac{T\,u}{n^{1/4}}+\frac{V}{n^{1/2}}\right)+R_n^{(1)}(i,h_0).
\end{eqnarray*}
By the {\it fourth-order} Taylor expansion of $\frac{\partial}{\partial x_i}G_{\beta_0,h_0}\left(x+\frac{T\,u}{n^{1/4}}+\frac{V}{n^{1/2}}\right)$, see \eqref{arbeit1}, \eqref{arbeit2}, \eqref{arbeit3} and \eqref{finalzwei} in the Appendix, we obtain for any $i \in \{2, \ldots, q\}$
\begin{equation} \label{forheuristic}
\mathbb{E}[V_i'-V_i\mid V]= - \frac{4}{\beta_0 \, n \, q^2} \langle (1, \ldots, 1, (q^2-3q+3), 1, \ldots, 1), \bar{V} \rangle + R_i,
\end{equation}
where $q^2-3q+3$ is the $i$'th entry of the vector in $\R^{q-1}$ and
$$
R_i:= {\mathcal O} \bigl(\sum_{l=1}^q \frac{V_l \, T}{n^{5/4}} \bigr) + {\mathcal O} \biggl( \frac{1}{\sqrt{n}} A_2(i, V/\sqrt{n}, T/n^{1/4}) \biggr) + {\mathcal O} \bigl( \frac{T^3}{n^{5/4}} \bigr) + {\mathcal O} \bigl( \frac{T^4}{n^{5/2}} \bigr) + R_n^{(1)}(i,h_0)
$$
(see \eqref{arbeit2} for the definition of $A_2$). Using $\sum_{k=2, k \not= i}^q V_k = - V_i$ we get
$$
\langle (1, \ldots, 1, (q^2-3q+3), 1, \ldots, 1), \bar{V} \rangle = \bigl( (q-1)(q-2)\bigr) V_i.
$$
Thus the linearity condition of Theorem \ref{RR} is satisfied with $\Lambda= \frac 1n \frac{(q-2)}{q} \,\text{Id}_{q-1}$ and $R=(R_2, \ldots, R_q)$. With $q-2 >0$ for $q \geq 3$ we get the invertibility of $\Lambda$ and $\lambda^{(i)}= \mathcal{O}(n)$. From the proof of Theorem \ref{THUM} we see that
$$
\bigl(\mathbb{V}\bigl(\mathbb{E}[(V_i'-V_i)(V_j'-V_j)\mid \mathcal{F}]\bigr)\bigr)^{1/2}=\bigl(\mathbb{V}\bigl(\mathbb{E}[(W_i'-W_i)(W_j'-W_j)\mid \mathcal{F}]\bigr)\bigr)^{1/2}=\mathcal{O}(n^{-3/2})
$$
and thus $A$ in Theorem \ref{RR} is of order $\mathcal{O}(n^{-1/2})$. Moreover $|V_i'-V_i| \leq \frac{1}{\sqrt{n}}$ for all $i$ and thus $B$ in Theorem \ref{RR} is of order $\mathcal{O}(n^{-1/2})$. It remains to calculate $C$ in Theorem \ref{RR}. From the proof of Theorem \ref{THUM} we know that $\E (R_n^{(1)}(i,h_0)) = {\mathcal O}(n^{-3/2})$. Using bounded moments of $V_i$ and $T$ we obtain that the expectations of the first and third term of $R_i$ are ${\mathcal O}(n^{-5/4})$. With \eqref{arbeit2} we further get that the expectations of the second and fourth term of $R_i$ are ${\mathcal O}(n^{-3/2})$ and ${\mathcal O}(n^{-5/2})$, respectively. By the Cauchy--Schwarz inequality we get $\sqrt{\mathbb{E}[R_i^2]}=\mathcal{O}(n^{-5/4})$, hence $C=\mathcal{O}(n^{-1/4})$, which completes the proof.
\end{proof}
\begin{remark}
In Remark \ref{heuristic} we gave a heuristic that the matrix $\Lambda$ in the regression condition \eqref{regressioncond} should be expected to be $\frac{1}{\beta n} D^2G_{\beta,h}(x)$. Our heuristic is confirmed in the proof of Theorem \ref{TV}, since we can rewrite \eqref{forheuristic} as
$$
\mathbb{E}[\bar{V}'- \bar{V} \mid \bar{V}]= - \frac{1}{\beta_0 \, n} D^2 G_{q-1}(x) \, \bar{V} + R
$$
where $D^2 G_{q-1}(x)$ denotes the lower right $(q-1) \times (q-1)$ block of $D^2 G_{\beta_0, h_0}(x)$, that is, the Hessian of the restriction $G_{q-1}$. The limiting covariance matrix $\Sigma$ of $\bar{V}$ is given by $[D^2 G_{q-1}(x)]^{-1} - \beta_0^{-1} \text{Id}_{q-1}$.
\end{remark}
\begin{remark}
These rates of convergence remain valid if we change our probability measure $P_{ \beta,h,n}$ to
$$
P_{ \beta, h, n}( \sigma)=\frac{1}{Z_{ \beta,h,n}}\exp\left(\frac{ \beta}{2n}\sum\limits_{1\leq i\leq j\leq n} \delta_{ \sigma_i,\sigma_j}+\sum\limits_{j=1}^n\sum\limits_{i=1}^q\delta_{ \sigma_j,i}h_i\right)
$$
for $\beta \in \mathbb{R}^+$ and $h \in \mathbb{R}^q$. This measure and the characteristics of the corresponding function
$$
G_{\beta,h}(u):=\frac{\beta}{2} \langle u,u \rangle - \log \left(\sum\limits_{i=1}^q\exp\left(\beta u_i+h_i\right)\right)
$$
were studied in \cite{Wang:1994}.
First of all we note that \eqref{PBH} is the same model with $h_1=h$ and $h_i=0$ for $i\in \{2,\ldots,q\}$. Based on the results of \cite{Ellis/Wang:1990} and \cite{Wang:1994} and following the same procedures as above, our results can easily be extended to the case that $h_i\neq 0$ for $i\in \{2,\ldots,q\}$. We omit these extensions here.
\end{remark}
\section{Appendix}
For the proofs of Theorems \ref{THUM}, \ref{THUM2} and \ref{THMM} and for Lemma \ref{wangimprovement} and Lemma \ref{EnMo} we need a multivariate {\it second-order Taylor-expansion} of $G_{\beta,h}$ defined in \eqref{Gfunktion}, for every $(\beta,h) \not= (\beta_0, h_0)$. Let us denote by $D^2G_{\beta,h}(x)$ the Hessian matrix $\{ \partial^2 G_{\beta,h}(x) / \partial x_i \partial x_j, i,j = 1, \ldots, q\}$ of $G_{\beta,h}$ at $x$. We obtain
\begin{eqnarray} \label{taylor1}
G_{\beta,h}(u)& = & G_{\beta,h}(x)+\sum\limits_{k=1}^q\frac{\partial}{\partial u_k}G_{\beta,h}(x)(u_k-x_{k})+ \frac{1}{2} \langle (u-x),D^2G_{\beta,h}(x)\cdot (u-x) \rangle \\ \nonumber
& + & \frac{1}{6}\sum\limits_{t,k,j=1}^q \widetilde R_{t,k,j}(u_t-x_{t})(u_k-x_{k})(u_j-x_{j})
\end{eqnarray}
with $| \widetilde R_{t,k,j} | \leq \parallel \frac{\partial^3}{\partial u_k\partial u_t\partial u_j}G_{\beta,h} \parallel$. For any fixed $m\in \{1,\ldots,q\}$ and any $x, u \in \mathbb{R}^q$ it follows that
\begin{eqnarray} \label{taylorsecond}
\frac{\partial}{\partial u_m}G_{\beta,h}(u) & = & \frac{\partial}{\partial u_m}G_{\beta,h}(x) + \frac{\partial^2}{\partial^2 u_m} G_{\beta,h}(x)(u_m-x_{m})\\
& + & \sum\limits_{k\neq m}\frac{\partial^2}{\partial u_k\partial u_m}G_{\beta,h}(x)(u_k-x_{k}) + \sum\limits_{k=1}^q\mathcal{O}((u_m-x_{m})(u_k-x_{k})) \nonumber \\
& +& \sum\limits_{k,t \neq m}^q\mathcal{O}((u_k-x_{k})(u_t-x_{t})). \nonumber
\end{eqnarray}
If $x$ is a global minimum point of $G_{\beta,h}$ we are able to calculate the Hessian as follows:
\begin{lemma} \label{hessiancal}
The Hessian $D^2 G_{\beta,h}(x_0)$ at an arbitrary global minimum point $x_0$ is given by:
$$
\frac{\partial^2}{\partial^2 u_1} G_{\beta,h} (x_0) = \beta - \beta^2 (q-1) x_{0,1} \, x_{0,q}, \quad \frac{\partial^2}{\partial u_1\partial u_q} G_{\beta,h} (x_0) = \beta^2 x_{0,1}\, x_{0,q},
$$
and
$$
\frac{\partial^2}{\partial^2 u_q} G_{\beta,h} (x_0) = \beta - \beta^2 (x_{0,1} \, x_{0,q} + (q-2) x_{0,q}^2), \quad \frac{\partial^2}{\partial u_2 \partial u_q}G_{\beta,h} (x_0) = \beta^2 x_{0,q}^2.
$$
\end{lemma}
\begin{proof}
According to Proposition \ref{ChMi} we know that for any minimizer $x_0$ of $G_{\beta,h}$ we have $x_{0,i} = x_{0,k}$ for all $i,k \in \{2,\ldots,q\}$ and $x_{0,1} \geq x_{0,k}$ for all $k\in\{2,\ldots,q\}$ and $\sum\limits_{i=1}^qx_{0,i}=1$. Notice that the equation $\nabla G_{\beta,h}(x_0)=0$ implies
\begin{align}
x_{0,1}&=\frac{\exp(\beta x_{0,1}+ h \delta_{1,1})}{\exp(\beta x_{0,1} +h )+(q-1)\exp(\beta x_{0,q})},\nonumber\\
x_{0,q}&=\frac{\exp(\beta x_{0,q})}{\exp(\beta x_{0,1}+ h )+(q-1)\exp(\beta x_{0,q})}.\nonumber
\end{align}
Now we can calculate
\begin{eqnarray*}
\frac{\partial^2}{\partial^2 u_1} G_{\beta,h}(x_0) & = & \beta - \beta^2 (q-1) \frac{\exp(\beta (x_{0,1} + x_{0,q}) + h)}{(\exp(\beta (x_{0,1}) + h) +(q-1) \exp(\beta x_{0,q}))^2} \\
& = & \beta - \beta^2 (q-1) x_{0,1} \, x_{0,q}
\end{eqnarray*}
and
$$
\frac{\partial^2}{\partial u_1\partial u_q} G_{\beta,h} (x_0) = \beta^2 \frac{\exp(\beta (x_{0,1}+x_{0,q})+h)}{(\exp(\beta x_{0,1}+h)+(q-1)\exp(\beta x_{0,q}))^2}= \beta^2 \, x_{0,1} \, x_{0,q}.
$$ Moreover \begin{eqnarray*} \frac{\partial^2}{\partial^2 u_q} G_{\beta,h} (x_0) & = & \beta -\beta^2 \frac{\exp(\beta x_{0,q} +h)(\exp(\beta x_{0,1}+h)+(q-2)\exp(\beta x_{0,q}))}{(\exp(\beta x_{0,1}+h)+(q-1)\exp(\beta x_{0,q}))^2} \\ & =& \beta-\beta^2 (x_{0,1}\, x_{0,q}+(q-2) x_{0,q}^2) \end{eqnarray*} and $$ \frac{\partial^2}{\partial u_2 \partial u_q}G_{\beta,h} (x_0)= \beta^2 \frac{\exp(2\beta x_{0,q})}{(\exp(\beta x_{0,1}+h)+(q-1)\exp(\beta x_{0,q}))^2}= \beta^2 \, x_{0,q}^2. $$ \end{proof} With Lemma \ref{hessiancal} we get, that the Hessian of $G_{\beta,h}$ at a global minimum point is a matrix of type \eqref{Matrix}. The following Lemma is some Linear Algebra for a matrix of the form \eqref{Matrix}. \begin{lemma} \label{DetLe} For any $a,b,c,d\in\mathbb{R}$ consider the following matrix: \begin{equation} \Lambda:=\begin{pmatrix} a & b & ... & ... & ... & ...&b \\ b & d & c & ... & ... & ...&c\\ b & c & d & c & ... & ...&c \\ ... & ... &... & ... & ...&...&...\\ b & c & ... & ... & ... & c&d \end{pmatrix}\in\mathbb{R}^{q\times q}. \label{Matrix} \end{equation} Then $\det(\Lambda)=(d-c)^{q-2}(a(d+(q-2)c)-(q-1)b^2)$. \end{lemma} \begin{proof} We applied the formula due to Laplace. \end{proof} \begin{remark} \label{extrem} At the extremity $(\beta_0,h_0)= (4 \frac{q-1}{q}, \log(q-1) - 2 \frac{q-2}{q})$ of the critical line, $x=(1/2, 1/2(q-1), \ldots, 1/2(q-1))$ is the unique minimum point of $G_{\beta_0,h_0}$. Remark that $$ \exp(\beta_0\cdot x_1+h_0) = (q-1) \exp(2/q), \quad \exp(\beta_0\cdot x_q) = \exp(2/q). $$ With Lemma \ref{hessiancal} we obtain $$ \frac{\partial^2 G_{\beta_0,h_0}}{\partial^2 x_1}(x) = \frac{\partial^2 G_{\beta_0,h_0}}{\partial x_1\partial x_q}(x) = \frac{4(q-1)}{q^2} $$ and $\frac{\partial^2 G_{\beta_0,h_0}}{\partial^2 x_q}(x)=\frac{4(q^2-3q+3)}{q^2}$ and $\frac{\partial^2 G_{\beta_0,h_0}}{\partial x_q\partial x_{q-1}}(x)=\frac{4}{q^2}$. Thus $a=b$ in \eqref{Matrix}. \end{remark} For the proofs of the results at {\it criticality}, Theorems \ref{TT}, \ref{TV} and Lemma \ref{EnMo2}, we need a multivariate {\it fourth-order Taylor-expansion} of $G_{\beta_0,h_0}$ (defined in \eqref{Gfunktion}). We fix the notation $G := G_{\beta_0,h_0}$ for the following calculations. We know that $x=(1/2, 1/2(q-1), \ldots, 1/2(q-1))$ is the unique minimum point of $G$. Let $u=(1-q, 1, \ldots, 1) \in {\mathcal M} \subset \R^q$, $v \in {\mathcal M} \cap u^{\bot}$ and $t \in \R$. For any $p \in \N$ and $z \in \R^q$ let us fix the notation $$ R_{i_1, \ldots, i_p} (z) := \bigl( \frac{\partial^p G}{\partial x_{i_1} \cdots \partial x_{i_p}} \bigr) (z). $$ A {\it second-order} Taylor-expansion yields $$ G(x+tu+v) = G(x+tu) + \frac 12 \langle v, (D^2G)(x+tu) \, v \rangle + \frac 16 \sum_{j,k,l=1}^q R_{j,k,l}(x+tu+\gamma v) \, v_j v_k v_l $$ for some $\gamma \in (0,1)$, since $\langle (\nabla G)(x+tu), v \rangle=0$: the last $q-1$ coordinates of $x+tu$ are equal and with Lemma \ref{expGD} the last $q-1$ coordinates of the gradient $(\nabla G)(x+tu)$ are equal, and hence it is orthogonal to $v$. A {\it fourth-order} Taylor-expansion for $G(x+tu)$ yields \begin{equation} \label{huebsch} G(x+tu) = G(x) + \frac{1}{24} \sum_{j,k,l,m=1}^q R_{j,k,l,m}(x+ \widetilde{\gamma} t u) \, t^4 \, u_j u_k u_l u_m \end{equation} for some $\widetilde{\gamma} \in (0,1)$. To see \eqref{huebsch} notice that the first-order term is zero since $x$ is a global minimizer of $G$. 
The second-order term is zero since $u$ lies in the kernel of $D^2G(x)$: by Lemma \ref{wangimprovement}, $D^2G_{\beta,h}$ is positive definite at its minimum points if and only if $(\beta,h) \not= (\beta_0,h_0)$, and at $(\beta_0,h_0)$ one checks directly from the second partial derivatives in Remark \ref{extrem} that $D^2G(x)\, u = 0$. The third-order term vanishes as well, yielding the identity \eqref{huebsch}. Summarizing we obtain:
\begin{eqnarray} \label{4thorder}
G(x+tu+v) & = & G(x) + \frac 12 \langle v, (D^2G)(x+tu) \, v \rangle \nonumber + \frac 16 \sum_{j,k,l=1}^q R_{j,k,l}(x+tu+ \gamma v) \, v_j v_k v_l \\
&+& \frac{1}{24} \sum_{j,k,l,m=1}^q R_{j,k,l,m}(x+ \widetilde{\gamma} tu) \, t^4 \, u_j u_k u_l u_m.
\end{eqnarray}
With $y := x+tu+v$ we will calculate $\frac{\partial}{\partial y_i} G(y)$ for $i \in \{1, \ldots, q\}$ using \eqref{4thorder}. The derivative of the first summand in \eqref{4thorder} is zero since $x$ is the global minimizer of $G$. With
\begin{eqnarray*}
\frac 12 \langle v, (D^2G)(x+tu) \, v \rangle & = & \frac 12 R_{i,i}(x+tu)(y_i-x_i-t u_i)^2 \\
& + & \sum_{k \not=i} R_{i,k}(x+tu)(y_k-x_k-tu_k)(y_i-x_i-tu_i) \\
& + & \frac 12 \sum_{k,l \not=i} R_{l,k}(x+tu)(y_k-x_k-tu_k)(y_l-x_l-tu_l)
\end{eqnarray*}
we obtain
$$
A_1(i) := \frac{\partial}{\partial y_i} \bigl( \frac 12 \langle v, (D^2G)(x+tu) \, v \rangle \bigr) = R_{i,i}(x+tu) v_i + \sum_{k \not= i} R_{i,k}(x+tu) v_k.
$$
With Lemma \ref{hessiancal} we obtain $R_{1,2} = R_{1,3}=\cdots =R_{1,q}$ and since $v \in {\mathcal M} \cap u^{\bot}$ (so that $v_1=0$ and $\sum_{k=2}^q v_k=0$) we have $A_1(1)=0$. With a second-order Taylor expansion for $R_{i,k}(x+tu)$ we get for $t$ small
\begin{equation*}
A_1(i) = \langle R_{i, \cdot}, v \rangle + {\mathcal O}_q \bigl( \sum_{l=1}^q v_l \, t \bigr).
\end{equation*}
Here ${\mathcal O}_q$ indicates that the constant may depend on $q$. This is because all $R_{i_1, \ldots, i_p}(x)$ only depend on $q$, since $x$ only depends on $q$. The second partial derivatives of $G$ were listed in Remark \ref{extrem}, hence we end up with
\begin{equation} \label{arbeit1}
A_1(1)=0, \quad A_1(i) = \frac{4(q^2-3q+3)}{q^2} v_i + \frac{4}{q^2} \sum_{k=2, k \neq i}^q v_k + {\mathcal O}_q \bigl( \sum_{l=1}^q v_l \, t \bigr), \,\, i \geq 2
\end{equation}
for small $t$. The last formula can even be simplified since $\sum_{k=2}^q v_k =0$ using $v \in u^{\bot}$. For reasons of application we will not use this simplification. The partial derivative $\frac{\partial}{\partial y_i}$ of the third term in \eqref{4thorder} is
$$
A_2(i) := \frac 12 R_{i,i,i}(x+tu+\gamma v) v_i^2 + \sum_{k \not=i}^q R_{i,i,k}(x+tu+\gamma v) \, v_i v_k + \frac 12 \sum_{j,k \not= i} R_{i,j,k}(x+tu + \gamma v) \, v_k v_j.
$$
Using Taylor for $R_{i,j,k}(x+tu + \gamma v)$ we obtain for small $t$ and small $v$
\begin{equation} \label{arbeit2}
A_2(i) := A_2(i,v,t) := \frac 12 R_{i,i,i}(x) v_i^2 + \sum_{k \not=i}^q R_{i,i,k}(x) \, v_i v_k + \frac 12 \sum_{j,k \not= i}^q R_{i,j,k}(x) \, v_k v_j + {\mathcal O}_q( f(v,t))
\end{equation}
with
$$
{\mathcal O}_q (f(v,t)) = {\mathcal O}_q \biggl( (t + \sum_{k =2}^q c_k(q) v_k) \bigl( v_i^2 + v_i \sum_{k \not= i} v_k + \sum_{k,j \not= i} v_k v_j \bigr) \biggr).
$$
Here $c_k(q)$ denotes a constant depending on $q$; we keep it just to indicate that the relation $\sum_{k=2}^q v_k=0$ cannot be applied. In our application the order (in $n$) of the $v_k$'s will not depend on $k$, and $v_2$ will be smaller in order than $t$. In this situation we have ${\mathcal O}_q (f(v,t)) = {\mathcal O}_q (t \, v_2^2)$. We will calculate $A_2(1)$ with the help of the third derivatives which are $R_{1,1,1}(x) = R_{1,1,k}(x)=0$ and
$$
R_{1,j,j}(x) =\frac{16(q-1)(q-2)}{q^3}, \quad R_{1,j,k}(x)= -\frac{16(q-1)}{q^3}.
$$
Therefore the first two summands in \eqref{arbeit2} are zero and the third term is, using $\sum_{k=2}^q v_k=0$,
\begin{eqnarray} \label{arbeit2fall1}
\frac 12 \sum_{j,k=2}^q R_{1,j,k}(x) v_j v_k & =& \frac 12 \sum_{j=2}^q \frac{16(q-1)(q-2)}{q^3} \, v_j^2 - \frac{1}{2}\sum\limits_{j,k=2, j\neq k}^q \frac{16(q-1)}{q^3}\, v_j v_k \nonumber \\
&=& \frac{8(q-1)}{q^3}\sum\limits_{j=2}^q v_j \bigl( (q-1)v_j - v_j - \sum\limits_{k=2, k\neq j}^q v_k\bigr) \\
&=& \frac{8(q-1)^2}{q^3} \sum\limits_{j=2}^q v_j^2. \nonumber
\end{eqnarray}
Finally, the partial derivative $\frac{\partial}{\partial y_i}$ of the fourth term in \eqref{4thorder} is
\begin{eqnarray} \label{arbeit3}
A_3(i) := A_3(i,t) & := & \frac 16 R_{i,i,i,i}(x) t^3 u_i^3 + \frac 12 \sum_{k \not= i} R_{i,i,i,k}(x) t^3 u_i^2 u_k + \frac 12 \sum_{k,j \not= i} R_{i,i,j,k}(x) t^3 u_i u_k u_j \nonumber \\
& + & \frac 16 \sum_{j,k,l \not= i} R_{i,j,k,l}(x) t^3 u_j u_k u_l + {\mathcal O}_q(t^4),
\end{eqnarray}
where we applied a second-order Taylor expansion for $ R_{i,j,k,l}(x + \widetilde{\gamma} t u)$. Again we calculate $A_3(1)$, using the fourth derivatives
$$
R_{1,1,1,1}(x) = \frac{32(q-1)^4}{q^4}, \quad R_{1,1,1,k}(x)= -\frac{32(q-1)^3}{q^4},\quad R_{1,1,k,k}(x) = R_{1,1,j,k}(x)= \frac{32(q-1)^2}{q^4}
$$
and
$$
R_{1,k,k,k}(x) =\frac{32(q-1)(2q^2-10q+11)}{q^4}, \quad R_{1,j,j,k}(x) =-\frac{32(q-1)(2q-5)}{q^4}
$$
and $R_{1,j,k,l}(x) =\frac{96(q-1)}{q^4}$ for pairwise distinct $j,k,l \geq 2$. We obtain
$$
\frac 16 R_{1,1,1,1}(x) t^3 (1-q)^3 = - \frac{16(q-1)^7 t^3}{3 q^4}, \quad \frac 12 \sum_{k \not= 1} R_{1,1,1,k}(x) t^3 (1-q)^2 = - \frac{16(q-1)^6 t^3}{q^4},
$$
and
$$
\frac 12 \sum_{k,j \not= 1} R_{1,1,j,k}(x) t^3 (1-q) = - \frac{16(q-1)^5 t^3}{q^4}.
$$
Moreover we have
\begin{eqnarray*}
\frac 16 \sum_{j,k,l \not= 1} R_{1,j,k,l}(x) t^3 & = & \frac{16(q-1)^2(2q^2-10q+11) t^3}{3 q^4} - \frac{16 (q-1)^2(q-2)(2q-5) t^3}{q^4} \\
& & \hspace{1cm} + \frac{16(q-1)^2(q-2)(q-3) t^3}{q^4}.
\end{eqnarray*}
Hence
\begin{equation} \label{arbeit3fall1}
A_3(1) = - \frac{16(q-1)^4}{3q} t^3 + {\mathcal O}_q(t^4).
\end{equation}
We summarize that the first partial derivative of $G(y)$ in \eqref{4thorder} satisfies
\begin{equation} \label{finaleins}
\frac{\partial}{\partial y_1} G (x+tu+v) = \frac{8(q-1)^2}{q^3} \sum\limits_{j=2}^q v_j^2 - \frac{16(q-1)^4}{3q} t^3 +{\mathcal O}_q\bigl( f(v,t) \bigr) + {\mathcal O}_q(t^4),
\end{equation}
using the notation of \eqref{arbeit2}. The $i$'th partial derivative for $i \in \{2, \ldots,q \}$ is given by
\begin{equation} \label{finalzwei}
\frac{\partial}{\partial y_i} G (x+tu+v) = A_1(i) + A_2(i,v,t) + A_3(i,t)
\end{equation}
with $A_j(i)$ defined in \eqref{arbeit1}, \eqref{arbeit2} and \eqref{arbeit3}.
\begin{proof}[Proof of Lemma \ref{wangimprovement}]
For the proof we will use the following alternative parametrization of the minimum points of $G_{\beta,h}$ given by permutations of
$$
x_s=\left(\frac{1+(q-1)s(\beta,h)}{q},\frac{1-s(\beta,h)}{q},\cdots,\frac{1-s(\beta,h)}{q}\right), \quad s(\beta,h)\in [0,1].
$$
It is important to notice, see for example \cite{Blanchard:2008}, that $s(\beta,h)$ is positive, well-defined and strictly increasing in $\beta$ on an open interval containing $[\beta_c,\infty)$ and that $s(\beta_c,0)=(q-2)/(q-1)$ is a global minimum. The Lemma follows once we prove that $\beta q x_{s,1} x_{s,q}-1<0$ and $\beta x_{s,q}-1<0$. The second inequality follows directly from Proposition \ref{ChMi}. To prove $\beta q x_{s,1} x_{s,q}-1<0$, we first consider the case $h=h_0$.
First of all we note that the minima are the solutions of
$$
f(s(\beta,h_0))=\log(1+(q-1)s(\beta,h_0))-\log(1-s(\beta,h_0))-\beta s(\beta,h_0)-h_0=0.
$$
If $\beta<\beta_0$ we have that $\partial f(s(\beta,h_0))/\partial s>0$. Rearranging this inequality and using the parametrization of $x_s$ yields the result. If $\beta>\beta_0$ we use $\nabla G_{\beta,h_0}(x_s)=0$ to obtain
$$
\log(x_{s,1})-\beta x_{s,1}-h_0=\log(x_{s,q})-\beta x_{s,q}.
$$
This equation yields
$$
\beta q x_{s,1}x_{s,q}-1 =\left(\frac{\log(x_{s,1})-\log(x_{s,q})-h_0}{x_{s,1}-x_{s,q}}\right)qx_{s,1}x_{s,q}-1 =\left(\log\left(\frac{x_{s,1}}{x_{s,q}}\right)-h_0\right)q\frac{x_{s,1}x_{s,q}}{x_{s,1}-x_{s,q}}-1.
$$
Using the fact that $x_{s,1}+(q-1)x_{s,q}=1$ we obtain
\begin{eqnarray*}
\beta q x_{s,1}x_{s,q}-1 &= & \left(\log\left(\frac{(q-1)x_{s,1}}{1-x_{s,1}}\right)-h_0\right)q\frac{x_{s,1}(1-x_{s,1})}{(q-1)x_{s,1}-(1-x_{s,1})}-1\\
&=&\left(\log\left(\frac{(q-1)x_{s,1}}{1-x_{s,1}}\right)-h_0\right)q\frac{x_{s,1}(1-x_{s,1})}{qx_{s,1}-1}-1 =q\frac{x_{s,1}(1-x_{s,1})}{qx_{s,1}-1}f_{h_0}(x_{s,1}),
\end{eqnarray*}
where
$$
f_h(x_{s,1}):=\log\left(\frac{(q-1)x_{s,1}}{1-x_{s,1}}\right)-h-\frac{qx_{s,1}-1}{qx_{s,1}(1-x_{s,1})}.
$$
For $\beta_0$ the global minimizer is given by $y=(1/2,1/2(q-1),\ldots,1/2(q-1))$ and it follows easily that $f_{h_0}\left(\frac{1}{2}\right)=0$. Additionally $\frac{\partial f_{h_0}}{\partial x}(x)<0$ for $x \in[2^{-1},1)$. Thus we obtain that $f_{h_0}(x_{s,1})<0$ for $x_{s,1}\in(2^{-1},1)$, and this is equivalent to $\beta q x_{s,1}x_{s,q}-1<0$. Now we consider the case $h\neq h_0$. If $\beta<\beta_0$, the proof is identical to the case of $\beta<\beta_0$ for $h=h_0$. If $\beta\geq\beta_0$ we have
$$
\beta q x_{s,1}x_{s,q}-1=q\frac{x_{s,1}(1-x_{s,1})}{qx_{s,1}-1}f_{h}(x_{s,1}).
$$
For $\beta\geq\beta_0$ we know that $1>s(\beta,h)\geq s(\beta_0,h)\geq s(\beta_c,0)=(q-2)/(q-1)$. The rest of the proof is identical to the case (ii) of the proof of Proposition 2.2 in \cite{Ellis/Wang:1990}.
\end{proof}
\section{Introduction} In~\cite{Monsky-Washnitzer:Formal}, Monsky and Washnitzer introduce a cohomology theory for affine non-singular varieties defined over a field~\(\resf\) of nonzero characteristic. Let~\(\dvr\) be a discrete valuation ring such that the fraction field~\(\dvf\) of~\(\dvr\) has characteristic~\(0\). Let \(\dvgen\in \dvr\) be a uniformiser and let \(\resf=\dvr/\dvgen \dvr\) be the residue field. Monsky and Washnitzer lift the coordinate ring of a smooth affine variety~\(X\) over~\(\resf\) to a smooth commutative algebra~\(A\) over~\(\dvr\). The dagger completion~\(A^\updagger\) of~\(A\) is a certain subalgebra of the \(\dvgen\)\nb-adic completion of~\(A\). If~\(A\) is the polynomial algebra over~\(\dvr\), then~\(A^\updagger\) is the ring of overconvergent power series. The Monsky--Washnitzer cohomology is defined as the de Rham cohomology of the algebra \(\dvf\otimes_\dvr A^\updagger\). The dagger completion is interpreted in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} in the setting of bornological algebras, based on considerations about the joint spectral radius of bounded subsets. The main achievement in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} is the construction of a chain complex that computes the rigid cohomology of the original variety~\(X\) and that is strictly functorial. In addition, this chain complex is related to periodic cyclic homology. Here we continue the study of dagger completions. We define dagger algebras by adding a bornological torsion-freeness condition to the completeness and spectral radius conditions already present in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}. We also show that the category of dagger algebras is closed under extensions, subalgebras, and certain quotients, by showing that all three properties that define them are hereditary for these constructions. The results in this article should help to reach the following important goal: define an analytic cyclic cohomology theory for algebras over~\(\resf\) that specialises to Monsky--Washnitzer or rigid cohomology for the coordinate rings of smooth affine varieties over~\(\resf\). A general machinery for defining such cyclic cohomology theories is developed in~\cite{Meyer:HLHA}. It is based on a class of nilpotent algebras, which must be closed under extensions. This is why we are particularly interested in properties hereditary for extensions. If~\(S\) is a bounded subset of a \(\dvf\)\nb-algebra~\(A\), then its spectral radius \(\varrho(S)\in [0,\infty]\) is defined in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}. If~\(A\) is a bornological \(\dvr\)\nb-algebra, then only the inequalities \(\varrho(S) \le s\) for \(s>1\) make sense. This suffices, however, to characterise the \emph{linear growth bornology} on a bornological \(\dvr\)\nb-algebra: it is the smallest \(\dvr\)\nb-algebra bornology with \(\varrho(S) \le 1\) for all its bounded subsets~\(S\). We call a bornological algebra~\(A\) with this property \emph{semi-dagger} because this is the main feature of dagger algebras. Any bornological algebra~\(A\) carries a smallest bornology with linear growth. This defines a semi-dagger algebra~\(\ling{A}\). If~\(A\) is a torsion-free, finitely generated, commutative \(\dvr\)\nb-algebra with the fine bornology, then the bornological completion~\(\comling{A}\) of~\(\ling{A}\) is the Monsky--Washnitzer completion of~\(A\). Any algebra over~\(\resf\) is also an algebra over~\(\dvr\). Equipped with the fine bornology, it is complete and semi-dagger. 
We prefer, however, not to call such algebras ``dagger algebras.'' The feature of Monsky--Washnitzer algebras that they lack is torsion-freeness. The purely algebraic notion of torsion-freeness does not work well for bornological algebras. In particular, it is unclear whether it is preserved by completions. We call a bornological \(\dvr\)\nb-module~\(A\) \emph{bornologically torsion-free} if multiplication by~\(\dvgen\) is a bornological isomorphism onto its image. This notion has very good formal properties: it is preserved by bornological completions and linear growth bornologies and hereditary for subalgebras and extensions. So~\(\comling{A}\) remains bornologically torsion-free if~\(A\) is bornologically torsion-free. The bornological version of torsion-freeness coincides with the usual one for bornological \(\dvr\)\nb-modules with the fine bornology. Thus~\(\comling{A}\) is bornologically torsion-free if~\(A\) is a torsion-free \(\dvr\)\nb-algebra with the fine bornology. A bornological \(\dvr\)\nb-module~\(M\) is bornologically torsion-free if and only if the canonical map \(M\to \dvf\otimes_\dvr M\) is a bornological embedding. This property is very important. On the one hand, we must keep working with modules over~\(\dvr\) in order to keep the original algebra over~\(\resf\) in sight and because the linear growth bornology only makes sense for algebras over~\(\dvr\). On the other hand, we often need to pass to the \(\dvf\)\nb-vector space \(\dvf\otimes_\dvr M\) -- this is how de Rham cohomology is defined. Bornological vector spaces over~\(\dvf\) have been used recently to do analytic geometry in \cites{Bambozzi:Affinoid, Bambozzi-Ben-Bassat:Dagger, Bambozzi-Ben-Bassat-Kremnizer:Stein}. The spectral radius of a bounded subset of a bornological \(\dvr\)\nb-algebra~\(A\) is defined in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} by working in \(\dvf\otimes_\dvr A\), which only works well if~\(A\) is bornologically torsion-free. Here we define a truncated spectral radius in \([1,\infty]\) without reference to \(\dvf\otimes_\dvr A\), in order to define semi-dagger algebras independently of torsion issues. We prove that the properties of being complete, semi-dagger, or bornologically torsion-free are hereditary for extensions. Hence an extension of dagger algebras is again a dagger algebra. To illustrate our theory, we describe the dagger completions of monoid algebras and crossed products. Dagger completions of monoid algebras are straightforward generalisations of Monsky--Washnitzer completions of polynomial algebras. We thank the anonymous referee for helpful comments to improve the presentation in the paper. \section{Basic notions} \label{Review_bornologies} In this section, we recall some basic notions on bornological modules and bounded homomorphisms. See~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} for more details. We also study the inheritance properties of separatedness and completeness for submodules, quotients and extensions. A \emph{bornology} on a set~\(X\) is a collection~\(\bdd_X\) of subsets of~\(X\), called \emph{bounded} sets, such that all finite subsets are bounded and subsets and finite unions of bounded subsets are bounded. Let~\(\dvr\) be a complete discrete valuation ring. A \emph{bornological \(\dvr\)\nb-module} is a \(\dvr\)\nb-module~\(M\) with a bornology such that every bounded subset is contained in a bounded \(\dvr\)\nb-submodule. In particular, the \(\dvr\)\nb-submodule generated by a bounded subset is again bounded. 
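For instance, every \(\dvr\)\nb-module~\(M\) becomes a bornological \(\dvr\)\nb-module when equipped with the \emph{fine bornology}, in which a subset is bounded if and only if it is contained in a finitely generated \(\dvr\)\nb-submodule; this is the bornology used for the algebras in the introduction (see~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}).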
We always write~\(\bdd_M\) for the bornology on~\(M\). Let \(M'\subseteq M\) be a \(\dvr\)\nb-submodule. The \emph{subspace bornology} on~\(M'\) consists of all subsets of~\(M'\) that are bounded in~\(M\). The \emph{quotient bornology} on~\(M/M'\) consists of all subsets of the form \(q(S)\) with \(S\in\bdd_M\), where \(q \colon M \to M/M'\) is the canonical projection. We always equip submodules and quotients with these canonical bornologies. Let~\(M\) and~\(N\) be two bornological \(\dvr\)\nb-modules. A \(\dvr\)\nb-module map \(f\colon M \to N\) is \emph{bounded} if \(f(S)\in \bdd_N\) for all \(S\in \bdd_M\). Bornological \(\dvr\)\nb-modules and bounded \(\dvr\)\nb-module maps form an additive category. The isomorphisms in this category are called \emph{bornological isomorphisms}. A bounded \(\dvr\)\nb-module map \(f \colon M \to N\) is a \emph{bornological embedding} if the induced map \(M \to f(M)\) is a bornological isomorphism, where \(f(M)\subseteq N\) carries the subspace bornology. It is a \emph{bornological quotient map} if the induced map \(M/\ker f \to N\) is a bornological isomorphism. Equivalently, for each \(T\in\bdd_N\) there is \(S\in\bdd_M\) with \(f(S) = T\). An \emph{extension} of bornological \(\dvr\)\nb-modules is a diagram of \(\dvr\)\nb-modules \[ M' \xrightarrow{f} M \xrightarrow{g} M'' \] that is algebraically exact and such that~\(f\) is a bornological embedding and~\(g\) a bornological quotient map. Equivalently, \(g\) is a cokernel of~\(f\) and~\(f\) a kernel of~\(g\) in the additive category of bornological \(\dvr\)\nb-modules. A \emph{split extension} is an extension with a bounded \(\dvr\)\nb-linear map \(s\colon M'' \to M\) such that \(g\circ s = \mathrm{id}_{M''}\). Let~\(M\) be a bornological \(\dvr\)\nb-module. A sequence \((x_n)_{n\in\mathbb N}\) in~\(M\) \emph{converges} towards \(x\in M\) if there are \(S\in\bdd_M\) and a sequence \((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(\lim {}\abs{\delta_n} = 0\) and \(x_n-x \in \delta_n\cdot S\) for all \(n\in\mathbb N\). It is a \emph{Cauchy} sequence if there are \(S\in\bdd_M\) and a sequence \((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(\lim {}\abs{\delta_n} = 0\) and \(x_n-x_m \in \delta_j\cdot S\) for all \(n,m,j\in\mathbb N\) with \(n,m\ge j\). Since any bounded subset is contained in a bounded \(\dvr\)\nb-submodule, a sequence in~\(M\) converges or is Cauchy if and only if it converges or is Cauchy in the \(\dvgen\)\nb-adic topology on some bounded \(\dvr\)\nb-submodule of~\(M\). We call a subset~\(S\) of~\(M\) \emph{closed} if \(x\in S\) for any sequence in~\(S\) that converges in~\(M\) to \(x\in M\). These are the closed subsets of a topology on~\(M\). Bounded maps preserve convergent sequences and Cauchy sequences. Thus pre-images of closed subsets under bounded maps remain closed. That is, bounded maps are continuous for these canonical topologies. \subsection{Separated bornological modules} \label{sec:separated} We call~\(M\) \emph{separated} if limits of convergent sequences in~\(M\) are unique. If~\(M\) is not separated, then the constant sequence~\(0\) has a non-zero limit. Therefore, \(M\) is separated if and only if \(\{0\}\subseteq M\) is closed. And~\(M\) is separated if and only if any \(S\in\bdd_M\) is contained in a \(\dvgen\)\nb-adically separated bounded \(\dvr\)\nb-submodule. \begin{lemma} \label{lem:separated_hereditary} Let \(M' \xrightarrow{f} M \xrightarrow{g} M''\) be an extension of bornological \(\dvr\)\nb-modules. 
\begin{enumerate}
\item \label{lem:separated_hereditary_1}%
If~\(M\) is separated, so is~\(M'\).
\item \label{lem:separated_hereditary_2}%
The quotient~\(M''\) is separated if and only if~\(f(M')\) is closed in~\(M\).
\item \label{lem:separated_hereditary_3}%
If \(M'\) and~\(M''\) are separated and~\(M''\) is torsion-free, then~\(M\) is separated.
\end{enumerate}
\end{lemma}

\begin{proof}
Assertion~\ref{lem:separated_hereditary_1} is trivial.

If \(M''\) is separated, then \(\{0\}\subseteq M''\) is closed. Hence \(g^{-1}(\{0\}) = f(M')\) is closed in~\(M\). If~\(M''\) is not separated, then the constant sequence~\(0\) in~\(M''\) converges to some non-zero \(x''\in M''\). That is, there are a bounded subset \(S''\subseteq M''\) and a null sequence~\((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(x''-0 \in \delta_n\cdot S''\) for all \(n\in\mathbb N\). Since~\(g\) is a bornological quotient map, there are \(x\in M\) and \(S\in\bdd_M\) with \(g(x)=x''\) and \(g(S) = S''\). We may choose \(y''_n\in S''\) with \(x'' = \delta_n\cdot y''_n\) and \(y_n\in S\) with \(g(y_n) = y_n''\). So \(g(x - \delta_n y_n)=0\). Thus the sequence \((x-\delta_n y_n)\) lies in~\(f(M')\). It converges to~\(x\), which does not belong to~\(f(M')\) because \(x''\neq0\). So~\(f(M')\) is not closed. This finishes the proof of~\ref{lem:separated_hereditary_2}.

We prove~\ref{lem:separated_hereditary_3}. Let \(x\in M\) belong to the closure of~\(\{0\}\) in~\(M\). That is, there are \(S\in\bdd_M\) and a null sequence \((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(x\in \delta_n \cdot S\) for all \(n\in\mathbb N\). Then \(g(x) \in \delta_n \cdot g(S)\) for all \(n\in\mathbb N\). This implies \(g(x)=0\) because~\(M''\) is separated. So there is \(y\in M'\) with \(f(y)=x\). And \(f(y) = x \in \delta_n\cdot S\). Choose \(x_n\in S\) with \(f(y) = \delta_n\cdot x_n\). We may assume \(\delta_n\neq0\) for all \(n\in\mathbb N\) because otherwise \(x\in\delta_n\cdot S\) is~\(0\). Since~\(M''\) is torsion-free, \(\delta_n\cdot x_n\in f(M')\) implies \(g(x_n)=0\). So we may write \(x_n=f(y_n)\) for some \(y_n\in M'\). Since~\(f\) is a bornological embedding, the set \(\setgiven{y_n}{n\in\mathbb N}\) in~\(M'\) is bounded. Since~\(M'\) is separated and \(y = \delta_n\cdot y_n\) for all \(n\in\mathbb N\), we get \(y=0\). Hence \(x=0\). So~\(\{0\}\) is closed in~\(M\).
\end{proof}

The quotient \(M/\overline{\{0\}}\) of a bornological \(\dvr\)\nb-module~\(M\) by the closure of~\(0\) is called the \emph{separated quotient} of~\(M\). It is separated by Lemma~\ref{lem:separated_hereditary}, and it is the largest separated quotient of~\(M\). Even more, the quotient map \(M\to M/\overline{\{0\}}\) is the universal arrow to a separated bornological \(\dvr\)\nb-module, that is, any bounded \(\dvr\)\nb-linear map from~\(M\) to a separated bornological \(\dvr\)\nb-module factors uniquely through \(M/\overline{\{0\}}\).

The following example shows that Lemma~\ref{lem:separated_hereditary}.\ref{lem:separated_hereditary_3} fails without the torsion-freeness assumption.

\begin{example}
\label{exa:extension_not_separated}
Let \(M'=\dvr\) and let \(M = \dvr[x]/S\), where~\(S\) is the \(\dvr\)\nb-submodule of~\(\dvr[x]\) generated by \(1 - \dvgen^n x^n\) for all \(n\in \mathbb N\). We embed \(M'=\dvr\) as multiples of \(1=x^0\). Then
\[
M/M' = \bigoplus_{n=1}^\infty \dvr/(\dvgen^n).
\]
We endow \(M\), \(M'\) and \(M/M'\) with the bornologies where all subsets are bounded.
We get an extension of bornological \(\dvr\)\nb-modules \(\dvr \rightarrowtail M \twoheadrightarrow \bigoplus_{n=1}^{\infty}\dvr/ (\dvgen^n)\). Here \(\dvr\) and \(\bigoplus_{n=1}^\infty \dvr/(\dvgen^n)\) are \(\dvgen\)\nb-adically separated, but~\(M\) is not: the constant sequence~\(1\) in~\(M\) converges to~\(0\) because \(1 = 1-\dvgen^n x^n + \dvgen^n x^n \equiv \dvgen^n x^n\) in~\(M\).
\end{example}

\subsection{Completeness}
\label{sec:complete}

We call a bornological \(\dvr\)\nb-module~\(M\) \emph{complete} if it is separated and for any \(S\in\bdd_M\) there is \(T\in\bdd_M\) so that all \(S\)\nb-Cauchy sequences are \(T\)\nb-convergent. Equivalently, any \(S\in\bdd_M\) is contained in a \(\dvgen\)\nb-adically complete bounded \(\dvr\)\nb-submodule (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~2.8}). By definition, any Cauchy sequence in a complete bornological \(\dvr\)\nb-module has a unique limit.

\begin{theorem}
\label{the:extension_complete}
Let \(M' \xrightarrow{f} M \xrightarrow{g} M''\) be an extension of bornological \(\dvr\)\nb-modules.
\begin{enumerate}
\item \label{the:extension_complete_1}%
If~\(M\) is complete and~\(f(M')\) is closed in~\(M\), then~\(M'\) is complete.
\item \label{the:extension_complete_1b}%
If~\(M'\) is complete, \(M\) separated, and~\(M''\) torsion-free, then~\(f(M')\) is closed in~\(M\).
\item \label{the:extension_complete_2}%
Let~\(M\) be complete. Then~\(M''\) is complete if and only if~\(f(M')\) is closed in~\(M\).
\item \label{the:extension_complete_3}%
If \(M'\) and~\(M''\) are complete and~\(M\) is separated, then~\(M\) is complete. If \(M'\) and~\(M''\) are complete and~\(M''\) is torsion-free, then~\(M\) is complete.
\end{enumerate}
\end{theorem}

\begin{proof}
Statement~\ref{the:extension_complete_1} is~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma~2.13}, and there is no need to repeat the proof here. It is somewhat similar to the proof of~\ref{the:extension_complete_3}.

Next we prove~\ref{the:extension_complete_1b}. Assume that~\(M'\) is complete, that \(M''\) is torsion-free, and that~\(f(M')\) is not closed in~\(M\). We are going to prove that~\(M\) is not separated. There is a sequence \((x_n)_{n\in\mathbb N}\) in~\(M'\) for which \((f(x_n))_{n\in\mathbb N}\) converges in~\(M\) towards some \(x\notin f(M')\). So there are a bounded set \(S\subseteq M\) and a sequence \((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(\lim {}\abs{\delta_n} = 0\) and \(f(x_n)-x \in \delta_n\cdot S\) for all \(n\in\mathbb N\). We may assume without loss of generality that~\(S\) is a bounded \(\dvr\)\nb-submodule and that the sequence of norms~\(\abs{\delta_n}\) is decreasing: let~\(\delta_n^*\) be the~\(\delta_m\) for \(m\ge n\) with maximal norm. Then \(f(x_n) - x \in \delta_n\cdot S\subseteq \delta_n^* \cdot S\) and still \(\lim {}\abs{\delta_n^*} =0\). We may write \(f(x_n) - x = \delta^*_n y_n\) with \(y_n \in S\). Let \(m<n\). Then \(\delta^*_m g(y_m) = -g(x) = \delta_n^* g(y_n)\) and \(\delta_n^*/\delta_m^* \in \dvr\); this implies first \(\delta^*_m\cdot \bigl(g(y_m)- g(y_n) \cdot\delta_n^*/\delta_m^*\bigr) = 0\) and then \(g(y_m)= g(y_n) \cdot \delta_n^*/\delta_m^*\) because~\(M''\) is torsion-free. So there is \(z_{m,n} \in M'\) with \(y_m + f(z_{m,n}) = y_n\cdot \delta_n^*/\delta_m^*\). We even have \(z_{m,n} \in f^{-1}(S)\) because~\(S\) is a \(\dvr\)\nb-submodule. The subset \(f^{-1}(S)\subseteq M'\) is bounded because~\(f\) is a bornological embedding.
We get \(f(x_n) - f(x_m) = \delta^*_n y_n - \delta^*_m y_m = f(\delta_m^* z_{m,n})\) and hence \(x_n - x_m = \delta_m^* z_{m,n}\) for \(n>m\). This witnesses that the sequence~\((x_n)_{n\in\mathbb N}\) is Cauchy in~\(M'\). Since~\(M'\) is complete, it converges towards some \(y\in M'\). Then \(f(x_n)\) converges both towards \(f(y)\in f(M')\) and towards \(x\notin f(M')\). So~\(M\) is not separated. This finishes the proof of~\ref{the:extension_complete_1b}. Next we prove~\ref{the:extension_complete_2}. If~\(f(M')\) is not closed, then Lemma~\ref{lem:separated_hereditary} shows that~\(M''\) is not separated and hence not complete. Conversely, we claim that~\(M''\) is complete if~\(f(M')\) is closed. Lemma~\ref{lem:separated_hereditary} shows that~\(M''\) is separated. Let \(S''\in\bdd_{M''}\). There is \(S\in\bdd_M\) with \(g(S)=S''\) because~\(g\) is a bornological quotient map. And there is \(T\in\bdd_M\) so that any \(S\)\nb-Cauchy sequence is \(T\)\nb-convergent. We claim that any \(S''\)\nb-Cauchy sequence is \(g(T)\)\nb-convergent. So let \((x''_n)_{n\in\mathbb N}\) be an \(S''\)\nb-Cauchy sequence. Thus there is a null sequence \((\delta_n)_{n\in\mathbb N}\) in~\(\dvr\) with \(x_n'' - x''_m \in \delta_j \cdot S''\) for all \(n,m,j\in\mathbb N\) with \(n,m \ge j\). As above, we may assume without loss of generality that the sequence of norms~\(\abs{\delta_n}\) is decreasing. Choose any \(x_0\in M\) with \(g(x_0) = x_0''\). For each \(n\in\mathbb N\), choose \(y_n\in S\) with \(x_{n+1}''-x''_n = \delta_n\cdot g(y_n)\). Let \[ x_n \defeq x_0 + \delta_0\cdot y_0 + \dotsb + \delta_{n-1}\cdot y_{n-1}. \] Then \(g(x_n) = x_n''\). And \(x_{n+1} - x_n = \delta_n\cdot y_n \in \delta_n \cdot S\). Since~\(\abs{\delta_n}\) is decreasing, this implies \(x_m - x_n \in \delta_n\cdot S\) for all \(m\ge n\). So the sequence~\((x_n)_{n\in\mathbb N}\) is \(S\)\nb-Cauchy. Hence it is \(T\)\nb-convergent. Thus \(g(x_n) = x_n''\) is \(g(T)\)\nb-convergent as asserted. This finishes the proof of~\ref{the:extension_complete_2}. Finally, we prove~\ref{the:extension_complete_3}. So we assume \(M'\) and~\(M''\) to be complete. If~\(M''\) is torsion-free, then~\(M\) is separated by Lemma~\ref{lem:separated_hereditary}. Hence the second statement in~\ref{the:extension_complete_3} is a special case of the first one. Let \(S\in\bdd_M\). We must find \(T\in\bdd_M\) so that every \(S\)\nb-Cauchy sequence is \(T\)\nb-convergent. Since~\(M\) is separated, this says that it is complete. Since~\(M''\) is complete, there is a \(\dvgen\)\nb-adically complete \(\dvr\)\nb-submodule \(T_0\in\bdd_{M''}\) that contains \(g(S)\). Since~\(g\) is a bornological quotient map, there is \(T_1\in\bdd_M\) with \(g(T_1) = T_0\). Replacing it by \(T_1+S\), we may arrange, in addition, that \(S\subseteq T_1\). Since \(f\) is a bornological embedding, \(T_2 \defeq f^{-1}(T_1)\) is bounded in~\(M'\). As~\(M'\) is complete, there is \(T_3\in\bdd_{M'}\) so that every \(T_2\)\nb-Cauchy sequence is \(T_3\)\nb-convergent. We claim that any \(S\)\nb-Cauchy sequence is \(T_1 + f(T_3)\)-convergent. The proof of this claim will finish the proof of the theorem. Let \((x_n)_{n\in\mathbb N}\) be an \(S\)\nb-Cauchy sequence. So there are \(\delta_n\in \dvr\) and \(y_n\in S\) with \(\lim {}\abs{\delta_n} = 0\) and \(x_{n+1} - x_n = \delta_n \cdot y_n\). As above, we may assume that~\(\abs{\delta_n}\) is decreasing and that \(\delta_0=1\). 
Since \(g(y_{n+k})\in g(S) \subseteq T_0\) and~\(T_0\) is \(\dvgen\)\nb-adically complete, the following series converges in~\(T_0\): \begin{equation} \label{eq:def_tilde_wn} \tilde{w}_n \defeq -\sum_{k=0}^\infty \frac{\delta_{n+k}}{\delta_n} g(y_{n+k}). \end{equation} Since \(\tilde{w}_n\in T_0\), there is \(w_n\in T_1\) with \(g(w_n) = \tilde{w}_n\). So \begin{multline*} \delta_n g(w_n) = \lim_{N\to\infty} -g\left(\sum_{k=0}^N \delta_{n+k} y_{n+k} \right) \\= \lim_{N\to\infty} g(x_n - x_{N+n+1}) = g(x_n) - \lim_{N\to\infty} g(x_N). \end{multline*} In particular, \(g(w_0) = g(x_0) - \lim_{N\to\infty} g(x_N)\). Now let \[ \tilde{x}_k \defeq x_k - \delta_k w_k + w_0 - x_0. \] Then \[ g(\tilde{x}_k) = g(x_k) - g(x_k) + \lim_{N \to \infty} g(x_N) + g(x_0) - \lim_{N \to \infty} g(x_N) - g(x_0) = 0. \] So \(\tilde{x}_k\in f(M')\) for all \(k\in\mathbb N\). And \begin{multline} \label{eq:tilde_x_Cauchy} \tilde{x}_{n+1} - \tilde{x}_n = x_{n+1} - x_n - \delta_{n+1}w_{n+1} + \delta_n w_n \\= \delta_n y_n + \delta_n w_n - \delta_{n+1}w_{n+1} = \delta_n \cdot \left(y_n + w_n - \frac{\delta_{n+1}}{\delta_n}w_{n+1}\right). \end{multline} Let \(z_n \defeq y_n + w_n - \frac{\delta_{n+1}}{\delta_n}w_{n+1}\). A telescoping sum argument shows that \begin{equation} \label{telescoping_sum} g(z_n) = g(y_n) + \tilde{w}_n - \frac{\delta_{n+1}}{\delta_n} \tilde{w}_{n+1} = 0. \end{equation} So \(z_n \in f(M')\). And \(z_n \in S + T_1 + T_1 = T_1\). Thus there is \(\hat{z}_n \in f^{-1}(T_1) = T_2\) with \(z_n = f(\hat{z}_n)\). Equation~\eqref{eq:tilde_x_Cauchy} means that the sequence \(f^{-1}(\tilde{x}_n)\) is \(T_2\)\nb-Cauchy. Hence it is \(T_3\)\nb-convergent. So~\((\tilde{x}_n)\) is \(f(T_3)\)\nb-convergent. Then~\((x_n)\) is \(T_1+f(T_3)\)-convergent. \end{proof} The following examples show that the technical extra assumptions in \ref{the:extension_complete_1b} and~\ref{the:extension_complete_3} in Theorem~\ref{the:extension_complete} are necessary. They only involve extensions of \(\dvr\)\nb-modules with the bornology where all subsets are bounded. For this bornology, bornological completeness and separatedness are the same as \(\dvgen\)\nb-adic completeness and separatedness, respectively, and any extension of \(\dvr\)\nb-modules is a bornological extension. \begin{example} Let \(M' \defeq \{0\}\) and \(M \defeq \dvf\) with the bornology of all subsets. Then~\(M'\) is bornologically complete, but not closed in~\(M\), and \(M/M' = M\) is torsion-free. So Theorem~\ref{the:extension_complete}.\ref{the:extension_complete_1b} needs the assumption that~\(M\) be separated. \end{example} \begin{example} \label{exa:quotient_complete_by_non-closed} Let~\(M\) be the \(\dvr\)\nb-module of all power series \(\sum_{n=0}^\infty c_n x^n\) with \(\lim {}\abs{c_n} = 0\) and with the bornology where all subsets are bounded; this is the \(\dvgen\)\nb-adic completion of the polynomial algebra~\(\dvr[x]\). Let \(M' = M\) and define \(f\colon M' \to M\), \(f\bigl( \sum_{n=0}^\infty c_n x^n\bigr) \defeq \sum_{n=0}^\infty c_n \dvgen^n x^n\). This is a bornological embedding simply because all subsets in \(M=M'\) are bounded. Let \(p_n \defeq \sum_{j=0}^n x^j\). This sequence in \(M'=M\) does not converge. Nevertheless, the sequence \(f(p_n) = \sum_{j=0}^n \dvgen^j x^j\) converges in~\(M\) to \(\sum_{j=0}^\infty \dvgen^j x^j\). Thus \(f(M')\) is not closed in~\(M\), although \(M\) and~\(M'\) are complete and~\(f\) is a bornological embedding. 
So Theorem~\ref{the:extension_complete}.\ref{the:extension_complete_1b} needs the assumption that~\(M''\) be torsion-free.
\end{example}

\begin{example}
\label{exa:extension_of_complete_not_separated}
We modify Example~\ref{exa:extension_not_separated} to produce an extension of \(\dvr\)\nb-modules \(N' \rightarrowtail N \twoheadrightarrow N''\) where \(N'\) and~\(N''\) are \(\dvgen\)\nb-adically complete, but~\(N\) is not \(\dvgen\)\nb-adically separated and hence not \(\dvgen\)\nb-adically complete. We let \(N' \defeq \dvr/(\dvgen) = \resf\). We let~\(N''\) be the \(\dvgen\)\nb-adic completion of the \(\dvr\)\nb-module~\(M''\) of Example~\ref{exa:extension_not_separated}. That is,
\[
N'' \defeq \setgiven[\bigg]{ (c_n)_{n\in\mathbb N} \in \prod_{n=0}^\infty \dvr/(\dvgen^n)}{\lim{} \abs{c_n}=0}.
\]
This is indeed \(\dvgen\)\nb-adically complete. So is
\[
N_1 \defeq \setgiven[\bigg]{ (c_n)_{n\in\mathbb N} \in \prod_{n=0}^\infty \dvr/(\dvgen^{n+1})}{\lim{} \abs{c_n}=0}.
\]
The kernel of the quotient map \(q\colon N_1\twoheadrightarrow N''\) is isomorphic to \(\prod_{n=0}^\infty \dvr/(\dvgen) = \prod_\mathbb N \resf\). This is a \(\resf\)\nb-vector space, and it contains the \(\resf\)\nb-vector space \(\bigoplus_{n=0}^\infty \resf\). Since any \(\resf\)\nb-vector space has a basis, we may extend the linear functional \(\bigoplus_{n=0}^\infty \resf \to \resf\), \((c_n)_{n\in\mathbb N} \mapsto \sum_{n=0}^\infty c_n\), to a \(\resf\)\nb-linear functional \(\sigma\colon \prod_\mathbb N \resf \to \resf\). Let \(L\defeq \ker \sigma \subseteq \ker q\) and let \(N\defeq N_1/L\). The map~\(q\) descends to a surjective \(\dvr\)\nb-linear map \(N \twoheadrightarrow N''\). Its kernel is isomorphic to \(\prod_\mathbb N \resf/\ker \sigma \cong \resf = N'\). The functional \(\sigma\colon \prod_\mathbb N \resf \to \resf\) vanishes on \(\delta_0 - \delta_k\) for all \(k\in\mathbb N\), but not on~\(\delta_0\). When we identify \(\prod_\mathbb N \resf \cong \ker q\), we map~\(\delta_k\) to \(\dvgen^k \delta_k \in N_1\). So \(\delta_0\) and~\(\dvgen^k \delta_k\) get identified in~\(N\), but~\(\delta_0\) does not become~\(0\): it is the generator of \(N' = \dvr/(\dvgen)\) inside~\(N\). Since \([\delta_0] = \dvgen^k [\delta_k]\) in~\(N\), the \(\dvr\)\nb-module~\(N\) is not \(\dvgen\)\nb-adically separated.
\end{example}

The \emph{completion} \(\comb{M}\) of a bornological \(\dvr\)\nb-module~\(M\) is a complete bornological \(\dvr\)\nb-module with a bounded \(\dvr\)\nb-linear map \(M\to \comb{M}\) that is universal in the sense that any bounded \(\dvr\)\nb-linear map from~\(M\) to a complete bornological \(\dvr\)\nb-module~\(X\) factors uniquely through~\(\comb{M}\). Such a completion exists and is unique up to isomorphism (see~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~2.15}). We shall describe it more concretely later when we need the details of its construction.

\subsection{Vector spaces over the fraction field}
\label{sec:dvf-vector_spaces}

Recall that~\(\dvf\) denotes the quotient field of~\(\dvr\). Any \(\dvr\)\nb-linear map between two \(\dvf\)\nb-vector spaces is also \(\dvf\)\nb-linear. So \(\dvf\)\nb-vector spaces with \(\dvf\)\nb-linear maps form a full subcategory in the category of \(\dvr\)\nb-modules. A \(\dvr\)\nb-module~\(M\) comes from a \(\dvf\)\nb-vector space if and only if the map
\begin{equation}
\label{eq:mult_dvgen}
\dvgen_M\colon M\to M,\qquad m\mapsto \dvgen\cdot m,
\end{equation}
is invertible.
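For example, \(\dvgen_M\) is invertible for \(M = \dvf\). For \(M = \dvr\), it is injective but not surjective, and for \(M = \resf\) it is the zero map. So neither \(\dvr\) nor~\(\resf\) comes from a \(\dvf\)\nb-vector space.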
We could define bornological \(\dvf\)\nb-vector spaces without reference to~\(\dvr\). Instead, we realise them as bornological \(\dvr\)\nb-modules with an extra property:

\begin{definition}
A bornological \(\dvr\)\nb-module~\(M\) is a \emph{bornological \(\dvf\)\nb-vector space} if the map~\(\dvgen_M\) in~\eqref{eq:mult_dvgen} is a bornological isomorphism, that is, an invertible map with bounded inverse.
\end{definition}

Given a bornological \(\dvr\)\nb-module~\(M\), the tensor product \(\dvf\otimes M \defeq \dvf\otimes_\dvr M\) with the tensor product bornology (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma~2.18}) is a bornological \(\dvf\)\nb-vector space because multiplication by~\(\dvgen\) is a bornological isomorphism on~\(\dvf\).

\begin{lemma}
\label{lem:universal_to_vector_space}
The canonical bounded \(\dvr\)\nb-linear map
\[
\iota_M \colon M \to \dvf\otimes M,\qquad m \mapsto 1 \otimes m,
\]
is the universal arrow from~\(M\) to a bornological \(\dvf\)\nb-vector space, that is, any bounded \(\dvr\)\nb-linear map \(f\colon M\to N\) to a bornological \(\dvf\)\nb-vector space~\(N\) factors uniquely through a bounded \(\dvr\)\nb-linear map \(f^\#\colon \dvf\otimes M \to N\), and this map is also \(\dvf\)\nb-linear.
\end{lemma}

\begin{proof}
A \(\dvr\)\nb-linear map \(f^\#\colon \dvf\otimes M \to N\) must be \(\dvf\)\nb-linear. Hence the only possible candidate is the \(\dvf\)\nb-linear map defined by \(f^\#(x\otimes m) \defeq x\cdot f(m)\) for \(m\in M\), \(x\in \dvf\). Any bounded submodule of \(\dvf\otimes M\) is contained in \(\dvgen^{-k}\dvr \otimes S\) for some bounded submodule \(S\subseteq M\) and some \(k\in\mathbb N\), and \(f^\#(\dvgen^{-k}\dvr \otimes S) = \dvgen_N^{-k}(f(S))\) is bounded in~\(N\) because~\(\dvgen_N\) is a bornological isomorphism. Thus~\(f^\#\) is bounded.
\end{proof}

\section{Spectral radius and semi-dagger algebras}
\label{sec:semi-dagger}

A \emph{bornological \(\dvr\)\nb-algebra} is a bornological \(\dvr\)\nb-module~\(A\) with a bounded, \(\dvr\)\nb-linear, associative multiplication. We do not assume~\(A\) to have a unit element. We fix a bornological \(\dvr\)\nb-algebra~\(A\) throughout this section.

We recall some definitions from~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}. Let \(\varepsilon = \abs{\dvgen}\). Let \(S\in\bdd_A\) and let \(0<r\le1\). There is a smallest integer~\(j\) with \(\varepsilon^j \le r\), namely, \(\ceil{\log_{\varepsilon}(r)}\). Define
\[
r \star S \defeq \dvgen^{\ceil{\log_{\varepsilon}(r)}} \cdot S.
\]
Let \(\sum_{n=1}^\infty r^n \star S^n\) be the \(\dvr\)\nb-submodule generated by \(\bigcup_{n=1}^\infty r^n \star S^n\). That is, its elements are finite \(\dvr\)\nb-linear combinations of elements in \(\bigcup_{n=1}^\infty r^n \star S^n\).

\begin{definition}
The \emph{truncated spectral radius} \(\varrho_1(S) = \varrho_1(S;\bdd_A)\) of \(S \in\bdd_A\) is the infimum of all \(r\ge 1\) for which \(\sum_{n=1}^\infty r^{-n} \star S^n\) is bounded. It is~\(\infty\) if no such~\(r\) exists.
\end{definition}

By definition, \(\varrho_1(S) \in [1,\infty]\). If~\(A\) is an algebra over the fraction field~\(\dvf\) of~\(\dvr\), then we may define \(\sum_{n=1}^\infty r^{-n} \star S^n\) also for \(0<r<1\). Then the full spectral radius \(\varrho(S) \in [0,\infty]\) is defined like~\(\varrho_1(S)\), but without the restriction to \(r\ge1\).
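For illustration, let \(A = \dvf\) with the bornology whose bounded subsets are the subsets of \(\dvgen^{-k}\dvr\) for \(k\in\mathbb N\), and let \(S = \dvgen^{-1}\dvr\). Then \(S^n = \dvgen^{-n}\dvr\), and \(r^{-n} \star S^n = \dvgen^{\ceil{n \log_{\varepsilon}(r^{-1})} - n}\cdot \dvr\) for \(r\ge1\). So \(\sum_{n=1}^\infty r^{-n} \star S^n\) is bounded if and only if \(\log_{\varepsilon}(r^{-1}) \ge 1\), that is, \(r\ge \varepsilon^{-1}\). Hence \(\varrho_1(S) = \varrho(S) = \varepsilon^{-1}\).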
The arguments in the beginning of Section~3.1 in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} assume implicitly that the \(\dvr\)\nb-algebra~\(A\) is a bornological subalgebra in a \(\dvf\)\nb-algebra, so that the spectral radius is defined without truncation. This means that~\(A\) is bornologically torsion-free (see Proposition~\ref{pro:bornological_torsion_characterisation}). The results in \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Section~3.1} work without this assumption if the truncated spectral radius is used throughout and the following lemma is used instead of \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma~3.1.2}:

\begin{lemma}
\label{lgb_equivalence}
Let \(S\subseteq A\) be a bounded \(\dvr\)\nb-submodule and \(m\in \mathbb N_{\ge1}\). Then \(\varrho_1(S) = 1\) if and only if \(\sum_{l=1}^\infty (\dvgen^m S^j)^l\) is bounded for all \(j \in \mathbb N_{\ge1}\).
\end{lemma}

\begin{proof}
Let \(\varrho_1(S)= 1\) and \(j \in \mathbb N_{\ge1}\). Then \(\sum_{l=1}^\infty \dvgen^{m \cdot l} S^{j\cdot l} \subseteq \sum_{k=1}^\infty (\varepsilon^{m/j})^k \star S^k\) is bounded because \(\varepsilon^{m/j}<1\). Conversely, let \(\sum_{l=1}^\infty (\dvgen^m S^j)^l\) be bounded. Then \(\varrho_1(\dvgen^m S^j)= 1\). The proof of \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma~3.1.2} shows that \(\varrho_1(S) \le \varepsilon^{-m/j}\). This inequality for all \(j\in \mathbb N_{\ge1}\) implies \(\varrho_1(S) = 1\).
\end{proof}

\begin{definition}
A bornological \(\dvr\)\nb-algebra~\(A\) is \emph{semi-dagger} if \(\varrho_1(S) = 1\) for all \(S\in\bdd_A\).
\end{definition}

\begin{proposition}[\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~3.1.3}]
\label{pro:semi-dagger_criterion}
A bornological \(\dvr\)\nb-algebra~\(A\) is semi-dagger if and only if \(\sum_{i=0}^\infty \dvgen^i S^{i+1}\) is bounded for all \(S\in\bdd_A\), if and only if \(\sum_{i=0}^\infty \dvgen^i S^{ci+d}\) is bounded for all \(S\in\bdd_A\) and \(c,d\in \mathbb N\) with \(d\ge 1\), if and only if any \(S\in\bdd_A\) is contained in a bounded \(\dvr\)\nb-submodule \(U\subseteq A\) with \(\dvgen\cdot U\cdot U \subseteq U\).
\end{proposition}

\begin{definition}
The \emph{linear growth bornology} on a bornological \(\dvr\)\nb-algebra~\(A\) is the smallest semi-dagger bornology on~\(A\) that contains~\(\bdd_A\). That is, it is the smallest bornology~\(\bdd'_A\) containing~\(\bdd_A\) with \(\varrho_1(S;\bdd'_A)= 1\) for all \(S\in\bdd'_A\). Let~\(\ling{A}\) be~\(A\) with the linear growth bornology.
\end{definition}

The existence of a smallest semi-dagger bornology is shown in~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean} by describing it explicitly as follows:

\begin{lemma}[\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~3.1.3 and Lemma~3.1.10}]
\label{equivalent_semidagger}
Let \(T\subseteq A\). The following are equivalent:
\begin{enumerate}
\item \label{equivalent_semidagger_1}%
\(T\) is bounded in~\(\ling{A}\);
\item \label{equivalent_semidagger_2}%
\(T \subseteq \sum_{i=0}^\infty \dvgen^i S^{i+1}\) for some \(S\in\bdd_A\);
\item \label{equivalent_semidagger_3}%
\(T \subseteq \sum_{i=0}^\infty \dvgen^i S^{ci+d}\) for some \(S\in\bdd_A\) and \(c,d\in \mathbb N\) with \(d\ge 1\).
\end{enumerate}
\end{lemma}

More precisely, the proof of \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma~3.1.10} shows that the subsets in~\ref{equivalent_semidagger_2} form a semi-dagger bornology that contains~\(\bdd_A\).
And Proposition~\ref{pro:semi-dagger_criterion} shows that these subsets are bounded in any semi-dagger bornology on~\(A\) that contains~\(\bdd_A\). So they form the smallest semi-dagger bornology containing~\(\bdd_A\).

By definition, the algebra~\(\ling{A}\) has the following universal property: if~\(B\) is a semi-dagger \(\dvr\)\nb-algebra, then an algebra homomorphism \(A \to B\) is bounded if and only if it is bounded on~\(\ling{A}\). The algebra~\(A\) is semi-dagger if and only if \(A=\ling{A}\).

\begin{theorem}
\label{the:extension_lgb}
Let \(A \xrightarrow{i} B \xrightarrow{q} C\) be an extension of bornological \(\dvr\)\nb-algebras. Then~\(B\) is a semi-dagger algebra if and only if both \(A\) and~\(C\) are.
\end{theorem}

\begin{proof}
First assume~\(B\) to be semi-dagger. Let \(S\in\bdd_A\). Then \(\sum_{j=0}^\infty \dvgen^j i(S)^{j+1}\) is bounded in~\(B\). Since~\(i\) is a bornological embedding, it follows that \(\sum_{j=0}^\infty \dvgen^j S^{j+1}\) is bounded in~\(A\). That is, \(\varrho_1(S;\bdd_A) = 1\). So~\(A\) is semi-dagger. Now let \(S\in\bdd_C\). Since~\(q\) is a bornological quotient map, there is \(T\in\bdd_B\) with \(q(T) = S\). The subset \(\sum_{j=0}^\infty \dvgen^j T^{j+1}\) is bounded in~\(B\) because~\(B\) is semi-dagger. Its image under~\(q\) is also bounded, and this is \(\sum_{j=0}^\infty \dvgen^j S^{j+1}\). So \(\varrho_1(S;\bdd_C) = 1\) and~\(C\) is semi-dagger.

Now assume that \(A\) and~\(C\) are semi-dagger. We show that \(\sum_{l=1}^\infty(\dvgen^2 S^j)^l\) is bounded in~\(B\) for all \(S\in\bdd_B\), \(j\in\mathbb N_{\ge1}\). This implies \(\varrho_1(S;\bdd_B)=1\) by Lemma~\ref{lgb_equivalence}. Since~\(C\) is semi-dagger, \(\varrho_1(q(S);\bdd_C)= 1\). Thus \(S_2 \defeq \sum_{l=1}^\infty q(\dvgen S^j)^l\) is bounded in~\(C\) by Lemma~\ref{lgb_equivalence}. Since~\(q\) is a quotient map, there is \(T \in\bdd_B\) with \(q(T) = S_2\). We may choose~\(T\) with \(\dvgen S^j \subseteq T\). For each \(x,y \in T\), we have \(q(x\cdot y) \in S_2 \cdot S_2 \subseteq S_2 = q(T)\). Hence there is \(\omega(x,y) \in T\) with \(x\cdot y - \omega(x,y) \in i(A)\). Let
\[
\Omega \defeq \setgiven{x\cdot y - \omega(x,y)}{x,y\in T}.
\]
This is contained in \(T^2 - T\). So \(\Omega\in\bdd_B\). And \(T^2 \subseteq T + \Omega\). By construction, \(\Omega\) is also contained in~\(i(A)\). Since~\(i\) is a bornological embedding, \(i^{-1}(\Omega)\) is bounded in~\(A\). Since~\(A\) is semi-dagger, we have \(\varrho_1(i^{-1}(\Omega); \bdd_A) = 1\). So \(\sum_{n=1}^\infty (\dvgen \cdot \Omega)^n\) is bounded. Thus the subset
\[
U \defeq \sum_{n=1}^\infty (\dvgen \cdot \Omega)^n + \sum_{n=0}^\infty T \cdot (\dvgen \cdot \Omega)^n = \sum_{n=1}^\infty (\dvgen \cdot \Omega)^n + T + \sum_{n=1}^\infty T \cdot (\dvgen \cdot \Omega)^n
\]
of~\(B\) is bounded. Using \(T^2 \subseteq T + \Omega\), we prove that \(\dvgen T\cdot U \subseteq U\): indeed, \(\dvgen T\cdot (\dvgen\Omega)^n \subseteq \dvgen\cdot\bigl(T\cdot (\dvgen\Omega)^n\bigr)\), \(\dvgen T\cdot T \subseteq \dvgen T + \dvgen \Omega\), and \(\dvgen T\cdot T\cdot (\dvgen\Omega)^n \subseteq \dvgen T\cdot (\dvgen\Omega)^n + (\dvgen\Omega)^{n+1}\), and all these sets are contained in the \(\dvr\)\nb-submodule~\(U\). Hence \((\dvgen T)^n\cdot U \subseteq U\) for all \(n\in\mathbb N_{\ge1}\) by induction. Since \(T \subseteq U\), this implies \(\sum_{l=1}^\infty \dvgen^l T^{l+1} \subseteq U\). Hence \(\sum_{l=2}^\infty (\dvgen T)^l = \dvgen \cdot \sum_{l=1}^\infty \dvgen^l T^{l+1} \subseteq \dvgen U\). Therefore, \(\sum_{l=1}^\infty (\dvgen T)^l\) is bounded. Since \(\dvgen^2 S^j \subseteq \dvgen T\), it follows that \( \sum_{l=1}^\infty (\dvgen^2 S^j)^l\) is bounded for all~\(j\), as desired.
\end{proof}

\begin{proposition}[\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma 3.1.12}]
\label{pro:completion_semidagger}
If~\(A\) is semi-dagger, then so is its completion~\(\comb{A}\).
\end{proposition}

Let~\(\comling{A}\) be the completion of~\(\ling{A}\). This algebra is both complete and semi-dagger by Proposition~\ref{pro:completion_semidagger}. The canonical bounded homomorphism \(A\to \comling{A}\) is the universal arrow from~\(A\) to a complete semi-dagger algebra, that is, any bounded homomorphism \(A\to B\) for a complete semi-dagger algebra~\(B\) factors uniquely through it. This follows immediately from the universal properties of the linear growth bornology and the completion.

\section{Bornological torsion-freeness}
\label{sec:born_tf}

Let~\(M\) be a bornological \(\dvr\)\nb-module. The bounded linear map \(\dvgen_M\colon M\to M\), \(m\mapsto \dvgen\cdot m\), is defined in~\eqref{eq:mult_dvgen}.

\begin{definition}
A bornological \(\dvr\)\nb-module~\(M\) is \emph{bornologically torsion-free} if~\(\dvgen_M\) is a bornological embedding. Equivalently, \(\dvgen \cdot m = 0\) for \(m\in M\) only happens for \(m = 0\), and any bounded subset \(S\) of~\(M\) that is contained in \(\dvgen \cdot M\) is of the form \(S = \dvgen \cdot T\) for some \(T \in \bdd_M\).
\end{definition}

Bornological \(\dvf\)\nb-vector spaces are bornologically torsion-free because bornological isomorphisms are bornological embeddings. We are going to show that~\(M\) is bornologically torsion-free if and only if the canonical map \(\iota_M\colon M\to \dvf\otimes M\) defined in Lemma~\ref{lem:universal_to_vector_space} is a bornological embedding. The proof uses the following easy permanence property:

\begin{lemma}
\label{sub_tf}
Let~\(M\) be a bornological \(\dvr\)\nb-module and let \(N\subseteq M\) be a \(\dvr\)\nb-submodule with the subspace bornology. If~\(M\) is bornologically torsion-free, then so is~\(N\).
\end{lemma}

\begin{proof}
Let \(j\colon N\to M\) be the inclusion map, which is a bornological embedding by assumption. Since~\(\dvgen_M\) is a bornological embedding, so is \(\dvgen_M\circ j = j\circ \dvgen_N\). Since~\(j\) is a bornological embedding, this implies that~\(\dvgen_N\) is a bornological embedding. That is, \(N\) is bornologically torsion-free.
\end{proof}

\begin{proposition}
\label{pro:bornological_torsion_characterisation}
A bornological \(\dvr\)\nb-module~\(M\) is bornologically torsion-free if and only if the canonical map \(\iota_M\colon M \to \dvf\otimes M\) is a bornological embedding.
\end{proposition}

\begin{proof}
As a bornological \(\dvf\)\nb-vector space, \(\dvf\otimes M\) is bornologically torsion-free. Hence~\(M\) is bornologically torsion-free by Lemma~\ref{sub_tf} if~\(\iota_M\) is a bornological embedding.

Conversely, assume that~\(M\) is bornologically torsion-free. The map~\(\iota_M\) is injective because~\(M\) is algebraically torsion-free. It remains to show that a subset~\(S\) of~\(M\) is bounded if \(\iota_M(S)\subseteq \dvf\otimes M\) is bounded. If~\(\iota_M(S)\) is bounded, then it is contained in \(\dvgen^{-k}\cdot \dvr\otimes T\) for some \(k\in\mathbb N\) and some \(T\in\bdd_M\). Equivalently, \(\dvgen_M^k(S) = \dvgen^k\cdot S\) is bounded in~\(M\). Since~\(\dvgen_M\) is a bornological embedding, induction shows that \(\dvgen_M^k\colon M\to M\), \(m\mapsto \dvgen^k\cdot m\), is a bornological embedding as well. So the boundedness of \(\dvgen_M^k(S)\) implies that~\(S\) is bounded.
\end{proof}

\begin{proposition}
\label{universal_torsionfree}
Let \(\torf{M} \defeq \iota_M(M)\subseteq \dvf\otimes M\) equipped with the subspace bornology and the surjective bounded linear map \(\iota_M\colon M\to \torf{M}\). This is the universal arrow from~\(M\) to a bornologically torsion-free module, that is, any bounded linear map \(f\colon M\to N\) into a bornologically torsion-free module~\(N\) factors uniquely through a bounded linear map \(f^\#\colon \torf{M} \to N\).
\end{proposition}

\begin{proof}
Since \(\dvf\otimes M\) is bornologically torsion-free as a bornological \(\dvf\)\nb-vector space, \(\torf{M}\) is bornologically torsion-free as well by Lemma~\ref{sub_tf}. We prove the universality of the canonical map \(\iota_M\colon M\to \torf{M}\). Let~\(N\) be a bornologically torsion-free \(\dvr\)\nb-module and let \(f\colon M \to N\) be a bornological \(\dvr\)\nb-module map. Then \(N \hookrightarrow \dvf\otimes N\) is a bornological embedding by Proposition~\ref{pro:bornological_torsion_characterisation}, and we may compose to get a bounded \(\dvr\)\nb-linear map \(M \to \dvf\otimes N\). By Lemma~\ref{lem:universal_to_vector_space}, there is a unique bounded \(\dvf\)\nb-linear map \(f'\colon \dvf\otimes M \to \dvf\otimes N\) with \(f'(\iota_M(m)) = f(m)\) for all \(m\in M\). Since \(f'(\iota_M(M)) \subseteq N\), \(f'\) maps the submodule \(\torf{M}\subseteq \dvf\otimes M\) to the submodule \(N\subseteq \dvf\otimes N\). The restricted map \(f^\#\colon \torf{M} \to N\) is bounded because both submodules carry the subspace bornology. This is the required factorisation of~\(f\). It is unique because \(\iota_M\colon M\to\torf{M}\) is surjective.
\end{proof}

We have seen that being bornologically torsion-free is hereditary for submodules. The obvious counterexample \(\resf = \dvr \mathbin{/} \dvgen \dvr\) shows that it cannot be hereditary for quotients. Next we show that it is hereditary for extensions:

\begin{theorem}
\label{the:extension_tf}
Let \(M' \xrightarrow{i} M \xrightarrow{q} M''\) be an extension of bornological \(\dvr\)\nb-modules. If \(M'\) and~\(M''\) are bornologically torsion-free, then so is~\(M\).
\end{theorem}

\begin{proof}
The exactness of the sequence \(0 \to \ker \dvgen_{M'} \to \ker \dvgen_M \to \ker \dvgen_{M''}\) shows that~\(\dvgen_M\) is injective. Let \(S\in \bdd_M\) be contained in~\(\dvgen M\). We want a bounded subset \(S' \in \bdd_M\) with \(\dvgen \cdot S' = S\). We have \(q(S) \subseteq q(\dvgen\cdot M) \subseteq \dvgen\cdot M''\), and \(q(S)\in \bdd_{M''}\) because~\(q\) is bounded. Since~\(M''\) is bornologically torsion-free, there is \(T''\in\bdd_{M''}\) with \(\dvgen\cdot T'' = q(S)\). Since~\(q\) is a bornological quotient map, there is \(T\in \bdd_M\) with \(q(T) = T''\). Thus \(q(\dvgen\cdot T) = q(S)\). So for any \(x\in S\) there is \(y\in T\) with \(q(\dvgen\cdot y) = q(x)\). Since \(i = \ker(q)\), there is a unique \(z\in M'\) with \(x-\dvgen y = i(z)\). Let~\(T'\) be the set of these~\(z\). Since \(x\in \dvgen\cdot M\) by assumption and~\(M''\) is torsion-free, we have \(z \in \dvgen \cdot M'\). So \(T'\subseteq \dvgen\cdot M'\). And~\(T'\) is bounded because \(T' \subseteq i^{-1}(S-\dvgen\cdot T)\) and~\(i\) is a bornological embedding. Since~\(M'\) is bornologically torsion-free, there is a bounded subset \(U'\in \bdd_{M'}\) with \(\dvgen \cdot U' = T'\). Then \(S\subseteq \dvgen\cdot T + i(\dvgen\cdot U') = \dvgen\cdot (T + i(U'))\).
\end{proof} Next we prove that bornological torsion-freeness is inherited by completions: \begin{theorem} \label{the:completion_tf} If~\(M\) is bornologically torsion-free, then so is its bornological completion~\(\comb{M}\). \end{theorem} The proof requires some preparation. We must look closely at the construction of completions of bornological \(\dvr\)\nb-modules. \begin{proposition}[\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~2.15}] \label{completions_exist} Let~\(M\) be a bornological \(\dvr\)\nb-module. A completion of~\(M\) exists and is constructed as follows. Write \(M = \varinjlim M_i\) as an inductive limit of the directed set of its bounded \(\dvr\)\nb-submodules. Let~\(\coma{M_i}\) denote the \(\dvgen\)\nb-adic completion of~\(M_i\). These form an inductive system as well, and \(\comb{M} \cong (\varinjlim \coma{M_i} ) \bigm/ \overline{\{0\}}\) is the separated quotient of their bornological inductive limit. The completion functor commutes with colimits, that is, the completion of a colimit of a diagram of bornological \(\dvr\)\nb-modules is the separated quotient of the colimit of the diagram of completions. \end{proposition} Since taking quotients may create torsion, the information above is not yet precise enough to show that completions inherit bornological torsion-freeness. This requires some more work. First we write~\(M\) in a certain way as an inductive limit, using that it is bornologically torsion-free. For a bounded submodule~\(S\) in~\(M\), let \begin{align*} \dvgen^{-n} S &\defeq \setgiven{x\in M}{\dvgen^n \cdot x \in S} \subseteq M\\ M_S &\defeq \bigcup_{n\in\mathbb N} \dvgen^{-n} S \subseteq M. \end{align*} The \emph{gauge semi-norm} of~\(S\) is defined by \(\norm{x}_S \defeq \inf \setgiven{\varepsilon^n}{x\in\dvgen^n S}\), where \(\varepsilon=\abs{\dvgen}\) (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Example~2.4}). A subset is bounded for this semi-norm if and only if it is contained in~\(\dvgen^{-n} S\) for some \(n\in\mathbb N\). Since~\(M\) is bornologically torsion-free, \(\dvgen^{-n} S\in\bdd_M\) for \(n\in\mathbb N\). So subsets that are bounded in the gauge semi-norm on~\(M_S\) are bounded in~\(M\). If \(S\subseteq T\), then \(M_S\subseteq M_T\) and the inclusion is contracting and hence bounded. The bornological inductive limit of this inductive system is naturally isomorphic to~\(M\) because any bounded subset of~\(M\) is bounded in~\(M_S\) for some bounded submodule \(S\subseteq M\) (compare the proof of \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~2.5}). The bornological completion~\(\comb{M_S}\) of~\(M_S\) as a bornological \(\dvr\)\nb-module is canonically isomorphic to its Hausdorff completion as a semi-normed \(\dvr\)\nb-module. We call this a \emph{Banach \(\dvr\)\nb-module}. Both completions are isomorphic to the increasing union of the \(\dvgen\)\nb-adic completions \(\coma{\dvgen^{-n}S}\). If \(S\subseteq T\), then \(M_S\subseteq M_T\) and this inclusion is norm-contracting. So we get an induced contractive linear map \(i_{T,S}\colon \comb{M_S} \to \comb{M_T}\). This map need not be injective any more (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Example~2.15}). Hence the canonical maps \(i_{\infty,S}\colon \comb{M_S} \to \comb{M}\) need not be injective. The bornological completion commutes with (separated) inductive limits by Proposition~\ref{completions_exist}. 
So the completion of~\(M\) is isomorphic to the separated quotient of the colimit of the inductive system formed by the Banach \(\dvr\)\nb-modules \(\comb{M_S}\) and the norm-contracting maps \(i_{T,S}\) for \(S\subseteq T\).

\begin{lemma}
\label{lem:kernel_to_completion}
The submodules
\[
Z_S\defeq \ker(i_{\infty,S}\colon \comb{M_S} \to \comb{M}) = i_{\infty,S}^{-1}(\{0\}) \subseteq \comb{M_S}
\]
are norm-closed and satisfy \(i_{T,S}^{-1}(Z_T) = Z_S\) if \(S\subseteq T\). They are minimal with these properties in the sense that if \(L_S \subseteq \comb{M_S}\) are norm-closed and satisfy \(i_{T,S}^{-1}(L_T) = L_S\) for \(S\subseteq T\), then \(Z_S \subseteq L_S\) for all bounded \(\dvr\)\nb-submodules \(S\subseteq M\).
\end{lemma}

\begin{proof}
The property \(i_{T,S}^{-1}(Z_T) = Z_S\) is trivial. The map~\(i_{\infty,S}\) is bounded and hence preserves convergence of sequences. Since~\(\comb{M}\) is separated, the subset \(\{0\}\subseteq \comb{M}\) is bornologically closed. Therefore, its preimage~\(Z_S\) in~\(\comb{M_S}\) is also closed.

Let~\((L_S)\) be any family of closed submodules with \(i_{T,S}^{-1}(L_T) = L_S\). The quotient seminorm on \(\comb{M_S}/L_S\) is again a norm because~\(L_S\) is closed. And \(\comb{M_S}/L_S\) inherits completeness from~\(\comb{M_S}\) by Theorem~\ref{the:extension_complete}. If \(S\subseteq T\), then~\(i_{T,S}\) induces an injective map \(i_{T,S}'\colon \comb{M_S}/L_S \to \comb{M_T}/L_T\) because \(L_S = i_{T,S}^{-1}(L_T)\). Hence the colimit of the inductive system \((\comb{M_S}/L_S,i_{T,S}')\) is like a directed union of subspaces, and each \(\comb{M_S}/L_S\) maps faithfully into it. Thus this colimit is separated. It is even complete because each \(\comb{M_S}/L_S\) is complete. Hence the map from~\(M\) to this colimit induces a map on the completion~\(\comb{M}\). This implies \(Z_S \subseteq L_S\).
\end{proof}

Next we link~\(\comb{M}\) to the \(\dvgen\)\nb-adic completion \(\coma{M} \defeq \varprojlim M/\dvgen^j M\). Equip the quotients \(M/\dvgen^j M\) with the quotient bornology. Since \(\dvgen^j\cdot (M/\dvgen^j M) = 0\), any Cauchy sequence in \(M/\dvgen^j M\) is eventually constant. So each \(M/\dvgen^j M\) is complete. Hence the quotient map \(M\to M/\dvgen^j M\) induces a bounded \(\dvr\)\nb-module homomorphism \(\comb{M} \to M/\dvgen^j M\). Putting them all together gives a map \(\comb{M} \to \coma{M}\), which is bounded if we give~\(\coma{M}\) the projective limit bornology.

Let \(S\subseteq M\) be a bounded \(\dvr\)\nb-submodule and let \(j\in\mathbb N\). We have defined the submodules~\(M_S\) so that \(M_S \cap \dvgen^j M = \dvgen^j M_S\). That is, the map \(M_S/ \dvgen^j M_S \to M/\dvgen^j M\) is injective. Since~\(M_S\) is dense in its norm-completion~\(\comb{M_S}\), we have \(\comb{M_S} = M_S + \dvgen^j \coma{S}\) and hence \(\comb{M_S} = M_S + \dvgen^j \comb{M_S}\). Thus the inclusion \(M_S \to \comb{M_S}\) induces an isomorphism
\[
M_S/\dvgen^j M_S \cong \comb{M_S}/\dvgen^j \comb{M_S}.
\]
Letting~\(j\) vary, we get an injective map \(\coma{M_S} \to \coma{M}\) and an isomorphism between the \(\dvgen\)\nb-adic completions of \(M_S\) and~\(\comb{M_S}\).

\begin{proof}[Proof of Theorem~\textup{\ref{the:completion_tf}}]
For each bounded \(\dvr\)\nb-submodule \(S\subseteq M\), define \(L_S \defeq \bigcap_{j\in\mathbb N} \dvgen^j\cdot \comb{M_S} \subseteq \comb{M_S}\). This is the kernel of the canonical map to the \(\dvgen\)\nb-adic completion of~\(\comb{M_S}\). The completion~\(\comb{M_S}\) is torsion-free because it carries a norm.
Hence~\(L_S\) is also the largest \(\dvf\)\nb-vector space contained in~\(\comb{M_S}\). The subspace~\(L_S\) is closed because the maps \(\comb{M_S} \to \comb{M_S}/\dvgen^j \comb{M_S}\) for \(j\in\mathbb N\) are bounded and their target spaces are separated, even complete.

Let \(S\subseteq T\). The maps \(M_S/\dvgen^j M_S \to M_T/\dvgen^j M_T\) are injective for all \(j\in\mathbb N\), and \(\comb{M_S}/\dvgen^j \comb{M_S} \cong M_S/\dvgen^j M_S\), \(\comb{M_T}/\dvgen^j \comb{M_T} \cong M_T/\dvgen^j M_T\). So~\(i_{T,S}\) induces an injective map \(\comb{M_S}/\dvgen^j \comb{M_S} \to \comb{M_T}/\dvgen^j \comb{M_T}\). This implies \(i_{T,S}^{-1}(\dvgen^j \comb{M_T}) = \dvgen^j \comb{M_S}\) for all \(j\in\mathbb N\) and then \(i_{T,S}^{-1}(L_T) = L_S\). By Lemma~\ref{lem:kernel_to_completion}, the kernel \(Z_S = \ker(i_{\infty,S})\) is contained in~\(L_S\) for all~\(S\). Since \(\dvgen_{L_S}\) is a bornological isomorphism, the subsets \(\dvgen\cdot Z_S \subseteq Z_S\) are also bornologically closed, and they satisfy \(i_{T,S}^{-1}(\dvgen\cdot Z_T) = \dvgen\cdot i_{T,S}^{-1}(Z_T) = \dvgen\cdot Z_S\). Hence \(Z_S \subseteq \dvgen\cdot Z_S\) for all~\(S\) by Lemma~\ref{lem:kernel_to_completion}. Thus \(Z_S\subseteq L_S\) is a \(\dvf\)\nb-vector subspace in~\(\comb{M_S}\). So the quotient \(\comb{M_S}/Z_S\) is still bornologically torsion-free. And any element of \(\comb{M_S}/Z_S\) that is divisible by~\(\dvgen^j\) lifts to an element in \(\dvgen^j\cdot \comb{M_S}\).

Any bounded subset of~\(\comb{M}\) is contained in \(i_{\infty,S}(\coma{S})\) for some bounded \(\dvr\)\nb-submodule \(S\subseteq M\), where we view~\(\coma{S}\) as a subset of~\(\comb{M_S}\). Let \(j\in\mathbb N\). To prove that~\(\comb{M}\) is bornologically torsion-free, we must show that \(\dvgen^{-j} i_{\infty,S}(\coma{S})\) is bounded. Let \(x\in\comb{M}\) satisfy \(\dvgen^j x \in i_{\infty,S}(\coma{S})\). We claim that \(x=i_{\infty,S}(y)\) for some \(y\in \comb{M_S}\) with \(\dvgen^j y \in\coma{S}\). This implies that \(\dvgen^{-j}\cdot i_{\infty,S}(\coma{S})\) is bounded in~\(\comb{M}\).

It remains to prove the claim. There are a bounded \(\dvr\)\nb-submodule \(T\subseteq M\) and \(z\in \comb{M_T}\) with \(x=i_{\infty,T}(z)\). We may replace~\(T\) by \(T+S\) to arrange that \(T\supseteq S\). Let \(w\in \coma{S}\) satisfy \(\dvgen^j x = i_{\infty,S}(w)\). This is equivalent to \(\dvgen^j z - i_{T,S}(w) \in \ker i_{\infty,T} = Z_T\). Since~\(Z_T\) is a \(\dvf\)\nb-vector space, there is \(z_0\in Z_T\) with \(\dvgen^j z - i_{T,S}(w) = \dvgen^j z_0\). Since \(x=i_{\infty,T}(z-z_0)\), we may replace~\(z\) by \(z-z_0\) to arrange that \(\dvgen^j z = i_{T,S}(w)\). Since \(i_{T,S}^{-1}(\dvgen^j \comb{M_T}) = \dvgen^j \comb{M_S}\), there is \(y\in \comb{M_S}\) with \(\dvgen^j\cdot y = w\). Then \(\dvgen^j z = \dvgen^j i_{T,S}(y)\). This implies \(z = i_{T,S}(y)\) because~\(\comb{M_T}\) is torsion-free. This proves the claim.
\end{proof}

\begin{proposition}
\label{pro:born_tf_linear_completion}
Let~\(M\) be a bornologically torsion-free bornological \(\dvr\)\nb-module. Then \(\dvf\otimes \comb{M} \cong \comb{\dvf\otimes M}\) with an isomorphism compatible with the canonical maps from~\(M\) to both spaces.
\end{proposition}

\begin{proof}
The canonical map \(M\to \comb{M}\) is the universal arrow from~\(M\) to a complete \(\dvr\)\nb-module. The canonical map \(M\to \dvf\otimes M\) is the universal arrow from~\(M\) to a bornological \(\dvf\)\nb-vector space by Lemma~\ref{lem:universal_to_vector_space}.
Since \(\dvf\otimes \comb{M}\) is again complete, the canonical map \(M\to \dvf\otimes \comb{M}\) is the universal arrow from~\(M\) to a complete bornological \(\dvf\)\nb-vector space. The completion \(\comb{\dvf\otimes M}\) is also a bornological \(\dvf\)\nb-vector space. The canonical map \(M\to \comb{\dvf\otimes M}\) is another universal arrow from~\(M\) to a complete bornological \(\dvf\)\nb-vector space. Since the universal property determines its target uniquely up to canonical isomorphism, there is a unique isomorphism \(\dvf\otimes \comb{M} \cong \comb{\dvf\otimes M}\) that makes the following diagram commute: \[ \begin{tikzcd}[column sep=small, row sep=small] \dvf\otimes \comb{M} \arrow[rr, "\cong"] && \comb{\dvf\otimes M}\\ & M \arrow[ul] \arrow[ur] \end{tikzcd}\qedhere \] \end{proof} \begin{corollary} \label{cor:tf_embedding} If~\(M\) is bornologically torsion-free, then the canonical map \(\comb{M} \to \comb{\dvf\otimes M}\) is a bornological embedding. \end{corollary} \begin{proof} Use the isomorphism \(\dvf\otimes \comb{M} \cong \comb{\dvf\otimes M}\) to replace the canonical map \(\comb{M} \to \comb{\dvf\otimes M}\) by the canonical map \(\comb{M} \to \dvf\otimes \comb{M}\). This is a bornological embedding if and only if~\(\comb{M}\) is bornologically torsion-free by Proposition~\ref{pro:bornological_torsion_characterisation}. And this is true by Theorem~\ref{the:completion_tf}. \end{proof} Finally, we show that being bornologically torsion-free is compatible with linear growth bornologies: \begin{proposition} \label{pro:lgb_inherits_tf} If~\(A\) is a bornologically torsion-free \(\dvr\)\nb-algebra, then so is~\(\ling{A}\). \end{proposition} \begin{proof} Let \(S \subseteq \dvgen \cdot A\) be bounded in~\(\ling{A}\). Then there is \(T\in\bdd_A\) with \(S \subseteq T_1 \defeq \sum_{i=0}^\infty \dvgen^i T^{i+1}\) by Lemma~\ref{equivalent_semidagger}. The subset \(T_2 \defeq \sum_{i=0}^\infty \dvgen^i T^{i+2}\) also has linear growth. And \[ T_1 = T + \sum_{i=1}^\infty \dvgen^i T^{i+1} = T + \sum_{i=0}^\infty \dvgen^{i+1} T^{i+2} = T + \dvgen T_2. \] Since~\(T\) is bounded in~\(A\) and~\(A\) is bornologically torsion-free, \(\dvgen^{-1} \cdot T \defeq \setgiven{x\in A}{\dvgen \cdot x \in T}\) is also bounded. We have \(\dvgen^{-1}S \subseteq \dvgen^{-1} T_1 \subseteq \dvgen^{-1}\cdot T + T_2\). This is bounded in~\(\ling{A}\). \end{proof} The following proposition answers a question by Guillermo Corti\~nas: \begin{proposition} \label{pro:tensor_tf} Let \(M\) and~\(N\) be bornological \(\dvr\)\nb-modules. If \(M\) and~\(N\) are bornologically torsion-free, then so is \(M\otimes N\) with the tensor product bornology. \end{proposition} \begin{proof} Since \(M\) and~\(N\) are torsion-free, so is \(M\otimes N\), that is, multiplication by~\(\dvgen\) on \(M\otimes N\) is injective. Let \(U\subseteq M\otimes N\) be a subset such that \(\dvgen U\) is bounded. We must show that~\(U\) is bounded. By the definition of the tensor product bornology, there are bounded \(\dvr\)\nb-submodules \(S\subseteq M\), \(T\subseteq N\) such that \(\dvgen\cdot U \subseteq S\otimes T\). Define \[ \dvgen^{-1} S \defeq \setgiven{x\in M}{\dvgen x\in S},\qquad \dvgen^{-1} T \defeq \setgiven{y\in N}{\dvgen y\in T}. \] These subsets are bounded because \(\dvgen\cdot (\dvgen^{-1} S) \subseteq S\) and \(\dvgen\cdot (\dvgen^{-1} T) \subseteq T\) and \(M\) and~\(N\) are bornologically torsion-free. We claim that \(U\subseteq \dvgen^{-1} S \otimes \dvgen^{-1} T\). This shows that~\(U\) is bounded. Let \(u\in U\). 
We may write \(u = \sum_{j=1}^{r} x_j \otimes y_j\) with \(x_j\in M\), \(y_j\in N\). Since \(\dvgen \cdot u\in S\otimes T\), we may write \(\dvgen u = \sum_{k=1}^{s} z_k\otimes w_k\) with \(z_k\in S\), \(w_k\in T\). Let \(A\subseteq M\) and \(B\subseteq N\) be the \(\dvr\)\nb-submodules generated by the elements \(x_j,z_k\) and \(y_j,w_k\), respectively. These submodules are finitely generated and torsion-free, hence free. And the canonical map \(A\otimes B \to M \otimes N\) is injective. The submodules \(A\cap S\) and \(B\cap T\) are also free. Any \(\dvr\)\nb-module homomorphism between finitely generated free \(\dvr\)\nb-modules may be brought into diagonal form with entries in \(\setgiven{\dvgen^n}{n\in\mathbb N} \cup\{0\}\) along the diagonal by choosing appropriate bases in the \(\dvr\)\nb-modules. Therefore, there are \(\dvr\)\nb-module bases \(a_1,\dotsc,a_n\) and \(b_1,\dotsc,b_m\) of \(A\) and~\(B\), respectively, and \(1\le n' \le n\), \(1\le m' \le m\), \(0 \le \alpha_1 \le \alpha_2 \le \dotsb \le \alpha_{n'}\), \(0 \le \beta_1 \le \beta_2 \le \dotsb \le \beta_{m'}\), such that \(\dvgen^{\alpha_i}\cdot a_i\) and \(\dvgen^{\beta_j}\cdot b_j\) for \(1 \le i \le n'\) and \(1 \le j \le m'\) are \(\dvr\)\nb-module bases of \(A\cap S\) and \(B\cap T\), respectively. We may write \(u\in A \otimes B\) uniquely in this basis as \(u = \sum_{i,j} u_{i,j} a_i \otimes b_j\) with \(u_{i,j}\in \dvr\). By assumption, \(\dvgen\cdot u = \sum_{k=1}^{s} z_k \otimes w_k \in (S\cap A) \otimes (T\cap B)\). By construction, the elements \(\dvgen^{\alpha_i+\beta_j} a_i \otimes b_j\) form a \(\dvr\)\nb-module basis of \((S\cap A) \otimes (T\cap B)\). Since the coefficients of~\(\dvgen u\) in the basis \(a_i \otimes b_j\) of \(A\otimes B\) are unique, it follows that \(u_{i,j}=0\) if \(i>n'\) or \(j>m'\), and \(\dvgen u_{i,j} \in \dvgen^{\alpha_i+\beta_j} \dvr\) for \(1\le i \le n'\) and \(1\le j \le m'\). Hence~\(u\) is a \(\dvr\)\nb-linear combination of \(\dvgen^{(\alpha_i-1)_+} a_i \otimes \dvgen^{(\beta_j-1)_+} b_j\) for \(1 \le i \le n'\), \(1 \le j \le m'\), where \(n_+ \defeq \max \{n,0\}\) for \(n\in\mathbb Z\). Since \(\dvgen^{(\alpha_i-1)_+} a_i \in \dvgen^{-1} S\), \(\dvgen^{(\beta_j-1)_+} b_j \in \dvgen^{-1} T\), this implies \(u \in \dvgen^{-1} S \otimes \dvgen^{-1} T\). Since \(u\in U\) was arbitrary, we get \(U \subseteq \dvgen^{-1} S \otimes \dvgen^{-1} T\).
\end{proof}

Theorem~\ref{the:completion_tf} and Proposition~\ref{pro:tensor_tf} imply that bornological torsion-freeness for complete bornological \(\dvr\)\nb-modules is hereditary for completed tensor products.

\section{Dagger algebras}
\label{sec:dagger}

\begin{definition}
A \emph{dagger algebra} is a complete, bornologically torsion-free, semi-dagger algebra.
\end{definition}

\begin{theorem}
Let \(A \xrightarrow{i} B \xrightarrow{p} C\) be an extension of bornological \(\dvr\)\nb-algebras. If~\(A\) and~\(C\) are dagger algebras, so is~\(B\).
\end{theorem}

\begin{proof}
All three properties defining dagger algebras are hereditary for extensions by Theorems \ref{the:extension_complete} (because~\(C\) is torsion-free), \ref{the:extension_lgb} and~\ref{the:extension_tf}.
\end{proof}

We have already seen that there are universal arrows \(A\to \torf{A} \subseteq \dvf\otimes A\), \(A\to\ling{A}\), \(A\to \comb{A}\) from a bornological algebra~\(A\) to a bornologically torsion-free algebra, to a semi-dagger algebra, and to a complete bornological algebra, respectively.
We now combine them to a universal arrow to a dagger algebra:

\begin{theorem}
\label{dagger_completion}
Let~\(A\) be a bornological algebra. Then the canonical map from~\(A\) to \(A^\updagger \defeq \comling{(\torf{A})}\) is the universal arrow from~\(A\) to a dagger algebra. That is, any bounded algebra homomorphism from~\(A\) to a dagger algebra factors uniquely through~\(A^\updagger\). If~\(A\) is already bornologically torsion-free, then \(A^\updagger \cong \comling{A}\).
\end{theorem}

\begin{proof}
The bornological algebra~\(A^\updagger\) is complete by construction. It is semi-dagger by Proposition~\ref{pro:completion_semidagger}. And it is bornologically torsion-free by Proposition~\ref{pro:lgb_inherits_tf} and Theorem~\ref{the:completion_tf}. So it is a dagger algebra.

Let~\(B\) be a dagger algebra. A bounded homomorphism \(A\to B\) factors uniquely through a bounded homomorphism \(\torf{A}\to B\) by Proposition~\ref{universal_torsionfree} because~\(B\) is bornologically torsion-free. This factors uniquely through a bounded homomorphism \(\ling{(\torf{A})}\to B\) because~\(B\) is semi-dagger. And this factors uniquely through a bounded homomorphism \(\comling{(\torf{A})}\to B\) because~\(B\) is complete. So~\(A^\updagger\) has the asserted universal property. If~\(A\) is bornologically torsion-free, then \(A\cong \torf{A}\) and hence \(A^\updagger \cong \comling{A}\).
\end{proof}

\begin{definition}
We call~\(A^\updagger\) the \emph{dagger completion} of the bornological \(\dvr\)\nb-algebra~\(A\).
\end{definition}

\section{Dagger completions of monoid algebras}
\label{sec:dagger_monoid}

As a simple illustration, we describe the dagger completions of monoid algebras. The monoid algebra of~\(\mathbb N^j\) is the algebra of polynomials in \(j\)~variables, and its dagger completion is the Monsky--Washnitzer algebra of overconvergent power series equipped with a canonical bornology (see~\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}). The case of general monoids is similar.

Let~\(S\) be a monoid, written multiplicatively. The monoid algebra~\(\dvr[S]\) of~\(S\) over~\(\dvr\) is defined by its universal property: if~\(B\) is a unital \(\dvr\)\nb-algebra, then there is a natural bijection between algebra homomorphisms \(\dvr[S] \to B\) and monoid homomorphisms \(S \to (B,\cdot)\) into the multiplicative monoid of~\(B\). More concretely, \(\dvr[S]\) is the free \(\dvr\)\nb-module with basis~\(S\) or, equivalently, the \(\dvr\)\nb-module of formal linear combinations of the form
\[
\sum_{s\in S} x_s \delta_s,\qquad x_s \in \dvr,\ s\in S,
\]
with \(x_s = 0\) for all but finitely many~\(s\), and equipped with the multiplication
\[
\sum_{s\in S} x_s \delta_s * \sum_{t\in S} y_t \delta_t = \sum_{s,t\in S} x_s y_t \delta_{s\cdot t}.
\]
We give~\(\dvr[S]\) the fine bornology. Then it has an analogous universal property in the category of bornological \(\dvr\)\nb-algebras. So the dagger completion \(\dvr[S]^\updagger\) is a dagger algebra with the property that bounded algebra homomorphisms \(\dvr[S]^\updagger\to B\) for a unital dagger algebra~\(B\) are in natural bijection with monoid homomorphisms \(S\to (B,\cdot)\).

Assume first that~\(S\) has a finite generating set~\(F\). Let \(F^n\subseteq S\) be the set of all words \(s_1\dotsm s_k\) with \(s_1,\dotsc,s_k\in F\) and \(k\le n\). This gives an increasing filtration on~\(S\) with \(F^0 = \{1\}\) and \(S = \bigcup_{n=0}^\infty F^n\). For \(s\in S\), we define \(\ell(s) \in \mathbb N\) as the smallest~\(n\) with \(s\in F^n\). This is the \emph{word length function} associated with the generating set~\(F\).
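For instance, for \(S = \mathbb N^j\) with the generating set~\(F\) of standard basis vectors, \(F^n\) consists of all \(\alpha \in \mathbb N^j\) with \(\alpha_1 + \dotsb + \alpha_j \le n\), and \(\ell(\alpha) = \alpha_1 + \dotsb + \alpha_j\) is the total degree of the corresponding monomial in the polynomial algebra \(\dvr[\mathbb N^j]\).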
Let \(\dvr[F^n] \subseteq \dvr[S]\) be the free \(\dvr\)\nb-submodule of~\(\dvr[S]\) spanned by~\(F^n\). Any finitely generated \(\dvr\)\nb-submodule of~\(\dvr[S]\) is contained in~\(\dvr[F^n]\) for some \(n\in \mathbb N\). By Lemma~\ref{equivalent_semidagger}, a subset of~\(\dvr[S]\) has linear growth if and only if it is contained in \(M_n \defeq \sum_{j=0}^\infty \dvgen^j (\dvr[F^n])^{j+1}\) for some \(n\in \mathbb N_{\ge 1}\). That is, \[ \ling{\dvr[S]} = \varinjlim M_n. \] Recall the valuation \(\nu \colon \dvr \to \mathbb N \cup \{\infty\}\) defined by \[ \nu(x) \defeq \sup {}\setgiven{n\in \mathbb N}{x \in \dvgen^n \dvr}. \] By definition, the submodule~\(M_n\) consists of all finite sums of terms~\(x_s \delta_s\) with \(x_s \in \dvgen^j\cdot \dvr\) and \(\ell(s) \le n(j+1)\) for some \(j\in\mathbb N\) or, equivalently, \(\ell(s)/n \le j+1 \le \nu(x_s)+1\). That is, \(M_n\) contains a finite sum \(\sum_{s\in S} x_s \delta_s\) with \(x_s\in \dvr\) and \(x_s=0\) for all but finitely many \(s\in S\) if and only if \(\nu(x_s) + 1\ge \ell(s)/n\) for all \(s\in S\). The \(\dvgen\)\nb-adic completion~\(\coma{M_n}\) of~\(M_n\) is the set of all formal power series \(\sum_{s\in S} x_s \delta_s\) such that \(x_s \in \dvr\), \(\nu(x_s) + 1\ge \ell(s)/n\) for all \(s\in S\) and \(\lim_{\ell(s)\to\infty} \nu(x_s) +1 - \ell(s)/n = \infty\). This implies \(x_s \to 0\) in the \(\dvgen\)\nb-adic norm, so that \(\coma{M_n} \subseteq \coma{\dvr[S]}\). So the extension \(\coma{M_n} \to \coma{M_{n+1}}\) of the inclusion map \(M_n \to M_{n+1}\) remains injective. Therefore, \(\varinjlim \coma{M_n}\) is separated, and it is contained in~\(\coma{\dvr[S]}\). Proposition~\ref{completions_exist} implies \[ \dvr[S]^\updagger = \varinjlim \coma{M_n}. \] Elements of~\(\coma{\dvr[S]}\) are formal series \(\sum_{s\in S}x_s \delta_s\) with \(x_s\in \dvr\) for all \(s\in S\) and \(\lim \abs{x_s} = 0\). We have seen above that such a formal series belongs to~\(\coma{M_n}\) if and only if \(\nu(x_s) + 1\ge \ell(s)/n\) for all \(s\in S\) and \(\lim_{\ell(s)\to\infty} \nu(x_s) +1 - \ell(s)/n = \infty\). If \(0<1/n<c\), then \(\nu(x_s) + 1\ge c\ell(s)\) implies \(\nu(x_s) + 1\ge \ell(s)/n\) and \(\lim_{\ell(s)\to\infty} \nu(x_s) +1 - \ell(s)/n = \infty\). Thus all \(\sum_{s\in S}x_s \delta_s \in \coma{\dvr[S]}\) with \(\nu(x_s) + 1\ge c\ell(s)\) belong to \(\coma{M_n}\). Conversely, all elements of \(\coma{M_n}\) satisfy this for \(c=1/n\). Letting \(c\) and~\(n\) vary, we see that~\(\dvr[S]^\updagger\) is the set of all \(\sum_{s\in S}x_s \delta_s\) in~\(\coma{\dvr[S]}\) for which there is \(c>0\) with \begin{equation} \label{eq:dagger_monoid_growth} \nu(x_s) + 1\ge c\ell(s) \qquad\text{for all }s\in S, \end{equation} and that a subset of~\(\dvr[S]^\updagger\) is bounded if and only if all its elements satisfy~\eqref{eq:dagger_monoid_growth} for the same \(c>0\). The growth condition~\eqref{eq:dagger_monoid_growth} does not depend on the word length function~\(\ell\) because the word length functions for two different generating sets of~\(S\) are related by linear inequalities \(\ell \le a \ell'\) and \(\ell' \le a \ell\) for some \(a>0\). Now we drop the assumption that~\(S\) be finitely generated. Then we may write~\(S\) as the increasing union of its finitely generated submonoids. By the universal property, the monoid algebra of~\(S\) with the fine bornology is a similar inductive limit in the category of bornological \(\dvr\)\nb-algebras, and its dagger algebra is the inductive limit in the category of dagger algebras.
Since \(\dvr[S']^\updagger \subseteq \coma{\dvr[S']} \subseteq \coma{\dvr[S]}\) for any finitely generated \(S'\subseteq S\), we may identify this inductive limit with a subalgebra of \(\coma{\dvr[S]}\) as well, namely, the union of \(\dvr[S']^\updagger\) over all finitely generated submonoids \(S'\subseteq S\). That is, \(\dvr[S]^\updagger\) is the set of elements of~\(\coma{\dvr[S]}\) that are supported in some finitely generated submonoid \(S'\subseteq S\) and that satisfy~\eqref{eq:dagger_monoid_growth} for some length function on~\(S'\). We may also twist the monoid algebra. Let \(\dvr^\times = \setgiven{x\in\dvr}{\abs{x}=1}\) and let \(c\colon S\times S\to \dvr^\times\) be a normalised \(2\)\nb-cocycle, that is, \begin{equation} \label{eq:twist_cocycle} c(r,s\cdot t)\cdot c(s,t) = c(r\cdot s,t) \cdot c(r,s), \qquad c(s,1) = c(1,s) = 1 \end{equation} for all \(r,s,t\in S\). The \emph{\(c\)\nb-twisted monoid algebra of~\(S\)}, \(\dvr[S,c]\), is the \(\dvr\)\nb-module \(\dvr[S]\) with the twisted multiplication \begin{equation} \label{eq:twisted_mult} \sum_{s\in S} x_s \delta_s * \sum_{t\in S} y_t \delta_t = \sum_{s,t\in S} x_s y_t c(s,t)\cdot \delta_{s\cdot t}. \end{equation} The condition~\eqref{eq:twist_cocycle} is exactly what is needed to make this associative and unital with unit~\(\delta_1\). Since we assume~\(c\) to have values in~\(\dvr^\times\), the twist does not change the linear growth bornology. Therefore, the dagger completion~\(\dvr[S,c]^\updagger\) consists of all infinite sums \(\sum_{s\in S} x_s \delta_s\) that are supported in a finitely generated submonoid of~\(S\) and satisfy the growth condition~\eqref{eq:dagger_monoid_growth}, and a subset is bounded if and only if all its elements satisfy these two conditions uniformly. Only the multiplication changes and is now given by~\eqref{eq:twisted_mult}. \begin{example} Let \(S=(\mathbb Z^2,+)\) with the unit element~\(0\). Define \(c((s_1,s_2),(t_1,t_2)) \defeq \lambda^{s_2\cdot t_1}\) for some \(\lambda\in\dvr^\times\). This satisfies~\eqref{eq:twist_cocycle}. The resulting twisted convolution algebra is an analogue of a noncommutative torus over~\(\dvr\). Indeed, let \(U_1 \defeq \delta_{(1,0)}\) and \(U_2 \defeq \delta_{(0,1)}\) as elements of \(\dvr[\mathbb Z^2,c]\). Then \(\delta_{(-1,0)} = U_1^{-1}\) and \(\delta_{(0,-1)} = U_2^{-1}\) are inverse to them, and \(\delta_{(s_1,s_2)} = U_1^{s_1}\cdot U_2^{s_2}\). So \(U_1,U_2\) generate~\(\dvr[\mathbb Z^2,c]\) as a \(\dvr\)\nb-algebra. They satisfy the commutation relation \begin{equation} \label{eq:nc_torus} U_2\cdot U_1 = \lambda \cdot U_1\cdot U_2. \end{equation} And this already dictates the multiplication table in~\(\dvr[\mathbb Z^2,c]\). The dagger completion \(\dvr[\mathbb Z^2,c]^\updagger\) is isomorphic as a bornological \(\dvr\)\nb-module to the Monsky--Washnitzer completion of the Laurent polynomial algebra~\(\dvr[U_1^{\pm1},U_2^{\pm1}]\), equipped with a twisted multiplication satisfying~\eqref{eq:nc_torus}. \end{example} \section{Dagger completions of crossed products} \label{sec:crossed} Let~\(A\) be a unital, bornological \(\dvr\)\nb-algebra, let~\(S\) be a finitely generated monoid and let \(\alpha \colon S \to \Endo(A)\) be an action of~\(S\) on~\(A\) by bounded algebra homomorphisms. The \emph{crossed product} \(A\rtimes_\alpha S\) is defined as follows. Its underlying bornological \(\dvr\)\nb-module is \(A \rtimes_\alpha S = \bigoplus_{s\in S} A\) with the direct sum bornology. 
So elements of \(A \rtimes_\alpha S\) are formal linear combinations \(\sum_{s \in S} a_s \delta_s\) with \(a_s\in A\) and \(a_s=0\) for all but finitely many \(s\in S\). The multiplication on \(A \rtimes_\alpha S\) is defined by \[ \Bigl(\sum_{s \in S}a_s \delta_s\Bigr) \cdot \Bigl(\sum_{t \in S}b_t \delta_t\Bigr) \defeq \sum_{s,t \in S} a_s \alpha_s(b_t) \delta_{s t}. \] This makes \(A\rtimes_\alpha S\) a bornological \(\dvr\)\nb-algebra. What is its dagger completion? It follows easily from the universal properties of the crossed product and of the dagger completion that \[ (A\rtimes_\alpha S)^\updagger \cong (A^\updagger\rtimes_{\alpha^\updagger} S)^\updagger; \] here~\(\alpha^\updagger\) is the canonical extension of~\(\alpha\) to the dagger completion~\(A^\updagger\), which exists because the dagger completion is functorial for bounded algebra homomorphisms. Therefore, it is no loss of generality to assume that~\(A\) is already a dagger algebra. It is easy to show that \((A\rtimes S)^\updagger\) is the inductive limit of the dagger completions \((A\rtimes S')^\updagger\), where~\(S'\) runs through the directed set of finitely generated submonoids of~\(S\). Hence we may also assume that~\(S\) is finitely generated to simplify the discussion. First we consider the following special case: \begin{definition} \label{def:unif_bounded} The action \(\alpha\colon S\to\Endo(A)\) is called \emph{uniformly bounded} if any \(U\in\bdd_A\) is contained in an \(\alpha\)\nb-invariant \(T\in\bdd_A\); \(\alpha\)\nb-invariance means \(\alpha_s(T) \subseteq T\) for all \(s\in S\). \end{definition} If~\(T\) is \(\alpha\)\nb-invariant, so is the \(\dvr\)\nb-module generated by~\(T\). Therefore, \(\alpha\) is uniformly bounded if and only if any bounded subset of~\(A\) is contained in a bounded, \(\alpha\)\nb-invariant \(\dvr\)\nb-submodule. If~\(A\) is complete, then the image of~\(\coma{T}\) in~\(A\) is also \(\alpha\)\nb-invariant because each~\(\alpha_s\) is \(\dvr\)\nb-linear and hence \(\dvgen\)\nb-adically continuous on~\(T\). Hence we may assume in this case that~\(T\) in Definition~\ref{def:unif_bounded} is a bounded, \(\alpha\)\nb-invariant \(\dvgen\)\nb-adically complete \(\dvr\)\nb-submodule. \begin{proposition} \label{pro:uniformly_bounded_induced_actions} Let~\(A\) carry a uniformly bounded action~\(\alpha\) of~\(S\). Then the induced actions on \(\comb{A}\), \(\torf{A}\), and~\(\ling{A}\) are uniformly bounded as well. Hence so is the induced action on~\(A^\updagger\). \end{proposition} \begin{proof} If~\(\alpha\) is uniformly bounded, then~\(A\) is the bornological inductive limit of its \(\alpha\)\nb-invariant bounded \(\dvr\)\nb-submodules. The action of~\(\alpha\) restricts to any such submodule~\(T\) and then extends canonically to its \(\dvgen\)\nb-adic completion~\(\coma{T}\). Then the image of~\(\coma{T}\) in~\(\comb{A}\) is \(S\)\nb-invariant as well. This gives enough \(S\)\nb-invariant bounded \(\dvr\)\nb-submodules in~\(\comb{A}\). So the induced action on~\(\comb{A}\) is uniformly bounded. If the action~\(\alpha\) on~\(A\) is uniformly bounded, then so is the action \(\mathrm{id}_B\otimes\alpha\) on \(B\otimes A\) for any bornological algebra~\(B\). In particular, the induced action on \(\dvf\otimes A\) is uniformly bounded. Since the canonical map \(A\to \dvf\otimes A\) is \(S\)\nb-equivariant, the image~\(\torf{A}\) of~\(A\) in \(\dvf\otimes A\) is \(S\)\nb-invariant. The restriction of the uniformly bounded action of~\(S\) on \(\dvf\otimes A\) to this invariant subalgebra inherits uniform boundedness.
So the induced action on~\(\torf{A}\) is uniformly bounded. Any subset of~\(A\) of linear growth is contained in \(\sum_{j=0}^\infty \dvgen^j T^{j+1}\) for a bounded \(\dvr\)\nb-submodule~\(T\). Since~\(\alpha\) is uniformly bounded, \(T\) is contained in an \(\alpha\)\nb-invariant bounded \(\dvr\)\nb-submodule~\(U\). Then \(\sum_{j=0}^\infty \dvgen^j U^{j+1} \supseteq \sum_{j=0}^\infty \dvgen^j T^{j+1}\) is \(\alpha\)\nb-invariant and has linear growth. So~\(\alpha\) remains uniformly bounded for the linear growth bornology. The uniform boundedness of the induced action on the dagger completion~\(A^\updagger\) follows from the inheritance properties above and Theorem~\ref{dagger_completion}. \end{proof} \begin{example} \label{exa:finite_S_uniformly_bounded} Let~\(S\) be a finite monoid. Any bounded action of~\(S\) by bornological algebra endomorphisms is uniformly bounded because we may take \(T = \sum_{s\in S} \alpha_s(U)\) in Definition~\ref{def:unif_bounded}. \end{example} \begin{example} \label{affine_action} We describe a uniformly bounded action of~\(\mathbb Z\) on the polynomial algebra \(A\defeq \dvr[x_1,\dotsc,x_k]\) with the fine bornology. So a subset of~\(A\) is bounded if and only if it is contained in \((\dvr + \dvr x_1 + \dotsb + \dvr x_k)^m\) for some \(m \in \mathbb N_{\ge 1}\). Let \(a\in \mathrm{GL}_k(\dvr) \subseteq \Endo(\dvr^k)\) and \(b \in \dvr^k\). Then \[ \alpha_1\colon \dvr[x_1,\dotsc, x_k] \to \dvr[x_1,\dotsc,x_k],\qquad (\alpha_1 f)(x) \defeq f(a x + b), \] is an algebra automorphism~\(\alpha_1\) of~\(A\) with inverse \((\alpha_1^{-1} f)(x) \defeq f(a^{-1}(x - b))\). This generates an action of the group~\(\mathbb Z\) by \(\alpha_n \defeq \alpha_1^n\) for \(n\in\mathbb Z\). If a polynomial~\(f\) has degree at most~\(m\), then the same is true for \(\alpha_1 f\) and \(\alpha_{-1} f\), and hence for \(\alpha_n f\) for all \(n\in\mathbb Z\). That is, the \(\dvr\)\nb-submodules \((\dvr + \dvr x_1 + \dotsb + \dvr x_k)^m\) in~\(A\) for \(m\in\mathbb N\) are \(\alpha\)\nb-invariant. So the action~\(\alpha\) on~\(A\) is uniformly bounded. Proposition~\ref{pro:uniformly_bounded_induced_actions} implies that the induced action on \(\dvr[x_1,\dotsc,x_k]^\updagger\) is uniformly bounded as well. \end{example} \begin{proposition} \label{pro:dagger_completion_crossed_uniformly_bounded} Let~\(S\) be a finitely generated monoid with word length function~\(\ell\). Let~\(A\) be a dagger algebra and let \(\alpha\colon S\to\Endo(A)\) be a uniformly bounded action by algebra endomorphisms. Then \((A\rtimes S)^\updagger \subseteq \prod_{s\in S} A\). A formal series \(\sum_{s\in S} a_s \delta_s\) with \(a_s\in A\) for all \(s\in S\) belongs to \((A\rtimes S)^\updagger\) if and only if there are \(\varepsilon>0\) and \(T\in\bdd_A\) with \(a_s \in \dvgen^{\floor{\varepsilon \ell(s)}} T\) for all \(s\in S\), and a set of formal series is bounded in \((A\rtimes S)^\updagger\) if and only if \(\varepsilon>0\) and \(T\in\bdd_A\) for its elements may be chosen uniformly. \end{proposition} \begin{proof} We first describe the linear growth bornology on~\(A\rtimes S\). Let~\(\bdd'\) be the set of all subsets \(U\subseteq A\rtimes S\) for which there are \(T\in\bdd_A\) and \(\varepsilon>0\) such that any element of~\(U\) is of the form \(\sum_{s\in S} a_s \delta_s\) with \(a_s \in \dvgen^{\floor{\varepsilon \ell(s)}} T\) for all \(s\in S\). We claim that~\(\bdd'\) is the linear growth bornology on \(A\rtimes S\).
The inclusion \(\dvr[S] \subseteq A\rtimes S\) induces a bounded algebra homomorphism \(\ling{\dvr[S]} \to \ling{(A\rtimes S)}\). We have already described the linear growth bornology on~\(\dvr[S]\) in Section~\ref{sec:dagger_monoid}. This easily implies that all subsets in~\(\bdd'\) have linear growth: write \(\dvgen^{\floor{\varepsilon \ell(s)}} a'_s \delta_s = a'_s\cdot \dvgen^{\floor{\varepsilon \ell(s)}} \delta_s\). We claim, conversely, that any subset of \(A\rtimes S\) of linear growth belongs to~\(\bdd'\). All bounded subsets of~\(A\rtimes S\) belong to~\(\bdd'\). It is routine to show that~\(\bdd'\) is a \(\dvr\)\nb-algebra bornology. We only prove that the bornology~\(\bdd'\) is semi-dagger. Since~\(\alpha\) is uniformly bounded, any \(T\in\bdd_A\) is contained in a bounded, \(\alpha\)\nb-invariant \(\dvr\)\nb-submodule~\(T_2\). Then \(T_3 \defeq \sum_{j=0}^\infty \dvgen^j T_2^{j+1}\) is a bounded, \(\alpha\)\nb-invariant \(\dvr\)\nb-submodule with \(\dvgen \cdot T_3^2 \subseteq T_3\) and \(T \subseteq T_3\) (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Equation~(5)}). If \(a_s \in \dvgen^{\floor{\varepsilon \ell(s)}} T_3\) and \(a_t \in \dvgen^{\floor{\varepsilon \ell(t)}} T_3\), then \[ \dvgen^2\cdot a_s\cdot \alpha_s(a_t) \in \dvgen^{2+\floor{\varepsilon \ell(s)} +\floor{\varepsilon \ell(t)}} T_3^2 \subseteq \dvgen^{\floor{\varepsilon \ell(s\cdot t)}} \dvgen T_3^2 \subseteq \dvgen^{\floor{\varepsilon \ell(s\cdot t)}} T_3 \] because \(1+\floor{\varepsilon \ell(s)} +\floor{\varepsilon \ell(t)} \ge \floor{\varepsilon \ell(s\cdot t)}\). This implies \[ \dvgen^2 \cdot \sum_{s\in S} \dvgen^{\floor{\varepsilon \ell(s)}} T_3 \delta_s * \sum_{t\in S} \dvgen^{\floor{\varepsilon \ell(t)}} T_3 \delta_t \subseteq \sum_{s,t\in S} \dvgen^{\floor{\varepsilon \ell(s\cdot t)}} T_3 \delta_{s t} \subseteq \sum_{s\in S} \dvgen^{\floor{\varepsilon \ell(s)}} T_3 \delta_s. \] So any subset in~\(\bdd'\) is contained in some \(U\in \bdd'\) with \(\dvgen^2 \cdot U^2 \subseteq U\). By induction, this implies \((\dvgen^2 U)^k \cdot U \subseteq U\) for all \(k\in\mathbb N\). Hence \(\sum_{k=0}^\infty \dvgen^{2 k} U^{k+1}\) is in~\(\bdd'\). Now Lemma~\ref{equivalent_semidagger} shows that the bornology~\(\bdd'\) is semi-dagger. This proves the claim that~\(\bdd'\) is the linear growth bornology on~\(A\rtimes S\). Since~\(A\) as a dagger algebra is bornologically torsion-free, so is \(A\rtimes S\). So \((A\rtimes S)^\updagger\) is the completion of \(\ling{(A\rtimes S)} = (A\rtimes S,\bdd')\). It is routine to identify this completion with the bornological \(\dvr\)\nb-module described in the statement. \end{proof} Propositions \ref{pro:uniformly_bounded_induced_actions} and~\ref{pro:dagger_completion_crossed_uniformly_bounded} describe the dagger completion of \(A\rtimes S\) for a uniformly bounded action of~\(S\) on~\(A\) even if~\(A\) is not a dagger algebra. Namely, the universal properties of the crossed product and the dagger completion imply \begin{equation} \label{eq:dagger_complete_twice} (A\rtimes S)^\updagger \cong (A^\updagger \rtimes S)^\updagger. \end{equation} \begin{example} \label{exa:dagger_crossed_polynomial} Let~\(\alpha\) be the uniformly bounded action of~\(\mathbb Z\) on \(\dvr[x_1,\dotsc,x_k]\) from Example~\ref{affine_action}. The induced action~\(\alpha^\updagger\) on \(\dvr[x_1,\dotsc,x_k]^\updagger\) is also uniformly bounded by Proposition~\ref{pro:uniformly_bounded_induced_actions}.
And~\eqref{eq:dagger_complete_twice} implies \[ (\dvr[x_1,\dotsc,x_k] \rtimes_\alpha \mathbb Z)^\updagger \cong (\dvr[x_1,\dotsc,x_k]^\updagger \rtimes_{\alpha^\updagger} \mathbb Z)^\updagger. \] The latter is described in Proposition~\ref{pro:dagger_completion_crossed_uniformly_bounded}. Namely, \((\dvr[x_1,\dotsc,x_k]^\updagger \rtimes_{\alpha^\updagger} \mathbb Z)^\updagger\) consists of those formal series \(\sum_{n\in \mathbb Z} a_n \delta_n\) with \(a_n \in \dvr[x_1,\dotsc,x_k]^\updagger\) for which there are \(\varepsilon>0\) and a bounded \(\dvr\)\nb-submodule~\(T\) in \(\dvr[x_1,\dotsc, x_k]^\updagger\) such that \(a_n \in \dvgen^{\floor{\varepsilon \abs{n}}} T\) for all \(n\in \mathbb Z\); notice that~\(\abs{n}\) is indeed a length function on~\(\mathbb Z\). And a subset is bounded if some pair \(\varepsilon,T\) works for all its elements. We combine this with the description of bounded subsets of \(\dvr[x_1,\dotsc, x_k]^\updagger\) in Section~\ref{sec:dagger_monoid}: after enlarging~\(T\) if necessary, there is some \(\delta>0\) such that a formal power series \(\sum_{m\in \mathbb N^k} b_m x^m\) belongs to~\(T\) if and only if \(b_m \in \dvgen^{\floor{\delta \abs{m}}} \dvr\) for all \(m\in\mathbb N^k\). Here we use the length function \(\abs{(m_1,\dotsc,m_k)} = \sum_{j=1}^k m_j\). We may merge the parameters \(\varepsilon,\delta>0\) above, taking their minimum. So \((\dvr[x_1,\dotsc,x_k]\rtimes \mathbb Z)^\updagger\) consists of the formal series \(\sum_{n\in\mathbb Z,m\in\mathbb N^k} a_{n,m} x^m \delta_n\) with \(a_{n,m} \in \dvgen^{\floor{\varepsilon (\abs{n} + \abs{m})}} \dvr\) or, equivalently, \(\nu(a_{n,m}) +1 > \varepsilon (\abs{n} + \abs{m})\) for all \(n\in\mathbb Z\), \(m\in\mathbb N^k\). \end{example} If the action of~\(S\) on~\(A\) is not uniformly bounded, then the linear growth bornology on~\(A\rtimes S\) becomes much more complicated. It seems unclear whether the description below helps much in practice. Let \(F\subseteq S\) be a finite generating subset containing~\(1\). Any bounded subset of \(A\rtimes S\) is contained in \(\bigl(\sum_{s\in F} T\cdot\delta_s\bigr)^N\) for some \(N\in\mathbb N\) and some \(T\in\bdd_A\) with \(1\in T\). Therefore, a subset of~\(A\rtimes S\) has linear growth if and only if it is contained in the \(\dvr\)\nb-submodule generated by \[ \bigcup_{n=1}^\infty \dvgen^{\floor{\varepsilon n}}(T\cdot \setgiven{\delta_s}{s\in F})^n \] for some \(\varepsilon>0\), \(T\in\bdd_A\). Using the definition of the convolution, we may rewrite the latter set as \[ \bigcup_{n=1}^\infty \bigcup_{s_1,\dotsc,s_n\in F} \dvgen^{\floor{\varepsilon n}} \cdot T\cdot \alpha_{s_1}(T)\cdot \alpha_{s_1 s_2}(T) \dotsm \alpha_{s_1 \dotsm s_{n-1}}(T) \, \delta_{s_1\dotsm s_n}. \] The resulting \(\dvr\)\nb-module is the sum \(\sum_{s\in S} U_s \delta_s\), where~\(U_s\) is the \(\dvr\)\nb-submodule of~\(A\) generated by the finite products \[ \setgiven{\dvgen^{\floor{\varepsilon n}} \cdot T\cdot \alpha_{s_1}(T) \dotsm \alpha_{s_1\dotsm s_{n-1}}(T)} {n\in\mathbb N_{\ge1},\ s_1,\dotsc,s_n\in F,\ s_1\dotsm s_n = s}. \] Since \(1 \in T\), we may leave out a factor \(\alpha_{s_1\dotsm s_i}(T)\); this has the same effect as increasing~\(n\) by~\(1\) and putting \(s_i = s_i^1\cdot s_i^2\) with \(s_i^1,s_i^2\in F\). Since~\(F\) generates~\(S\) as a monoid, we may allow arbitrary \(s_i\in S\) provided we change the exponent of~\(\dvgen\) appropriately.
Namely, we must then replace~\(n\) in the exponent of~\(\dvgen\) by the number of factors in~\(F\) that are needed to produce the desired elements~\(s_i\), which is \(\ell_{\ge1}(s_1) + \dotsb + \ell_{\ge1}(s_n)\), where \(\ell_{\ge1}(1) = 1\) and \(\ell_{\ge1}(s) = \ell(s)\) for \(s\in S\setminus\{1\}\). As a result, \(U_s\) is the \(\dvr\)\nb-submodule of~\(A\) generated by \begin{multline*} \dvgen^{\floor{\varepsilon (\ell_{\ge1}(s_1) + \dotsb + \ell_{\ge1}(s_n))}} \cdot x_0\cdot \alpha_{s_1}(x_1) \dotsm \alpha_{s_1\dotsm s_{n-1}}(x_{n-1}),\\ n\in\mathbb N_{\ge1},\ x_0,\dotsc,x_{n-1}\in T,\ s_1,\dotsc,s_n\in S,\ s_1\dotsm s_n = s. \end{multline*} Now assume that~\(S\) is a group, not just a monoid. Then any sequence of elements \(g_1,\dotsc,g_n\in S\) may be written as \(g_i = s_1\dotsm s_i\) by putting \(s_i\defeq g_{i-1}^{-1} g_i\) with \(g_0\defeq 1\). So~\(U_g\) is the \(\dvr\)\nb-submodule of~\(A\) generated by \begin{multline*} \dvgen^{\floor{\varepsilon (\ell_{\ge1}(g_0^{-1} g_1) + \ell_{\ge1}(g_1^{-1} g_2) + \dotsb + \ell_{\ge1}(g_{n-1}^{-1} g_n))}} \cdot \alpha_{g_0}(x_0)\cdot \alpha_{g_1}(x_1) \dotsm \alpha_{g_{n-1}}(x_{n-1}),\\ n\in\mathbb N_{\ge1},\ x_0,\dotsc,x_{n-1} \in T,\ g_0,\dotsc,g_n\in S,\ g_0=1,\ g_n=g. \end{multline*} These subsets~\(U_g\) for fixed \(T\) and~\(\varepsilon\) depend on~\(g\) in a complicated way. The bornology on~\(A\rtimes S\) generated by these subsets is, however, also generated by the sets of the form \(\sum_{g\in S} \dvgen^{\floor{\varepsilon \ell(g)}}\cdot U\,\delta_g\), where~\(U\) is the \(\dvr\)\nb-submodule of~\(A\) generated by \begin{multline*} \dvgen^{\floor{\varepsilon (\ell_{\ge1}(g_0^{-1} g_1) + \ell_{\ge1}(g_1^{-1} g_2) + \dotsb + \ell_{\ge1}(g_{n-2}^{-1} g_{n-1}))}} \cdot \alpha_{g_0}(x_0)\cdot \alpha_{g_1}(x_1) \dotsm \alpha_{g_{n-1}}(x_{n-1}),\\ n\in\mathbb N_{\ge1},\ x_0,\dotsc,x_{n-1} \in T,\ g_0,\dotsc,g_{n-1}\in S,\ g_0=1, \end{multline*} for some \(T\in\bdd_A\), \(\varepsilon>0\). The reason is that \[ \ell(g_n) - \ell(g_{n-1}) \le \ell(g_{n-1}^{-1} g_n) \le \ell(g_{n-1}) + \ell(g_n) \] and \(\ell(g_{n-1}) \le \sum_{j=1}^{n-1} \ell(g_{j-1}^{-1} g_j)\). Therefore, replacing the exponents of~\(\dvgen\) as above does not change the bornology on \(A\rtimes S\) that is generated by the sets above when \(\varepsilon>0\) varies. \begin{bibdiv} \begin{biblist} \bibselect{references} \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{sec:intro} \citet{2021arXiv210102212J} reported that the red giant (RG) star V723 Mon is orbited by a dark $\approx 3\,M_\odot$ companion on a nearly circular and edge-on orbit with a period of $P \approx 60\,\mathrm{days}$. If the companion is a single compact object, it is the nearest known black hole that falls within the ``mass gap" \citep{1998ApJ...499..367B, 2011ApJ...741..103F}, perhaps along with another similar system discovered by \citet{2019Sci...366..637T}. This scenario successfully explained most of the data presented in \citet{2021arXiv210102212J}, but there remained one signal yet to be explained: the radial velocity (RV) time series of V723 Mon exhibit periodic residuals from the signal due to Keplerian orbital motion. The residual RVs have a period of $P/2$ when the binary orbit is assumed to be circular, and $P/3$ when the orbital eccentricity is fitted (in which case a small eccentricity eliminates the $P/2$ component). The periodic residuals motivated \citet{2012AN....333..663S} to propose the presence of a third body, although the triple scenario would require fine-tuning to remain dynamically stable \citep{2014Obs...134..109G, 2021arXiv210102212J}, and the residual signal appears to be too large to arise from a dynamically stable triple configuration \citep{2020ApJ...890..112H}. Here we show that the periodic RV residuals originate from tidal deformation of the red giant whose rotation is synchronized with the binary orbit, as was argued to be a plausible scenario by \citet{2021arXiv210102212J}. The deformation causes orbital phase-dependent modulation of the fractions of the visible RG surface that are moving toward and away from us (Figure \ref{fig:schematic}). The imbalance causes asymmetric distortion of the absorption line profile and produces RV anomalies. This picture is consistent with the phase-dependent variations of the projected rotation velocity $v\sin i$ \citep{2014Obs...134..109G, 2021arXiv210102212J}, and also explains the different RV anomalies around the orbital phases 0 and 0.5 \citep[as seen in the middle panel of Figure 3 in][]{2021arXiv210102212J}, because strong tides elongate the RG surface more on the side facing the companion than on the far side (Figure \ref{fig:schematic} left). Furthermore, the shape of the RV curve is sensitive to the orbital inclination: the RV anomaly is small for nearly pole-on systems, while a sharp anomaly arises around the conjunction where the companion is in front for nearly edge-on systems \citep[e.g., Figure 7 of][]{2008ApJ...681..562E}. All these features make the ``tidal RVs" sensitive to the masses of the binary components. \begin{figure*} \centering \epsscale{1.1} \plottwo{velmap.png}{velprof2.png} \caption{Schematic illustration of the tidal effect on the absorption line profile. At this orbital phase the red giant star appears blueshifted and exhibits a negative anomalous radial velocity. {\it (Left)---} The thick gray line shows the equator of the red giant deformed by the companion. The companion's orbit and the stellar equator are both assumed to be edge-on as seen from the observer. Note that the companion's orbit and red giant radius are not to scale. The dotted line shows a circle to emphasize the asymmetric deformation of the red giant.
{\it (Right)---} The absorption line profile corresponding to the configuration shown in the left panel, computed in the manner described in Section \ref{ssec:method_rv}.} \label{fig:schematic} \end{figure*} Significant tidal deformation also manifests as ellipsoidal variations in the photometric light curve, which were used by \citet{2021arXiv210102212J} to constrain the component masses and orbital inclination precisely, in combination with the precise binary mass function from RVs and the prior on the RG radius. The inferred orbit is very close to edge-on, and this is also consistent with the eclipses of the Balmer emission observed when the companion is supposed to be behind the red giant. That said, the light curve model includes wavelength-dependent dilution (the ``veiling" component) whose physical origin is yet to be understood, and any inaccurate assumption on such non-stellar flux could become a source of systematic errors in the mass/inclination measurements with ellipsoidal variations \citep{2012ApJ...757...36K}. Similarly, it is not yet clear how and where the Balmer emission is produced \citep{2021arXiv210102212J}. The mass measured with ellipsoidal variations could also be biased by quasi-periodic photometric modulations associated with active regions (spots) on the RG surface, whose contribution is difficult to evaluate a priori. Therefore a constraint on the component masses that does not rely on these signals would be valuable. Such a technique would also allow for mass measurements in a larger number of similar systems for which photometric data are not available and/or ellipsoidal variations are significantly contaminated by other signals. To our knowledge, this tidal RV signal was first discussed by \citet{1941PNAS...27..168S} as a source of spurious eccentricity in spectroscopic binaries, and a numerical scheme for more accurate modeling was given by \citet{1976ApJ...203..182W} \citep[see also other references therein, including][]{1959cbs..book.....K}. Although this signal has been clearly detected only in a handful of systems (see Figure 4 of \citet{1989A&A...218..152H} and Figure 10 of \citet{2008ApJ...681..562E} for notable examples), it has been recognized as a correction that needs to be taken into account in interpreting the RV data of tidally locked binaries \citep[e.g.][]{1986ApJ...308..110M, 1986AJ.....91..125K}, and is also implemented in the widely used ELC code \citep{2000A&A...364..265O}. We find, however, that the scheme adopted in these previous works, which evaluates the tidal RV signal as the flux-weighted mean of the surface velocity field following the prescriptions of \citet{1941PNAS...27..168S} and \citet{1976ApJ...203..182W}, significantly underestimates its amplitude in the RVs of V723 Mon measured with a cross correlation technique (Figure \ref{fig:rvmethod}). This is because the tidal effects work to {\it distort} the stellar lines, rather than to shift them, and the peak (trough) of the distorted, asymmetric profile as probed by cross correlation is different from its centroid as evaluated by computing the flux-weighted mean (see Figure \ref{fig:schematic} right). In other words, the RV derived from the flux-weighted mean of the Doppler-shifted profiles is not the same as the flux-weighted mean of the Doppler shifts.
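To illustrate this difference in the simplest setting, consider the blend of two Gaussian lines with a common width, \[ C(v) = w_1\,\mathcal{N}(v; v_1, \sigma^2) + w_2\,\mathcal{N}(v; v_2, \sigma^2), \qquad w_1 > w_2 > 0, \] where \(\mathcal{N}(v; \mu, \sigma^2)\) denotes a normal density. The flux-weighted mean velocity is \(v_\mathrm{c} = (w_1 v_1 + w_2 v_2)/(w_1+w_2)\), whereas \[ \left.{\mathrm{d}C \over \mathrm{d}v}\right|_{v=v_\mathrm{c}} = -{w_1 w_2\,(v_2-v_1) \over \sigma^2\,(w_1+w_2)}\, \left[\mathcal{N}(v_\mathrm{c}; v_1, \sigma^2) - \mathcal{N}(v_\mathrm{c}; v_2, \sigma^2)\right], \] which does not vanish unless \(w_1 = w_2\), because \(v_\mathrm{c}\) lies closer to the center of the stronger component. When the blend is single-peaked, its peak is therefore displaced from the flux-weighted mean toward the stronger component, just as the peak of the tidally distorted profile deviates from its centroid.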
In this paper, we present a formulation that explicitly models the line profile and cross-correlation procedure, and show that it is indeed crucial for modeling the high signal-to-noise anomaly in the RVs of V723 Mon.\footnote{We note that the need for such a treatment was also recognized in some earlier works including \citet{1985A&A...152...25V}, \citet{1989A&A...218..152H}, and \citet{1993ASPC...38..127H}, although the scheme was not used to fit the actual data.} The remainder of this paper is organized as follows. In Section \ref{sec:model}, we present our model for the tidal RV signal in a circularized and synchronized binary. In Section \ref{sec:results}, we show that the model quantitatively reproduces the RV residuals observed in V723 Mon, and derive constraints on the system parameters based on the RV data. We find independent support for a $3\,M_\odot$ companion in a nearly edge-on orbit, which eliminates the need for a third body to explain the RV residuals. In Section \ref{sec:summary} we summarize the results and discuss future prospects. \section{The Model}\label{sec:model} We assume that the binary orbit has been circularized, and rotation of the red giant has been synchronized with the orbital motion (i.e., the rotation period and axis are the same as the orbital ones). In this case, the red giant is static in the rotating frame, and each surface element moves on a circular orbit. We compute the geometric shape and flux distribution over the deformed surface of the red giant following a standard procedure (Section \ref{ssec:method_flux}). We then use them to model variations in the absorption line profiles as the companion and red giant rotate together, and translate the phase-dependent distortion of the line profile into the tidal RV signal $v_\mathrm{tidal}$ (Section \ref{ssec:method_rv}) --- this step is not included in the formulation by \citet{1976ApJ...203..182W}. The model is compared with observed RVs and the parameters are constrained via a Bayesian formalism adopting appropriate priors (Sections \ref{ssec:method_full} and \ref{ssec:method_priors}). \subsection{Shape of the Tidally Deformed Surface and Flux Distribution}\label{ssec:method_flux} We divide the stellar surface into $768$ pixels\footnote{This gives an angular resolution of $\approx 7.3^\circ$, which corresponds to a velocity resolution of $\sim 1\,\mathrm{km\,s^{-1}}$ for the red giant star of interest.
This resolution is sufficient because it is smaller than the intrinsic velocity width of the absorption lines (Section \ref{ssec:method_rv}).} with equal solid angle $\Delta\Omega$ using the {\tt HEALPix}/{\tt healpy} package \citep{2005ApJ...622..759G, Zonca2019}.\footnote{\url{http://healpix.sourceforge.net}} For each pixel labeled by $j$, we compute: \begin{itemize} \item normalized distance from the star's center $R_j/R_\star$, \item normalized surface gravity $g_j/g_\star$, \item foreshortening factor $\cos \gamma_j$, where $\gamma_j$ is the angle between the surface normal and our line of sight, \item angle $\delta_j$ between the surface normal and the radius vector, \item intensity $I_j(g_j,\cos\gamma_j)$ that depends on $g_j$ and $\cos\gamma_j$ through gravity- and limb-darkening, respectively, \item line-of-sight velocity with respect to the star's center of mass normalized by the equatorial rotation velocity, $V_j/(2\pi R_\star/P)$ \end{itemize} as a function of the orbital phase, orbital/spin inclination $i$, RG mass $M_\star$, companion mass $M_\bullet$, and semi-major axis scaled by the RG radius $a/R_\star$. The flux contribution of each pixel $\Delta F_j$ is given by \begin{equation} \Delta F_j \propto I_j(g_j, \cos\gamma_j)\,\cos\gamma_j\,{R_j^2\Delta\Omega \over \cos\delta_j} \end{equation} for $\cos \gamma_j>0$ (i.e., visible to the observer), and $0$ otherwise. The flux change due to Doppler beaming is of order $10^{-4}$ for the rotation velocity of $\approx 20\,\mathrm{km\,s^{-1}}$ and is not included. We also ignore the effects of irradiation and reflection because the companion appears to be non-luminous. A potential microlensing effect due to the compact companion passing in front of the RG star is also negligible given the large RG radius and relatively tight orbit \citep{1969ApJ...156.1013T}. The quantities $R_j$, $g_j$, $\cos\gamma_j$, and $\cos\delta_j$ were computed assuming that the RG surface is a surface of constant Roche potential \citep{1979ApJ...234.1054W}, where $R_j$ was solved iteratively for each grid point. The formulation is thus similar to the one in the {\tt PHOEBE} model \citep[e.g.][]{2016ApJS..227...29P}. The normalizations $R_\star$ and $g_\star$ were chosen to be the values at the points on the stellar equator perpendicular to the star--companion axis. The intensity $I_j$ was computed adopting the quadratic limb-darkening law $I(\cos\gamma) \propto 1 - u_1(1-\cos\gamma) - u_2(1-\cos\gamma)^2$, multiplied by $(g_j/g_\star)^y$ \citep{1959cbs..book.....K}. Since the limb- and gravity-darkening coefficients $u_1$, $u_2$, and $y$ are chromatic, their values need to be evaluated for an appropriate wavelength band, as will be discussed in Section \ref{ssec:method_priors}. Here the quadratic law is adopted considering a balance between accuracy and computational cost, and the model can also be modified to incorporate more complex profiles, as we do in Section \ref{ssec:results_ld}. Although the quadratic law may fail to reproduce the limb-darkening at the very edge of the stellar disk accurately, the results from the modeling in this paper were found to be insensitive to the adopted profile. \subsection{Tidal RVs}\label{ssec:method_rv} One simple method to evaluate tidal RV anomalies is to compute the mean of $V_j$ weighted by the flux $\Delta F_j$, $(\sum_j V_j \Delta F_j)/(\sum_j \Delta F_j)$.
This is the prescription proposed in the seminal works by \citet{1941PNAS...27..168S} and \citet{1976ApJ...203..182W}, and has been adopted in many other works. In reality, however, the RVs are derived via a more complicated procedure specific to each pipeline, and the simple ``flux-weighted mean velocity" has been shown to deviate from actual measurements in the case of the Rossiter--McLaughlin signal \citep{1924ApJ....60...15R, 1924ApJ....60...22M} originating from line-profile distortion due to transiting exoplanets \citep{2005ApJ...622.1118O, 2005ApJ...631.1215W, 2010ApJ...709..458H, 2011ApJ...742...69H}. This is also found to be the case in our current problem: Figure \ref{fig:rvmethod} compares the tidal RV signal evaluated as the flux-weighted mean (crosses) against the values from the model described below (thick solid line), for the same set of model parameters (the mean of the posterior distribution) from our analysis in Section \ref{sec:results}. Because the RV data modeled in Section \ref{sec:results} were derived by computing the cross correlation between the observed spectra and a synthetic template spectrum \citep{2012AN....333..663S}, we replicate this process as closely as possible to model the tidal RVs. The formulation here largely follows the one in \citet{2011ApJ...742...69H}. \begin{figure} \centering \epsscale{1.1} \plotone{s12acritb_rveil_n8_s500_methods.png} \caption{The tidal RV signal evaluated as the flux-weighted mean of the surface velocity field (crosses) has a smaller amplitude than the signal computed by modeling the cross-correlation function and computing its peak (thick solid line). Note that the two models differ not only in the amplitudes but also in the phases of the local maxima/minima and zero-crossings.} \label{fig:rvmethod} \end{figure} We mainly consider the distortion of a single line at some specific wavelength and evaluate how this affects the RV values derived from a cross-correlation analysis. The line profile in velocity space $\mathcal{F}(v)$, in the presence of rigid rotation and macroturbulence, is given by the following convolution \begin{equation} \mathcal{F}(v) = S(v) * M(v). \end{equation} Here \begin{equation} M(v) = {{\sum_j \Theta_j(v-V_j) \Delta F_j}\over{\sum_j \Delta F_j}} \end{equation} is the broadening kernel, where \begin{align} \label{eq:theta} \Theta_j(v) = {1\over2}\,\left[\mathcal{N}\left(v; 0, {1\over2}\,\zeta^2\cos^2\gamma_j\right) + \mathcal{N}\left(v; 0, {1\over2}\,\zeta^2\sin^2\gamma_j\right)\right], \quad \mathcal{N}(x; \mu, \sigma^2) \equiv {1 \over \sqrt{2\pi\sigma^2}}\,\exp\left[-{(x-\mu)^2 \over {2\sigma^2}}\right] \end{align} is the macroturbulence kernel for the radial-tangential model \citep{2005oasp.book.....G}. Here we assume equal contributions from the radial and tangential motions, and ignore the small Doppler shift due to the flux difference between the rising and sinking gas streams (convective blueshift).\footnote{Another implicit assumption is that the absorption lines arise from the same equipotential surface on which we evaluated $\Delta F_j$.
This is not exactly the case, but the difference has a negligible effect on the profile \citep{1998MNRAS.298..153S}.} We assume that the intrinsic line profile $S(v)$ in velocity space is given by a Gaussian\footnote{Although the Voigt profile (a convolution of Gaussian and Lorentzian profiles) is physically more appropriate, the contribution from the Lorentzian part is minor here and this simplification is justified.} \begin{equation} S(v) = \mathcal{N}(v; 0, \beta_\mathrm{S}^2), \end{equation} where $\beta_\mathrm{S}^2=\beta^2_\mathrm{thermal} + \beta^2_\mathrm{mic} + \beta^2_\mathrm{IP}$ includes broadening contributions from the thermal motion, microturbulence, and instrumental profile, respectively. We assume $\beta_\mathrm{thermal}=0.82\,\mathrm{km\,s^{-1}}$ (corresponding to $T_\mathrm{eff}=4500\,\mathrm{K}$ and iron atoms), $\beta_\mathrm{mic}=1.0\,\mathrm{km\,s^{-1}}$ \citep{2018AJ....156..125H, 2021arXiv210102212J}, and $\beta_\mathrm{IP}=2.31\,\mathrm{km\,s^{-1}}$ (corresponding to the wavelength resolution $R=55,000$) for V723 Mon. These yield the total $\beta_\mathrm{S}=2.65\,\mathrm{km\,s^{-1}}$. The resulting line profile is \begin{equation} \mathcal{F}(v) \propto \sum_j \left[\mathcal{N}\left(v-V_j; 0,\beta_\mathrm{S}^2 + {1\over 2}\,\zeta^2\cos^2\gamma_j\right) + \mathcal{N}\left(v-V_j; 0, \beta_\mathrm{S}^2 + {1\over 2}\,\zeta^2\sin^2\gamma_j\right)\right]\,\Delta F_j. \end{equation} The cross correlation function (CCF) $C(v)$ is computed by cross-correlating a template spectrum $T(v)$ with $\mathcal{F}(v)$. We assume that the template is a theoretical spectrum similar to the observed one \citep[as was the case in][]{2012AN....333..663S}, but without broadening due to the instrumental profile: \begin{equation} T(v)=\mathcal{N}(v; 0, \beta_\mathrm{T}^2), \quad \beta_\mathrm{T}^2 \equiv \beta^2_\mathrm{thermal} + \beta^2_\mathrm{mic}. \end{equation} Then the CCF is given by \begin{equation} \label{eq:ccf} C(v) = (T \star \mathcal{F})(v) \propto \sum_j \left[\mathcal{N}\left(v-V_j; 0, \beta_\mathrm{S}^2 + \beta_\mathrm{T}^2 + {1\over 2}\,\zeta^2\cos^2\gamma_j\right) + \mathcal{N}\left(v-V_j; 0, \beta_\mathrm{S}^2 + \beta_\mathrm{T}^2 + {1\over 2}\,\zeta^2\sin^2\gamma_j\right)\right]\,\Delta F_j. \end{equation} Thus the shape of the CCF profile and the resulting RVs depend not only on $V_j$ and $\Delta F_j$, but also on the line-profile parameters $\zeta$ and $\beta \equiv \sqrt{\beta_\mathrm{S}^2 + \beta_\mathrm{T}^2}$. The model RVs $v_\mathrm{tidal}$ are then derived as $v_\mathrm{tidal} = \mathrm{argmax}_v C(v)$. Given that the derivatives of $C(v)$ can be computed easily, $v_\mathrm{tidal}$ can often be found efficiently as the root of $\mathrm{d}C(v)/\mathrm{d}v=0$ using the Newton--Raphson method. However, we found that this method sometimes fails when the tidal deformation is large and the CCF has many points of inflection.\footnote{The CCF can even have two local maxima when the star is almost filling its Roche lobe, but this does not happen in our solution.} Thus we adopted a slower but more robust procedure: we first cast $\mathrm{d}C(v)/\mathrm{d}v=0$ into the form $v=f(v)$ and solve it iteratively starting from the flux-weighted mean of $V_j$ (which can be readily computed from the quantities in Section \ref{ssec:method_flux}), and then apply two Newton--Raphson steps so that the solution converges efficiently to the peak.
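For concreteness, the following minimal sketch implements this peak-finding procedure in plain {\tt Python}/{\tt numpy} (the production code described in Section \ref{ssec:method_full} uses {\tt JAX}). The arrays {\tt V}, {\tt dF}, and {\tt cos2g} stand for the pixel quantities $V_j$, $\Delta F_j$, and $\cos^2\gamma_j$ from Section \ref{ssec:method_flux}; the function names and iteration counts are illustrative rather than those of the actual pipeline. \begin{verbatim}
import numpy as np

def gauss(dv, var):
    # normal density N(dv; 0, var)
    return np.exp(-0.5 * dv**2 / var) / np.sqrt(2.0 * np.pi * var)

def ccf_derivs(v, V, dF, cos2g, beta2, zeta2):
    # first and second derivatives of the two-component CCF model
    # C(v) = sum_j [N(v - V_j; var1_j) + N(v - V_j; var2_j)] dF_j
    cp, cpp = 0.0, 0.0
    for var in (beta2 + 0.5 * zeta2 * cos2g,
                beta2 + 0.5 * zeta2 * (1.0 - cos2g)):
        dv = v - V
        g = gauss(dv, var) * dF
        cp += np.sum(-dv / var * g)
        cpp += np.sum((dv**2 / var - 1.0) / var * g)
    return cp, cpp

def v_tidal(V, dF, cos2g, beta, zeta, n_fix=30, n_newton=2):
    beta2, zeta2 = beta**2, zeta**2
    var1 = beta2 + 0.5 * zeta2 * cos2g           # radial component
    var2 = beta2 + 0.5 * zeta2 * (1.0 - cos2g)   # tangential component
    v = np.sum(V * dF) / np.sum(dF)  # start from the flux-weighted mean
    for _ in range(n_fix):
        # fixed-point form v = f(v) of dC/dv = 0
        w = (gauss(v - V, var1) / var1 + gauss(v - V, var2) / var2) * dF
        v = np.sum(w * V) / np.sum(w)
    for _ in range(n_newton):
        # Newton-Raphson refinement of dC/dv = 0
        cp, cpp = ccf_derivs(v, V, dF, cos2g, beta2, zeta2)
        v -= cp / cpp
    return v
\end{verbatim} Starting from the flux-weighted mean keeps the fixed-point iteration robust even when the CCF is strongly distorted, while the final Newton--Raphson steps provide fast convergence near the peak.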
We have so far evaluated $v_\mathrm{tidal}$ by modeling the CCF for a single line at a particular wavelength, using the weights $\Delta F_j$ evaluated at a single effective wavelength (see Section \ref{ssec:method_flux}). In reality, however, the CCF is computed from the spectrum with many absorption lines at different wavelengths, and so the actual CCF would be a weighted sum of many CCFs of the form of Equation~\ref{eq:ccf}. Assuming that $\beta$ and $\zeta$ are achromatic, this summation is equivalent to replacing $\Delta F_j$ in Equation~\ref{eq:ccf} with the value integrated over the wavelength range of the spectrum with a certain weight $W(\lambda)$. Since $\Delta F_j$ depends on the wavelength only through the limb- and gravity-darkening, this operation reduces to choosing the limb- and gravity-darkening coefficients $u_1$, $u_2$, and $y$ evaluated in the appropriate band. The detailed shape of the weight $W(\lambda)$ is difficult to model, because it depends on the strengths and number of the lines as well as on the \'{e}chelle orders used for RV extraction, which affect the wavelength region to which the CCF, and hence the derived RV, is most sensitive. We thus introduce this effective band as another parameter that may vary within a physically reasonable range, and take into account its uncertainty in evaluating the coefficients. See Section \ref{ssec:method_priors} for practical implementation of this model. We note that the formulation presented here is not necessarily a unique one but needs to be adjusted depending on the exact procedure adopted to extract RVs. For example, RVs may be derived from features of the CCF other than the peak or by fitting a Gaussian to the CCF; the CCF may be computed using a binary mask rather than a theoretical template spectrum; or the RVs may be derived by directly fitting the observed spectra with a theoretical model. Nevertheless, the framework presented here remains useful for constructing similar models for RVs from different pipelines. \subsection{Full RV Model, Likelihood, and Sampling}\label{ssec:method_full} The RV measured at time $t_i$ was modeled as \begin{equation} v(t_i) = -K \cos\left[2\pi\left({t_i-t_0} \over P\right)\right] + \gamma + v_\mathrm{tidal}(t_i), \end{equation} where $t_0$ is the time of the conjunction at which the companion is in front of the red giant and $\gamma$ denotes the RV zero point. The RV semi-amplitude $K$ is \begin{equation} K = \left( 2\pi G \over P\right)^{1/3}\,{M_\bullet \sin i \over (M_\star+M_\bullet)^{2/3}}, \end{equation} where $G$ is Newton's gravitational constant. We assume that the measurement errors for RVs are independent Gaussians with variances $\sigma_i^2+\sigma_\mathrm{jitter}^2$. Here $\sigma_i$ is an internal error of the $i$th data point, and $\sigma_\mathrm{jitter}$ models any other excess scatter that is not included in $\sigma_i$\footnote{ It is not uncommon for field red giant stars to exhibit RV jitter at up to the $\sim1\,\mathrm{km\,s^{-1}}$ level \citep{2003AJ....125..293C}. Some of the presumably single giant stars observed by \citet{2012AN....333..663S} also show RV variations of $\mathcal{O}(0.1\,\mathrm{km\,s^{-1}})$.} and was inferred along with the other model parameters.
Therefore the log-likelihood for a set of RV measurements $y_i$ at times $t_i$ is given by \begin{equation} \ln \mathcal{L} = -{1\over 2} \sum_i \left\{ {\left[y_i-v(t_i)\right]^2 \over \sigma_i^2+\sigma_\mathrm{jitter}^2} + \ln\left[2\pi(\sigma_i^2+\sigma_\mathrm{jitter}^2)\right] \right\}. \end{equation} The whole code was implemented using {\tt JAX} \citep{jax2018github} and {\tt NumPyro} \citep{bingham2018pyro, phan2019composable}. We assumed the priors described in Table \ref{tab:params} and Section \ref{ssec:method_priors}, and obtained posterior samples for the parameters using Hamiltonian Monte Carlo \citep{DUANE1987216, 2017arXiv170102434B}. We sampled until the resulting chains had split $\hat{R}<1.01$ \citep{BB13945229} for all the parameters. \subsection{Priors}\label{ssec:method_priors} We adopted the priors summarized in Table \ref{tab:params}. They are uninformative unless otherwise specified below. We note that these priors are independent of the information derived from ellipsoidal variations or eclipses of the Balmer lines.\\ \noindent {\it RG mass $M_\star$ and radius $R_\star$} --- We adopt a prior uniform in $[0.5\,M_\odot, 3.0\,M_\odot]$ for the RG mass. For the RG radius, we assume a Gaussian prior $\mathcal{N}(R_\star; 24.0\,R_\odot, 0.9\,R_\odot)$ based on the value derived from the SED, Gaia EDR3 distance \citep{2020arXiv201201533G}, and correction for non-stellar flux using the measured dilution of the absorption lines \citep{2021arXiv210102212J}. Our definition of $R_\star$ (Section \ref{ssec:method_flux}) is not exactly the same as that in \citet{2021arXiv210102212J}, but the difference in the resulting solution is significantly smaller than the prior uncertainty and so can be ignored. \\ \noindent {\it macroturbulence $\zeta$} --- The macroturbulence $\zeta$, along with rotation, shapes the broadening kernel (Equation \ref{eq:theta}) and can affect the RVs derived from CCFs. We adopt a Gaussian prior based on the relation from APOGEE DR13 \citep{2018AJ....156..125H}, which encompasses the values estimated by \citet{2021arXiv210102212J} from spectra and agrees with other measurements for RGB stars \citep[e.g.][]{2008AJ....135..892C}. The effect due to this uncertainty turns out to be minor, but would have been significant if the actual value were close to $v\sin i$ (see Section \ref{ssec:results_dep}). \\ \noindent {\it profile width $\beta$} --- As detailed in Section \ref{ssec:method_rv}, this parameter represents the broadening of the CCF due to the intrinsic widths of the absorption lines and that of the template. Larger values of $\beta$ tend to result in smaller $v_\mathrm{tidal}$ by smearing out the difference between pixels with different line-of-sight velocities $V_j$. The estimate in Section \ref{ssec:method_rv} gives $\beta=2.95\,\mathrm{km\,s^{-1}}$, but the actual value could be larger depending on factors including the exact broadening of the theoretical template used for the analysis, the microturbulence, and the wavelength dependence of the instrumental profile. Thus we adopt a half-normal prior centered on $2.95\,\mathrm{km\,s^{-1}}$ with a width of $1\,\mathrm{km\,s^{-1}}$. \\ \noindent {\it limb- and gravity-darkening coefficients $u_1$, $u_2$, and $y$} --- Limb-darkening reduces the flux contribution from the surface elements with large line-of-sight velocities and reduces the amplitude of $v_\mathrm{tidal}$.
Gravity darkening, on the other hand, enhances the amplitude because the effect decreases the flux from the underrepresented side in velocity space (e.g.~it further reduces the ``red" flux in Figure \ref{fig:schematic}). Their values in our model depend on the effective wavelength band defined through the CCF (see Section \ref{ssec:method_rv}). The effective band is difficult to quantify, but the following assumptions are reasonable: (i) the value is unlikely to be far from that obtained by integrating over the whole spectrum range with a uniform weight, because the lines relevant for RV measurements exist over the entire range, and (ii) the value should be bracketed by the values computed for the shortest and the longest wavelengths $(\lambda_\mathrm{min}, \lambda_\mathrm{max})$ in the spectrum. We implement this prior knowledge as follows. We take $u'g'r'i'z'$-band coefficients theoretically computed with the ATLAS model \citep{2011A&A...529A..75C} for the effective temperature of $4500\,\mathrm{K}$, log surface gravity of $1.5$, and metallicity of $-1$ \citep{2021arXiv210102212J}, and interpolate them over the central wavelengths of the bands to obtain the coefficients $(u_1, u_2, y)$ as a function of wavelength. We then introduce a new parameter $\lambda_\mathrm{eff}$ that represents the effective band. This parameter was sampled from a Gaussian with a central value of $(\lambda_\mathrm{min}+\lambda_\mathrm{max})/2$ and a width of $(\lambda_\mathrm{max}-\lambda_\mathrm{min})/4$. Then we sample the coefficients from three independent Gaussians centered around $u_1(\lambda_\mathrm{eff})$, $u_2(\lambda_\mathrm{eff})$, and $y(\lambda_\mathrm{eff})$ computed using the above deterministic relations, with widths of 0.1 to incorporate uncertainties in the theoretical calculations. This prior is insensitive to the adopted spectroscopic parameters within their uncertainties as evaluated by \citet{2021arXiv210102212J}. \\ \noindent {\it projected rotation velocity $v\sin i$} --- Assuming tidal synchronization, our model automatically computes $v\sin i = 2\pi R_\star\sin i/P$. This has also been evaluated from several different sets of spectra \citep[see][]{2021arXiv210102212J}, but we did {\it not} include this information in the fit for two reasons. First, the interpretation of the ``$v\sin i$" values of a tidally deformed star depends on how exactly they were extracted \citep{1998MNRAS.298..153S}. Second, the measurements also depend on the macroturbulence velocities adopted in those analyses, which are most likely different from each other and are not readily available. Nevertheless, the value predicted from our model turned out to be in reasonable agreement with those existing measurements. \section{Results}\label{sec:results} We modeled the RVs measured by \citet{2012AN....333..663S} from high-resolution ($R=55,000$) spectra obtained with the STELLA \'{e}chelle spectrograph on the 1.2~m STELLA-I telescope at the Teide Observatory \citep{2004AN....325..527S, 2008SPIE.7019E..0LW, 2010AdAst2010E..19S} between November 2006 and April 2010.\footnote{We also performed the same modeling for the RVs from \citet{2014Obs...134..109G}, which have larger uncertainties, and found a consistent result.} The spectra cover the wavelength range 388--882~nm and were reduced using the pipeline described in \citet{2008SPIE.7019E..0LW}. The RVs were determined from an order-by-order cross correlation analysis adopting a synthetic template spectrum \citep{1993sssp.book.....K} that roughly matches the target spectral classification.
Since \citet{2012AN....333..663S} initially identified the system as double-lined, the RVs were derived from the peak of the two-dimensional CCF \citep{2011A&A...531A..89W}, just as we model here. Our priors on $\beta$ and $\lambda_\mathrm{eff}$ were chosen based on this information (Section \ref{ssec:method_priors}). The mean internal RV error is $\approx 0.2\,\mathrm{km\,s^{-1}}$. We removed one outlier at $\mathrm{BJD}=2454073.62965$ because the point deviated from the model by more than $5\sigma$ and was not adequately modeled. We checked that this choice did not make a significant difference in the inferred parameters. The model based on the posterior samples of the parameters is compared with the data in Figure \ref{fig:fit}, and the resulting constraints on the parameters are summarized in Table \ref{tab:params} and Figure \ref{fig:corner}. Our model successfully explains the periodic RV residuals from the Keplerian model, as shown in the middle and bottom panels of Figure \ref{fig:fit}. The shape and amplitude of the tidal RV signal constrain $\cos i$, $M_\bullet/M_\star$, and $a/R_\star$, while the RV semi-amplitude pins down the mass function. Thus $M_\bullet$ is determined from the RV signal and the prior on $R_\star$ alone. The derived masses $M_\bullet=2.95^{+0.17}_{-0.17}\,M_\odot$, $M_\star=0.82^{+0.13}_{-0.14}\,M_\odot$, and inclination $i=82.9^{+7.0}_{-3.3}\,\mathrm{deg}$ (medians and 68.3\% highest density intervals of the marginal posteriors) are all consistent with $M_\bullet=3.04\pm0.06\,M_\odot$, $M_\star=1.00\pm0.07\,M_\odot$, and $i=87.0^{+1.7}_{-1.4}\,\mathrm{deg}$ derived by \citet{2021arXiv210102212J} using both RVs and ellipsoidal variations but without modeling the RV residuals. The result provides additional, independent evidence for the companion mass and the limits on the companion's luminosity derived by \citet{2021arXiv210102212J}, and eliminates the need for a third body as the origin of the non-Keplerian RVs. Our larger error bars can partly be attributed to taking into account the uncertainties of the limb- and gravity-darkening coefficients, and to the RV scatter being slightly larger than the internal error bars. \begin{figure*}[htbp] \centering \epsscale{1.15} \plotone{s12acritb_rveil_n8_s500_models.png} \caption{The observed and modeled RVs as a function of orbital phase. The blue filled circles are the RV data from \citet{2012AN....333..663S}. The orange solid line and the shaded region respectively show the mean and standard deviation of the models computed for posterior samples of the parameters. The gray-shaded region (phases below 0.5 and above 1.5) shows the periodic repetition of the data and the model. {\it (Top)} --- RVs relative to the zero point $\gamma$. {\it (Middle)} --- RVs relative to the Keplerian component plus $\gamma$. {\it (Bottom)} --- RVs relative to the full model.
} \label{fig:fit} \end{figure*} \begin{deluxetable*}{l@{\hspace{.1cm}}cc@{\hspace{.15cm}}c}[!ht] \tablecaption{System Parameters from our RV Modeling.\label{tab:params}} \tablehead{ \colhead{} & \colhead{median \& $68.3\%$ HPDI} & \colhead{$90\%$ HPDI} & \colhead{prior} } \startdata red giant mass $M_\star$ ($M_\odot$) & $0.82^{+0.13}_{-0.14}$ & $[0.62, 1.07]$ & $\mathcal{U}(0.5, 3.0)$\\ red giant radius $R_\star$ ($R_\odot$) & $24.25^{+0.88}_{-0.89}$ & $[22.74, 25.67]$ & $\mathcal{N}(24.0, 0.9, 15)$ \\ mass ratio $M_\bullet/M_\star$ & $3.58^{+0.38}_{-0.54}$ & $[2.85, 4.37]$ & $\mathcal{U}_{\ln} (\exp(0), \exp(3))$\\ companion mass $M_\bullet$ ($M_\odot$) & $2.95^{+0.17}_{-0.17}$ & $[2.68, 3.23]$ & \nodata \\ semi-major axis over red giant radius $a/R_\star$ & $4.142^{+0.091}_{-0.093}$ & $[4.001, 4.301]$ & \nodata\\ RV semi-amplitude $K$ ($\mathrm{km\,s^{-1}}$) & $65.268^{+0.061}_{-0.052}$ & $[65.17, 65.36]$ & \nodata\\ binary mass function ($M_\odot$) & $1.7264^{+0.0049}_{-0.0041}$ & $[1.7188, 1.7337]$ & \nodata\\ time of conjunction $t_0$ ($\mathrm{BJD-2450000}$) & $5575.0653^{+0.0073}_{-0.0071}$ & $[5575.0533, 5575.0767]$ & $\mathcal{U}(5574.5954, 5575.5954)$\\ orbital period $P$ (days) & $59.9376^{+0.0009}_{-0.0010}$ & $[59.9359, 59.9392]$ & $\mathcal{N}(60, 1, 58)$\\ cosine of orbital inclination $\cos i$ & $0.12^{+0.06}_{-0.12}$ & $[0.00, 0.28]$ & $\mathcal{U}(0,1)$\\ profile width $\beta$ ($\mathrm{km\,s^{-1}}$) & $3.93^{+0.42}_{-0.90}$ & $[2.96, 4.99]$ & $\mathcal{N}(2.95, 1, 2.95)$\\ macroturbulence velocity $\zeta$ ($\mathrm{km\,s^{-1}}$) & $5.52^{+0.97}_{-1.36}$ & $[3.78, 7.48]$ & $\mathcal{N}(5.3, 1, 1)$\\ effective wavelength $\lambda_\mathrm{eff}$ ($\mathrm{nm}$) & $626^{+100}_{-112}$ & $[475, 803]$ & $\mathcal{N}(635.0, 123.5, 388)$ \\ limb-darkening coefficient $u_1$ & $0.64^{+0.15}_{-0.20}$ & $[0.37, 0.94]$ & \nodata\\ limb-darkening coefficient $u_2$ & $0.17^{+0.13}_{-0.12}$ & $[-0.04, 0.39]$ & \nodata\\ gravity-darkening coefficient $y$ & $0.46^{+0.09}_{-0.10}$ & $[0.31, 0.64]$ & \nodata\\ RV zero point $\gamma$ ($\mathrm{km\,s^{-1}}$) & $1.885^{+0.031}_{-0.030}$ & $[1.835, 1.937]$ & $\mathcal{U}(-10,10)$\\ RV jitter $\sigma_\mathrm{jitter}$ ($\mathrm{km\,s^{-1}}$) & $0.168^{+0.026}_{-0.030}$ & $[0.121, 0.214]$ & $\mathcal{U}_{\ln} (\exp(-5), \exp(0))$\\ projected rotation velocity $v\sin i$ ($\mathrm{km\,s^{-1}}$) & $20.18^{+0.76}_{-0.88}$ & $[18.80, 21.52]$ & \nodata \\ \enddata \tablecomments{Values listed here report the medians and $68.3\%$/$90\%$ highest posterior density intervals (HPDIs) of the marginal posteriors, which were found to be unimodal for all the parameters. Priors --- $\mathcal{N}(\mu, \sigma, l)$ means the normal distribution centered on $\mu$ and with standard deviation $\sigma$; when $l$ is specified the normal distribution is truncated at the lower limit $l$. $\mathcal{U} (a,b)$ and $\mathcal{U}_{\ln} (a,b)$ are the uniform and log-uniform probability density functions between $a$ and $b$, respectively. Dots indicate the parameters that were computed from the samples of the ``fitted" parameters whose priors were explicitly specified. } \end{deluxetable*} We note that the derived RG mass is sensitive to the adopted prior on the RG radius, while the companion mass is less so, as was also noted by \citet{2021arXiv210102212J}.
We note that the derived RG mass is sensitive to the adopted prior on the RG radius, while the companion mass is less so, as was also noted by \citet{2021arXiv210102212J}. This is because the tidal RVs (as well as ellipsoidal variations) constrain $\cos i$ and the degree of tidal deformation $(M_\bullet/M_\star)(R_\star/a)^3$, and so a decrease in $R_\star$ must be compensated by a larger $M_\bullet/M_\star$, and hence a smaller $M_\star$ for the fixed mass function. This positive correlation between $M_\star$ and $R_\star$ is seen in Figure \ref{fig:corner}. If we instead adopt $R_\star=22.2\pm0.8\,R_\odot$ from the SED modeling without veiling correction by \citet{2021arXiv210102212J}, we find $M_\star=0.63^{+0.06}_{-0.12}\,M_\odot$, $R_\star=22.55^{+0.69}_{-0.75}\,R_\odot$, and $M_\bullet=2.74^{+0.10}_{-0.19}\,M_\odot$. The change in the companion mass $M_\bullet$ is smaller than that in $M_\star$ because of the anti-correlation between $M_\star$ and $M_\bullet/M_\star$ described above. Thus the conclusion that the companion has $\approx 3\,M_\odot$ is robust against the uncertainty in the RG mass. In Figure \ref{fig:lc}, we compare the light curve {\it predicted} from our RV model (computed as $\sum_j \Delta F_j$) with the data from the Kilodegree Extremely Little Telescope \citep[KELT;][]{2007PASP..119..923P}. The data were retrieved from the NASA Exoplanet Archive,\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/}} phase-folded using the mean ephemeris derived from the RV modeling, and averaged into 100 bins. In computing the light curve models, the limb- and gravity-darkening coefficients derived from the RV fit were replaced with the values computed \citep{2011A&A...529A..75C} for the $R$-band, which is similar to the KELT band pass \citep{2007PASP..119..923P}. We show two sets of predictions: the orange solid line and shaded region respectively show the mean and standard deviation of the posterior models assuming no dilution, and the blue dashed line shows the mean posterior model assuming 10\% dilution relative to the RG flux (not the total flux) due to the veiling effect, where the normalization of each model is adjusted to best match the data. Figure \ref{fig:lc} shows that the data barely match the prediction of the zero-dilution model, and that the agreement is better for the model with $10\%$ dilution. Although \citet{2021arXiv210102212J} assumed no dilution in their analysis of the KELT light curve, the $\sim10\%$ dilution favored by our RV model appears to be reasonable given the line dilution analysis by \citet{2021arXiv210102212J} (their Figure 8 left) and the wide effective width ($318\,\mathrm{nm}$) of the KELT band \citep{2007PASP..119..923P}. Thus we conclude that our RV model is consistent with the observed ellipsoidal variations within the uncertainty of the veiling flux and RG radius. This comparison illustrates the importance of the tidal RV signal as an independent means to check any flux contamination in the light curve. \begin{figure*}[htbp] \centering \epsscale{1.15} \plotone{s12acritb_rveil_n8_s500_modelsf.png} \caption{The flux variation {\it predicted} from our RV modeling compared with the KELT light curve. The data were phase-folded using the mean ephemeris derived from the RV modeling and averaged into 100 bins.} \label{fig:lc} \end{figure*}
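The scaling behind this comparison is elementary: adding a constant veiling flux equal to a fraction $d$ of the RG flux leaves the absolute flux variation unchanged but divides its {\it fractional} amplitude by $1+d$. A two-line estimate (ours, purely illustrative):
\begin{verbatim}
# Illustration (ours): fractional suppression of the ellipsoidal
# amplitude by a constant veiling flux d relative to the RG flux.
d = 0.10                        # 10% dilution, as favored by the RV model
suppression = 1.0 / (1.0 + d)   # ratio of diluted to undiluted amplitude
print(f"amplitude scaled by {suppression:.3f}")  # ~0.909
\end{verbatim}
A $\sim10\%$ dilution thus lowers the predicted fractional amplitude by $\sim9\%$, which is the size of the adjustment between the two models shown in Figure \ref{fig:lc}.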
\clearpage \subsection{Sensitivity to the Line-profile Parameters} \label{ssec:results_dep} Our RV model includes two additional parameters, macroturbulence $\zeta$ and profile width $\beta$, which are not required when the tidal RV is modeled as the flux-weighted mean velocity \citep{1976ApJ...203..182W}. These parameters are not constrained by the RV data but are mostly determined by the adopted priors (see Table \ref{tab:params}). Although we believe these priors are reasonable \citep[and $\zeta$ is constrained from the spectra;][]{2021arXiv210102212J}, we show in Figure \ref{fig:dependence} how the model $v_\mathrm{tidal}$ depends on these parameters to gauge their potential impacts on the other inferred parameters. Here the thick gray lines show the model computed for the mean parameter values of the posterior distribution, and the dashed and dotted lines show models where each parameter is perturbed by the values shown in the legends. The results show that it is essential to take their uncertainties into account as we did. In particular, the prior on the macroturbulence parameter needs to be chosen carefully. When the value is a significant fraction of $v\sin i$, as is the case for the dotted curve corresponding to $\zeta\approx 11\,\mathrm{km\,s^{-1}}$, the amplitude of the tidal RV depends significantly on this parameter. This can indeed be the case for some other giant stars. \begin{figure*} \centering \epsscale{0.55} \gridline{ \fig{s12acritb_rveil_n8_s500_zeta.png}{0.5\textwidth}{(a) macroturbulence $\zeta$} \fig{s12acritb_rveil_n8_s500_beta.png}{0.5\textwidth}{(b) profile width $\beta$} } \caption{Dependence of the tidal RV signal on (a) macroturbulence $\zeta$ and (b) profile width $\beta$.} \label{fig:dependence} \end{figure*} \subsection{Sensitivity to the Limb-Darkening Profile} \label{ssec:results_ld} Model atmospheres adopting spherical geometry suggest that the limb of giant stars with low surface gravity can be substantially darker than predicted by simple parametric laws like the quadratic one adopted in the above analysis \citep[see, e.g., Figure 1 of][]{2000A&A...364..265O}. To check the sensitivity of our analysis to the limb-darkening profile, we redid the fit replacing the quadratic profile with one based on the intensity calculations from the PHOENIX model atmospheres with spherical geometry \citep{2013A&A...553A...6H}.\footnote{\url{http://phoenix.astro.physik.uni-goettingen.de}} We used a model computed for the same atmospheric parameters as adopted in Section \ref{ssec:method_priors}, retrieved the intensity values for $78$ different $\cos \gamma$ and for the wavelengths spanning $352.5$--$952.5\,\mathrm{nm}$ at $100\,\mathrm{nm}$ intervals, and linearly interpolated them to compute $I(\cos\gamma, \lambda_\mathrm{eff})$ in our RV model. We also incorporated a $10\%$ fractional uncertainty in the limb-darkening profile to take into account uncertainties in the model and in the adopted atmospheric parameters. For this analysis, we adopted 3072 {\tt HEALPix} pixels to ensure numerical stability for the updated limb-darkening profile with a sudden intensity drop at the limb. We found the results consistent with the above analysis using the quadratic law, including $M_\bullet=2.97^{+0.15}_{-0.17}\,M_\odot$, $i=82.8^{+7.2}_{-3.3}\,\mathrm{deg}$, and $M_\star=0.83^{+0.11}_{-0.15}\,M_\odot$. Thus we conclude that our result is robust against the uncertainty of the limb-darkening profile. This appears reasonable given that the deviation from the quadratic law occurs mainly at $\cos\gamma\lesssim 0.25$, where the PHOENIX atmospheres predict almost zero intensities. The severe darkening results in the loss of stellar flux in the outermost $\approx 1-\sqrt{1-0.25^2}=3\%$ of the stellar disk. 
This is equivalent to a $<1\,\mathrm{km\,s^{-1}}$ change in $v\sin i$ and so plays a minor role in shaping the line profile. On the other hand, we found a slightly larger amplitude for the tidal RVs computed as the flux-weighted mean when the profile from the PHOENIX model was adopted. \section{Summary and Discussion} \label{sec:summary} We showed that the periodic RV residuals of V723 Mon are quantitatively explained by a model incorporating tidal deformation of the RG star and the associated distortion of the absorption line profile. Our RV modeling constrains the companion mass to be $M_\bullet = 2.95\pm0.17\,M_\odot$ and the orbital inclination to be $i=82.9^{+7.0}_{-3.3}\,\mathrm{deg}$. This provides additional evidence for the low-mass black hole companion in the mass gap as inferred by \citet{2021arXiv210102212J}, and eliminates the need for a third body to explain the periodic RV residuals. The derived inclination indicates that the companion should be eclipsed by the red giant, and thus also supports the limits on the companion's luminosity based on the absence of eclipses \citep{2021arXiv210102212J}. Importantly, the constraint is independent of the ellipsoidal variations and the eclipses of Balmer emission, which both include signals of unclear physical origin. Indeed, our RV modeling mildly favors $\sim10\%$ flux dilution in the KELT band that was not taken into account in the analysis by \citet{2021arXiv210102212J}. This illustrates an advantage of the tidal RV signal as a means to measure component masses in tidally interacting, single-lined binaries: any contaminating non-stellar flux, as long as its spectrum is continuous, does not significantly affect the positions and shapes of the absorption lines from which RVs are measured. The same signal will be useful for ``dynamical'' mass measurements in other non-eclipsing post main-sequence binaries, including ones where the companion is not a black hole, even without photometric light curves. The amplitude of $v_\mathrm{tidal}$ is of order $(v\sin i)\cdot q_\mathrm{comp} \cdot (R_\star/a)^3$, where $q_\mathrm{comp}$ is the companion mass relative to the star for which RVs are measured. Thus for a synchronized and circularized binary, its amplitude $K_\mathrm{tidal}$ relative to the semi-amplitude of the orbital RV $K$ is simply given by \begin{equation} {K_\mathrm{tidal} \over K} \sim \left(R_\star \over a \right)^4(1+q_\mathrm{comp}), \end{equation} which is $\mathcal{O}(1\%)$ for V723 Mon. This estimate suggests that the amplitude of $v_\mathrm{tidal}$ can well reach $\mathcal{O}(100\,\mathrm{m\,s^{-1}})$ even in less extreme systems than V723 Mon. This is not the precision usually required for binary studies, but precisions better than this are routinely achieved in Doppler searches for exoplanets. This work motivates such high-precision RV measurements for binaries exhibiting strong tidal interactions. We also echo the original note by \citet{1941PNAS...27..168S} that the tidal signal could be relevant for interpreting the eccentricities of such binaries when their precise values matter, e.g., for studying the details of tidal orbital circularization \citep[e.g.][]{1995A&A...296..709V, 2018ApJ...867....5P} or for a precise evaluation of the apsidal precession rate \citep[e.g.][]{2017AJ....154....4P}. 
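Returning to the amplitude estimate above, a back-of-the-envelope evaluation (ours) with the posterior medians of Table \ref{tab:params} makes the numbers explicit:
\begin{verbatim}
# Rough check (ours) of K_tidal/K for V723 Mon using Table 1 medians.
a_over_R = 4.142   # a / R_star
q_comp = 3.58      # M_bh / M_star
K = 65.27          # orbital RV semi-amplitude [km/s]
ratio = (1.0 / a_over_R) ** 4 * (1.0 + q_comp)
print(f"K_tidal/K ~ {ratio:.3f}")           # ~0.016
print(f"K_tidal   ~ {ratio * K:.2f} km/s")  # ~1 km/s
\end{verbatim}
The resulting $K_\mathrm{tidal}\sim 1\,\mathrm{km\,s^{-1}}$ is consistent with the amplitude of the periodic residuals in the middle panel of Figure \ref{fig:fit}.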
The mass measurement with the tidal RVs will be especially valuable for non-eclipsing systems where the ellipsoidal variations are swamped or contaminated by other light sources, including quasi-periodic modulations due to active regions (spots) that have a period similar to the orbital one in synchronized binaries. Although the presence of spots may also make it challenging to measure precise RVs, line-profile distortion by spots is usually localized in velocity space, and so could still be distinguished from the global distortion by tides via a careful analysis of the CCF shapes. Even in the absence of spots, direct modeling of the line-profile variations, as has been proposed by \citet{1998MNRAS.298..153S}, in principle provides more information and is less model-dependent than modeling the RV time series alone, as we have done. Such an analysis is beyond the scope of this paper. \acknowledgements The authors thank Chris Kochanek for useful comments on an early manuscript of the paper, and the anonymous referee for an important note on the limb-darkening profile. The authors are also grateful to Todd Thompson, Kris Stanek, Tharindu Jayasinghe, Klaus G. Strassmeier, and Michael Weber for sharing the STELLA RV data as well as the information on the relevant references. KM thanks Hajime Kawahara for introducing {\tt JAX} and {\tt NumPyro} and for sharing computational resources. Work by TH was supported by JSPS KAKENHI Grant Number JP19K14783. \software{ corner \citep{corner}, HEALPix \citep{2005ApJ...622..759G}, healpy \citep{Zonca2019}, JAX \citep{jax2018github}, NumPyro \citep{bingham2018pyro, phan2019composable} }
\section{Abstract} Inverse Compton scattering appears to play a more important role in the diffuse Galactic continuum emission than previously thought, from MeV to GeV energies. We compare models having a large inverse Compton component with EGRET data, and find good agreement in the longitude and latitude distributions at low and high energies. We test an alternative explanation for the $\ge$1 GeV $\gamma$-ray\ excess, the hard nucleon spectrum, using secondary antiprotons and positrons. \\ \section{Introduction.} We are developing a model which aims to reproduce self-consistently observational data of many kinds related to cosmic-ray origin and propagation: direct measurements of nuclei, electrons and positrons, gamma rays, and synchrotron radiation (Strong 1998)\footnote{For more details see {\it http://www.gamma.mpe-garching.mpg.de/$\sim$aws/aws.html}}. Here we concentrate on the inverse Compton (IC) contribution to the diffuse Galactic continuum gamma-ray emission. Recent results from both COMPTEL and EGRET indicate that IC scattering is a more important contributor to the diffuse emission than previously believed. COMPTEL results (Strong 1997) for the 1--30 MeV range show a latitude distribution in the inner Galaxy which is broader than that of HI and H$_2$, so that bremsstrahlung of electrons on the gas does not appear adequate and a more extended component such as IC is required. The broad distribution is the result of the large $z$-extent of the interstellar radiation field, which can interact with cosmic-ray electrons up to several kpc from the plane. At much higher energies, the puzzling excess in the EGRET data above 1 GeV relative to that expected for $\pi^0$-decay has been suggested to originate in IC scattering (e.g., Pohl 1998) from a hard interstellar electron spectrum. We test this scenario with comparisons of the predicted gamma-ray sky with EGRET data. We also test an alternative hypothesis, the hard nucleon spectrum, using antiprotons and positrons. \\ \section{Models.} We consider a propagation model with reacceleration using parameters derived from isotopic composition (Strong 1998). Energy losses for electrons by ionization, Coulomb scattering, bremsstrahlung, IC and synchrotron are included. A new calculation of the interstellar radiation field (ISRF) has been made based on stellar population models and IRAS and COBE data. An investigation of the effect of the anisotropy of the ISRF has shown that this has a significant influence on the intensity and distribution of the IC radiation. Photons moving away from the observer are scattered anisotropically, enhancing the radiation for example at high latitudes. This effect is included in our models. The $\pi^0$-decay gamma rays are calculated explicitly from the propagated proton and Helium spectra (Dermer 1986, Moskalenko 1998a). The electron injection spectral index is taken as $-1.8$ in the case of reacceleration models; this value is necessary to obtain consistency with the radio synchrotron spectrum towards the Galactic pole. Without reacceleration an injection index near $-2.0$ is required. Figure~1 shows the electron spectrum at $R_\odot = 8.5$ kpc in the disk for these models, and the synchrotron spectrum towards the Galactic pole. Following Pohl (1998), for the present study we do not require consistency with the locally measured electron spectrum above 10 GeV since the rapid energy losses mean that this is not necessarily representative of even the local interstellar average. (Agreement with the locally measured electron spectrum would require a break in the injection spectrum at a few GeV, as has often been adopted in the past). A halo size (distance from plane to boundary) of $z_h$=4 kpc is adopted, consistent with the $^{10}$Be analysis in the accompanying paper (Strong 1998). \\
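The role of the synchrotron constraint can be seen from the standard optically thin relation between an electron spectrum $N(E)\propto E^{-p}$ and its synchrotron flux $S_\nu\propto\nu^{-\alpha}$ with $\alpha=(p-1)/2$; the following schematic check is our own illustration, not part of the model code:
\begin{verbatim}
# Schematic illustration (ours): synchrotron spectral index alpha
# implied by an electron spectrum N(E) ~ E^-p, alpha = (p - 1) / 2.
for label, p in [("injection index 1.8 (before losses)", 1.8),
                 ("loss-steepened index 1.8 + 1", 2.8)]:
    alpha = (p - 1.0) / 2.0
    print(f"{label}: alpha = {alpha:.2f}")
# alpha steepens from 0.4 to 0.9; this is the behaviour tested against
# the polar synchrotron spectrum in Fig. 1 (right).
\end{verbatim}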
\begin{figure}[t!] \begin{picture}(148,69)(5,-59) \put(0,0){ \makebox(75,0)[tl]{ \psfig{file=fig1a.ps,% height=62mm,width=75mm,clip=}}} \put(75,0){ \makebox(75,0)[tl]{ \psfig{file=fig1b.ps,% height=62mm,width=75mm,clip=}}} \end{picture} \small Fig. 1. {\it Left:} Electron spectrum at $R_\odot$=8.5 kpc in the plane, for models with and without reacceleration. Data points: direct measurements, see references in Moskalenko (1998a). {\it Right:} Synchrotron spectrum towards NGP for these electron spectra, compared to observational data. \end{figure} \section{Comparison with EGRET data.} Figure~2 shows the model latitude and longitude $\gamma$-ray\ distributions for the inner Galaxy for 70--100 MeV, convolved with the EGRET point-spread function, compared to EGRET Phase 1--4 data. The separate components are also shown. In this model the contributions from IC, bremsstrahlung and $\pi^0$-decay are about equal at 100 MeV. (Note that point sources such as the Vela pulsar have not been removed from the data, but we are here only interested in the large-scale profiles). The comparison shows that a model with a large IC component can indeed reproduce the data. This energy range is close to that in which COMPTEL data led to similar conclusions (Strong 1997). Turning to high energies, Figure~3 shows profiles for 4000--10000 MeV; again the comparison shows that the adoption of a hard electron injection spectrum is a viable explanation for the $>$1 GeV excess. The latitude distribution here is not as wide as at low energies owing to the rapid energy losses of the electrons, so that an observational distinction between a gas-related $\pi^0$ component from a hard nucleon spectrum and the present IC model does not seem possible. \\ \begin{figure}[t!] \begin{picture}(148,73)(5,-61) \put(0,0){ \makebox(75,0)[tl]{ \psfig{file=fig2a.ps,% height=62mm,width=75mm,clip=}}} \put(75,0){ \makebox(75,0)[tl]{ \psfig{file=fig2b.ps,% height=62mm,width=75mm,clip=}}} \end{picture} \small Fig. 2. {\it Left:} Latitude distribution for 70--100 MeV as measured by EGRET, compared to reacceleration model with {\it hard electron} spectrum. {\it Right:} Longitude distribution for $|b|<5^\circ$. \end{figure} \begin{figure}[thb!] \begin{picture}(148,64)(5,-61) \put(0,0){ \makebox(75,0)[tl]{ \psfig{file=fig3a.ps,% height=62mm,width=75mm,clip=}}} \put(75,0){ \makebox(75,0)[tl]{ \psfig{file=fig3b.ps,% height=62mm,width=75mm,clip=}}} \end{picture} \small Fig. 3. {\it Left:} Latitude distribution for 4000--10000 MeV as measured by EGRET, compared to reacceleration model with {\it hard electron} spectrum. {\it Right:} Longitude distribution for $|b|<5^\circ$. \end{figure} \begin{figure}[t!] \begin{picture}(148,60)(5,-59) \put(0,0){ \makebox(75,0)[tl]{ \psfig{file=fig4a.ps,% height=62mm,width=75mm,clip=}}} \put(75,0){ \makebox(75,0)[tl]{ \psfig{file=fig4b.ps,% height=62mm,width=75mm,clip=}}} \end{picture} \small Fig. 4. {\it Left:} Gamma-ray spectrum of inner Galaxy as measured by EGRET (Strong 1996b) compared to model with a {\it hard nucleon} spectrum. {\it Right:} $\bar{p}/p$ ratio for the `normal' spectrum (solid lines) and for the hard nucleon spectrum (dashes) used for the $\gamma$-ray\ calculation shown on the left. 
The thick lines show the case with reacceleration. Dotted lines: calculations of Simon (1998). Data from: \rule[0pt]{1.5ex}{1.5ex}\ Boezio (1997), {\large $\circ$} Bogomolov (1987,1990), $\bigtriangleup$ Hof (1996), \raisebox{0.7ex}{\framebox[1.2ex]{}} Mitchell (1996), $\diamondsuit$ Moiseev (1997). \end{figure} \section{Test for a hard nucleon spectrum using antiprotons and positrons.} Another possible origin for the $>$1 GeV excess could be an interstellar nucleon spectrum which is harder than that observed locally (e.g., Hunter 1997, Gralewicz 1997, Mori 1997). Figure~4 (left) illustrates such a possibility; here we have used a nucleon injection spectrum which is a power law in momentum with index $-1.7$ (no reacceleration), giving after propagation a $\gamma$-ray\ spectrum which agrees reasonably with the EGRET data. \begin{wrapfigure}[15]{r}[0mm]{73.2mm} \begin{picture}(68,61)(5,-3) \put(3,0){\makebox(68,60)[l]% {\psfig{file=fig5.ps,width=73mm,height=60mm,clip=}}} \end{picture} \vskip -7mm \small Fig. 5. Spectra of secondary positrons for `normal' (thin line) and hard (dashes) nucleon spectra (no reacceleration). Thick line: `normal' case with reacceleration. Data from Barwick (1998). \end{wrapfigure} The $\bar{p}/p$ ratio expected for this case and the standard model compared to recent data is shown in Figure~4 (right). Our standard model calculation agrees with that of Simon (1998). For the case of a hard nucleon spectrum the ratio is still consistent with the data at low energies but becomes $\sim$4 times higher at 10 GeV. Up to 3 GeV it does not conflict with the data, given their large error bars. It is however larger than the point at 3.7--19 GeV (Hof 1996) by about $5\sigma$. On the basis of the $\bar{p}/p$ data point above 3 GeV we already seem able to exclude the hard nucleon spectrum, but confirmation of this conclusion must await more accurate data at high energies. Figure 5 shows the interstellar positron spectrum, again for the standard and hard nucleon spectra. The formalism used is given in Moskalenko (1998a). The flux for the standard case agrees with recent data (Barwick 1998). For the hard nucleon spectrum the flux is higher than observed by a factor $\sim$4; this provides further evidence against a hard nucleon spectrum. However this test is less direct than the $\bar{p}$ test due to the difference in particle type and the large effect of energy losses. \smallskip \secref{References.} \setlength{\parindent}{-5mm} \begin{list}{}{\topsep 0pt \partopsep 0pt \itemsep 0pt \leftmargin 5mm \parsep 0pt \itemindent -5mm} \item S.W.~Barwick et al.\ {\it Ap.J.} 498 (1998) 779--789. \item M.~Boezio et al.\ {\it Ap.J.} 487 (1997) 415--423. \item E.A.~Bogomolov et al.\ {\it 20th ICRC.} 2 (1987) 72--75. \item E.A.~Bogomolov et al.\ {\it 21st ICRC.} 3 (1990) 288--290. \item C.D.~Dermer.\ {\it A\&A.} 157 (1986) 223--229. \item P.~Gralewicz et al.\ {\it A\&A.} 318 (1997) 925--930. \item M.~Hof et al.\ {\it Ap.J.} 467 (1996) L33--L36. \item S.D.~Hunter et al.\ {\it Ap.J.} 481 (1997) 205--240. \item J.W.~Mitchell et al.\ {\it Phys.Rev.Let.} 76 (1996) 3057--3060. \item A.~Moiseev et al.\ {\it Ap.J.} 474 (1997) 479--489. \item M.~Mori.\ {\it Ap.J.} 478 (1997) 225--232. \item I.V.~Moskalenko and A.W.~Strong.\ {\it Ap.J.} 493 (1998a) 694--707. \item I.V.~Moskalenko, A.W.~Strong and O.~Reimer.\ {\it A\&A.} (1998b) submitted.\ (astro-ph/9808084) \item M.~Pohl and J.A.~Esposito.\ {\it Ap.J.} 507 (1998) in press. \item M.~Simon, A.~Molnar and S.~Roesler.\ {\it Ap.J.} 499 (1998) 250--257. 
\item A.W.~Strong et al.\ {\it A\&AS.} 120 (1996a) C381--C387. \item A.W.~Strong and J.R.~Mattox.\ {\it A\&A.} 308 (1996b) L21--L24. \item A.W.~Strong et al.\ {\it 4th Compton Symp.\ AIP 410.} Ed.\ C.D.~Dermer et al.\ 1198--1202.\ AIP.\ NY.\ (1997). \item A.W.~Strong and I.V.~Moskalenko.\ {\it 16th ECRS.} (1998) OG-2.5.\ (astro-ph/9807289) \end{list} \end{document}
\section{Introduction}\label{sec:intro} Let $\boldsymbol k$ be a field of characteristic $0$. A unital commutative associative $\boldsymbol k$-algebra $A$ is called a \emph{Poisson algebra} if it is endowed with a Lie bracket $\{\:,\:\}:A\times A\to A$ such that $\{a,bc\}=b\{a,c\}+\{a,b\}c$ for all $a,b,c\in A$. The bracket $\{\:,\:\}$ is referred to as the \emph{Poisson bracket}. If $S$ is a Poisson algebra, an ideal $I\subseteq S$ with respect to the multiplication is called a \emph{Poisson ideal} if $\{I,S\}\subseteq I$. If $I$ is a Poisson ideal, the Poisson bracket descends to the quotient algebra $S/I$. In this paper we study Poisson algebras of the form $A=S/I$ where $I$ is a finitely generated Poisson ideal in $S$. Throughout the paper we focus on the case when $S$ is a polynomial algebra $\boldsymbol k[x_1,x_2,\dots,x_n]$. However, many results are also valid for other Poisson algebras, e.g., algebras of regular functions on regular Poisson varieties or algebras of smooth functions on Poisson manifolds. For aesthetic reasons, we mainly investigate examples where there is a nonstandard $\mathbb{Z}_{\ge 0}$-grading on $S$. This means that to each of the variables $x_i$ is attached an \emph{internal degree} $\deg(x_i)\ge 1$ such that the ideal $I$ is generated by elements that are homogeneous with respect to the internal degree. The Poisson bracket on $S$ is uniquely determined by the brackets between the coordinates, denoted \begin{align}\label{eq:Lambda} \{x_i,x_j\}=:\Lambda_{ij}\in S. \end{align} We use this notation throughout the paper. We typically assume that the bracket respects the internal degree so that $\deg(\Lambda_{ij})=\deg(\{\:,\:\})+\deg(x_i)+\deg(x_j)$. An element $f\in S$ is called a \emph{Casimir} if $\{f,\:\}$ acts trivially on $S$. The set $\mathrm H^0_{\mathrm{Poiss}}(S)$ of Casimirs in $S$ is called the \emph{Poisson center} of $S$. The reason for the notation is that it can be identified with the zeroth Poisson cohomology of $S$ (cf. Appendix \ref{ap:Poissoncohomology}). We say that the ideal $I\subseteq S$ is \emph{generated by Casimirs} if there exist generators $f_1,\dots, f_k$ for $I$ that are Casimirs. More generally, an ideal $I$ generated by $f_1,\dots, f_k\in S$ is Poisson if and only if there exist $Z_{i\mu}^\nu\in S$ with $i\in\{1,2,\dots,n\}$ and $\mu, \nu\in \{1,2,\dots,k\}$ such that \begin{align}\label{eq:theZs} \{x_i,f_\mu\}=\sum_\nu Z_{i\mu}^\nu f_\nu. \end{align} The constructions in this paper assume a fixed choice of such a tensor $Z_{i\mu}^\nu\in S$. If $f_1,\dots, f_k\in S$ form a complete intersection, then $Z_{i\mu}^\nu$ is unique up to $I$ (cf. Section \ref{sec:ci}). If the variables have internal degree $\deg(x_i)\ge 1$, the $f_\mu$'s are homogeneous, and the bracket respects the internal degree, then the $Z_{i\mu}^\nu$'s should be chosen such that $\deg(Z_{i\mu}^\nu)=\deg(x_i)+\deg(f_\mu)+\deg(\{\:,\:\})-\deg(f_\nu)$. If $A$ is a Poisson algebra over $\boldsymbol k$, then there is a Lie bracket, the so-called \emph{Koszul bracket} (cf. \cite{Huebschmann}), on the module of K\"ahler differentials $\Omega_{A|\boldsymbol k}$ given by the formula \begin{align} \label{eq:Koszulbr} [a_1\mathrm{d} a_2,b_1\mathrm{d} b_2]:=a_1\{a_2,b_1\}\mathrm{d} b_2+b_1\{a_1,b_2\}\mathrm{d} a_2+a_1b_1\mathrm{d}\{a_2,b_2\} \end{align} for $a_1,a_2,b_1,b_2\in A$. If $\operatorname{Spec}(A)$ is smooth, then $\Omega_{A|\boldsymbol k}$ with this bracket forms a Lie algebroid over $\operatorname{Spec}(A)$ in the following sense. 
\begin{definition}\label{def:liealgebroid} A \emph{Lie algebroid over $\operatorname{Spec}(A)$} is a projective $A$-module $L$ together with a Lie bracket $[\:,\:]$ and an $A$-linear morphism of Lie algebras $\rho:L\to D_A$, $X\mapsto \rho_X$, to the module of derivations $D_A:=\mathrm{Der}(A,A)$ such that $[X,aY]=a[X,Y]+(\rho_X a)Y$ for $a\in A$ and $X,Y\in L$. The morphism $\rho$ is referred to as the \emph{anchor}. \end{definition} If we drop the assumption that $L$ is projective, we say that $(A,L)$ forms a \emph{Lie-Rinehart pair} over $\boldsymbol k$ (see \cite{Rinehart, Huebschmann}). If $A$ is a Poisson algebra, then $(A,\Omega_{A|\boldsymbol k})$ forms a Lie-Rinehart pair with anchor $\rho_{a\mathrm{d} b}(c)=a\{b,c\}$ for $a,b,c\in A$ \cite{Huebschmann}. Whenever $\operatorname{Spec}(A)$ is non-smooth, the module of K\"ahler differentials $\Omega_{A|\boldsymbol k}$ is \emph{non-projective} (see \cite{AvramovHerzog}), which makes its homological algebra more intricate. The objective of this paper is to lift the Koszul bracket to the cotangent complex $\mathbb L_{A|\boldsymbol k}$ in the form of an $L_\infty$-algebroid over $\operatorname{Spec}(A)$ (for details on the cotangent complex see Section \ref{sec:cotangent}). The principal tool to achieve this is Theorem \ref{thm:homotopyPoisson} below, which provides a $P_\infty$-algebra structure on a resolvent $R$ of $A$ (for information on resolvents see Section \ref{sec:cotangent}). Before stating Theorem \ref{thm:homotopyPoisson} let us recall some definitions. An \emph{$L_\infty$-algebra} is a graded vector space $L=\oplus_{k\in \mathbb{Z}}L^k$ over $\boldsymbol k$, whose degree is denoted $|\:|$, with a sequence $([\:,\dots,\:]_m)_{m\ge 1}$ of $\boldsymbol k$-linear operations $[\:,\dots,\:]_m:\bigwedge^m L\to L$ of degree $2-m$ such that for all $m\ge 1$ \begin{align}\label{eqn:Linftyalgebra} \sum_{p+q=m+1} \sum_{\sigma \in \operatorname{UnSh}_{q, p-1}} (-1)^\sigma \varepsilon(\sigma,\boldsymbol x)(-1)^{q(p-1)} \left[[x_{\sigma(1)},\dots,x_{\sigma(q)}]_q,x_{\sigma(q+1)},\dots,x_{\sigma(m)}\right]_p=0 \end{align} for homogeneous $x_1,\dots,x_{m}\in L$. Here $\varepsilon(\sigma,\boldsymbol x)=(-1)^{\sum_{i<j,\sigma(i)>\sigma(j)}|x_i||x_j|}$ is the \emph{Koszul sign} of the permutation $\sigma$, $(-1)^\sigma$ its sign, and $\operatorname{UnSh}_{q,p-1}$ stands for the $(q,p-1)$-unshuffle permutations, i.e., the set of permutations $\sigma$ of $\{1,2,\dots,m\}$ such that $ \sigma(1)<\sigma(2)<\dots<\sigma(q)$ and $\sigma(q+1)<\sigma(q+2)<\dots<\sigma(m)$. Note that $[\:]_1$ is a differential. The grading $|\:|$ is referred to as the \emph{cohomological grading}. By an \emph{$L_\infty[1]$-algebra} structure on a graded vector space $E=\oplus_{k\in \mathbb{Z}}E^k$ over $\boldsymbol k$ with degree $|\:|$ we mean a sequence $(l_m)_{m\ge 1}$ of $\boldsymbol k$-linear operations $l_m:\operatorname{S}^m E\to E$ of degree $|l_m|=1$ such that for all $m\ge 1$ \begin{align}\label{eqn:Linfty1algebra} \sum_{p+q=m+1} \sum_{\sigma \in \operatorname{UnSh}_{q, p-1}} \varepsilon(\sigma,\boldsymbol e) l_p(l_q(e_{\sigma(1)},\dots,e_{\sigma(q)}),e_{\sigma(q+1)},\dots,e_{\sigma(m)})=0 \end{align} for homogeneous $e_1,\dots,e_{m}\in E$. 
An \emph{$L_\infty[1]$-algebra} structure on $L[1]$ is equivalent to an \emph{$L_\infty$-algebra} structure on $L$ by putting \begin{align}\label{eq:decalage} [x_1,\dots, x_m]_m=(-1)^{\sum_{i=1}^m(m-i)|x_i|} l_m(x_1[1],\dots, x_m[1])[-1], \end{align} where $\downarrow:L\to L[1]$, $x\mapsto \downarrow x=x[1]$ is the identity seen as a map of degree $-1$. A more conceptual way to write this is $[\:,\dots,\:]_m=\uparrow\circ l_m\circ \downarrow^{\otimes m}$ where $\uparrow$ is the inverse of $\downarrow$. For details the reader may consult, e.g., \cite{Reinhold}. The notion of a left $L_\infty$-module goes back to \cite{LadaMarkl, Lada}. By a \emph{left $L_\infty$-module} over the $L_\infty$-algebra $L$ with brackets $[\:,\dots,\:]_m:\bigwedge^m L\to L$, $m\ge 1$, we mean a graded vector space $M=\oplus_k M^k$ over $\boldsymbol k$ with degree $|\:|$ and a sequence of operations $\rho_m:\bigwedge^{m-1} L \otimes M\to M$, $m\ge 1$, of degree $|\rho_m|=2-m$ such that for each $m\ge 1$ \begin{align}\label{eqn:Linftymodule} \sum_{p+q=m+1} \sum_{\sigma \in \operatorname{UnSh}_{q, p-1}} (-1)^\sigma\varepsilon(\sigma,\boldsymbol x)(-1)^{q(p-1)} k_p(k_q(x_{\sigma(1)},\dots,x_{\sigma(q)}),x_{\sigma(q+1)},\dots,x_{\sigma(m)})=0 \end{align} for homogeneous $x_1,\dots,x_{m-1}\in L$ and $x_m\in M$, where $k_m:\bigwedge^{m}(L \oplus M)\to L \oplus M$, $m\ge 1$, is the unique extension of the operations $[\:,\dots,\:]_m$ and $\rho_m$ such that $k_m$ vanishes when two or more arguments are from $M$. The definition entails that $\rho_1^2=0$, so that $(M,\rho_1)$ is actually a cochain complex. The next definition is closely related to what has been called in \cite{Vitagliano} a \emph{strong homotopy Lie-Rinehart algebra}. \begin{definition} \label{def:Linftyalgebroid} By an \emph{$L_\infty$-algebroid over $\operatorname{Spec}(A)$} we mean an $L_\infty$-algebra $L=\oplus_{k\in \mathbb{Z}}L^k$ with brackets $([\:,\dots,\:]_m)_{m\ge 1}$, such that each $L^k$ is an $A$-module, together with operations $\rho_m:\bigwedge^{m-1} L \otimes A\to A$, $m\ge 1$, that make $A$ a left $L_\infty$-module over $L$ satisfying the following properties \begin{enumerate} \item $\partial:=[\:]_1$ is $A$-linear and $\rho_1=0$, \item for each $k\in \mathbb{Z}$, $L^k$ is a finitely generated projective $A$-module, \item for all $m\ge 2$, homogeneous $x_1,\dots, x_m\in L$, and $a,b\in A$ we have \begin{align*} [x_1,\dots, x_{m-1},ax_m]_m&=\rho_m(x_1,\dots, x_{m-1},a)x_m+a\,[x_1,\dots, x_{m-1},x_m]_m,\\ \rho_m(x_1,\dots, x_{m-1},ab)&=\rho_m(x_1,\dots, x_{m-1},a)b+a\,\rho_m(x_1,\dots, x_{m-1},b),\\ \rho_m(ax_1,\dots, x_{m-1},b)&=a\rho_m(x_1,\dots, x_{m-1},b). \end{align*} \end{enumerate} We refer to the collection of operations $(\rho_m)_{m\ge 1}$ as the \emph{homotopy anchor}. In the special case when $[\:,\dots,\:]_m=0$ for all $m\ge 3$ and $\rho_m=0$ for all $m\ge 2$ we say that $L$ is a \emph{dg Lie algebroid} over $\operatorname{Spec}(A)$. \end{definition} Our definition of an $L_\infty$-algebroid is more general than the one used in \cite{Strobl}. When $\rho_m=0$ for $m\ge 3$ we recover their definition after applying \eqref{eq:decalage}. A more general definition has been suggested in \cite{Kjeseth} under the name homotopy Lie-Rinehart pair. We also need to recall the notion of a $P_\infty$-algebra (see, for example, \cite{VoronovHigherDerived,CF}). 
\begin{definition} A \emph{$P_\infty$-algebra} is a supercommutative $\mathbb{Z}$-graded $\boldsymbol k$-algebra $R=\oplus_k R^k$ with degree $|\:|$ that is also an $L_\infty$-algebra with brackets $\{\:,\dots,\:\}_m: \bigwedge^m R\to R$ such that the Leibniz rule \begin{align*} \{ab,a_2,\dots, a_m\}_m=a\{b,a_2,\dots, a_m\}_m+(-1)^{|a||b|}b\{a,a_2,\dots, a_m\}_m \end{align*} holds for $a,b,a_2,\dots, a_m \in R$ with $a,b$ homogeneous. If all $\{\:,\dots,\:\}_m$ are zero for $m\ge 3$, we say $R$ is a \emph{dg Poisson} algebra. \end{definition} Evidently, by symmetry, the Leibniz rule holds for any argument of $\{\:,\dots,\:\}_m$. To construct the $P_\infty$-algebra of our main theorem we use the \emph{higher derived brackets} of T. Voronov \cite{VoronovHigherDerived}. The notions of resolvent and cotangent complex will be reviewed in Section \ref{sec:cotangent}. \begin{theorem} \label{thm:homotopyPoisson} Let $I\subset S=\boldsymbol k[x_1,x_2,\dots,x_n]$ be a Poisson ideal, let $A=S/I$, and let $f_1,\dots, f_k$ be generators for $I$. Let $R$ be a resolvent of $S\to A$ on the generators $f_1,\dots, f_k$. Then there is the structure of a $P_\infty$-algebra $(\{\:,\dots,\:\}_m)_m$ on $(R,\partial)$ such that $\partial=\{\:\}_1$ and the quasi-isomorphism $R\to A$ is compatible with the brackets. If the internal degree of the Poisson bracket on $S$ is $p$, the internal degree of the $n$-ary bracket on $R$ is $(n-1)p$. If the generators $f_1,\dots, f_k$ are Casimir and form a complete intersection, the $P_\infty$-algebra structure is trivial in the sense that the only nonzero Poisson brackets are $\partial=\{\:\}_1$ and the bracket $\{\:,\:\}=\{\:,\:\}_2$ on $S$. \end{theorem} In the case of complete intersections a version of the theorem has been suggested in \cite{FresseCI}. In order to get a clear picture of the construction, we felt it necessary to delve into the details and to work out examples. The higher Koszul brackets are deduced from the $P_\infty$-algebra as a corollary. \begin{corollary}\label{cor:homotopyLiealgebroid} Under the assumptions of Theorem \ref{thm:homotopyPoisson} there is the structure of an $L_\infty$-algebroid on the cotangent complex $\mathbb L_{A|\boldsymbol k}=A\otimes_R \Omega_{R|\boldsymbol k}$ such that the morphism $\mathbb L_{A|\boldsymbol k}\to\Omega_{A|\boldsymbol k}$ is compatible with the brackets and the anchors. If the generators $f_1,\dots, f_k$ are Casimir and form a complete intersection, the $L_\infty$-algebroid structure is trivial in the sense that the only nonzero Lie brackets are given by $\left[\mathrm{d} x_i,\mathrm{d} x_j\right]=\mathrm{d}\{x_i,x_j\}$. If the internal degree of the Poisson bracket on $S$ is $p$, the internal degree of the $n$-ary bracket and $n$-ary anchor on $\mathbb L_{A|\boldsymbol k}$ is $(n-1)p$. \end{corollary} In this paper two tensors play a major role. Both depend on the choice of $Z_{i\mu}^\nu$. The first tensor is given by \begin{align}\label{eq:Amunu} \mathcal A_{\mu \nu}^\lambda :=\sum_{i=1}^n\left( {\partial f_\mu\over \partial x_i}Z_{i\nu}^\lambda+{\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda \right)\in S \end{align} for indices $\mu,\nu,\lambda\in\{1,\dots,k\}$. Evidently $\mathcal A_{\mu \nu}^\lambda=\mathcal A_{\nu \mu}^\lambda$. The second tensor is constructed from Poisson cohomology (cf. Appendix \ref{ap:Poissoncohomology}) as follows. Fixing the index $i$ we can view $Z_{i\mu}^\nu$ as the entry in row $\mu$ and column $\nu$ of a $k\times k$-matrix $Z_i$ with coefficients in $S$. 
For the set of $k\times k$-matrices with entries in $S$ we write $\mathfrak{gl}_k(S)$. Using the commutator of matrices, $\mathfrak{gl}_k(S)$ forms a Lie algebra over $S$. Tensoring this Lie algebra with a supercommutative dg algebra we can form a dg Lie algebra, for example $\mathrm{C}^\bullet_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S)$. For the definition of the complex $(\mathrm{C}^\bullet_{\operatorname{Poiss}}(S),\delta_{\operatorname{Poiss}})$ see Appendix \ref{ap:Poissoncohomology}. Writing $Z:=\sum_i \partial/\partial x_i\otimes Z_i\in \mathrm{C}^1_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S)$, the tensor relevant for our discussion is \begin{align}\label{eq:MC} \delta_{\operatorname{Poiss}} Z-[Z,Z]\in\mathrm{C}^2_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S). \end{align} In a similar way $\mathrm C^\bullet_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$ is a dg Lie algebra as well and, taking classes modulo $I$, we have a surjective morphism $\mathrm C^\bullet_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S)\to \mathrm C^\bullet_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$. It is an easy exercise to check that the image of $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ in $\mathrm C^\bullet_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$ depends only on the classes $Z_{i\mu}^\nu+I$. If the aforementioned tensors vanish, certain simplifications occur at the start of the iterative procedure constructing the $P_\infty$-algebra structure of Theorem \ref{thm:homotopyPoisson}. If all of the tensors vanish and $f_1,\dots, f_k$ is a complete intersection, our $P_\infty$-algebra simplifies to a dg Poisson algebra. This happens for example for principal Poisson ideals, see Theorem \ref{thm:principaldgPoisson}. In order to get a better impression of which simplifications can occur, we present numerous examples. The main examples of Poisson ideals we consider come from symplectic reduction or from linear Poisson structures. We also include some examples of Poisson brackets of higher degree. These enable us to concoct more academic examples where the resolvent is more tractable in low cohomological degrees. To our knowledge, the technology for constructing Poisson ideals for polynomial Poisson structures is not completely developed, and we feel that the problem deserves more attention. As the example calculations quickly become overwhelming, we relied on computer calculations using \emph{Mathematica} and \emph{Macaulay2}. We mention that all examples of quadratic Poisson ideals $I$ that occur in this work give rise to Koszul algebras $S/I$ (for an introduction to Koszul algebras see, e.g., \cite{polishchuk2005quadratic}). In the Koszul case simplifications in the $P_\infty$-algebra structure of Theorem \ref{thm:homotopyPoisson} arise if the degree of the Poisson bracket is $\le 0$, see Theorem \ref{thm:Koszul}. The adaptation of our results to the situation when $S=\boldsymbol k[x_1,\dots,x_n]$ is replaced by the non-Noetherian $\mathbb{R}$-algebra $\mathcal C^\infty(\mathbb{R}^k)$ of smooth functions on $\mathbb{R}^k$ is possible and will be elaborated on another occasion. We are also working on an application to the problem of deformation quantization of singular Poisson varieties. We are planning to address applications to deformation theory of Poisson singularities and investigate repercussions for Poisson cohomology. Our paper is structured as follows. In Section \ref{sec:affine} we present examples of Poisson ideals. 
Here we focus on symplectic quotients, conical varieties stable under coadjoint actions, and certain polynomial Poisson structures (diagonal and determinantal brackets). In Section \ref{sec:ci} we present a version of Corollary \ref{cor:homotopyLiealgebroid} for the case of complete intersections with a proof independent of Theorem \ref{thm:homotopyPoisson}. In Section \ref{sec:connection} we interpret the $Z_{i\mu}^\nu$ as the Christoffel symbols of a Poisson connection; this material is not used in later sections. In Section \ref{sec:cotangent} we recall the notions of resolvent and cotangent complex and fix notations for later use. In Section \ref{sec:homotopystuff} we develop our homological perturbation theory argument that enables us to prove our main results, Theorem \ref{thm:homotopyPoisson} and Corollary \ref{cor:homotopyLiealgebroid}. In Section \ref{sec:examples} we provide sample calculations that were mostly obtained using a combination of \emph{Mathematica} and the \emph{Macaulay2} package \emph{dgalgebras}. The paper is mainly addressed to two audiences: 1) people from commutative algebra and 2) people from Poisson geometry and physics. We have resisted the temptation to write coordinates with upper indices so as not to irritate the former. However, we often use index notation for tensors and the Einstein summation convention to make the presentation of the material clearer to the latter. Our article touches on three seemingly unrelated subjects that are named after J.-L. Koszul (1921--2018): the Koszul bracket, the Koszul complex, and Koszul algebras. \vspace{1cm} \noindent\emph{Acknowledgements.} The authors would like to thank Daniel Levcovitz for sharing information on the cotangent complex. They are indebted to Srikanth Iyengar and Benjamin Briggs for their interest in this project and, in particular, for catching an error in an earlier draft. HCH would like to thank Martin Bordemann for indoctrination. We benefited from the \emph{Mathematica} packages DiracQ \newline \url{http://physics.ucsc.edu/~sriram/DiracQ/} \newline and the \emph{grassmann.m} \emph{Mathematica} package by Matthew Headrick \newline \url{https://people.brandeis.edu/~headrick/Mathematica/}\newline in learning how to perform the computations needed for this paper with \emph{Mathematica}. \section{Examples of affine Poisson algebras}\label{sec:affine} \subsection{Symplectic quotients of cotangent lifted $G$-modules} Let $G$ be a complex reductive Lie group and $V$ be an $m$-dimensional $G$-module. Let us write $q_1,q_2,\dots, q_m$ for linear coordinates on $V$ and $p_1,p_2,\dots, p_m$ for linear coordinates on $V^*$. We assign to them the internal degree $1$. Then $\mathbb{C}[V\times V^*]$ is a Poisson algebra with bracket $\{q_i,p_j\}=\delta_{ij}$ of degree $-2$. The moment map for the representation is the map $J:V\times V^*\to \mathfrak g^*$ defined by $J(q,p)(\xi)=p(\xi q)$ where $p\in V^*$, $\xi\in \mathfrak g$ and $q\in V$. Using the linear coordinates this is $\sum_{i,j}p_j\xi_{ij}q_i$, where $\xi_{ij}$ is the representation matrix of $\xi$ in the representation $V=\mathbb{C}^m$. Let $N\subseteq V\times V^*$ be the common zero locus of the polynomials $J_\xi(q,p):=J(q,p)(\xi)$ where $\xi$ ranges over $\mathfrak g$. For simplicity we assume that the ideal $(J_\xi(q,p)\mid \xi\in\mathfrak g)$ in $\mathbb{C}[V\times V^*]$ is radical (for details see \cite{HSSCompositio}). 
The \emph{symplectic quotient} of the $G$-action on $V$ is the categorical quotient $N/\!\!/G$, i.e., $\operatorname{Spec} (\mathbb{C}[V\times V^*]^G/(\mathbb{C}[V\times V^*]^G\cap (J_\xi(q,p)\mid \xi\in\mathfrak g)))$. It inherits a Poisson bracket of degree $-2$. If $G\subset\operatorname{SL}_2(\mathbb{C})$ is a finite subgroup, let us write $q,p$ for the linear coordinates of $\mathbb{C}^2$. Since the $G$-action is unimodular, it preserves the Poisson bracket defined by $\{q,p\}=1$. We set $\deg(q)=\deg(p)=1$ so that $\mathbb{C}[q,p]$ is a Poisson algebra whose bracket has internal degree $-2$. Since the moment map $J$ is zero here, the symplectic quotient $N/\!\!/G$ is simply $\mathbb{C}^2/G$. Using polynomial $G$-invariants, the latter is determined as a hypersurface with algebra of functions $A=\mathbb{C}[x_1,x_2,x_3]/(f_G)$. Recall that those finite subgroups are classified by Dynkin diagrams $A_m$ $(m\ge 1)$, $D_m$ ($m\ge 0$), and the three exceptional diagrams $E_6$, $E_7$, and $E_8$. Below we will analyze $A=\mathbb{C}[x_1,x_2,x_3]/(f_G)$ as Poisson algebras. The internal degrees of the variables $x_i$ are determined by the degrees of the corresponding invariants. Our convention is that it is weakly increasing with the index of the variable. For the polynomial invariant theory of these groups see \cite{klein1993felix,Dolgachev,ONAsh}. It is closely related to the \emph{Grundformen} of F. Klein. The Poisson bracket on the algebra of invariants has been addressed before (see, e.g., \cite{Alev}), but we could not find in the literature the observation that the Poisson ideals are generated by Casimirs. Low-dimensional symplectic quotients are often symplectomorphic to (Cartesian products of) Kleinian singularities, see for example \cite{FHSSigma,HSSadv,CHS}. We also include two examples of symplectic quotients by nonfinite $G$ that are not symplectomorphic to orbifolds. We have used \emph{Macaulay2} to present the algebra of the symplectic quotient in terms of generators and relations. We emphasize that for the majority of representations it is practically hopeless to find such a presentation. \subsubsection{Kleinian singularity $A_m$} In the case of the diagram $A_m$ the group $G$ is the cyclic group $\mathbb{Z}_{N}$ with $N=m+1$ where $\zeta=\exp(2\pi\operatorname{\sqrt{-1}}/N)$ acts by $q\mapsto \zeta q$ and $p\mapsto \zeta^{-1}p$. A complete set of polynomial invariants is given by $\varphi_1(q,p)=qp$, $\varphi_2(q,p)=q^N$, and $ \varphi_3(q,p)=p^N$ and satisfies the relation $\varphi_2\varphi_3=\varphi_1^N$. The brackets can be easily evaluated: $\{\varphi_1,\varphi_2\}=-N \varphi_2$, $\{\varphi_1,\varphi_3\}=N \varphi_3$ and $\{\varphi_2,\varphi_3\}=N^2\varphi_1^{N-1}$. We conclude that the Poisson algebra for the case of $A_m$ is $\mathbb{C}[x_1,x_2,x_3]/(x_2x_3-x_1^N)$ with bracket table: \begin{align*} \begin{tabular}{c||c|c|c} $\{\:,\:\}_{A_m}$ & $x_1$&$x_2$&$x_3$\\\hline\hline $x_1$ & 0 &$-Nx_2$&$Nx_3$\\\hline $x_2$ & &$0$ & $N^2x_1^{N-1}$\\\hline $x_3$ & & & $0$ \end{tabular} \end{align*} Note that $\deg( \{x_1,f_{A_m}\})-\deg(f_{A_m})=\deg(x_1)-2=0$; since $\{x_1,x_2x_3-x_1^N\}=x_2\{x_1,x_3\}+x_3\{x_1,x_2\}=Nx_2x_3-Nx_2x_3=0$, we may take $Z_{11}^1=0$. Let us point out that, by the same reasoning, in the case of a hypersurface with $\deg(\{\:,\:\})<0$ we have $Z_{11}^1=0$, as by our convention $x_1$ is a variable of lowest internal degree. For $i=2$ or $3$, $\deg( \{x_i,f_{A_m}\})-\deg(f_{A_m})=N-2$. If $N$ is odd or equal to $2$ this implies $Z_{21}^1=0=Z_{31}^1$ since $\deg(x_1)$ is even. 
Otherwise, we know no better way than to verify by hand that $\{x_2,x_2x_3-x_1^N\}=x_2\{x_2,x_3\}-Nx_1^{N-1}\{x_2,x_1\}=N^2x_1^{N-1}x_2-N^2x_1^{N-1}x_2=0$, and similarly for $x_3$. We conclude that $f_{A_m}(x_1,x_2,x_3)=x_2x_3-x_1^N$ is a Casimir generator. \subsubsection{Kleinian singularity $D_m$} In the case of the diagram $D_m$ the group $G$ is the binary dihedral group $\operatorname{BD}_{4N}$ of order $4N$ with $N=m+2$. The action is generated by $(q,p)\mapsto (\zeta q,\zeta^{-1}p)$, where $\zeta=\exp(\pi\operatorname{\sqrt{-1}}/N)$, and $(q,p)\mapsto (p,-q)$. A complete set of polynomial invariants is given by $\varphi_1(q,p)=q^2p^2$, $\varphi_2(q,p)=q^{2N}+p^{2N}$, and $ \varphi_3(q,p)=qp(q^{2N}-p^{2N})$ and satisfies the relation $\varphi_1\varphi_2^2-\varphi_3^2=4\varphi_1^{N+1}$. The Poisson relations are $\{\varphi_1,\varphi_2\}=-4N\varphi_3=-2\deg(\varphi_2)\varphi_3$, $\{\varphi_1,\varphi_3\}=-4N\varphi_1\varphi_2=-2\deg(\varphi_2)\varphi_1\varphi_2$, and \begin{align*} \{\varphi_2,\varphi_3\}&=\{q^{2N}+p^{2N},qp(q^{2N}-p^{2N})\}=2N(q^{2N}-p^{2N})^2-2qp\{q^{2N},p^{2N}\}\\ &=2N\varphi_2^2-8N\varphi_1^N-8N^2\varphi_1^N=2N\varphi_2^2-8N(N+1)\varphi_1^N=\deg(\varphi_2)(\varphi_2^2-2\deg(\varphi_3)\varphi_1^N). \end{align*} We conclude that the Poisson algebra for the case of $D_m$ is $\mathbb{C}[x_1,x_2,x_3]/( x_1x_2^2-x_3^2-4x_1^{N+1})$ with bracket table: \begin{align*} \begin{tabular}{c||c|c|c} $\{\:,\:\}_{D_m}$ & $x_1$&$x_2$&$x_3$\\\hline\hline $x_1$ & 0 &$-2(\deg x_2)x_3$&$-2(\deg x_2)x_1x_2$\\\hline $x_2$ & &$0$ & $(\deg x_2)(x_2^2-2(\deg x_3)x_1^N)$\\\hline $x_3$ & & & $0$ \end{tabular} \end{align*} We invite the reader to verify from the bracket table that $f_{D_m}(x_1,x_2,x_3)=x_1x_2^2-x_3^2-4x_1^{N+1}$ is a Casimir generator, as degree considerations appear to be inconclusive. \subsubsection{Kleinian singularity $E_6$} In the case of the diagram $E_6$ the group $G$ is the binary tetrahedral group $\operatorname{BT}$ of order $24$. It is generated by the matrices \begin{align*} \left( \begin{matrix} \operatorname{\sqrt{-1}}&0\\ 0&-\operatorname{\sqrt{-1}} \end{matrix} \right), \quad \left( \begin{matrix} 0&-1\\ 1&0 \end{matrix} \right), \quad\mbox{and }\quad {1\over 2}\left( \begin{matrix} 1+\operatorname{\sqrt{-1}}&-1+\operatorname{\sqrt{-1}}\\ 1+\operatorname{\sqrt{-1}}&1-\operatorname{\sqrt{-1}} \end{matrix} \right). \end{align*} A complete set of polynomial invariants is given by \begin{align*} &\varphi_1(q,p)=q^{5}p-q p^5,\\ &\varphi_2(q,p)=q^{8}+14 q^{4}p^4+p^8,\\ &\varphi_3(q,p)=q^{12}-33 (q^{8}p^4+q^{4}p^{8})+p^{12} \end{align*} and satisfies the relation $\varphi_3^2-\varphi_2^3 =-108\varphi_1^4$. We invite the reader to check the commutation relations $\{\varphi_1,\varphi_2\} =-8 \varphi_3=-(\deg \varphi_2)\varphi_3$, $\{\varphi_1,\varphi_3\} =-12 \varphi_2^2=-(\deg \varphi_3)\varphi_2^2$ and $\{\varphi_2,\varphi_3\} =-1728 \varphi_1^3=-(\deg \varphi_3)^3\varphi_1^3$. Hence for the binary tetrahedral group the Poisson algebra is $\mathbb{C}[x_1,x_2,x_3]/(x_3^2-x_2^3 +108x_1^4)$ with bracket table: \begin{align*} \begin{tabular}{c||c|c|c} $\{\:,\:\}_{E_6}$ & $x_1$&$x_2$&$x_3$\\\hline\hline $x_1$ & 0 &$-(\deg x_2)x_3$&$-(\deg x_3)x_2^2$\\\hline $x_2$ & &$0$ & $-(\deg x_3)^3x_1^3$\\\hline $x_3$ & & & $0$ \end{tabular} \end{align*} Obviously $Z_{11}^1=0$. Also $\deg( \{x_3,f_{E_6}\})-\deg(f_{E_6})=\deg(x_3)-2=10$ is not in the $\mathbb{Z}_{\ge 0}$-span of $\deg(x_1)=6$ and $\deg(x_2)=8$, and hence $Z_{31}^1=0$. 
We leave it to the reader to verify from the bracket table that $Z_{21}^1=0$, and conclude that $f_{E_6}(x_1,x_2,x_3)=x_3^2-x_2^3 +108x_1^4$ is a Casimir generator. \subsubsection{Kleinian singularity $E_7$} In the case of the diagram $E_7$ the group $G$ is the binary octahedral group $\operatorname{BO}$ of order $48$. It is generated by the matrices \begin{align*}{1\over \sqrt{2}} \left( \begin{matrix} 1+\operatorname{\sqrt{-1}}&0\\ 0&1-\operatorname{\sqrt{-1}} \end{matrix} \right), \quad \left( \begin{matrix} 0&-1\\ 1&0 \end{matrix} \right), \quad\mbox{and }\quad {1\over 2}\left( \begin{matrix} 1+\operatorname{\sqrt{-1}}&-1+\operatorname{\sqrt{-1}}\\ 1+\operatorname{\sqrt{-1}}&1-\operatorname{\sqrt{-1}} \end{matrix} \right). \end{align*} A complete set of polynomial invariants is given by \begin{align*} &\varphi_1(q,p)=q^{8}+14q^4 p^4+p^{8},\\ &\varphi_2(q,p)=q^{10}p^{2}-2 q^{6}p^6+q^{2}p^{10},\\ &\varphi_3(q,p)=q^{17}p-34 (q^{13}p^5-q^{5}p^{13})-qp^{17} \end{align*} and satisfies the relation $\varphi_1^3\varphi_2-\varphi_3^2=108\varphi_2^3$. We invite the reader to check the commutation relations $\{\varphi_1,\varphi_2\} =16 \varphi_3=2(\deg \varphi_1)\varphi_3$, $\{\varphi_1,\varphi_3\} =8( \varphi_1^3-324\varphi_2^2)=\deg \varphi_1( \varphi_1^3-(\deg\varphi_3)^2\varphi_2^2)$, and $\{\varphi_2,\varphi_3\} =-24 \varphi_1^2 \varphi_2=-2(\deg \varphi_2)\varphi_1^2 \varphi_2$. Hence for the binary octahedral group the Poisson algebra is $\mathbb{C}[x_1,x_2,x_3]/(x_1^3x_2-x_3^2-108x_2^3)$ with bracket table: \begin{align*} \begin{tabular}{c||c|c|c} $\{\:,\:\}_{E_7}$ & $x_1$&$x_2$&$x_3$\\\hline\hline $x_1$ & 0 &$2(\deg x_1)x_3$&$\deg x_1( x_1^3-(\deg x_3)^2x_2^2)$\\\hline $x_2$ & &$0$ & $-2(\deg x_2)x_1^2x_2$\\\hline $x_3$ & & & $0$ \end{tabular} \end{align*} Obviously $Z_{11}^1=0$. Also $\deg( \{x_2,f_{E_7}\})-\deg(f_{E_7})=\deg(x_2)-2=10$ is not a multiple of $\deg(x_1)=8$, and hence $Z_{21}^1=0$. We leave it to the reader to verify from the bracket table that $Z_{31}^1=0$, and conclude that $f_{E_7}(x_1,x_2,x_3)=x_1^3x_2-x_3^2-108x_2^3$ is a Casimir generator. \subsubsection{Kleinian singularity $E_8$} In the case of the diagram $E_8$ the group $G$ is the binary icosahedral group $\operatorname{BI}$ of order $120$. It is generated by the matrices \begin{align*} \left( \begin{matrix} \zeta^3&0\\ 0&\zeta^2 \end{matrix} \right), \quad\left( \begin{matrix} 0&-1\\ 1&0 \end{matrix} \right),\quad \mbox{and }\quad {1\over \sqrt{5}}\left( \begin{matrix} -\zeta+\zeta^4&\zeta^2-\zeta^3\\ \zeta^2-\zeta^3&\zeta-\zeta^4\\ \end{matrix} \right), \end{align*} where $\zeta=\exp(2\pi\operatorname{\sqrt{-1}}/5)$. A complete set of polynomial invariants is given by \begin{align*} &\varphi_1(q,p)=qp(q^{10}+11q^5 p^5-p^{10}),\\ &\varphi_2(q,p)=-(q^{20}+p^{20})+228 (q^{15}p^5-q^{5}p^{15})-494q^{10}p^{10},\\ &\varphi_3(q,p)=q^{30}+p^{30}+522 (q^{25}p^5-q^{5}p^{25})-10005(q^{20}p^{10}+q^{10}p^{20}). \end{align*} Note that $\varphi_2$ is proportional to the determinant of the Hessian of $\varphi_1$ and that $\varphi_3$ is proportional to the Jacobian $\partial(\varphi_1,\varphi_2)/\partial(q,p)$. The invariants can be shown to satisfy the relation $\varphi_2^3+\varphi_3^2=1728\varphi_1^5$. We notice that $1728=12^3$. For degree reasons $\{\varphi_1,\varphi_2\}=a\varphi_3$, $\{\varphi_1,\varphi_3\} =b\varphi_2^2$ and $\{\varphi_2,\varphi_3\}=c\varphi_1^4$ for certain proportionality factors $a,b,c\in \mathbb{C}$. A tedious calculation gives $a=20$, $b=-30$ and $c=-86400$. 
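Such proportionality factors are also easily checked by computer algebra; for instance, the following {\tt sympy} sketch (our illustration, independent of the \emph{Mathematica} computations used elsewhere in the paper) reproduces $a$, $b$, $c$, and the relation among the invariants:
\begin{verbatim}
# Illustration (ours): checking the E8 brackets with sympy.
import sympy as sp

q, p = sp.symbols('q p')

def pb(F, G):  # canonical bracket with {q, p} = 1
    return sp.expand(sp.diff(F, q)*sp.diff(G, p)
                     - sp.diff(F, p)*sp.diff(G, q))

phi1 = q*p*(q**10 + 11*q**5*p**5 - p**10)
phi2 = (-(q**20 + p**20) + 228*(q**15*p**5 - q**5*p**15)
        - 494*q**10*p**10)
phi3 = (q**30 + p**30 + 522*(q**25*p**5 - q**5*p**25)
        - 10005*(q**20*p**10 + q**10*p**20))

print(sp.cancel(pb(phi1, phi2)/phi3))        # a = 20
print(sp.cancel(pb(phi1, phi3)/phi2**2))     # b = -30
print(sp.cancel(pb(phi2, phi3)/phi1**4))     # c = -86400
print(sp.expand(phi2**3 + phi3**2 - 1728*phi1**5))  # relation: 0
\end{verbatim}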
We notice that $86400=6\cdot 120^2=6|\operatorname{BI}|^2=(\deg{\varphi_1})^2\deg{\varphi_2}\deg{\varphi_3}$. Hence for the binary icosahedral group the Poisson algebra is $\mathbb{C}[x_1,x_2,x_3]/(x_2^3+x_3^2-12^3x_1^5)$ with bracket table: \begin{align*} \begin{tabular}{c||c|c|c} $\{\:,\:\}_{E_8}$ & $x_1$&$x_2$&$x_3$\\\hline\hline $x_1$ & 0 &$(\deg x_2)x_3$&$-(\deg x_3) x_2^2$\\\hline $x_2$ & &$0$ & $- (\deg{x_1})^2(\deg {x_2})(\deg {x_3})x_1^4$\\\hline $x_3$ & & & $0$ \end{tabular} \end{align*} Obviously we have $Z_{11}^1=0$. Also $\deg(\{x_2,f_{E_8}\})-\deg(f_{E_8})=18$ is not a multiple of $\deg(x_1)$ and is smaller than $\deg(x_2)=20$, so $Z_{21}^1=0$. Finally, $\deg( \{x_3,f_{E_8}\})-\deg(f_{E_8})=28<\deg(x_3)=30$ is not in the $\mathbb{Z}_{\ge 0}$-span of $\deg(x_1)$ and $\deg(x_2)$, and hence $Z_{31}^1=0$. We conclude that $f_{E_8}(x_1,x_2,x_3)=x_2^3+x_3^2-12^3x_1^5$ is a Casimir generator. \subsubsection{Symplectic quotient of the circle action with weight vector $(-1,1,1)$} \label{subsubsec:-1,1,1} In this example $G$ is the complex circle $\mathbb{C}^\times$ and $V=\mathbb{C}^3$. The weight vector of our circle action is taken to be $(-1,1,1)$ (see also \cite{FHSSigma}). Identifying $\mathfrak g$ with $\mathbb{C}$, the corresponding moment map is \[J(q_1,q_2,q_3,p_1,p_2,p_3)=-q_1p_1+q_2p_2+q_3p_3\in\mathbb{C}[q_1,q_2,q_3,p_1,p_2,p_3].\] Since $\mathfrak g$ is abelian, $J(q_1,q_2,q_3,p_1,p_2,p_3)$ is $G$-invariant. A complete system of polynomial $G$-invariants is given by the quadratic polynomials \begin{align*} &\varphi_0 =q_1p_1, \quad \varphi_1 =q_2p_2, \quad \varphi_2 =q_3p_3, \quad \varphi_3 =q_1q_2, \quad\varphi_4 =p_1p_2, \\ &\varphi_5 =q_1q_3, \quad\varphi_6 =p_1p_3, \quad\varphi_7 =q_2p_3, \quad \varphi_8 =q_3p_2. \end{align*} The condition $J=0$ introduces the linear relation $\varphi_0=\varphi_1+\varphi_2$. Moreover, the invariants restricted to $N$ satisfy the nine degree-four relations \begin{align*} &f_1(\boldsymbol\varphi) =\varphi_3\varphi_6 -\varphi_1\varphi_7 -\varphi_2\varphi_7=0, \quad f_2 (\boldsymbol\varphi)=\varphi_1\varphi_6 -\varphi_4\varphi_7=0, \quad f_3 (\boldsymbol\varphi)=\varphi_4\varphi_5 -\varphi_1\varphi_8 -\varphi_2\varphi_8=0, \\ &f_4 (\boldsymbol\varphi)=\varphi_1\varphi_5 -\varphi_3\varphi_8=0, \quad f_5 (\boldsymbol\varphi)=\varphi_2\varphi_4 -\varphi_6\varphi_8=0, \quad f_6 (\boldsymbol\varphi)=\varphi_2\varphi_3 -\varphi_5\varphi_7=0,\\ &f_7 (\boldsymbol\varphi)=\varphi_2^2 -\varphi_5\varphi_6 +\varphi_7\varphi_8=0, \quad f_8 (\boldsymbol\varphi)=\varphi_1\varphi_2 -\varphi_7\varphi_8=0, \quad f_9 (\boldsymbol\varphi)=\varphi_1^2 -\varphi_3\varphi_4 +\varphi_7\varphi_8=0. \end{align*} Hence the Poisson algebra of the symplectic quotient $N/\!\!/G$ is $S:=\mathbb{C}[\boldsymbol x]:=\mathbb{C}[x_i\mid 1\le i\le 8]$ modulo the ideal $I=(f_\mu(\boldsymbol x)\mid 1\le\mu\le 9)$. The internal degrees are all $\deg x_i=\deg\varphi_i=2$. The table of Poisson brackets is worked out in Table \ref{tab:brackettable-1,1,1}. \begin{table}[h!] 
\begin{align*} \begin{tabular}{c||c|c|c|c|c|c|c|c} $\{\:,\:\}$ & $x_1$ & $x_2$& $x_3$ & $x_4$ & $x_5$ &$x_6$ & $x_7$ & $x_8$\\\hline\hline $x_1$ & $0$ & $0$& $-x_3$& $x_4$& $0$ &$0$ & $-x_7$& $x_8$\\\hline $x_2$ & & $0$& $0$& $0$& $-x_5$ & $x_6$ &$x_7$& $-x_8$\\\hline $x_3$ & & & $0$&$2x_1+x_2$& $0$ &$x_7$ &$0 $& $x_5$\\\hline $x_4$ & & & & $0$& $-x_8$&$0 $ & $-x_6$& $0$\\\hline $x_5$ & & & & & $0$& $x_1+2x_2$ &$x_3$& $0$\\\hline $x_6$ & & & & & & $0$ &$0$& $-x_4$\\\hline $x_7$ & & & & & & &$0$& $-x_1+x_2$\\\hline $x_8$ & & & & & & & & $0$ \end{tabular} \end{align*} \caption{Poisson brackets of the symplectic circle quotient with weight vector $(-1,1,1)$} \label{tab:brackettable-1,1,1} \end{table} It turns out that the nine generators of the ideal $I$ are not Casimirs. The Poisson relations of the generators are depicted in Table \ref{tab:Poissonrelations-1,1,1}. \begin{table}[h!] \begin{align*} \begin{tabular}{c||c|c|c|c|c|c|c|c|c} $\{\:,\:\}$ & $f_1$ & $f_2$& $f_3$ & $f_4$ & $f_5$ &$f_6$ & $f_7$ & $f_8$&$f_9$\\\hline\hline $x_1$ & $-f_1$ & $0$& $f_3$& $0$& $f_5$ &$-f_6$ & $0$& $0$&$0$\\\hline $x_2$ & $f_1$ & $f_2$& $-f_3$& $-f_4$& $0$ &$0$ & $0$& $0$&$0$\\\hline $x_3$ & $0$ & $f_1$ &$f_4$& $0$ & $f_7+2f_8$ &$0$ &$0$ & $f_6$ &$-f_6$\\\hline $x_4$ & $-f_2$ & $0$ &$0$& $-f_3$ & $0$ &$-f_7-2f_8$ &$0$ & $-f_5$ &$f_5$\\\hline $x_5$ & $f_6$ & $2f_8+f_9$ &$0$& $0$ & $f_3$ &$0$ &$-f_4$ & $f_4$ &$0$\\\hline $x_6$ & $0$ & $0$ &$-f_5$& $-2f_8-f_9$ & $0$ &$-f_1$ &$f_2$ & $-f_2$ &$0$\\\hline $x_7$ & $0$ & $0$ &$-f_7+f_9$& $-f_6$ & $f_2$ &$0$ &$f_1$ & $0$ &$-f_1$\\\hline $x_8$ & $f_7-f_9$ & $f_5$ &$0$& $0$ & $0$ &$-f_4$ &$-f_3$ & $0$ &$f_3$ \end{tabular} \end{align*} \caption{Poisson relations for the symplectic circle quotient with weight vector $(-1,1,1)$} \label{tab:Poissonrelations-1,1,1} \end{table} We observe that the $Z_{i\mu}^\nu$ are constant and hence $\deg\left(\mathcal A_{\mu\nu}^\lambda\right)=2<4$ (cf. Equation \eqref{eq:Amunu}). We found nonzero components in the tensor $(\mathcal A_{\mu\nu}^\lambda)_{\mu,\nu,\lambda}$. Closer inspection reveals that $\delta_{\operatorname{Poiss}} Z-[Z,Z]=0$ (cf. Proposition \ref{prop:deg-1} below). An explanation for this is that $I\subset S$ can be seen as an example of the type discussed in Subsection \ref{subsec:deg-1} (see Proposition \ref{prop:deg-1}). For this interpretation, however, we have to divide $\deg(x_i)$, $\deg(f_\mu)$, and $\deg(\{\:,\:\})$ by $2$. If we accordingly interpret $\deg(f_\mu)$ as degree $2$, we observe that $A=S/I$ is a Koszul algebra. This follows from the observation that $f_1,\dots,f_9$ form a Groebner basis in the graded reverse lexicographic term order. \subsubsection{Two particles in dimension three with zero total angular momentum}\label{subsubsec:angmom} Let $V=\mathbb{C}^3\oplus\mathbb{C}^3$ and let $\boldsymbol q_1=(q_{11},q_{12},q_{13})$ be linear coordinates for the first copy of $\mathbb{C}^3$ in $V$ and $\boldsymbol q_2=(q_{21},q_{22},q_{23})$ linear coordinates for the second copy. We let the orthogonal group $G:=\mathrm O_3$ act diagonally on $V$. After identifying $V\times V^*=\mathbb{C}^3\oplus\mathbb{C}^3\oplus(\mathbb{C}^3)^*\oplus(\mathbb{C}^3)^*$ with $\mathbb{C}^3\oplus\mathbb{C}^3\oplus\mathbb{C}^3\oplus\mathbb{C}^3$ the cotangent lifted action is in fact the diagonal action. We write $\boldsymbol p_1=(p_{11},p_{12},p_{13})$ for the linear coordinates of the third copy of $\mathbb{C}^3$ in $V\times V^*$ and $\boldsymbol p_2=(p_{21},p_{22},p_{23})$ for the linear coordinates of the fourth copy.
A complete set of polynomial $G$-invariants is given by \begin{align*} \varphi_1 &= \langle \boldsymbol q_1,\boldsymbol q_1 \rangle, & \varphi_2 &= \langle \boldsymbol q_1, \boldsymbol q_2 \rangle, & \varphi_3 &= \langle \boldsymbol q_2, \boldsymbol q_2 \rangle, & \varphi_4 &= \langle \boldsymbol q_1, \boldsymbol p_1 \rangle, & \varphi_5 &= \langle \boldsymbol q_1, \boldsymbol p_2 \rangle, \\ \varphi_6 &= \langle \boldsymbol q_2, \boldsymbol p_1 \rangle, & \varphi_7 &= \langle \boldsymbol q_2, \boldsymbol p_2 \rangle, & \varphi_8 &= \langle \boldsymbol p_1,\boldsymbol p_1 \rangle, & \varphi_9 &= \langle \boldsymbol p_1, \boldsymbol p_2 \rangle, & \varphi_{10} &= \langle \boldsymbol p_2, \boldsymbol p_2 \rangle, \end{align*} where $\langle\:,\:\rangle$ denotes the Euclidean inner product on $\mathbb{C}^3$. The Poisson bracket is the canonical bracket $\{q_{mi},p_{lj}\}=\delta_{ml}\delta_{ij}$. The moment map is given by $J(\boldsymbol q_1,\boldsymbol p_1,\boldsymbol q_2,\boldsymbol p_2)=\boldsymbol q_1\wedge \boldsymbol p_1+\boldsymbol q_2\wedge \boldsymbol p_2$, where we use the identification of $\mathfrak o_3^*$ with $\wedge^2 \mathbb{C}^3$. Sending $x_i\mapsto {\varphi_i}_{|N}$ we identify the Poisson algebra $\mathbb{C}[N/\!\!/ G]$ with $\boldsymbol k[x_1,x_2,\dots,x_{10}]/I$, where $I$ is the Poisson ideal generated by \begin{align*} f_1 &= - x_4 x_9 + x_5 x_8 - x_6 x_{10} + x_7 x_9, \quad f_2 = - x_2 x_9 - x_3 x_{10} + x_5 x_6 + x_7 x_7, \\ f_3 &= - x_2 x_8 - x_3 x_9 + x_4 x_6 + x_6 x_7, \quad f_4 = - x_1 x_9 - x_2 x_{10} + x_4 x_5 + x_5 x_7, \\ f_5 &= - x_1 x_8 + x_3 x_{10} + x_4 x_4 - x_7 x_7, \quad f_6 = - x_1 x_6 + x_2 x_4 - x_2 x_7 + x_3 x_5, \\ f_7 &= - x_3 x_8 x_{10} + x_3 x_9 x_9 + x_6 x_6 x_{10} - 2 x_6 x_7 x_9 + x_7 x_7 x_8, \\ f_8 &= x_2 x_6 x_{10} - 2 x_2 x_7 x_9 - x_3 x_4 x_{10} + x_3 x_5 x_9 - x_3 x_7 x_{10} + x_4 x_7 x_7 + x_7 x_7 x_7, \\ f_9 &= x_1 x_6 x_6 - x_2 x_2 x_8 - 2 x_2 x_3 x_9 + 2x_2 x_6 x_7 - x_3 x_3 x_{10} + x_3 x_7 x_7, \\ f_{10} &= - x_1 x_3 x_{10} + x_1 x_7 x_7 + x_2 x_2 x_{10} - 2 x_2 x_5 x_7 + x_3 x_5 x_5, \\ f_{11} &= x_1 x_3 x_6 x_{10} - x_1 x_6 x_7 x_7 - x_2 x_2 x_6 x_{10} + 2 x_2 x_2 x_7 x_9 - x_2 x_3 x_5 x_9 \\&\quad + 2 x_2 x_3 x_7 x_{10} - 2 x_2 x_7 x_7 x_7 - x_3 x_3 x_5 x_{10} + x_3 x_5 x_7 x_7. \end{align*} For more detail the reader may consult \cite{CHS}. For lack of space we will not spell out here the tensors $Z_{i\mu}^\nu$, $\mathcal A_{\mu\nu}^\lambda$ and $\delta_{\operatorname{Poiss}} Z-[Z,Z]$. \subsection{Brackets of degree $-1$} \label{subsec:deg-1} Brackets of degree $-1$ are deduced from the so-called \emph{linear Poisson} structures. Let us for simplicity restrict the discussion to the case $\boldsymbol k=\mathbb{C}$. Let $V$ be a finite dimensional $\mathbb{C}$-vector space and assign to each element in $V^*$ the internal degree $1$. A Poisson bracket of degree $-1$ on the algebra $S=\mathbb{C}[V]$ is nothing other than a $\mathbb{C}$-linear Lie algebra structure on $V^*$. Let us write $\mathfrak g$ for this Lie algebra, so that $V=\mathfrak g^*$. Let $G$ be the connected, simply connected Lie group associated to $\mathfrak g$. Now we observe that an ideal $I\subseteq S$ is a Poisson ideal if and only if the fundamental vector fields $X_{\mathfrak g^*}$ of the coadjoint $G$-action preserve the ideal $I$, i.e., $X_{\mathfrak g^*}I\subseteq I$ for all $X\in \mathfrak g$. This means that $I$ is the ideal of the Zariski closure $\overline{GW}$ of the $G$-saturation $GW$ of a subset $W\subseteq \mathfrak g^*$.
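A minimal illustration: for $\mathfrak g=\mathfrak{sl}_2$ with linear coordinates $e,f,h$ on $\mathfrak{sl}_2^*$ the bracket reads
\begin{align*}
\{h,e\}=2e,\qquad \{h,f\}=-2f,\qquad \{e,f\}=h,
\end{align*}
and the principal ideal $I=(h^2+4ef)$ is Poisson; indeed $\{h,h^2+4ef\}=4(2ef-2ef)=0$ and $\{e,h^2+4ef\}=2h\{e,h\}+4e\{e,f\}=-4eh+4eh=0$, and similarly for $f$, so the generator is even a Casimir. Its zero locus is the nilpotent cone, an instance of the nilpotent orbit closures mentioned below.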
For aesthetic reasons we would like to restrict the attention to ideals $I$ generated by homogeneous polynomials. Notice that if $W$ is conical, i.e., $tW\subseteq W$ for all $t\in \mathbb{C}^\times$, then $I$ will have this property. Another important class of examples is provided by nilpotent orbit closures (see e.g. \cite{Collingwood}). One way to construct concrete examples is to decompose the $\mathfrak g$-module $\mathbb{C}[\mathfrak g^*]$ into irreducible $\mathfrak g$-modules. Knowing explicit bases for the irreducible components one can form finite-dimensional $\mathfrak g$-submodules $U\subset S$. Their ideals $I_U=\{f\in S\mid f=\sum_\mu a_\mu f_\mu,\; a_\mu\in S, f_\mu\in U\}$ are clearly Poisson. We emphasize that for the situation described above the tensors $Z_{i\mu}^\nu$ can be chosen to be $-1$ times the representation matrices of the $\mathfrak g$-module $U$. In particular, in this case the $Z_{i\mu}^\nu$ are \emph{constant} and as such Casimirs in $S$. In principle one can also consider the categorical quotient $U^{-1}(0)/\!\!/G$, i.e., $\operatorname{Spec}(S^G/(I_U\cap S^G))$. The Poisson bracket on $\operatorname{Spec}( S/I_U)$ descends to a Poisson bracket on $U^{-1}(0)/\!\!/G$ of degree $-1$. For lack of space we refrain from elaborating examples of this type. \begin{proposition} \label{prop:deg-1} In the situation described above we have \begin{enumerate} \item \label{item:MClinear} $\delta_{\operatorname{Poiss}} Z-[Z,Z]=0$, \item $I_U$ is generated by Casimirs if and only if $U\subset S^G$, \item \label{item:degAmunulambda} if $U\subseteq S_m$ then $\deg\left(\mathcal A_{\mu\nu}^\lambda\right)=m-1$, and hence a nonzero $\mathcal A_{\mu\nu}^\lambda$ cannot be in $I_U$. \end{enumerate} \end{proposition} \begin{proof} The fundamental vector fields of the coadjoint $\mathfrak g$-action are given by $-\{x_i,\:\}=-\sum_{j,k} c_{ij}^k x_k\partial /\partial x _j$. Here $c_{ij}^k$ are the structure constants of the Lie algebra in the basis $e_1,\dots,e_n$ of $\mathfrak g$ corresponding to the choice of linear coordinates $x_1,\dots,x_n$ for $\mathfrak g^*$, i.e., $[e_i,e_j]=\sum_k c_{ij}^k e_k$. To show \eqref{item:MClinear} note that, since $Z_{i\mu}^\nu$ are Casimir, we have \begin{align*} \delta_{\operatorname{Poiss}}\left(\sum_k Z_{k\mu}^\nu{\partial\over\partial x_k}\right)(\mathrm{d} x_i,\mathrm{d} x_j)=-\sum_{k} Z_{k\mu}^\nu { \partial \Lambda_{ij} \over \partial x_k}=-\sum_{k} c_{ij}^k Z_{k\mu}^\nu=[Z_i,Z_j]_\mu^\nu. \end{align*} The other statements are obvious. \end{proof} \subsubsection{Harmonic polynomials in $3$ variables}\label{subsubsec:harmonic} Let us consider $\mathfrak g=\mathfrak{so}_3$. Note that the adjoint representation is isomorphic to the dual of the standard representation $\mathbb{C}^3$. Hence $S=\mathbb{C}[\mathfrak{so}_3^*]=\mathbb{C}[x_1,x_2,x_3]$ with the standard grading and Poisson bracket \[\{x_1,x_2\}=x_3, \quad \{x_2,x_3\}=x_1,\quad \{x_3,x_1\}=x_2.\] The subspaces of $S$ of harmonic polynomials of degree $\ell$ \[\mathcal H_\ell:=\{f\in S\mid \Delta f=0,\;\deg(f)=\ell\}\] form the irreducible components in the decomposition $S=\oplus_{\ell\ge 0}\mathcal H_\ell$ of the $\mathfrak{so}_3$-module $S$. Here $\Delta=\sum_i(\partial/\partial x_i)^2$ is the Laplacian. The simplest nontrivial case is that of degree $2$. Here $\mathcal H_2$ is the $\mathbb{C}$-span of \begin{align*} f_1=x_1x_2,\quad f_2=x_1x_3,\quad f_3=x_2x_3,\quad f_4=x_1^2-x_2^2,\quad f_5=x_1^2-x_3^2.
\end{align*} We can easily work out the bracket table: \begin{align*} \begin{tabular}{c||c|c|c|c|c} $\{\:,\:\}$ & $f_1$ & $f_2$& $f_3$ & $f_4$ & $f_5$ \\\hline\hline $x_1$ & $f_2$ & $-f_1$& $f_4-f_5$& $-2f_3$& $2f_3$ \\\hline $x_2$ & $-f_3$ & $f_5$& $f_1$& $-2f_2$& $-4f_2$ \\\hline $x_3$ & $-f_4$ & $f_3$ &$-f_2$&$4f_1$&$2f_1$ \end{tabular} \end{align*} None of the generators $f_\mu$ are Casimirs. The Hilbert series of $A$ is $(1-5t^2+5t^3-t^5)/(1-t)^3=1+3t+t^2$. This means that $f_1,\dots, f_5$ is not a complete intersection and $A$ is actually a finite dimensional algebra. However, as $f_1,\dots,f_5$ form a Groebner basis for $I$ (e.g. in the graded reverse lexicographic term order), $A=S/I$ is a Koszul algebra. We found nonzero components in the tensors $(\mathcal A_{\mu\nu}^\lambda)_{\mu,\nu,\lambda}$. \subsubsection{$\ell\times \ell$-minors of a generic $m\times m$-matrix}\label{subsubsec:detideal} Suppose $S=\boldsymbol k[x_{ij}|1\le i,j\le m]$ and $U$ is the vector subspace of $S$ generated by the $\ell\times \ell$-minors. We denote by $I$ the ideal generated by $U$. The spectrum of $S/I$ is the determinantal variety of $m\times m$ matrices that have rank $\le \ell-1$. The ideal $I$ is stable under the action of $\mathfrak g=\mathfrak{gl}_m$. Using the coordinates $x_{ij}$ on $\mathfrak{gl}_m^*$ the commutation relations are \[\{x_{ij},x_{kl}\}=\delta_{jk}x_{il}-\delta_{li}x_{kj}.\] To be more specific we take a look into the case $\ell=2$ and $m=3$. We have nine $2\times 2$-minors in nine variables $x_{ij}$: \begin{align*} f_{11}=x_{22}x_{33}-x_{23}x_{32},\quad f_{12}=x_{21}x_{33}-x_{23}x_{31}, \quad f_{13}=x_{21}x_{32}-x_{22}x_{31},\\ f_{21}=x_{12}x_{33}-x_{13}x_{32},\quad f_{22}=x_{11}x_{33}-x_{13}x_{31},\quad f_{23}=x_{11}x_{32}-x_{12}x_{31},\\ f_{31}=x_{12}x_{23}-x_{13}x_{22},\quad f_{32}=x_{11}x_{23}-x_{13}x_{21},\quad f_{33}=x_{11}x_{22}-x_{12}x_{21}. \end{align*} We can check by hand that $I$ is preserved by the $\mathfrak{gl}_3$-action by writing \begin{align*} &\{x_{rs},x_{il}x_{jk}-x_{ik}x_{jl}\}=(x_{rl}\delta_{si}-x_{is}\delta_{rl})x_{jk}+(x_{rk}\delta_{sj}-x_{js}\delta_{rk})x_{il}-(x_{rk}\delta_{si}-x_{is}\delta_{rk})x_{jl}-(x_{rl}\delta_{sj}-x_{js}\delta_{rl})x_{ik}\\ &=\delta_{si}(x_{rl}x_{jk}-x_{rk}x_{jl})-\delta_{rl}(x_{is}x_{jk}-x_{ik}x_{js})+\delta_{sj}(x_{rk}x_{il}-x_{rl}x_{ik})-\delta_{rk}(x_{js}x_{il}-x_{is}x_{jl}). \end{align*} We conclude that $A=\boldsymbol k[x_{ij}|1\le i,j\le 3]/(f_{ij}|1\le i,j\le 3)$ is a Poisson algebra and notice that none of the $f_{ij}$ are Casimirs. The ideal $I$ is not a complete intersection. However, as $A$ is a quadratic extremal Gorenstein algebra it is a Koszul algebra \cite{Froeberg,polishchuk2005quadratic}. We found nonzero components in the tensors $(\mathcal A_{\mu\nu}^\lambda)_{\mu,\nu,\lambda}$. We expect $A/\operatorname{tr}$ to be isomorphic to the Poisson algebra of Subsection \ref{subsubsec:-1,1,1} but have not been able to work out the concrete isomorphism. Here $\operatorname{tr}=\sum_{i=1}^3 x_{ii}$ is a Casimir obtained by taking the trace of the generic $3\times 3$-matrix $(x_{ij})$. \subsection{Brackets of degree $\ge 0$}\label{subsec:brackets of degree >=0} \subsubsection{Diagonal Poisson bracket}\label{subsec:diag} \begin{lemma} Let $S=\boldsymbol k[x_1,x_2,\dots,x_n]$ be the polynomial algebra with $\deg(x_i)=1$ for all $i$.
For an antisymmetric $n\times n$-matrix $(c_{ij})_{i,j=1,\dots,n}$ consider the diagonal Poisson bracket \[\{x_i,x_j\}:=c_{ij}x_ix_j,\quad i,j=1,\dots,n.\] Then for any monomial $f=\prod_{j=1}^n x_j^{m_j}$ we have $\{x_i,f\}=\sum_{j=1}^{n}c_{ij}m_jx_i f$. \end{lemma} Let us introduce an antisymmetric bilinear form $(\:,\:)$ on $\boldsymbol k^n$ by $(e_i,e_j):=c_{ij}$ for $i,j=1,\dots,n$, where $e_1,\dots, e_n$ denotes the standard basis. Also for a monomial $f=\prod_{j=1}^n x_j^{m_j}$ we use the shorthand $\boldsymbol x^{\boldsymbol m}$, where $\boldsymbol m=(m_1,\dots,m_n)\in \mathbb{Z}_{\ge 0}^n$ is the vector of exponents of $f$. \begin{proposition} \label{prop:diag} Let $(S,\{\:,\:\})$ be the Poisson algebra of the previous lemma. Then any monomial ideal $I=(f_1,f_2,\dots,f_k)$ is a Poisson ideal. Writing $f_\mu=\boldsymbol x^{\boldsymbol m_\mu}$ with $\boldsymbol m_\mu\in \mathbb{Z}_{\ge 0}^n$ the tensors $Z_{i\mu}^\nu$ in $\{x_i,f_\mu\}=\sum_\nu Z_{i\mu}^\nu f_\nu$ can be chosen to be $Z_{i\mu}^\nu =(e_i,\boldsymbol m_\mu)x_i\delta_\mu^\nu$. As a consequence the tensors $\mathcal A_{\mu \nu}^\lambda$ (cf. Equation \eqref{eq:Amunu}) can be written as \begin{align*} \mathcal A_{\mu \nu}^\lambda =(\boldsymbol m_\mu,\boldsymbol m_\nu)(f_\mu\delta_{\nu}^\lambda-f_\nu\delta_{\mu}^\lambda). \end{align*} Moreover, $\delta_{\operatorname{Poiss}} Z=0=[Z,Z]$. \end{proposition} \begin{proof} We notice that $\delta_{\operatorname{Poiss}} Z=0$ is equivalent to $0=\{x_i,Z_{j\mu}^\nu\}-\{x_j,Z_{i\mu}^\nu\}-\sum_\ell Z_{\ell\mu}^\nu \partial \Lambda_{ij}/\partial x_\ell$, which in turn can be easily verified: both $\{x_i,Z_{j\mu}^\nu\}-\{x_j,Z_{i\mu}^\nu\}$ and $\sum_\ell Z_{\ell\mu}^\nu \partial \Lambda_{ij}/\partial x_\ell$ are equal to $c_{ij}(e_i+e_j,\boldsymbol m_\mu)x_ix_j\delta_\mu^\nu$. On the other hand $[Z,Z]$ vanishes since the matrices $Z_i$ are diagonal. \end{proof} We emphasize that we are not obliged to choose $Z_{i\mu}^\nu$ as in the Proposition. Actually there are examples with choices of $Z_{i\mu}^\nu$ that make both $\mathcal A_{\mu \nu}^\lambda$ and $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ vanish (cf. Subsection \ref{subsec:nondiag}). \subsubsection{Brackets from the Volterra system} Let $S=\boldsymbol k[x_1,x_2,\dots,x_n]$ with $n\ge 2$ and $\deg(x_i)=1$ for all $i=1,\dots,n$. The following compatible Poisson brackets $\{\:,\:\}_0$ of degree zero and $\{\:,\:\}_1$ of degree one can be deduced from \cite{faddeev2007hamiltonian} (see also \cite{Damianou}). They are defined by \begin{align*} &\{x_i,x_{i+1}\}_0:=x_ix_{i+1},\\ &\{x_i,x_{i+1}\}_1:=x_ix_{i+1}(x_i+x_{i+1}), \qquad \{x_i,x_{i+2}\}_1:=x_ix_{i+1}x_{i+2}. \end{align*} All other brackets between coordinates are understood to be zero. The bracket $\{\:,\:\}_0$ is a diagonal Poisson bracket. The bracket $\{\:,\:\}_1$ also preserves monomial ideals. If the ideal $I$ is generated by monomials $f_1,\dots,f_k$ with $f_\mu=\prod_i x_i^{m_{\mu,i}}$ then for the bracket $\{\:,\:\}_1$ we get \begin{align*} Z_{i\mu}^\nu&= \big( (m_{\mu,i+1} + m_{\mu,i+2}) x_i x_{i+1} - (m_{\mu,i-1} + m_{\mu,i-2}) x_i x_{i-1} + (m_{\mu,i+1} - m_{\mu,i-1}) x_i^2 \big) \delta_\mu^\nu,\\ \mathcal A_{\mu \nu}^\lambda&= \sum_{i=1}^n \Big( m_{\mu,i} \big( (m_{\nu,i+1} + m_{\nu,i+2}) x_{i+1} - (m_{\nu,i-1} + m_{\nu,i-2}) x_{i-1} + (m_{\nu,i+1} - m_{\nu,i-1}) x_i \big) \delta_\nu^\lambda f_\mu \\&\quad + m_{\nu,i} \big( (m_{\mu,i+1} + m_{\mu,i+2}) x_{i+1} - (m_{\mu,i-1} + m_{\mu,i-2}) x_{i-1} + (m_{\mu,i+1} - m_{\mu,i-1}) x_i \big) \delta_\mu^\lambda f_\nu \Big),\\ \delta_{\operatorname{Poiss}} Z&=0=[Z,Z]. \end{align*} The verification of the last identity is sort of tedious.
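As a sanity check of these formulas (reading $m_{\mu,j}$ and $x_j$ as zero whenever the index $j$ falls outside $\{1,\dots,n\}$, a convention we adopt here for illustration), take $n=3$ and the monomial $f_\mu=x_2$, i.e., $\boldsymbol m_\mu=(0,1,0)$. Then
\begin{align*}
Z_{1\mu}^\mu=(m_{\mu,2}+m_{\mu,3})x_1x_2+(m_{\mu,2}-m_{\mu,0})x_1^2=x_1x_2+x_1^2,
\end{align*}
and indeed $\{x_1,f_\mu\}_1=x_1x_2(x_1+x_2)=(x_1x_2+x_1^2)f_\mu$.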
A monomial $f$ is a Casimir for $\{\:,\:\}_0$ if and only if $n$ is odd and $f$ is a power of $x_1x_3x_5\cdots x_{n-2}x_n$. The bracket $\{\:,\:\}_1$ does not have nontrivial polynomial Casimirs. \subsection{Other brackets} \begin{theorem}\label{thm:detbracket} Let $\boldsymbol k$ be $\mathbb{R}$ or $\mathbb{C}$ and $S=\boldsymbol k[x_1,x_2,\dots, x_n]$. Assume that $X^1,X^2,\dots,X^{k+2}\in D_S$ are pairwise commuting derivations, $g\in S$, and $I$ is the ideal of $S$ generated by $f_1,f_2,\dots, f_k\in S$. For $a=:f_{k+1}, b=:f_{k+2}\in S$, let \begin{align} \label{eq:Palamodovgeneral} \{a,b\}=g\operatorname{Det}\left((X^\nu(f_\mu))_{\mu,\nu=1,2,\dots,k+2}\right). \end{align} This defines a Poisson bracket on $S$ such that $f_1,f_2,\dots, f_k$ are Casimirs. In the graded case the degree of the bracket is $\deg(\{\:,\:\})=\sum_{\mu=1}^k\deg(f_\mu)+\sum_{\nu=1}^{k+2}\deg(X^\nu).$ \end{theorem} In \cite[Proposition 4.2]{PalamodovInfDef} it is only shown that this defines a bracket in $S/I$. A more direct proof can be deduced from \cite{Filippov} and a special case is treated in \cite[Section 8.3]{LGPV}. For the convenience of the reader we include another demonstration. \begin{proof} Let us assume that $\boldsymbol k=\mathbb{R}$ (the proof for $\boldsymbol k=\mathbb{C}$ is analogous to the real case). Let $\mathcal U\subset \mathbb{R} ^n$ be the open subset where $X^1,X^2,\dots,X^{k+2}$ are linearly independent. On $\mathbb{R}^n\backslash \mathcal U$ the bracket is identically zero and there is nothing to show. Let $x\in \mathcal U$ and $\phi$ be a diffeomorphism from a neighborhood $\mathcal V$ of $0\in \mathbb{R}^n$, with coordinates $(t_1,\dots,t_n)$, to a neighborhood $ \mathcal U_x$ of $x$ in $\mathcal U$ such that $T\phi$ sends $\partial/\partial t_\nu$ to $X^\nu$ for $\nu=1,\dots,k+2$. Let us write $F_\mu:=f_\mu\circ \phi$ for $\mu=1,\dots, k$ and $G:=g\circ \phi$. For $A=:F_{k+1}, B=:F_{k+2}\in\mathcal C^\infty(\mathcal V)$ define a skew-symmetric bracket \begin{align*} \{A,B\}:=G\operatorname{Det}\left(\left({\partial F_\mu\over \partial t_\nu}\right)_{\mu,\nu=1,2,\dots,k+2}\right). \end{align*} It is enough to show that this is a Poisson bracket since $\{a,b\}\circ \phi=\{a\circ \phi,b\circ \phi\}$. Let $\mathcal J:=\operatorname{Ann}(\mathcal F)=\oplus_{p\ge 0} \mathcal J^p$, \begin{align*} \mathcal J^p:=\{\alpha \in\Omega^p(\mathcal V)\mid \alpha(V_1,\dots,V_p)=0\;\;\forall V_1,\dots,V_p\in \mathcal F\} \end{align*} be the annihilator in the algebra of smooth differential forms $\Omega(\mathcal V)$ of the distribution $\mathcal F=\mathcal C^\infty(\mathcal V)\{\partial /\partial t_\nu\mid 1 \le \nu\le k+2\}$. It is the differential ideal generated by $\mathrm{d} t_\mu$ for $\mu=k+3,\dots,n$. Now $\{A,B\}$ is uniquely determined by the relation \begin{align*} G\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge \mathrm{d} A\wedge \mathrm{d} B+\mathcal J=\{A,B\}\;\mathrm{d} t_1\wedge \dots \wedge \mathrm{d} t_{k+2}+\mathcal J. \end{align*} As $\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge \mathrm{d} (A_1A_2)\wedge \mathrm{d} B+\mathcal J=A_2\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge \mathrm{d} A_1\wedge \mathrm{d} B+A_1\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge \mathrm{d} A_2\wedge \mathrm{d} B+\mathcal J$ the bracket satisfies the Leibniz rule in the first argument.
Hence there exists a bivector $\pi=\sum_{i,j=1}^{k+2}\pi_{ij}\partial /\partial t_i\wedge\partial /\partial t_j\in \wedge ^2D_{\mathcal C^\infty(\mathcal V)}$ in the second exterior power of the $\mathcal C^\infty(\mathcal V)$-module of derivations $D_{\mathcal C^\infty(\mathcal V)}$ of $\mathcal C^\infty(\mathcal V)$ such that $\{A,B\}=-i_\pi(\mathrm{d} A\wedge \mathrm{d} B)$. Here $i_\pi: \Omega^\bullet(\mathcal V)\to \Omega^{\bullet-2}(\mathcal V)$ is the contraction with the bivector $\pi$. Recall that the Lie derivative $L_\pi=[i_\pi,\mathrm{d}]=i_\pi\circ \mathrm{d}-\mathrm{d}\circ i_\pi: \Omega^\bullet(\mathcal V)\to \Omega^{\bullet-1}(\mathcal V)$ along $\pi$ is a second order superdifferential operator annihilating $\Omega^0(\mathcal V)$ and $\Omega^1(\mathcal V)\cap Z(\mathcal V)$, where $Z(\mathcal V)$ is the space of closed forms. It is straightforward to check that $L_\pi(\mathcal J\cap Z(\mathcal V))\subseteq \mathcal J$. If $\alpha,\beta$, and $\gamma$ are differential forms of degree $|\alpha|,|\beta|, |\gamma|$, then \begin{align}\label{eq:2ndOrder} \nonumber L_\pi(\alpha\wedge\beta\wedge\gamma)&=(-1)^{|\alpha|}\alpha\wedge L_\pi(\beta\wedge\gamma)+(-1)^{|\beta|(|\alpha|-1)}\beta\wedge L_\pi(\alpha\wedge\gamma)+(-1)^{|\gamma|(|\beta|+|\alpha|-1)}\gamma\wedge L_\pi(\alpha\wedge\beta)\\ &\qquad\qquad-(-1)^{|\alpha|+|\beta|}\alpha\wedge\beta\wedge L_\pi(\gamma)-(-1)^{|\alpha|+|\gamma|(|\beta|-1)}\alpha\wedge\gamma\wedge L_\pi(\beta). \end{align} This can be easily shown from the fact that the supercommutator of the left multiplication $\alpha \wedge $ and $ L_\pi$ is a derivation of $\Omega(\mathcal V)$. In the special case when $\alpha=\mathrm{d} A,\beta=\mathrm{d} B$, and $\gamma=\mathrm{d} C$ this simplifies to \begin{align*} L_\pi( \mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C) &=- \mathrm{d} A\wedge L_\pi(\mathrm{d} B\wedge\mathrm{d} C)+\mathrm{d} B\wedge L_\pi( \mathrm{d} A\wedge\mathrm{d} C)-\mathrm{d} C \wedge L_\pi( \mathrm{d} A\wedge\mathrm{d} B)\\ &=-\mathrm{d} A\wedge \mathrm{d}\{B,C\}-\mathrm{d} B\wedge \mathrm{d}\{C,A\}-\mathrm{d} C\wedge \mathrm{d}\{A,B\}. \end{align*} Consequently, with $\operatorname{Jac}(A,B,C):=\{A,\{B,C\}\}+\{B,\{C,A\}\}+\{C,\{A,B\}\}$ we have \begin{align*} \operatorname{Jac}(A,B,C)\;\mathrm{d} t_1\wedge \dots \wedge \mathrm{d} t_{k+2}+\mathcal J=-G\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge L_\pi( \mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C)+ \mathcal J, \end{align*} and it remains to show that $\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge L_\pi( \mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C)\in \mathcal J$. Let $\alpha\in\Omega(\mathcal V)$ be a closed form. Since $\{F_\mu,\:\}=0$ by construction of $\pi$ it follows that $L_\pi(\mathrm{d} F_\mu\wedge\alpha)=-\mathrm{d} F_\mu\wedge L_\pi(\alpha)$. Hence we have \begin{align*} \mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge L_\pi( \mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C)=(-1)^k L_\pi(\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge \mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C). \end{align*} But $\mathrm{d} F_1\wedge \dots \wedge \mathrm{d} F_k\wedge\mathrm{d} A\wedge\mathrm{d} B\wedge\mathrm{d} C$, being a $(k+3)$-form, is in $\mathcal J\cap Z(\mathcal V)$. \end{proof} \section{Complete intersections and the conormal sequence}\label{sec:ci} Let $S=\boldsymbol k[x_1,x_2,\dots, x_n]$ be our polynomial algebra and consider the ideal $I=(f_1,f_2,\dots,f_k)$ generated by homogeneous polynomials $f_1,f_2,\dots,f_k\in S$.
We write $A=S/I$ for the quotient algebra. We use Latin indices $i,j,\dots$ to index the $x$'s and Greek indices $\mu,\nu,\dots$ to index the $f$'s. The \emph{Koszul complex} $K_\bullet(S,(f_1,f_2,\dots,f_k))=: K_\bullet(S,\boldsymbol f)$ is as an $S$-algebra the graded polynomial algebra $S[y_1,y_2,\dots,y_k]$ in the \emph{odd} variables $y_\mu$, i.e., we have $y_\mu y_\nu=-y_\nu y_\mu$. The \emph{Koszul differential} $\partial$ is the unique $S$-linear derivation that sends $y_\mu$ to $f_\mu$. In other words, we can write $\partial$ as a vector field \begin{align*} \partial=\sum_{\mu=1}^k f_\mu{\partial\over \partial y_\mu}. \end{align*} It is clear that $\partial^2=0$. The homological degree is $|y_\mu|=1$. We see that $(K_\bullet(S,\boldsymbol f), \partial)$ is a chain complex. In fact, $(K_\bullet(S,\boldsymbol f), \partial)$ is a supercommutative dg algebra with $S$-linear differential $\partial$. We also declare the internal degree to be $\deg(y_\mu):=\deg(f_\mu)$ so that $\partial$ has internal degree zero. Note that the zeroth homology module $H_0{K}(S,\boldsymbol f)$ is isomorphic as an $S$-algebra to $A=S/I$. We say that $\boldsymbol f= (f_1,f_2,\dots,f_k)$ is a \emph{complete intersection} if the homologies $H_i{K}(S,\boldsymbol f)$ are trivial in homological degree $i\ge 1$. Based on the coefficients $Z_{i\mu}^\nu$ in the Poisson relation $\{x_i,f_\mu\}= \sum_\nu Z_{i\mu}^\nu f_\nu$, we defined the tensor $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ in $\mathrm C^2_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S)$ (see Equation \eqref{eq:MC}). \begin{theorem} \label{thm:MC} Suppose $\boldsymbol f$ is a complete intersection. Then the image of $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ in $\mathrm C^2_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$ vanishes. Moreover, the image of $Z$ in $\mathrm C^1_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$ is uniquely determined by $\boldsymbol f$. \end{theorem} \begin{proof} We use Einstein summation convention, i.e., summation over repeated indices is understood. If two terms $X$ and $Y$ are equal modulo $I$ we will write $X\sim Y$. From Appendix \ref{ap:Poissoncohomology} we deduce \begin{align}\label{eq:Poiss2coboundary} \delta_{\operatorname{Poiss}}\left(Z_{i\mu}^\nu {\partial \over\partial x_i}\right)(\mathrm{d} x_i,\mathrm{d} x_j)=\{x_i,Z_{j\mu}^\nu\}-\{x_j,Z_{i\mu}^\nu\}- Z_{k\mu}^\nu{\partial\Lambda_{ij}\over\partial x_k}. \end{align} Note furthermore that \begin{align}\label{eq:ZLambdaidentity} Z_{i\mu}^\nu f_\nu=\{x_i,f_\mu\}=\Lambda_{ij}{\partial f_\mu\over \partial x_j}. \end{align} We proceed by analyzing the Jacobi identity \begin{align*} &0=\{x_i,\{x_j,f_\mu\}\}-\{x_j,\{x_i,f_\mu\}\}+\{f_\mu,\{x_i,x_j\}\}\\ &=\{x_i,Z_{j\mu}^\nu f_\nu\}- \{x_j,Z_{i\mu}^\nu f_\nu\} +\{f_\mu,\Lambda_{ij}\}\\ &=\{x_i,Z_{j\mu}^\nu\} f_\nu+Z_{j\mu}^\nu Z_{i\nu}^\lambda f_\lambda - \{x_j,Z_{i\mu}^\nu\} f_\nu-Z_{i\mu}^\nu Z_{j\nu}^\lambda f_\lambda +\Lambda_{k\ell}{\partial f_\mu\over \partial x_k}{\partial \Lambda_{ij}\over \partial x_\ell}\\ &=\left(\{x_i,Z_{j\mu}^\nu\}- \{x_j,Z_{i\mu}^\nu\}\right)f_\nu+([Z_j,Z_i])_\mu^\nu f_\nu -Z_{\ell\mu}^\nu f_\nu{\partial \Lambda_{ij}\over \partial x_\ell}\\ &=\partial\left(\left(\{x_i,Z_{j\mu}^\nu\}- \{x_j,Z_{i\mu}^\nu \}-Z_{\ell\mu}^\nu {\partial \Lambda_{ij}\over \partial x_\ell} -([Z_i,Z_j])_\mu^\nu\right)y_\nu\right). \end{align*} This means that the expression in the last equation is a Koszul $1$-cycle, hence a Koszul $1$-boundary and as such $\sim 0$.
We conclude that \begin{align*} \delta_{\operatorname{Poiss}}\left(Z_{i\mu}^\nu {\partial \over\partial x_i}\right)\sim {1\over 2}([Z_i,Z_j])_\mu^\nu {\partial \over\partial x_i}\wedge{\partial \over\partial x_j}. \end{align*} The unicity statement is a rephrasing of a theorem of Vasconcelos (see \cite[Theorem 19.9]{Matsumura}). \end{proof} \begin{corollary} \label{cor:principalideal} If $I=(f_1)$ is a principal ideal with $f_1\ne 0\in S$, then $Z=\sum_i Z_{i1}^1\partial /\partial x_i\in \mathrm C^1_{\operatorname{Poiss}}(S)\cong\mathrm C^1_{\operatorname{Poiss}}(S)\otimes \mathfrak {gl}_1(S) $ is a cocycle: $\delta_{\operatorname{Poiss}} Z=0$. If in addition $Z_{i1}^1$ and $Z_{j1}^1$ are Casimirs in $S$, then $Z(\{x_i,x_j\})=0\in S$. If $Z_{i1}^1$ is a Casimir for all $i$, then $Z$ commutes with all Hamiltonian vector fields: $[Z,\delta_{\operatorname{Poiss}} S]=0$. \end{corollary} \begin{proof} Observing that the commutator in $\mathfrak{gl}_1$ vanishes, we analyze the relation \begin{align*} 0=\left(\{x_i,Z_{j1}^1\}- \{x_j,Z_{i1}^1\} -Z_{\ell1}^1 {\partial \Lambda_{ij}\over \partial x_\ell}\right)f_1 \end{align*} from the proof of the previous theorem. As the complement of the zero locus of $f_1$ is dense, we can discard the factor of $f_1$. The claims follow from the relation $Z(\{x_i,x_j\})=Z_{\ell1}^1 {\partial \Lambda_{ij}/ \partial x_\ell}$. \end{proof} In the following proposition we use our standing assumption that $S$ has a $\mathbb{Z}_{\ge 0}$-grading compatible with the Poisson bracket such that $S_0=\boldsymbol k$. \begin{proposition} Let $I$ be a graded principal Poisson ideal in $S$ and suppose that the first Poisson cohomology $\mathrm H^1_{\operatorname{Poiss}}(S)$ is trivial. Then any generator of the principal Poisson ideal $I$ is a Casimir. \end{proposition} \begin{proof} Let us simplify the notation by writing $f=f_1$ and $Z_i=Z_{i1}^1$. As $Z$ is a Poisson coboundary there is a function $a\in S$ such that $Z_i=\{x_i,a\}$ for all $i=1,\dots,n$. We can assume $a\in S$ is homogeneous. By the definition of $Z_i$ we have $\{x_i,f\}=\{x_i,a\}f$. Comparing degrees, the relation $\{x_i,f\}=Z_if$ forces $\deg(Z_i)=\deg(x_i)+\deg(\{\:,\:\})$, while $\deg(\{x_i,a\})=\deg(x_i)+\deg(\{\:,\:\})+\deg(a)$; this implies $\deg(a)=0$, hence $a$ is a scalar, which means that $Z_i=0$. \end{proof} \begin{corollary} For any of the Poisson brackets of Subsection \ref{subsec:brackets of degree >=0} $\mathrm H^1_{\operatorname{Poiss}}(S)$ is nontrivial. \end{corollary} \begin{lemma} Assuming $\boldsymbol f$ is a complete intersection, let $g_1,g_2,\dots,g_\ell\in S$ be another set of generators for the ideal $I$ such that each $g_\alpha$ is a Casimir modulo $I$, i.e., $\{x_i,g_\alpha\}\in I$ for all $i$ and $\alpha$. For the transition matrices $N\in S^{\ell\times k}$ and $M\in S^{k\times \ell}$ with \[g_\alpha=\sum_{\mu=1}^k N_\alpha^\mu f_\mu,\qquad f_\mu=\sum_{\alpha=1}^\ell M_\mu^\alpha g_\alpha\] we have \begin{align*} Z_{i\mu}^\nu+I=-\sum_{\alpha=1}^\ell M_\mu^\alpha \{x_i,N_\alpha^\nu\}+I=\sum_{\alpha=1}^\ell N_\alpha^\nu\{x_i,M^\alpha_\mu\}+I. \end{align*} This can also be written in the form $Z=-M\delta_{\operatorname{Poiss}} N=(\delta_{\operatorname{Poiss}} M)N \in\mathrm C^1_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(A)$. \end{lemma} \begin{proof} We use the same shorthand notations as in the proof of Theorem \ref{thm:MC}. We have $f_\nu=M_\nu^\alpha N_\alpha^\mu f_\mu$. We write this as $(\delta_\nu^\mu-M_\nu^\alpha N_\alpha^\mu)f_\mu=0$ using the Kronecker $\delta$-symbol.
Therefore \begin{align}\label{eq:rightinverse} (\delta_\nu^\mu-M_\nu^\alpha N_\alpha^\mu)y_\mu \end{align} is a Koszul $1$-cycle, hence a Koszul $1$-boundary and as such $\sim 0$. We conclude that $M_\nu^\alpha N_\alpha^\mu\sim \delta_\nu^\mu$. Next we analyze \begin{align*} 0\sim\{x_i,g_\alpha\}=\{x_i,N_\alpha^\nu f_\nu\}=\{x_i,N_\alpha^\nu\} f_\nu+N_\alpha^\nu Z_{i\nu}^\mu f_\mu=\left(\{x_i,N_\alpha^\nu\} +N_\alpha^\mu Z_{i\mu}^\nu \right)f_\nu. \end{align*} We conclude that $\left(\{x_i,N_\alpha^\nu\} +N_\alpha^\mu Z_{i\mu}^\nu \right)y_\nu$ is a Koszul $1$-cycle modulo $I$, hence a Koszul $1$-boundary modulo $I$ and as such $\sim 0$. We write this as $N_\alpha^\mu Z_{i\mu}^\nu\sim \{x_i,N_\alpha^\nu\}$. Multiplying this with the matrix $M$ we arrive at $Z_{i\mu}^\nu\sim-M_\mu^\alpha \{x_i,N_\alpha^\nu\}$. Applying \eqref{eq:rightinverse} and the fact that $\delta_\mu^\nu$ is a Casimir we can rewrite this as $N_\alpha^\nu\{x_i,M^\alpha_\mu\}$. \end{proof} Using $\boldsymbol f=(f_1,f_2,\dots,f_k)$, we form a two-term chain complex $\mathbb L=\mathbb L_0\oplus\mathbb L_1$ of free $A$-modules $\mathbb L_0:=A^n$ and $\mathbb L_1:=A^k$: \begin{align}\label{eq:cicotangent} 0\leftarrow\mathbb L_0\overset{\partial}{\longleftarrow} \mathbb L_1 \leftarrow 0 \end{align} The differential is given as follows. Let $e_i$ be the canonical basis for $\mathbb L_0=A^n$ and $\varepsilon_\mu$ be the canonical basis for $\mathbb L_1=A^k$. Then $\partial$ is the unique $A$-linear map such that $\partial \varepsilon_\mu=\sum_{i=1}^n\partial f_\mu/\partial x_i\:e_i$. By sending $e_i \mapsto \mathrm{d} x_i$ and $\varepsilon_\mu\mapsto 0$ we obtain an $A$-linear map $\mathbb L\to\Omega_{A|\boldsymbol k}$. The complex $\mathbb L$ is sometimes referred to as the \emph{conormal sequence} and is a special case of the cotangent complex (see Section \ref{sec:cotangent}). Recall the following well-known fact. \begin{lemma} If $\boldsymbol f$ is a reduced complete intersection it follows that $\mathbb L$ is exact in homological degree $1$, and hence the map $\mathbb L\to\Omega_{A|\boldsymbol k}$ is a quasi-isomorphism. \end{lemma} \begin{proof} See for example \cite[Theorem 9.5]{Kunz}. \end{proof} We caution the reader that the symbols $e_i$ and $\varepsilon_\mu$ will be written later as $\mathrm{d} x_i$ and $\mathrm{d} y_\mu$, respectively, see Section \ref{sec:cotangent}. To avoid confusion, in this section, we stick to the notation $e_i$ and $\varepsilon_\mu$. \begin{theorem}\label{thm:ciLiealgebroid} If $\boldsymbol f$ is a complete intersection, there is a graded Lie bracket on $\mathbb L$ defined by \begin{align} \label{eq:Koszulbracketci} [e_i,e_j]:=\sum_{k=1}^n {\partial \Lambda _{ij}\over \partial x_k}e_k, \qquad [e_i,\varepsilon_\mu]:=\sum_{\nu=1}^k Z_{i\mu}^\nu \varepsilon_\nu, \end{align} making $(\mathbb L,\partial, [\:,\:])$ a dg Lie algebroid over $\operatorname{Spec}(A)$ with anchor $\rho: \mathbb L\to D_A$ given by $e_i\mapsto \{x_i,\:\}$ and $\varepsilon_\mu\mapsto 0$. The coefficients in \eqref{eq:Koszulbracketci} are understood to be taken modulo $I$. Moreover, the $A$-linear quasi-isomorphism $\mathbb L\to \Omega_{A|\boldsymbol k}$ is compatible with the brackets. We have that the internal degree of $[\:,\:]$ coincides with the internal degree of $\{\:,\:\}$ while $\deg\partial=0$. \end{theorem} \begin{proof} We use the shorthand notations from the proof of Theorem \ref{thm:MC}.
To verify the Leibniz rule \begin{align*} \partial[X,Y]=[\partial X,Y]+(-1)^{|X|}[X,\partial Y] \end{align*} we have to address two cases: the bracket on the lefthand side evaluated on $\mathbb L_0\times\mathbb L_1$ or on $\mathbb L_1\times\mathbb L_1$. In the former case the Leibniz rule spells out as \begin{align}\label{eq:LeibnizL0L1} \partial\left[U^ie_i,\xi^\mu\varepsilon_\mu\right]\sim\left[U^ie_i,\partial(\xi^\mu\varepsilon_\mu)\right] \end{align} where $U^i,\xi^\mu\in S$. The lefthand side of \eqref{eq:LeibnizL0L1} can be understood as \begin{align*} \partial\left[U^ie_i,\xi^\mu\varepsilon_\mu\right]=\partial\left( U^i\xi^\mu Z_{i\mu}^\nu \varepsilon_\nu+U^i\{x_i,\xi^\mu\}\varepsilon_\mu\right)= U^i\xi^\mu Z_{i\mu}^\nu {\partial f_\nu\over \partial x_k}e_k+U^i\Lambda_{ij}{\partial \xi^\mu\over \partial x_j}{\partial f_\mu\over \partial x_k}e_k. \end{align*} We have the freedom to add terms in $I$, \begin{align*} Z_{i\mu}^\nu {\partial f_\nu\over \partial x_k}\sim Z_{i\mu}^\nu {\partial f_\nu\over \partial x_k}+{\partial Z_{i\mu}^\nu \over \partial x_k}f_\nu ={\partial \over \partial x_k}\left( \{x_i,f_\mu\}\right)={\partial \over \partial x_k}\left( \Lambda_{ij}{\partial f_\mu\over \partial x_j}\right) ={\partial \Lambda_{ij}\over \partial x_k}{\partial f_\mu\over \partial x_j}+\Lambda_{ij}{\partial^2 f_\mu\over \partial x_k\partial x_j}. \end{align*} Hence \begin{align*} &\partial\left[U^ie_i,\xi^\mu\varepsilon_\mu\right]\sim U^i\xi^\mu{\partial \Lambda_{ij}\over \partial x_k}{\partial f_\mu\over \partial x_j}e_k+ U^i\xi^\mu \Lambda_{ij}{\partial^2 f_\mu\over \partial x_k\partial x_j} e_k+U^i\Lambda_{ij}{\partial \xi^\mu\over \partial x_j}{\partial f_\mu\over \partial x_k}e_k\\ &= U^i\xi^\mu{\partial \Lambda_{ij}\over \partial x_k}{\partial f_\mu\over \partial x_j}e_k+ U^i \Lambda_{ij}{\partial \over \partial x_j}\left(\xi^\mu{\partial f_\mu\over \partial x_k}\right)e_k =U^i\xi^\mu{\partial \Lambda_{ij}\over \partial x_k}{\partial f_\mu\over \partial x_j}e_k + U^i \left\{x_i,\xi^\mu{\partial f_\mu\over \partial x_k}\right\}e_k\\ &=\left[U^ie_i,\xi^\mu{\partial f_\mu\over \partial x_j}e_j\right]=\left[U^ie_i,\partial(\xi^\mu\varepsilon_\mu)\right], \end{align*} establishing \eqref{eq:LeibnizL0L1}. The other case boils down to \begin{align}\label{eq:LeibnizL1L1} \left[\partial(\xi^\mu\varepsilon_\mu),\eta^\nu\varepsilon_\nu\right]\sim\left[\xi^\mu\varepsilon_\mu,\partial(\eta^\nu\varepsilon_\nu)\right] \end{align} where $\xi^\mu, \eta^\nu\in S$. To show this we need an auxiliary result. Namely, we claim that (cf. Equation \eqref{eq:Amunu}) \begin{align}\label{eq:alphamunu} \alpha_{\mu\nu}:=\left( {\partial f_\mu\over \partial x_i}Z_{i\nu}^\lambda+ {\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda\right)y_\lambda=\mathcal A_{\mu\nu}^\lambda y_\lambda \end{align} is a Koszul $1$-cycle. This implies of course that $\alpha_{\mu\nu}$ is a Koszul $1$-boundary and as such $\sim 0$, meaning that \begin{align} {\partial f_\mu\over \partial x_i}Z_{i\nu}^\lambda\sim -{\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda. \end{align} The claim follows easily using relation \eqref{eq:ZLambdaidentity}: \begin{align*} \partial \alpha_{\mu\nu}=\left( {\partial f_\mu\over \partial x_i}Z_{i\nu}^\lambda+ {\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda\right)f_\lambda={\partial f_\mu\over \partial x_i}{\partial f_\nu\over \partial x_j}\Lambda_{ij}+{\partial f_\nu\over \partial x_i}{\partial f_\mu\over \partial x_j}\Lambda_{ij}=0.
\end{align*} We remind the reader that terms of the form $\{f_\mu,\eta^\nu\}$ and $\{f_\nu,\xi^\mu\}$ are $\sim 0$, since $I$ is a Poisson ideal. Now we are ready to take a look into \begin{align*} \left[\partial(\xi^\mu\varepsilon_\mu),\eta^\nu\varepsilon_\nu\right] =\left[\xi^\mu {\partial f_\mu\over \partial x_i} e_i,\eta^\nu\varepsilon_\nu\right] =\xi^\mu \underbrace{{\partial f_\mu\over \partial x_i}\{x_i,\eta^\nu\}} _{=\{f_\mu,\eta^\nu\}\sim 0}\varepsilon_\nu +\xi^\mu\eta^\nu {\partial f_\mu\over \partial x_i}Z_{i\nu}^\lambda\varepsilon_\lambda \sim -\xi^\mu\eta^\nu {\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda\varepsilon_\lambda. \end{align*} Similarly, we find \begin{align*} \left[\xi^\mu\varepsilon_\mu,\partial(\eta^\nu\varepsilon_\nu)\right] =\left[\xi^\mu \varepsilon_\mu,\eta^\nu{\partial f_\nu\over \partial x_i} e_i\right] =-\eta^\nu {\partial f_\nu\over \partial x_i}\{x_i,\xi^\mu\}\varepsilon_\mu -\xi^\mu\eta^\nu {\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda\varepsilon_\lambda \sim -\xi^\mu\eta^\nu {\partial f_\nu\over \partial x_i}Z_{i\mu}^\lambda\varepsilon_\lambda, \end{align*} establishing \eqref{eq:LeibnizL1L1}. In order to check the Jacobi identity, our strategy is to show that the Jacobiator \begin{align*} \operatorname{Jac}:\mathbb L\times\mathbb L\times\mathbb L\to\mathbb L, \quad \operatorname{Jac}(X,Y,Z)=(-1)^{|X||Z|}[X,[Y,Z]]+(-1)^{|X||Y|}[Y,[Z,X]]+(-1)^{|Y||Z|}[Z,[X,Y]] \end{align*} is $A$-trilinear and to check its vanishing on basis elements. There are two cases to consider: $\operatorname{Jac}$ on $\mathbb L_0\times\mathbb L_0\times\mathbb L_0$ and on $\mathbb L_0\times\mathbb L_0\times\mathbb L_1$. However, in the former case there is a more elegant way to check the vanishing of $\operatorname{Jac}(U^ie_i,V^je_j,W^ke_k)$. We remind ourselves that in $\mathbb L_0$, $e_i$ is nothing other than $\mathrm{d} x_i$. Moreover, every $1$-form $U^i\mathrm{d} x_i$ is an $S$-linear combination of the exact forms $\mathrm{d} x_i$, so (using the Leibniz rule for $[\:,\:]$) it suffices to treat exact arguments $\mathrm{d} U,\mathrm{d} V,\mathrm{d} W$ with $U,V,W\in S$. For our argument we can use the relation $\left[\mathrm{d} U,\mathrm{d} V\right]=\mathrm{d}\{U,V\}$. Hence \[\left[\mathrm{d} U,\left[\mathrm{d} V,\mathrm{d} W\right]\right]=\left[\mathrm{d} U,\mathrm{d}\{V,W\}\right]=\mathrm{d}\{U,\{V,W\}\}.\] Therefore the Jacobiator is $\mathrm{d}$ applied to the Jacobi identity $\{U,\{V,W\}\}+\{V,\{W,U\}\}+\{W,\{U,V\}\}=0$. The vanishing of $\operatorname{Jac}(e_i,e_j,\varepsilon_\mu)$ is closely related to Theorem \ref{thm:MC}. We leave it to the reader to verify that $\operatorname{Jac}(e_i,e_j,\varepsilon_\mu)=0$ is a rephrasing of the Maurer-Cartan equation $\delta_{\operatorname{Poiss}} Z-[Z,Z]=0$. We are left with the task to prove that $\operatorname{Jac}$ is $A$-trilinear on $\mathbb L_0\times\mathbb L_0\times\mathbb L_1$. To check whether $\operatorname{Jac}(U^ie_i,V^je_j,\xi^\mu\varepsilon_\mu)$ is $A$-linear in the first argument, we notice that $\left[U^ie_i,\left[V^je_j,\xi^\mu\varepsilon_\mu\right]\right]$ is $A$-linear in the first argument.
The claim follows from the calculation \begin{align*} \left[V^j e_j, \left[ \xi^\mu\varepsilon_\mu,U^ie_i \right]\right] &=V^j\left[e_j,-U^i\left(\{x_i,\xi^\nu\}+\xi^\mu Z_{i\mu}^\nu\right)\varepsilon_\nu\right]\\ &=-\{x_j,U^i\}V^j\left(\{x_i,\xi^\nu\}+\xi^\mu Z_{i\mu}^\nu\right)\varepsilon_\nu+\mbox{ $U^i$-linear terms},\\ \left[\xi^\mu\varepsilon_\mu, \left[ U^ie_i,V^j e_j \right]\right]&=\left[\xi^\mu\varepsilon_\mu,-\{x_j,U^i\}V^je_i+\mbox{ $U^i$-linear terms}\right]\\ &=-\{x_j,U^i\}V^j\left(-\{x_i,\xi^\nu\}-\xi^\mu Z_{i\mu}^\nu\right)\varepsilon_\nu+\mbox{ $U^i$-linear terms}. \end{align*} By symmetry it follows that $\operatorname{Jac}(U^ie_i,V^je_j,\xi^\mu\varepsilon_\mu)$ is $A$-linear in the second argument. To make sure that $\operatorname{Jac}(e_i,e_j,\xi^\mu\varepsilon_\mu)$ is $A$-linear in the last argument we calculate \begin{align*} \left[e_i, \left[ e_j,\xi^\mu\varepsilon_\mu \right]\right] &=\left[e_i,\xi^\mu Z_{j\mu}^\nu\varepsilon_\nu+ \left\{ x_j,\xi^\mu \right\}\varepsilon_\mu\right]\\ &= \left\{ x_i,\xi^\mu \right\}Z_{j\mu}^\nu\varepsilon_\nu+\left\{ x_i,\{x_j,\xi^\mu \}\right\}\varepsilon_\mu+ \left\{ x_j,\xi^\mu \right\}Z_{i\mu}^\nu\varepsilon_\nu +\mbox{ $\xi^\mu$-linear terms},\\ \left[e_j, \left[ \xi^\mu\varepsilon_\mu,e_i \right]\right] &=-\left[e_j,\xi^\mu Z_{i\mu}^\nu\varepsilon_\nu+ \left\{ x_i,\xi^\mu \right\}\varepsilon_\mu\right]\\ &= -\left\{ x_j,\xi^\mu \right\}Z_{i\mu}^\nu\varepsilon_\nu-\left\{ x_j,\{x_i,\xi^\mu \}\right\}\varepsilon_\mu- \left\{ x_i,\xi^\mu \right\}Z_{j\mu}^\nu\varepsilon_\nu +\mbox{ $\xi^\mu$-linear terms},\\ \left[\xi^\mu\varepsilon_\mu, \left[ e_i,e_j \right]\right]&=-\{\{x_i,x_j\},\xi^\mu\}\varepsilon_\mu+\mbox{ $\xi^\mu$-linear terms}. \end{align*} Adding things up, the claim follows from the Jacobi identity for $\{\:,\:\}$. \end{proof} The above theorem actually reflects a more fundamental structure. In fact, we can make the following Ansatz for a differential graded Poisson algebra structure on the Koszul complex: \begin{align}\label{eq:dgPoisson} \{x_i,x_j\}=\Lambda_{ij},\quad \{x_i,y_\mu\}:=\sum_\nu Z_{i\mu}^\nu y_\nu,\quad \{y_\mu,y_\nu\}=-\beta_{\mu\nu}, \end{align} where the $\beta_{\mu\nu}=\beta_{\nu\mu}\in \operatorname K_2(S,\boldsymbol f)$ are chosen such that $\partial\beta_{\mu\nu}=\alpha_{\mu\nu}=\sum_\lambda\mathcal A_{\mu\nu}^\lambda y_\lambda$ (cf. Equation \eqref{eq:alphamunu}). The problem with this definition, however, is that the Jacobi identity is typically violated. More precisely, the Jacobiators $\operatorname{Jac}(x_i,x_j,y_\mu)$ are zero precisely when $\delta_{\operatorname{Poiss}} Z-[Z,Z]\in \mathrm{C}^2_{\operatorname{Poiss}}(S)\otimes_S\mathfrak{gl}_k(S)$ vanishes. For the Jacobiators $\operatorname{Jac}(x_i,y_\mu,y_\nu)$ and $\operatorname{Jac}(y_\mu,y_\nu,y_\lambda)$ one can work out concrete formulas that are already a bit messy. The only complete intersections known to us with $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ and all the $\beta_{\mu\nu}$ vanishing are hypersurfaces. The vanishing of $\beta_{\mu\nu}$ is due to the fact that the Koszul complex of a hypersurface is concentrated in homological degree zero and one, while Corollary \ref{cor:principalideal} tells us that $\delta_{\operatorname{Poiss}} Z-[Z,Z]=\delta_{\operatorname{Poiss}} Z=0$. \begin{theorem}\label{thm:principaldgPoisson} Let $I=(f)$ be a principal Poisson ideal in $S$ with $f\ne 0\in S$. Then the Koszul complex $(\operatorname K_\bullet(S,f)=S[y],\partial )$ is a dg Poisson algebra.
The dg Poisson structure is characterized by $\partial:y\mapsto f$, $\{x_i,x_j\}=\Lambda_{ij}$ and $\{x_i,y\}=Z_i$. Here the $Z_i\in S$ are uniquely determined up to $I$ by the requirement $\{x_i,f\}=Z_if$. \end{theorem} In general, when $\boldsymbol f=(f_1,\dots, f_k)$ is a complete intersection, the Koszul complex $\operatorname K_\bullet(S,\boldsymbol f)$ merely carries the structure of a $P_\infty$-algebra. One can find this statement and a sketch of a proof in a paper of Fresse \cite{FresseCI}. We will show this in Section \ref{sec:homotopystuff}, dropping the hypothesis of $\boldsymbol f$ being a complete intersection. \section{Poisson connection on the Koszul complex}\label{sec:connection} To put the calculations of the previous section into context, we offer here an explanation of the tensor $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ (defined in \eqref{eq:MC}) as $-1$ times the curvature of a Poisson connection (see, e.g., \cite{Bursztyn}) on the Koszul complex. Let us recall that a \emph{Poisson module} over the Poisson algebra $(S,\{\:,\:\})$ is an $S$-module $M$ such that $S\oplus M$ \begin{enumerate} \item is understood as the unique $\boldsymbol k$-algebra extending the $\boldsymbol k$-algebra structure on $S$ and the $S$-(bi)module structure on $M$ with the property that $M\cdot M=0$, \item carries a Poisson bracket $\{\:,\:\}$ that extends the bracket on $S$ such that $\{M,M\}=0$. \end{enumerate} This translates into the condition that $\{\:,\:\}:S\times M\to M$ satisfies \begin{align} \{\{f_1,f_2\},m\}&=-\{f_2,\{f_1,m\}\}+\{f_1,\{f_2,m\}\},\\ \{f_1,f_2 m\}&=\{f_1,f_2\} m+f_2\{f_1, m\},\\ \{f_1f_2, m\}&=f_2\{f_1,m\}+f_1\{f_2, m\} \end{align} for all $f_1,f_2\in S$ and $m\in M$. Conversely, a bracket $\{\:,\:\}:S\times M\to M$ with the above properties extends uniquely to a Poisson bracket on $S\oplus M$ that defines a Poisson module. Note that a free $S$-module is in an obvious way a Poisson module by taking brackets coordinate-wise. In particular, this applies to the $S$-module underlying the Koszul complex $\operatorname K_\bullet(S,\boldsymbol f)$. By a \emph{Poisson connection} on the Poisson module $M$ we mean a $\boldsymbol k$-linear map $\nabla:\Omega_{S|\boldsymbol k}\otimes_{\boldsymbol k} M\to M$, $\alpha\otimes m\mapsto \nabla_\alpha m$, that satisfies \begin{align} \nabla_{f\alpha} m&=f\nabla_{\alpha} m, \\ \nabla_{\alpha} (f m)&=f\nabla_{\alpha} m+\langle \alpha,\{f,\:\}\rangle m \end{align} for all $f\in S$, $m\in M$, and $\alpha\in\Omega_{S|\boldsymbol k}$. Here $\langle\:,\:\rangle$ is used for the pairing of K\"{a}hler forms with derivations of $S$. The connection is uniquely determined by its values $\nabla_{\mathrm{d} x_i}$ on the generators $\mathrm{d} x_i$. The \emph{curvature tensor} of the Poisson connection is defined to be \begin{align} \mathcal R(\alpha,\beta):=[\nabla_{\alpha},\nabla_{\beta}]-\nabla_{[\alpha,\beta]}, \end{align} for $\alpha,\beta\in\Omega_{S|\boldsymbol k}$. Here we use the Koszul bracket $[\alpha,\beta]$ between $\alpha$ and $\beta$. It turns out that $\alpha\wedge\beta\mapsto\mathcal R(\alpha,\beta)$ is an $S$-linear map $ \Omega_{S|\boldsymbol k}\wedge_S\Omega_{S|\boldsymbol k}\to \operatorname{End}_S(M)$. The Poisson connection $\nabla$ is called \emph{flat} if its curvature tensor vanishes identically.
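To anticipate the simplest case: for a principal Poisson ideal ($k=1$) the endomorphism part of the connections constructed below is scalar, so the commutator term vanishes, and Corollary \ref{cor:principalideal} gives $\delta_{\operatorname{Poiss}} Z=0$. Writing $\mathcal R^t$ for the curvature tensor of the connection $\nabla^t$ of the following proposition we then have
\begin{align*}
\mathcal R^t=t\,\delta_{\operatorname{Poiss}} Z+t^2[Z,Z]=0,
\end{align*}
i.e., on the Koszul complex of a hypersurface all the connections $\nabla^t$ are flat.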
\begin{proposition} For each $t\in\boldsymbol k$ the expression $\nabla_{\mathrm{d} x_i}^t:=\{x_i,\:\}+t\sum_{\mu,\nu} Z_{i\mu}^\nu y_\nu\partial/\partial{y_\mu}$ defines a Poisson connection $\nabla^t$ on the $S$-module underlying the Koszul complex $(\operatorname K_\bullet(S,\boldsymbol f),\partial)$. It has the property that $\nabla^t_\alpha$ is a derivation of the product for all $\alpha\in \Omega_{S|\boldsymbol k}$. When we specialize to $t=1$, $\nabla^{1}$ is the unique Poisson connection having this property such that $[\partial,\nabla^{1}_\alpha]=0$ for all $\alpha\in \Omega_{S|\boldsymbol k}$. The curvature tensor of $\nabla^{t}$ equals $t\delta_{\operatorname{Poiss}} Z+t^2[Z,Z]$. This means in particular that the curvature tensor of $\nabla^{-1}$ is $-\delta_{\operatorname{Poiss}} Z+[Z,Z]$. \end{proposition} \begin{proof} The most general Poisson connection acting as a derivation takes the form \begin{align*} \nabla_{\mathrm{d} x_i}=\{x_i,\:\}+\sum_{\mu,\nu} \Gamma_{i\mu}^\nu y_\nu\partial/\partial{y_\mu} \end{align*} for some coefficients $\Gamma_{i\mu}^\nu \in S$ (the so-called Christoffel symbols of the connection). It is easy to verify that the commutator $[\nabla_{\mathrm{d} x_i},\partial]$ vanishes precisely when $Z_{i\mu}^\nu=\Gamma_{i\mu}^\nu$ for all $\mu,\nu=1,\dots,k$. In the formula for the curvature we implicitly use the canonical identification of $\mathfrak{gl}_k(S)$ with linear vector fields in the odd coordinates $y_\mu$ sending the $k\times k$-matrix $(a_\mu^\nu)_{\mu,\nu=1,\dots,k}$ to $\sum_{\mu,\nu}a_{\mu}^{\nu}y_\nu\partial/\partial{y_\mu}$. For the curvature tensor of $\nabla^t$ acting on $m\in\operatorname K_\bullet(S,\boldsymbol f)$ we find \begin{align*} &[\nabla^t_{\mathrm{d} x_i},\nabla^t_{\mathrm{d} x_j}]m-\nabla^t_{[\mathrm{d} x_i,\mathrm{d} x_j]}m=[\nabla^t_{\mathrm{d} x_i},\nabla^t_{\mathrm{d} x_j}]m-\nabla^t_{\mathrm{d} \Lambda_{ij}}m\\ &=\left[\{x_i,\:\}+t\sum_{\mu,\nu} Z_{i\mu}^\nu y_\nu{\partial\over\partial{y_\mu}},\{x_j,\:\}+t\sum_{\lambda,\rho} Z_{j\lambda}^\rho y_\rho{\partial\over\partial{y_\lambda}}\right]m\\ &= \{x_i,\{x_j,m\}\}-\{x_j,\{x_i,m\}\}+t\left[\{x_i,\:\},\sum_{\lambda,\rho} Z_{j\lambda}^\rho y_\rho{\partial\over\partial{y_\lambda}}\right]m-t\left[\{x_j,\:\},\sum_{\mu,\nu} Z_{i\mu}^\nu y_\nu{\partial\over\partial{y_\mu}}\right]m\\ &\qquad +\sum_{\mu,\nu}t^2[Z_i,Z_j]_{\mu}^{\nu} y_\nu{\partial m\over\partial{y_\mu}}-\{\{x_i,x_j\},m\}-t\sum_{k,\mu,\nu}{\partial \Lambda_{ij}\over \partial x_k}Z_{k\mu}^\nu y_\nu{\partial m\over\partial{y_\mu}}=\left(t\delta_{\operatorname{Poiss}} Z+t^2[Z,Z]\right)m. \qedhere \end{align*} \end{proof} \section{The resolvent and the cotangent complex}\label{sec:cotangent} In this section we recall material concerning the resolvent and the cotangent complex. We essentially follow \cite{Manetti, AvramovInfFree} but adapt the notation to our needs. By a graded set we mean a countable set $\mathcal I$ with a function $\phi:\mathcal I\to \mathbb{Z}_{>0}$ such that for each $m> 0$ the cardinality of $\mathcal I_m:=\phi^{-1}(m)$ is finite. Accordingly $\mathcal I$ decomposes as a disjoint union of finite sets $\mathcal I=\sqcup_{m>0}\mathcal I_m$. To each $i\in \mathcal I_m$ we attach a variable $x_i^{(m)}$ whose parity coincides with the parity of $m$.
This means that in the graded polynomial ring over the commutative $\boldsymbol k$-algebra $S$ \begin{align*} S[\boldsymbol x]:=S\mleft[x_i^{(m)}\;\middle |\;m \ge 1,i\in \mathcal I_m\mright] \end{align*} we have the relation $x_i^{(m)}x_j^{(n)}=(-1)^{mn}x_j^{(n)}x_i^{(m)}$. Let us assign the cohomological degree $|x_i^{(m)}|:=-m$ to the variables $x_i^{(m)}$. By considering only variables up to level $r\ge 0$ we also have the graded polynomial ring in finitely many variables \begin{align*} S[\boldsymbol x_{\le r}]:=S\mleft[x_i^{(m)}\;\middle |\;1\le m \le r,i\in \mathcal I_m\mright], \end{align*} with the convention that $S=S[\boldsymbol x_{\le 0}]$. By a dg $S$-algebra we mean a cochain complex $(R,\partial)$ which is at the same time a supercommutative algebra such that $\partial$ is a derivation of the product. This means that for $a,b\in R$ of cohomological degree $|a|$ and $|b|$ we have $ab=(-1)^{|a||b|}ba$ and $\partial(ab)=(\partial a)b+(-1)^{|a|}a\partial b$. \begin{definition} A dg $S$-algebra $(R,\partial)$ is called \emph{semifree} if the following two conditions hold true. \begin{enumerate} \item As an $S$-algebra $R$ is a graded polynomial algebra $S[\boldsymbol x]$ over the graded set $\phi:\mathcal I\to \mathbb{Z}_{>0}$. \item For each $m>0$ and $i\in \mathcal I_m$ we have $\partial( x_i^{(m)})\in S[\boldsymbol x_{\le m-1}]$. \end{enumerate} Clearly, for each $r\ge 0$, $S[\boldsymbol x_{\le r}]$ forms a semifree dg subalgebra, which we denote by $(R_{\le r},\partial_{\le r})$. We will refer to it as the \emph{approximation of $(R,\partial)$ of level $\le r$}. The canonical algebra map $R\to S$ that sends each variable to zero is denoted by $\kappa$. \end{definition} We follow the habit of physics and interpret $\operatorname{Spec}(R_{\le r})$ as an affine supervariety with the homological vector field $\partial$. (If one takes the grading into account people refer to the setting as an NQ-manifold, see e.g. \cite{Strobl}.) $\operatorname{Spec}(R)$ however is merely a projective limit of affine supervarieties and one has to pay attention to issues related to the infinitely many degrees of freedom. Denoting the image of $x_i^{(m)}$ under $\partial$ by $F_i(\boldsymbol x_{\le {m-1}})$ we will find it convenient to write \begin{align}\label{eq:KoszulTate} \partial_{\le r}=\sum_{m=1}^r\sum_{j\in \mathcal I_m} F_{j}(\boldsymbol x_{\le m-1}){\partial\over\partial x_{j}^{(m)}}\quad\mbox{and }\quad \partial=\sum_{m=1}^{\infty}\sum_{j\in \mathcal I_m} F_{j}(\boldsymbol x_{\le m-1}){\partial\over \partial x_{j}^{(m)}}. \end{align} The fact that $\partial^2=0$ translates into the condition that for each $n\ge 1$ and $k\in \mathcal I_n$ we have \begin{align*} \sum_{m=1}^{n-1}\sum_{j\in \mathcal I_m} F_j(\boldsymbol x_{\le m-1}){\partial F_k(\boldsymbol x_{\le n-1})\over\partial x_{j}^{(m)}}=0. \end{align*} \begin{definition} Let $I$ be an ideal in $S$. We say that the semifree $S$-algebra $(R,\partial)$ is a \emph{resolvent} of $S/I$ if the composition of the algebra morphisms $\kappa: R\to S$ and $S\to S/I$ is a quasi-isomorphism, i.e., induces an isomorphism in cohomology. (Here $S/I$ is seen as a cochain complex concentrated in degree $0$). \end{definition} Let us specialize to the situation when $S=\boldsymbol k[x_1,\dots,x_n]$. In this case $A$ admits a resolvent (cf., e.g., \cite[Prop. 2.1.10.]{AvramovInfFree}). It is constructed as follows. Let $f_1,\dots, f_k$ be generators for the ideal $I$. Put $\mathcal I_1:=\{1,\dots ,k\}$ and define $\partial_{\le 1}$ by $x_\mu^{(1)}\mapsto f_\mu$.
Notice that $R_{\le 1}$ is nothing but the Koszul complex seen as a cochain complex. Its cohomology in degree $-2$ is a finitely generated $S$-module. For each member of a finite generating set of cohomology classes pick a representative $F_j(\boldsymbol x_{\le 1})\in R_{\le 1}^{-2}$ and define $\partial_{\le 2}$ by $x_j^{(2)}\mapsto F_j(\boldsymbol x_{\le 1})$. The process continues by induction. That is, for each member of a finite generating set of cohomology classes in $H^{-k}R_{\le k-1}$ we pick a representative $F_j(\boldsymbol x_{\le k-1})\in R_{\le k-1}^{-k}$ and put $\partial_{\le k}:x_j^{(k)}\mapsto F_j(\boldsymbol x_{\le k-1})$. The resulting resolvent $(R,\partial)$ is unique up to $S$-linear homotopy. In the situation when $I=(f_1,\dots,f_k)$ is a homogeneous ideal in $S=\boldsymbol k[x_1,\dots,x_n]$ with $\deg(x_i)\ge 1$ for $i=1,\dots,n$ such that $I\subseteq\mathfrak m=(x_1,\dots, x_n)$, we assign to the variables $x_j^{(m)}$ internal degrees such that $\deg({\partial})=0$. In this way the resolvent $(R,\partial)$ of $S/I$ becomes a bigraded dg algebra. \begin{definition} With the above assumptions the resolvent $(R,\partial)$ of $A=S/I$ is called a \emph{minimal model} if \begin{align*} \partial(x_j^{(1)})\in \mathfrak m \mbox{ for all }j=1,2,\dots,k, \quad \partial(x_j^{(r)})\in \mathfrak n^2 \mbox{ for all }r\ge 2,\:j\in \mathcal I_r, \end{align*} where $\mathfrak n$ is the kernel of the composition of $\kappa: R\to S$ and $S\to\boldsymbol k$. \end{definition} By \cite[Subsection 7.2]{AvramovInfFree} (see also \cite[Section 4.3]{ACI}) a minimal model exists and is unique up to isomorphism of bigraded dg algebras. All examples of resolvents in this paper are minimal models. Algorithms for computing minimal models have been incorporated into \emph{Macaulay2} \cite{M2} by Frank Moore as the package \emph{dgalgebras} \cite{dgAlgsM2}. \begin{definition} Let $(R,\partial)$ be a resolvent of the $S$-algebra $A=S/I$. The \emph{cotangent complex} of $A$ over $\boldsymbol k$ is $\mathbb L_{A|\boldsymbol k}=\Omega_{R|\boldsymbol k}\otimes_R A$, where $\Omega_{R|\boldsymbol k}$ are the K\"{a}hler differentials of $R$ and $\otimes_R$ is the tensor product in the category of complexes of $R$-modules. \end{definition} If the ideal $I$ is homogeneous the cotangent complex $\mathbb L_{A|\boldsymbol k}$ is bigraded in the obvious way. If $I$ is a complete intersection $\mathbb L_{A|\boldsymbol k}$ is actually isomorphic to the complex of Equation \eqref{eq:cicotangent}, where the isomorphism is given by $\mathrm{d} x_j\mapsto e_j$ and $\mathrm{d} y_\mu\mapsto \varepsilon_\mu$, writing $y_\mu$ for $x^{(1)}_\mu$. \section{The $P_\infty$-algebra and the $L_\infty$-algebroid}\label{sec:homotopystuff} The aim of this section is to construct a $P_\infty$-algebra structure on the resolvent $(R,\partial)$ of a Poisson ideal and thereby prove Theorem \ref{thm:homotopyPoisson} and Corollary \ref{cor:homotopyLiealgebroid}. As the principal tool we need to recall the Schouten bracket on the multiderivations of the resolvent. Here we follow Cattaneo and Felder \cite{CF} who defined a Schouten bracket for graded supermanifolds. The adaptation from the setup of smooth supermanifolds to affine supervarieties is straightforward. The only catch here is to make sure that their formula also makes sense in the case when there are infinitely many generators. Let $V[m]=\oplus_k V[m]^k$ be the \emph{shift} of a graded vector space $V=\oplus_k V^k$ with components $V[m]^k=V^{m+k}$. The identity map gives rise to a map $\downarrow^m:=[m]:V\to V[m]$ of degree $-m$.
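For instance, a derivation $X\in\operatorname{Der}_{R_{\le r}}$ of cohomological degree $d$ is placed by the shift in degree
\begin{align*}
|X[-1]|=d+1,
\end{align*}
which is how the generators $\xi^i_{(m)}$ introduced below acquire cohomological degree $m+1$ from $|\partial/\partial x_i^{(m)}|=m$.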
Let us fix $r\ge 0$ and consider, using the notation of Section \ref{sec:cotangent}, the graded symmetric algebra $\mathfrak h_{\le r}:=\operatorname S_{R_{\le r}}(\operatorname{Der}_{R_{\le r}}[-1])$ of the shifted module $\operatorname{Der}_{R_{\le r}}[-1]$ of derivations of the algebra $R_{\le r}$. Accordingly, the coordinate derivations of Section \ref{sec:cotangent} give rise to generators \begin{align*} \xi^i_{(m)}:={\partial\over\partial x_i^{(m)}}[-1]\in\operatorname{Der}_{R_{\le r}}[-1]\subset \mathfrak h_{\le r}, \quad r\ge m\ge 0, i\in \mathcal I_m \end{align*} of cohomological degree $m+1$. We can interpret $\mathfrak h_{\le r}$ as a graded polynomial ring \begin{align*} \mathfrak h_{\le r}=\boldsymbol k\left[x_i^{(m)},\xi^i_{(m)}\;\middle|\;r\ge m\ge 0, i\in \mathcal I_m\right]. \end{align*} The cohomological degree $|\xi^{i_1}_{(m_1)}\cdots\xi^{i_\ell}_{(m_\ell)}|$ of $\xi^{i_1}_{(m_1)}\cdots\xi^{i_\ell}_{(m_\ell)}$ is evidently $m_1+\dots+m_\ell+\ell$. We extend the differential $\partial_{\le r}$ to $\mathfrak h_{\le r}$ by declaring $\partial_{\le r}\xi^i_{(m)}=0$. Furthermore, we introduce on $\mathfrak h_{\le r}$ the multiplicative \emph{filtration degree} $\operatorname{fd}$ by putting \begin{align*} \operatorname{fd}(\xi^{i}_{(m)}):=|\xi^{i}_{(m)}|,\quad \operatorname{fd}(x_{i}^{(m)}):=0. \end{align*} Let $\mathcal F^p\mathfrak h_{\le r}$ be the $R_{\le r}$-span of $\{X\in \mathfrak h_{\le r}\mid \operatorname{fd}(X)\ge p\}$. The collection $(\mathcal F^p\mathfrak h_{\le r})_{p\ge 0}$ forms a descending Hausdorff filtration such that $\partial_{\le r}(\mathcal F^p \mathfrak h_{\le r})\subseteq\mathcal F^p \mathfrak h_{\le r}$, i.e., $\mathfrak h_{\le r}$ is a filtered complex. We use the convention that if $p<0$ then $\mathcal F^{p}\mathfrak h_{\le r}=\mathcal F^{0}\mathfrak h_{\le r}$. From \cite{CF} we know that there is a unique bracket (of cohomological degree $-1$) $\llbracket\:,\:\rrbracket$, the so-called \emph{Schouten bracket}, on $\mathfrak h_{\le r}$ such that \begin{enumerate} \item $XY=(-1)^{|X||Y|}YX$, \item $\llbracket X,Y\rrbracket=-(-1)^{(|X|-1)(|Y|-1)}\llbracket Y,X\rrbracket$, \item $\llbracket X,YZ\rrbracket=\llbracket X,Y\rrbracket Z+(-1)^{|Y|(|X|-1)}Y\llbracket X,Z\rrbracket $, \item $\llbracket X,\llbracket Y,Z\rrbracket\rrbracket=\llbracket \llbracket X,Y\rrbracket,Z\rrbracket+(-1)^{(|X|-1)(|Y|-1)}\llbracket Y,\llbracket X,Z\rrbracket \rrbracket$. \end{enumerate} This makes $\mathfrak h_{\le r}$ a \emph{Gerstenhaber algebra} (cf., e.g., \cite{PingXu}), which entails in particular that $\mathfrak h_{\le r}[1]$ is a Lie superalgebra. A more convenient way to work with $\llbracket\:,\:\rrbracket$ is provided by the following formula from \cite{CF}: \begin{align}\label{eq:CFformula} \llbracket X,Y\rrbracket=\sum_{m= 0}^r\sum_{i\in \mathcal I_m}X{\overleftarrow\partial \over \partial \xi^i_{(m)}}{\overrightarrow\partial \over \partial x_i^{(m)}}Y-X{\overleftarrow\partial \over \partial x_i^{(m)} }{\overrightarrow\partial \over \partial \xi^i_{(m)}}Y. \end{align} Here the arrows indicate in which direction the coordinate derivation in question acts. The collection $(\mathfrak h_{\le r})_{r\ge 0}$ forms a directed system of Gerstenhaber algebras.
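As a quick illustration of Equation \eqref{eq:CFformula}, evaluating it on the generators yields \begin{align*} \llbracket \xi^i_{(m)},x_j^{(n)}\rrbracket=\delta^i_j\delta_{mn},\qquad \llbracket x_i^{(m)},x_j^{(n)}\rrbracket=0=\llbracket \xi^i_{(m)},\xi^j_{(n)}\rrbracket, \end{align*} so that $\llbracket\:,\:\rrbracket$ is the canonical odd bracket for which $\xi^i_{(m)}$ is conjugate to $x_i^{(m)}$; together with the Leibniz rule (3) these values determine the bracket on all of $\mathfrak h_{\le r}$.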
The Schouten bracket on the direct limit $\mathfrak h:=\cup_{r\ge 0}\mathfrak h_{\le r}$ is given by the formula \begin{align}\label{eq:CFformulainf} \llbracket X,Y\rrbracket=\sum_{m= 0}^\infty\sum_{i\in \mathcal I_m}X{\overleftarrow\partial \over \partial \xi^i_{(m)}}{\overrightarrow\partial \over \partial x_i^{(m)}}Y-X{\overleftarrow\partial \over \partial x_i^{(m)} }{\overrightarrow\partial \over \partial \xi^i_{(m)}}Y. \end{align} \begin{proposition}\label{prop:Fadic} The canonical isomorphism $\left(\operatorname{Der}_{R_{\le r}} R_{\le r}\right)[-1]\cong\operatorname{Hom}_{R_{\le r}}(\Omega_{R_{\le r}|\boldsymbol k}[1],R_{\le r})$ extends to an injective morphism of $R$-modules $\mathfrak h=\cup_{r\ge 0}\mathfrak h_{\le r}\to \mathfrak g:=\operatorname{Sym}_R(\Omega_{R|\boldsymbol k}[1],R)$. The $\boldsymbol k$-vector space $\mathfrak g$ is the completion of $\mathfrak h$ in the $\mathcal F$-adic topology. There is a unique structure of a Gerstenhaber algebra on $\mathfrak g$ such that $\mathfrak h\to \mathfrak g$ is a morphism of Gerstenhaber algebras. Likewise, the $\mathcal F$-adic completion of $\mathfrak h_{\le r}$ is $\mathfrak g_{\le r}:=\operatorname{Sym}_{R_{\le r}}(\Omega_{R_{\le r}|\boldsymbol k}[1],R_{\le r})$ and the collection $(\mathfrak g_{\le r})_{r\ge 0}$ forms a directed system of $\mathcal F$-adically complete Gerstenhaber algebras. \end{proposition} \begin{proof} Let us convince ourselves that $\mathfrak g=\operatorname{Sym}_R(\Omega_{R|\boldsymbol k}[1],R)$ and the completion of $\cup_{r\ge 0}\mathfrak h_{\le r}$ are isomorphic as graded $\boldsymbol k$-vector spaces. Recall that $\Omega_{R|\boldsymbol k}[1]$ is a free $R$-module generated by \begin{align}\label{eq:OmegaGenerators} \mathrm{d} x_j^{(r)}[1]\quad r\ge 0,\:j\in\mathcal I_r \end{align} of cohomological degree $-r-1$. For the sake of notational simplicity we will use the graded set $\mathcal I':=\mathcal I\sqcup\{1,2,\dots, n\}$ with $\phi':\mathcal I'\to \mathbb{Z}_{\ge 0}$ where $\phi'_{|\mathcal I}=\phi$ and $\phi'_{|\{1,2,\dots, n\}}=0$. For the generators in Equation \eqref{eq:OmegaGenerators} we write $\mathrm{d} X_j$, where $j\in \mathcal I'$. An element $\Psi\in \mathfrak g$ of cohomological degree $\ell$ is nothing other than an assignment \begin{align}\label{eq:Gerstiso} (\mathrm{d} X_{j_1},\mathrm{d} X_{j_2},\dots, \mathrm{d} X_{j_l}) \mapsto \psi_{j_1j_2\dots j_l}\in R,\quad \mbox{ for }j_1,j_2\dots, j_l\in \mathcal I' \end{align} with $\psi_{j_1j_2\dots j_l}=\epsilon(\sigma,\boldsymbol j)\psi_{j_{\sigma(1)}j_{\sigma(2)}\dots j_{\sigma (l)}}$ for any $\sigma \in \mathrm S_l$. Here $\boldsymbol j=(\phi'(j_1),\dots,\phi'(j_l))\in \mathbb{Z}^l$ and $\epsilon(\sigma,\boldsymbol j)$ is the Koszul sign of $\sigma$. The cohomological degree of $\psi_{j_1j_2\dots j_l}$ is $|\psi_{j_1j_2\dots j_l}|=\ell+\sum_{m=1}^l(|X_{j_m}|-1)=\ell-l-\sum_{m=1}^l\phi'(j_m)$. The corresponding element in the completion of $\cup_{r\ge 0}\mathfrak h_{\le r}$ is \begin{align}\label{eq:Fseries} \psi:=\sum_{l=0}^\infty \sum_{j_1,j_2,\dots ,j_l\in \mathcal I'} \psi_{j_1j_2\dots j_l}\underbrace{\xi^{j_1}_{(\phi'(j_1))}\xi^{j_2}_{(\phi'(j_2))}\cdots \xi^{j_l}_{(\phi'(j_l))}}_{\in \mathcal F^{l+\sum_{m=1}^l\phi'(j_m)}(\cup_{r\ge 0}\mathfrak h_{\le r})}. \end{align} Conversely, the Taylor coefficients $\psi_{j_1j_2\dots j_l}$ of any such series of cohomological degree $\ell$ give rise to an element in $\mathfrak g=\operatorname{Sym}_R(\Omega_{R|\boldsymbol k}[1],R)$ by Equation \eqref{eq:Gerstiso}.
This isomorphism is actually an isomorphism of graded commutative algebras. To sketch an argument showing this, we use the canonical isomorphism $\operatorname{Sym}_R(\Omega_{R|\boldsymbol k}[1],R)\cong \operatorname{Hom}_R(\mathrm S_R(\Omega_{R|\boldsymbol k}[1]),R)$. Now $H:=\mathrm S_R(\Omega_{R|\boldsymbol k}[1])$ is a graded Hopf algebra over $R$ with the obvious multiplication $\mu:H\otimes_R H\to H$ and the unique comultiplication $\Delta:H\to H\otimes_R H$ such that $\Omega_{R|\boldsymbol k}[1]\subset H$ is primitive. The product of $\Psi$ and $\Phi$ in $\mathfrak g$ is given by the convolution $\mu\circ\Psi\otimes\Phi\circ \Delta$. It is well known that this restricts to the usual product for the subspace $\mathfrak g_{\le r}$, as the dual Hopf algebra of a graded symmetric algebra over a finitely generated free $R_{\le r}$-module is the graded symmetric Hopf algebra of the dual module. Clearly this product extends uniquely to a product on the $\mathcal F$-adic completion. It remains to check that the Schouten bracket $\llbracket\psi,\varphi\rrbracket$ of two series $\psi$, $\varphi$ as in Equation \eqref{eq:Fseries} of cohomological degrees $|\psi|, |\varphi|\in\mathbb{Z}$ is well-defined. Let us inspect whether \begin{align*} \sum_{m\ge 0}\sum_{i\in \mathcal I_m}\psi{\overleftarrow\partial \over \partial \xi^i_{(m)}}{\overrightarrow\partial \over \partial x_i^{(m)}}\varphi \end{align*} is well-defined. This expression can be spelled out as \begin{align*} \sum_{l,m\ge 0}\sum_{j_1,\dots,j_l,k_1,\dots,k_m\in \mathcal I'}\sum_{u=1}^l\pm\psi_{j_1\dots j_l}{\partial\varphi_{k_1\dots k_m}\over \partial X_{j_u}}\xi^{j_1}_{(\phi'(j_1))}\cdots\widehat{\xi^{j_u}_{(\phi'(j_u))}}\cdots \xi^{j_l}_{(\phi'(j_l))}\xi^{k_1}_{(\phi'(k_1))}\cdots \xi^{k_m}_{(\phi'(k_m))}, \end{align*} where we took the liberty not to specify the signs and used $\:\widehat{\:}\:$ to indicate omission. The issue is now that in principle in this sum the same monomial in the $\xi$'s can occur for infinitely many $j_u$. This is not a problem, however, since $|\varphi_{k_1\dots k_m}|=|\varphi|-m-\sum_v \phi'(k_v)$: as $R$ is concentrated in nonpositive cohomological degrees, $\varphi_{k_1\dots k_m}$ cannot depend on $X_{j_u}$ as soon as $\phi'(j_u)$ exceeds $-|\varphi_{k_1\dots k_m}|$. The second term in \eqref{eq:CFformulainf} can be taken care of in an analogous manner. \end{proof} By a \emph{multiderivation} $X$ we mean an element of $\mathfrak g$. The degree of $X[1]\in\mathfrak g[1]$ is referred to as the \emph{total degree} $\overline X:=|X|-1$. Next we need to involve the differentials. The differential $\partial_{\le r}$ (see Equation \eqref{eq:KoszulTate}) gives rise to an element \begin{align} \pi_0^{\le r}:=\sum_{m=1}^r\sum_{j\in \mathcal I_m} F_{j}(\boldsymbol x_{\le m-1})\xi^{j}_{(m)}\in \mathfrak h_{\le r}. \end{align} Likewise, we define an element of $\mathfrak g$ analogous to the differential $\partial$: \begin{align} \pi_0:=\sum_{m=1}^\infty\sum_{j\in \mathcal I_m} F_{j}(\boldsymbol x_{\le m-1})\xi^{j}_{(m)}\in\mathfrak g, \end{align} noting that the infinite sum above converges in the $\mathcal F$-adic topology. With this we can formulate the following lemma, whose proof is straightforward and left to the reader. \begin{lemma}\label{lem:filtration} Let $p,q,r,s$ be integers $\ge 0$.
Then \begin{enumerate} \item for each $X\in \mathcal F^p\mathfrak h_{\le r}$ we have $\llbracket\pi_0^{\le r},X\rrbracket \in\partial_{\le r}X+\mathcal F^{p+1}\mathfrak h_{\le r}$, and \item $\left\llbracket\mathcal F^p\mathfrak h_{\le r},\mathcal F^q\mathfrak h_{\le s}\right\rrbracket\subseteq \mathcal F^{p+q-1-\min(r,s)}\mathfrak h_{\le \max(r,s)}$. \end{enumerate} \end{lemma} Our objective is to find a Maurer-Cartan element $\pi$, i.e., a multiderivation $\pi$ of total degree $\overline{\pi}=1$ that satisfies $\llbracket\pi,\pi\rrbracket =0$, such that for all $p$ and $X\in\mathcal F^p\mathfrak g$ \begin{align} \llbracket\pi,X\rrbracket\in\partial X+\delta_{\operatorname{Poiss}} X+\mathcal F^{p+2}\mathfrak g. \end{align} The recipe used is what is sometimes referred to as \emph{homological perturbation theory}. It has been employed, for example, in Fedosov quantization \cite{Fedosov} and in the construction of the BFV-charge \cite{Stasheff}. For better readability we introduce special notation in low cohomological degrees. We write $y_\mu$ instead of $x_i^{(1)}$ and $\mu$ for $i\in \mathcal I_1$, $z_\alpha$ instead of $x_i^{(2)}$ and $\alpha$ for $i\in \mathcal I_2$, and $w_t$ instead of $x_i^{(3)}$, using the index $t$ for $i\in \mathcal I_3$. Similarly, we use $\xi^i$ for $\xi^i_{(0)}$, $\eta^\mu$ for $\xi^\mu_{(1)}$, $\zeta^\alpha$ for $\xi^\alpha_{(2)}$, and $\theta^t$ for $\xi^t_{(3)}$. The first three coefficients of $\partial_{\le r}$ are denoted by $f_\mu$, $g_\alpha$ and $h_t$, i.e., \begin{align} &\partial_{\le 1}=\sum_\mu f_\mu \partial/\partial y_\mu, \quad \partial_{\le 2}=\sum_\mu f_\mu \partial/\partial y_\mu+\sum_\alpha g_\alpha \partial/\partial z_\alpha,\\ \nonumber &\partial_{\le 3}=\sum_\mu f_\mu \partial/\partial y_\mu+\sum_\alpha g_\alpha \partial/\partial z_\alpha+\sum_t h_t\partial/\partial w_t. \end{align} For the convenience of the reader we record the relevant degrees in a table. \begin{align*} \begin{tabular}{c||c|c|c|c|c|c|c|c} & $x_i$ & $y_\mu$ & $z_\alpha$ & $w_t$ & $\xi^i$ & $\eta^\mu$ & $\zeta^\alpha$ & $\theta^t$ \\\hline\hline $|\:|$ & $0$ & $-1$ & $-2$ & $-3$ & $1$ & $2$ & $3$ & $4$\\\hline $^-$ &$-1$ & $-2$ & $-3$ & $-4$ & $0$ & $1$ & $2$ & $3$\\\hline $\operatorname{fd}$ & $0$ & $0$ & $0$ & $0$ & $1$ & $2$ & $3$ & $4$ \end{tabular} \end{align*} Our first approximation of $\pi$ is $\pi_0^{\le 1}+\pi_1$, where \begin{align*} \pi_0^{\le 1}:=\sum_{\mu=1}^k f_\mu\eta^\mu\quad\mbox{and }\quad \pi_1:=\sum_{i,j=1}^n {1\over 2}\Lambda_{ij}\xi^i\xi^j. \end{align*} When evaluating the bracket \begin{align*} \llbracket\pi_0^{\le 1}+\pi_1,\pi_0^{\le 1}+\pi_1\rrbracket =\llbracket\pi_0^{\le 1},\pi_0^{\le 1}\rrbracket +2\llbracket\pi_0^{\le 1},\pi_1\rrbracket +\llbracket\pi_1,\pi_1\rrbracket =2\llbracket\pi_0^{\le 1},\pi_1\rrbracket , \end{align*} we notice that $\llbracket\pi_0^{\le 1},\pi_0^{\le 1}\rrbracket $ and $\llbracket\pi_1,\pi_1\rrbracket $ vanish; the former because $\llbracket\pi_0^{\le 1},\pi_0^{\le 1}\rrbracket=2(\partial_{\le 1})^2=0$. In fact, the vanishing of $\llbracket\pi_1,\pi_1\rrbracket $ is equivalent to the Jacobi identity of the Poisson bracket on $S$.
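Indeed, a direct evaluation of Equation \eqref{eq:CFformula} on $\pi_1$ gives, up to an overall sign determined by the conventions above, \begin{align*} \llbracket\pi_1,\pi_1\rrbracket=\sum_{i,j,k,l}\Lambda_{il}{\partial \Lambda_{jk}\over\partial x_l}\,\xi^i\xi^j\xi^k=\sum_{i,j,k}\{x_i,\{x_j,x_k\}\}\,\xi^i\xi^j\xi^k={1\over 3}\sum_{i,j,k}\left(\{x_i,\{x_j,x_k\}\}+\{x_j,\{x_k,x_i\}\}+\{x_k,\{x_i,x_j\}\}\right)\xi^i\xi^j\xi^k, \end{align*} where the last equality uses the antisymmetry of $\xi^i\xi^j\xi^k$; hence $\llbracket\pi_1,\pi_1\rrbracket=0$ is precisely the Jacobi identity for $\{\:,\:\}$.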
On the other hand \begin{align*} \llbracket\pi_0^{\le 1}, \pi_1\rrbracket &=\sum_{i,j,\mu}{\Lambda_{ij}\over 2}\left\llbracket f_\mu\eta^\mu,\xi^i\right\rrbracket\xi^j-{\Lambda_{ij}\over 2}\xi^i\left\llbracket f_\mu\eta^\mu,\xi^j\right\rrbracket =-\sum_{i,j,\mu}{\Lambda_{ij}\over 2}{\partial f_\mu\over \partial x_i}\eta^\mu\xi^j+{\Lambda_{ij}\over 2}{\partial f_\mu\over \partial x_j}\xi^i\eta^\mu \\ &=\sum_{i,j,\mu}\Lambda_{ij}{\partial f_\mu\over \partial x_j}\xi^i \eta^\mu=\sum_{i,\mu,\nu}Z_{i\mu}^\nu f_\nu\xi^i \eta^\mu \end{align*} is a coboundary for $\partial_{\le 1}$ in cohomological degree $-1$. The idea is to compensate for this by introducing $\pi_2:=-\sum_{i,\mu,\nu}Z_{i\mu}^\nu y_\nu\xi^i\eta^\mu=:-\sum_{i,\mu}\{x_i,y_\mu\}_2\xi^i\eta^\mu$. At the second step of the recursion we put \begin{align*} &\pi_0^{\le 2}:=\sum_{\mu}f_\mu\eta^\mu+\sum_\alpha g_\alpha \zeta^\alpha\in \mathfrak h_{\le 2},\\ &\pi^{\le 2}:=\pi_0^{\le 2}+\pi_1+\pi_2. \end{align*} Our task is to work out the individual terms in \begin{align*} &\llbracket\pi^{\le 2},\pi^{\le 2}\rrbracket=\overbrace{\llbracket\pi^{\le 2}_0,\pi^{\le 2}_0\rrbracket}^{=0}+\overbrace{\llbracket\pi_1,\pi_1\rrbracket}^{=0}+2\llbracket\pi^{\le 1}_0,\pi_1\rrbracket+ 2\llbracket g_\alpha\zeta^\alpha,\pi_1\rrbracket+2\llbracket\pi^{\le 2}_0,\pi_2\rrbracket+2\llbracket\pi_1,\pi_2\rrbracket+\llbracket\pi_2,\pi_2\rrbracket. \end{align*} The result of this calculation is \begin{align*} &2\llbracket\pi^{\le 1}_0,\pi_1\rrbracket+2\llbracket\pi^{\le 2}_0,\pi_2\rrbracket =-\sum_{\mu,\nu,\lambda}\mathcal A_{\mu\nu}^\lambda y_\lambda\eta^\mu\eta^\nu+\sum_{i,\mu,\nu,\alpha}2\left({\partial g_\alpha\over \partial x_i}Z_{i\mu}^\nu y_\nu\eta^\mu \zeta^\alpha-{\partial g_\alpha\over \partial y_\mu}Z_{i\mu}^\nu y_\nu\xi^i\zeta^\alpha\right),\\ &2\llbracket g_\alpha\zeta^\alpha,\pi_1\rrbracket=-2\sum_{i,j,\alpha}\Lambda_{ij}{\partial g_\alpha\over \partial x_j}\xi^i\zeta^\alpha=-2\sum_{i,\alpha}\{x_i,g_\alpha\}\xi^i\zeta^\alpha,\\ &2\llbracket\pi_1,\pi_2\rrbracket=\sum_{i,j,\mu,\nu}\left(\{x_i,Z_{j\mu}^\nu\}-\{x_j,Z_{i\mu}^\nu\}-\sum_k {\partial\Lambda_{ij}\over\partial x_k}Z_{k\mu}^\nu \right)y_\nu\xi^i\xi^j\eta^\mu,\\ &\llbracket\pi_2,\pi_2\rrbracket=\sum_{i,j,\mu,\nu,\lambda,\rho}2Z_{i\mu}^\nu{\partial Z_{j\lambda}^\rho\over \partial x_i}y_\nu y_\rho\xi^j\eta^\mu\eta^\lambda -\sum_{i,j,\mu,\nu,\lambda}(Z_{i\mu}^\nu Z_{j\nu}^\lambda-Z_{j\mu}^\nu Z_{i\nu}^\lambda)y_\lambda\xi^i\xi^j\eta^\mu. \end{align*} It turns out that at this step of the iteration we can ignore all terms in $\mathcal F^5\mathfrak h_{\le 2}$ and deduce \begin{align*} \llbracket\pi^{\le 2},\pi^{\le 2}\rrbracket+\mathcal F^5\mathfrak h_{\le 2}&=\sum_{i,j,\mu,\nu}\left(\{x_i,Z_{j\mu}^\nu\}-\{x_j,Z_{i\mu}^\nu\}-\sum_k {\partial\Lambda_{ij}\over\partial x_k}Z_{k\mu}^\nu \right)y_\nu\xi^i\xi^j\eta^\mu\\ &\quad+ \sum_{i,j,\mu,\nu,\lambda}(Z_{i\mu}^\nu Z_{j\nu}^\lambda-Z_{j\mu}^\nu Z_{i\nu}^\lambda)y_\lambda\xi^i\xi^j\eta^\mu-\sum_{\mu,\nu,\lambda}\mathcal A_{\mu\nu}^\lambda y_\lambda\eta^\mu\eta^\nu\\ &\quad-2\sum_{i,\alpha}\left(\{x_i,g_\alpha\}+\sum_{\mu,\nu}{\partial g_\alpha\over \partial y_\mu}Z_{i\mu}^\nu y_\nu\right)\xi^i\zeta^\alpha+\mathcal F^5\mathfrak h_{\le 2}\\ &=\sum_{\mu,\nu}2(\delta_{\operatorname{Poiss}} Z-[Z,Z])_\mu^\nu y_\nu\eta^\mu-\sum_{\mu,\nu,\lambda}\mathcal A_{\mu\nu}^\lambda y_\lambda\eta^\mu\eta^\nu\\ &\quad-2\sum_{i,\alpha}\left(\{x_i,g_\alpha\}+\sum_{\mu,\nu}{\partial g_\alpha\over \partial y_\mu}Z_{i\mu}^\nu y_\nu\right)\xi^i\zeta^\alpha+\mathcal F^5\mathfrak h_{\le 2}.
\end{align*} We already know that $\partial_{\le 1}$ applied to the first two expressions gives zero (cf. the proof of Theorem \ref{thm:MC} and Equation \eqref{eq:alphamunu}). Let us verify that the last term is also a cocycle: \begin{align*} \partial_{\le 1}\{x_i,g_\alpha\}&=\partial_{\le 1}\left(\sum_j\Lambda_{ij}{\partial g_\alpha\over \partial x_j}\right)=\sum_{\mu,j}f_\mu{\partial \over \partial y_\mu}\left(\Lambda_{ij}{\partial g_\alpha\over \partial x_j}\right)=\sum_{\mu,j}\Lambda_{ij}f_\mu{\partial \over \partial x_j}\left({\partial g_\alpha\over \partial y_\mu}\right)\\ &=\sum_{\mu,j}\left(\Lambda_{ij}{\partial \over \partial x_j}\left(f_\mu{\partial g_\alpha\over \partial y_\mu}\right)-\Lambda_{ij}{\partial g_\alpha\over \partial y_\mu}{\partial f_\mu\over \partial x_j}\right)=-\sum_{\mu,\nu}Z_{i\mu}^\nu f_\nu{\partial g_\alpha\over \partial y_\mu}=-\partial_{\le 1}\left(\sum_{\mu,\nu}Z_{i\mu}^\nu y_\nu {\partial g_\alpha\over \partial y_\mu}\right), \end{align*} where we have used the fact that $\sum_{\mu}f_\mu \partial g_\alpha/\partial y_\mu=0$. As the resolvent is exact in cohomological degree $-1$ we can solve the system \begin{align*} &-2\partial_{\le 2}\pi_3^{001}=\sum_{\mu,\nu}2(\delta_{\operatorname{Poiss}} Z-[Z,Z])_\mu^\nu y_\nu\eta^\mu,\\ &-2\partial_{\le 2}\pi_3^{11}=-\sum_{\mu,\nu,\lambda}\mathcal A_{\mu\nu}^\lambda y_\lambda\eta^\mu\eta^\nu,\\ &-2\partial_{\le 2}\pi_3^{02}=-2\sum_{i,\alpha}\left(\{x_i,g_\alpha\}+\sum_{\mu,\nu}{\partial g_\alpha\over \partial y_\mu}Z_{i\mu}^\nu y_\nu\right)\xi^i\zeta^\alpha \end{align*} and put $\pi_3=\pi_3^{001}+\pi_3^{11}+\pi_3^{02}$. For the coefficients we use the notation $\pi_3^{001}=\sum_{i,j,\mu}{1\over 2}\{x_i,x_j,y_\mu\}_3\xi^i\xi^j\eta^\mu$, $\pi_3^{11}=\sum_{\mu,\nu}{1\over 2}\{y_\mu,y_\nu\}_2\eta^\mu\eta^\nu$ and $\pi_3^{02}=-\sum_{i,\alpha}\{x_i,z_\alpha\}_2\xi^i\zeta^\alpha$. The preceding calculations can be made systematic by the following argument. \begin{lemma} \label{lem:recursion} For $m\ge 2$ we can recursively find $\pi_m\in \mathcal F^{m+1}\mathfrak h_{\le m-1}$ such that \begin{align}\label{eq:recursion level m} \left\llbracket\pi_0^{\le m-1}+\sum_{i=1}^{m-1}\pi_i,\pi_0^{\le m-1}+\sum_{i=1}^{m-1}\pi_i\right\rrbracket+\mathcal F^{m+2}\mathfrak h_{\le m-1}=-2\partial_{\le m-1}\pi_m+\mathcal F^{m+2}\mathfrak h_{\le m-1}. \end{align} \end{lemma} \begin{proof} We proceed by induction on $m\ge 3$; the cases $m=2,3$ have been established by the explicit constructions above. Let us assume Equation \eqref{eq:recursion level m} holds for $m$. We need to construct $\pi_{m+1}$ such that \eqref{eq:recursion level m} holds after substituting $m\mapsto m+1$. Let us write $X_m:=\pi_0^{\le m}-\pi_0^{\le m-1}$ and decompose \begin{align}\label{eq:recursion} &\left\llbracket\pi_0^{\le m}+\sum_{i=1}^{m}\pi_i,\pi_0^{\le m}+\sum_{j=1}^{m}\pi_j\right\rrbracket =\left\llbracket X_m+\pi_m+\pi_0^{\le m-1}+\sum_{i=1}^{m-1}\pi_i,X_m+\pi_m+\pi_0^{\le m-1}+\sum_{j=1}^{m-1}\pi_j\right\rrbracket\\ \nonumber &=\left\llbracket\pi_0^{\le m-1}+\sum_{i=1}^{m-1}\pi_i,\pi_0^{\le m-1}+\sum_{j=1}^{m-1}\pi_j\right\rrbracket+\llbracket X_m,X_m\rrbracket +\llbracket\pi_m,\pi_m\rrbracket \\\nonumber &\hspace{5cm} + 2\left\llbracket X_m,\pi_0^{\le m-1}+\sum_{i=1}^{m-1}\pi_i\right\rrbracket + 2\left\llbracket\pi_m,\pi_0^{\le m}+\sum_{i=1}^{m-1}\pi_i\right\rrbracket. \end{align} We wish to show that this is in $\mathcal F^{m+2}\mathfrak h_{\le m}$. We have $ \llbracket X_m,X_m\rrbracket =0$ and using Lemma \ref{lem:filtration} we see that for $m\ge j$ we get $\llbracket\pi_m,\pi_j\rrbracket \in \mathcal F^{m+2}\mathfrak h_{\le m-1}$.
Moreover, $\llbracket X_m,\pi_0^{\le m-1}\rrbracket =0$ since \begin{align*} 0=2\left(\partial_{\le m}\right)^2=\left\llbracket\pi_0^{\le m},\pi_0^{\le m}\right\rrbracket=\llbracket X_m,X_m\rrbracket +2\left\llbracket X_m,\pi_0^{\le m-1}\right\rrbracket+2\left(\partial_{\le m-1}\right)^2. \end{align*} Also, if $m>j$ it follows that $\llbracket X_m,\pi_j\rrbracket \in \mathcal F^{j+1+m-(j-1)}\mathfrak h_{\le m}=\mathcal F^{m+2}\mathfrak h_{\le m}$. Finally, we have $\llbracket \pi_m,\pi_0^{\le m}\rrbracket \in\partial_{\le m-1}\pi_m+\mathcal F^{m+2}\mathfrak h_{\le m-1}$, establishing the claim. It turns out that the only terms in Equation \eqref{eq:recursion} that are not necessarily in $\mathfrak h_{\le m-1}$ are the $\llbracket X_m,\pi_j\rrbracket$: their coefficients (i.e., the resolvent part) actually lie in $\mathfrak h_{\le m-1}$, while the derivation part possibly contains one derivation from $\mathfrak h_{\le m}$. Let now $A_m\in\mathcal F^{m+2}\mathfrak h_{\le m}\backslash \mathcal F^{m+3}\mathfrak h_{\le m}$ be such that $A_m+\mathcal F^{m+3}\mathfrak h_{\le m}=\llbracket \pi_0^{\le m}+\sum_{i=1}^{m}\pi_i,\pi_0^{\le m}+\sum_{j=1}^{m}\pi_j\rrbracket +\mathcal F^{m+3}\mathfrak h_{\le m}$. The Jacobi identity for the Schouten bracket and Lemma \ref{lem:filtration} allow us to conclude that $\partial_{\le m}A_m=0$. The argument goes as follows: \begin{align*} 0=\left\llbracket\pi_0^{\le m}+\sum_{i=1}^{m}\pi_i,\left\llbracket\pi_0^{\le m}+\sum_{i=1}^{m}\pi_i,\pi_0^{\le m}+\sum_{j=1}^{m}\pi_j\right\rrbracket\right\rrbracket=\left\llbracket\pi_0^{\le m}+\sum_{i=1}^{m}\pi_i,A_m\right\rrbracket. \end{align*} But $\llbracket \pi_i,A_m\rrbracket \in \mathcal F^{m+4}\mathfrak h_{\le m}$ and $\llbracket \pi_0^{\le m},A_m\rrbracket \in\partial_{\le m} (A_m)+\mathcal F^{m+3}\mathfrak h_{\le m}$. We observe that, by construction, $\partial_{\le m}A_m= \partial_{\le m-1}A_m$. We choose $\pi_{m+1}$ such that $\partial_{\le m}\pi_{m+1}=-A_m/2$. \end{proof} \begin{theorem}\label{thm:derivedbr} With the notation of Lemma \ref{lem:recursion}, $\pi:=\sum_{m=0}^\infty \pi_m$ defines an element in $\mathfrak g$ such that $\llbracket \pi,\pi\rrbracket =0$. Let $\downarrow:\mathfrak g\to \mathfrak g[1]$ denote the desuspension (i.e. the identity seen as a degree $-1$ map) and let $\uparrow$ be its inverse. Let us write $[\:,\:]=\downarrow\circ \llbracket\:,\:\rrbracket\circ \uparrow\otimes \uparrow$ for the Lie bracket on $\mathfrak g[1]$. Let $\operatorname{pr}:\mathfrak g[1]\to R[1]$ be the canonical projection onto arity zero. The operations $\left(l_m\right)_{m\ge 0}$, \begin{align}\label{eq:derived brackets} l_m(x_1,x_2,\dots,x_m):=-\operatorname{pr}\left(\left[\dots[[\pi[1],x_1],x_2],\dots,x_m\right]\right), \end{align} define an $L_\infty[1]$-algebra structure on $R[1]$. Using \eqref{eq:decalage} this induces a $P_\infty$-algebra structure $\left(\{\:,\dots,\:\}_m\right)_{m\ge 0}$, \begin{align} \nonumber \{a_1,\dots,a_m\}_m&:=(-1)^{\sum_{i=1}^m(m-i)|a_i|}l_m(a_1[1],\dots,a_m[1])[-1] \\ &=-(-1)^{\sum_{i=1}^m(m-i)|a_i|}\epsilon\left(\left\llbracket\dots\llbracket\llbracket \pi, a_1\rrbracket,a_2 \rrbracket,\dots,a_m\right\rrbracket\right), \end{align} on $R$, where $\epsilon:\mathfrak g\to R$ is the augmentation. The canonical map $p:R\to A$ is an $L_\infty$-quasi-isomorphism. \end{theorem} \begin{proof} The series $\pi:=\sum_{m=0}^\infty \pi_m$ converges in the $\mathcal F$-adic topology. By completeness of $\mathfrak g$ (cf. Proposition \ref{prop:Fadic}) it follows that $\pi\in\mathfrak g$.
The construction of $l_m$ is a special case of the higher derived brackets of \cite{VoronovHigherDerived}. It is straightforward to check that the Leibniz rule holds for the higher Poisson brackets $\{\:,\dots,\:\}_m$. To prove that $p$ is compatible with the brackets we show that the only contributions to the brackets $\{a_1,\dots,a_m\}_m$ that are not annihilated by $p$ come from terms involving $\pi_1$. Let $\mathfrak a$ be the kernel of the canonical map $\kappa:R\to S$, i.e., the ideal in $R$ generated by the $x_j^{(m)}$ with $m\ge 1,j\in \mathcal I_m$. Recall that for $m\ge 1$ the tensor $\pi_m$ is of filtration degree $m+1$ and of cohomological degree $2$. Hence for $m\ge 2$ we have \begin{align*} \left\llbracket\dots\llbracket\llbracket \pi_m, \mathfrak a\rrbracket,R \rrbracket,\dots,R\right\rrbracket\subseteq \mathfrak a\subseteq \ker(p). \end{align*} Moreover, $\llbracket \pi_0,R\rrbracket=\partial R\subseteq\ker(p)$ and the claim follows. \end{proof} If our ideal $I$ is generated by a complete intersection $f_1,\dots,f_k$ then for degree reasons $\pi_m=0$ for all $m\ge k+2$ and the sum in the theorem above is actually finite. \begin{corollary} Let $\left(\{\:,\dots,\:\}_m\right)_{m\ge 1}$ be a $P_\infty$-algebra structure on the resolvent $R$ with $\partial=\{\:\}_1$. Then the brackets $[\mathrm{d} a_1,\mathrm{d} a_2, \dots, \mathrm{d} a_m]_m:=\mathrm{d}\{a_1,a_2,\dots,a_m\}_m$, $m\ge 2$, define the structure of an $L_\infty$-algebroid over $\operatorname{Spec}(A)$ on $\mathbb L_{A|\boldsymbol k}=A\otimes _R\Omega_{R|\boldsymbol k}$ with anchor $\rho_m(\mathrm{d} a_1,\mathrm{d} a_2, \dots, \mathrm{d} a_{m-1},b):=\{a_1,a_2,\dots,a_{m-1},b\}_m$, $m\ge 2$. Here $[\:]_1$ is understood to be the differential $A\otimes _R\partial$ of the cotangent complex. \end{corollary} \begin{proof} The higher Jacobi identities \eqref{eqn:Linftyalgebra} and \eqref{eqn:Linftymodule} follow directly from the higher Jacobi identities of the $P_\infty$-structure. The identities (3) of Definition \ref{def:Linftyalgebroid} follow from the Leibniz rule of the $P_\infty$-structure. \end{proof} We would like to emphasize the following phenomenon. Let again $\mathfrak a$ be the ideal $\ker(\kappa)$ with $\kappa:R\to S$. Then any bracket operation $[\:,\dots,\:]_m$ on the cotangent complex $\mathbb L_{A|\boldsymbol k}$, as well as the corresponding anchor $\rho_m$, vanishes whenever the corresponding $\pi_m$ lies in $\mathfrak a^2$. In the case of a complete intersection we have $\pi_m\in\mathfrak a^2$ for all $m\ge 3$ for degree reasons. This means that in the case of a complete intersection we get a dg Lie algebroid and we recover Theorem \ref{thm:ciLiealgebroid}. Finally we would like to address a peculiarity of the Koszul case. Let us assume that in the construction of Theorem \ref{thm:derivedbr} we chose the resolvent $R$ to be a minimal model. Let us introduce the \emph{Euler derivation} \begin{align*} \mathcal E:=\sum_{m\ge 0}\sum_{i\in \mathcal I_m}x_i^{(m)}{\partial \over \partial x_i^{(m)}}. \end{align*} We say that $X\in\mathfrak g$ is \emph{homogeneous of $x$-degree $k$} if $\mathcal E X=kX$. The $x$-degree can be interpreted as counting the number of occurrences of the variables of the minimal model in a monomial of $X$, including the original variables $x_1,\dots,x_n$. \begin{theorem}\label{thm:Koszul} Let $S=\boldsymbol k[x_1,\dots, x_n]$ be standard graded, assume that $I$ is generated by quadrics such that $S/I$ is a Koszul algebra, and suppose that $p:=\deg\{\:,\:\}\le 0$.
Then all $\pi_m$ of Theorem \ref{thm:derivedbr} with $m\ge 1$ can be chosen homogeneous of the same $x$-degree $p+2$. \end{theorem} \begin{proof} Due to Koszulness, $\pi_0$ is homogeneous of $x$-degree $2$. Since $\pi_1$ is of $x$-degree $k=p+2$, one easily checks that $\pi_2$ can be chosen of the same $x$-degree. In the inductive construction of $\pi_m$ of Lemma \ref{lem:recursion} we have to consider two cases. The $x$-degrees of \begin{align*} \left\llbracket\pi_0^{\le m-1},\sum_{i=1}^{m-1}\pi_i\right\rrbracket\quad \mbox{and }\quad \left\llbracket\sum_{i=1}^{m-1}\pi_i,\sum_{i=1}^{m-1}\pi_i\right\rrbracket \end{align*} are $k+1$ and $2k-1$, respectively. If $k\le 1$ these degrees differ, and the second term has to be zero on the nose since a nontrivial boundary has to be of $x$-degree $\ge 2$. So only the first term contributes to the $\pi_m$, which can be chosen to be homogeneous of $x$-degree $k$. If $k=2$ the $x$-degrees of both terms are $k+1=3=2k-1$ and $\pi_m$ can be chosen to be homogeneous of $x$-degree $2$. \end{proof} There appear to be Koszul examples with Poisson brackets of degree $\ge 1$ for which it seems impossible to choose the $\pi_m$ of homogeneous $x$-degree for all $m\ge 3$. To be more specific, we looked into the ideal generated by $x_1^2$ and $x_1x_2$ in $\boldsymbol k[x_1,x_2]$ and the bracket $\{x_1,x_2\}=x_1x_2(x_1+x_2)$. We were not able to make choices such that $\pi_3$ and $\pi_4$ are of homogeneous $x$-degree. \section{Sample calculations}\label{sec:examples} In this section, we present the results of computing the structures indicated by Theorem 1.4 in various examples. In each case, the resolvent was computed using \emph{Macaulay2} and the \emph{dgalgebras} package \cite{M2,dgAlgsM2}; the $\pi_m$ were computed using \emph{Mathematica} \cite{Mathematica}. Details of these computations are available from the authors upon request. \subsection{A non-complete intersection generated by two monomials in dimension two} We consider the ideal in $\boldsymbol k[x_1,x_2]$ generated by the two quadratic monomials $f_1 = x_1^2$ and $f_2 = x_1 x_2$ with the diagonal Poisson bracket $\{x_1,x_2\}=x_1x_2$.
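For readers who want to reproduce the resolvent data below, the following is a minimal Macaulay2 sketch, assuming the routines \texttt{koszulComplexDGA} and \texttt{acyclicClosure} of the \emph{dgalgebras} package \cite{dgAlgsM2} (loaded as \texttt{DGAlgebras}) behave as in its documentation; the exact signatures and option names may differ between package versions:
\begin{verbatim}
needsPackage "DGAlgebras"
S = QQ[x_1,x_2]
-- level <= 1 approximation: the Koszul complex on f_1, f_2
K = koszulComplexDGA ideal(x_1^2, x_1*x_2)
-- adjoin variables up to level 7 to obtain the resolvent R_{<=7}
A = acyclicClosure(K, EndDegree => 7)
\end{verbatim}
The differentials of the adjoined variables can then be read off from $A$, and the $\pi_m$ were computed from such data with \emph{Mathematica}.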
The differential $\partial_{\le 7}$ can be read off \begingroup \allowdisplaybreaks \begin{align*} \pi_0^{\leq 7} &= x_1 x_1 \xi_{(1)}^1 + x_1 x_2 \xi_{(1)}^2 + x_1 x^{(1)}_2 \xi_{(2)}^1 - x_2 x^{(1)}_1 \xi_{(2)}^1- x^{(1)}_1 x^{(1)}_2 \xi_{(3)}^1+ x_1 x^{(2)}_1 \xi_{(3)}^1+ x_1 x^{(3)}_1 \xi_{(4)}^2 + x_2 x^{(3)}_1 \xi_{(4)}^1\\ &\quad - x^{(1)}_1 x^{(2)}_1 \xi_{(4)}^2 - x^{(1)}_2 x^{(2)}_1 \xi_{(4)}^1 + x_1 x^{(4)}_1 \xi_{(5)}^1 + x_1 x^{(4)}_2 \xi_{(5)}^3 + 2 x_2 x^{(4)}_2 \xi_{(5)}^2 - x^{(1)}_1 x^{(3)}_1 \xi_{(5)}^3 - x^{(1)}_2 x^{(3)}_1 \xi_{(5)}^1 \\&\quad - 2 x^{(1)}_2 x^{(3)}_1 \xi_{(5)}^2 - x^{(2)}_1 x^{(2)}_1 \xi_{(5)}^2 + x_1 x_1^{(5)} \xi_{(6)}^3 + x_1 x_2^{(5)} \xi_{(6)}^2 + x_1 x_3^{(5)} \xi_{(6)}^4 - x_1^{(1)} x_1^{(4)} \xi_{(6)}^3 - x_1^{(1)} x_2^{(4)} \xi_{(6)}^4 \\&\quad + x_1^{(2)} x_1^{(3)} \xi_{(6)}^2 + x_2 x_1^{(5)} \xi_{(6)}^1 - x_2 x_3^{(5)} \xi_{(6)}^2 + x_2 x_3^{(5)} \xi_{(6)}^3 - x_2^{(1)} x_1^{(4)} \xi_{(6)}^1 - x_2^{(1)} x_2^{(4)} \xi_{(6)}^2 - x_2^{(1)} x_2^{(4)} \xi_{(6)} \\ &\quad+ x_1 x_1^{(6)} \xi_{(7)}^1 + x_1 x_2^{(6)} \xi_{(7)}^4 + x_1 x_3^{(6)} \xi_{(7)}^3 + x_1 x_4^{(6)} \xi_{(7)}^5 - x_1^{(1)} x_1^{(5)} \xi_{(7)}^3 - x_1^{(1)} x_2^{(5)} \xi_{(7)}^4 - x_1^{(1)} x_3^{(5)} \xi_{(7)}^5 \\&\quad - x_1^{(2)} x_1^{(4)} \xi_{(7)}^2 - x_1^{(2)} x_2^{(4)} \xi_{(7)}^4 + x_2 x_2^{(6)} \xi_{(7)}^2 + x_2 x_3^{(6)} \xi_{(7)}^2 + 3 x_2 x_4^{(6)} \xi_{(7)}^4 - x_2^{(1)} x_1^{(5)} \xi_{(7)}^1 - x_2^{(1)} x_1^{(5)} \xi_{(7)}^2 \\&\quad - x_2^{(1)} x_2^{(5)} \xi_{(7)}^2 - x_2^{(1)} x_3^{(5)} \xi_{(7)}^3 - 2 x_2^{(1)} x_3^{(5)} \xi_{(7)}^4 \end{align*}\endgroup by substituting $\xi_{(m)}^i$ with $\partial/\partial x^{(m)}_i$. In this example the exponential growth of the number of terms in $\partial_{\le m}$ sets in relatively late so that we can calculate $\pi_m$ up to $m=8$. We present the results for two different choices of $Z_{i\mu}^\nu$. \subsubsection{A non-diagonal choice of $Z_{i\mu}^\nu$}\label{subsec:nondiag} Here we consider the $Z_{i\mu}^\nu$ with nonzero components $Z_{2,1}^2 =-2x_1$, $Z_{1,2}^2=x_1$, and $Z_{2,2}^2=-x_2$. Note that the tensor $Z_{i\mu}^\nu$ can in principle be read off $\pi_2$ so that in later examples we will not spell it out to avoid redundancy.
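To make this concrete: unwinding the definition $\pi_2=-\sum_{i,\mu,\nu}Z_{i\mu}^\nu y_\nu\xi^i\eta^\mu$ from Section \ref{sec:homotopystuff} with $y_\nu=x_\nu^{(1)}$ and $\eta^\mu=\xi_{(1)}^\mu$, the three nonzero components above give \begin{align*} \pi_2=-Z_{1,2}^2\, x_2^{(1)}\xi^1\xi_{(1)}^2-Z_{2,1}^2\, x_2^{(1)}\xi^2\xi_{(1)}^1-Z_{2,2}^2\, x_2^{(1)}\xi^2\xi_{(1)}^2= - x_1 x_2^{(1)} \xi^1 \xi_{(1)}^2 + 2 x_1 x_2^{(1)} \xi^2 \xi_{(1)}^1 + x_2 x_2^{(1)} \xi^2 \xi_{(1)}^2, \end{align*} in agreement with the $\pi_2$ listed below.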
We were able to calculate \begin{align*} \pi_1 &= x_1 x_2 \xi^1 \xi^2, \\ \pi_2 &= - x_1 x_2^{(1)} \xi^1 \xi_{(1)}^2 + 2 x_1 x_2^{(1)} \xi^2 \xi_{(1)}^1 + x_2 x_2^{(1)} \xi^2 \xi_{(1)}^2, \\ \pi_3 &= x_1 x_1^{(2)} \xi^1 \xi_{(2)}^1, \\ \pi_4 &= - x_1 x_1^{(3)} \xi^1 \xi_{(3)}^1 + 2 x_1 x_1^{(3)} \xi_{(1)}^1 \xi_{(2)}^1 + x_2 x_1^{(3)} \xi^2 \xi_{(3)}^1 + x_2 x_1^{(3)} \xi_{(1)}^2 \xi_{(2)}^1, \\ \pi_5 &= 2 x_1 x_1^{(4)} \xi^1 \xi_{(4)}^1 - 2 x_1 x_1^{(4)} \xi^2 \xi_{(4)}^2 - x_1 x_1^{(4)} \xi_{(1)}^2 \xi_{(3)}^1 + x_1 x_2^{(4)} \xi^1 \xi_{(4)}^2 - x_2 x_1^{(4)} \xi^2 \xi_{(4)}^1 - 2 x_1 x_1^{(4)} \xi^2 \xi_{(1)}^1 \xi_{(2)}^1, \\ \pi_6 &= - 2 x_1 x_1^{(5)} \xi^1 \xi_{(5)}^1 + 2 x_1 x_1^{(5)} \xi^2 \xi_{(5)}^3 + 4 x_1 x_1^{(5)} \xi_{(1)}^1 \xi_{(4)}^1 - x_1 x_1^{(5)} \xi_{(1)}^2 \xi_{(4)}^2 + x_1 x_1^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 - 2 x_1 x_2^{(5)} \xi^1 \xi_{(5)}^2 \\&\quad - x_1 x_2^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 - x_1 x_3^{(5)} \xi^1 \xi_{(5)}^3 + 2 x_1 x_3^{(5)} \xi_{(1)}^1 \xi_{(4)}^2 + 2 x_2 x_1^{(5)} \xi^2 \xi_{(5)}^1 + 4 x_2 x_1^{(5)} \xi^2 \xi_{(5)}^2 + 2 x_2 x_1^{(5)} \xi_{(1)}^2 \xi_{(4)}^1 \\&\quad + x_2 x_3^{(5)} \xi^2 \xi_{(5)}^3 + x_2 x_3^{(5)} \xi_{(1)}^2 \xi_{(4)}^2 + 2 x_2 x_3^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 + 4 x_1 x_1^{(5)} \xi^2 \xi_{(1)}^1 \xi_{(3)}^1 + x_2 x_1^{(5)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1, \\ \pi_7 &= 3 x_1 x_1^{(6)} \xi^1 \xi_{(6)}^1 - 2 x_1 x_1^{(6)} \xi^2 \xi_{(6)}^2 - 4 x_1 x_1^{(6)} \xi^2 \xi_{(6)}^3 - 2 x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(5)}^1 - 2 x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(5)}^2 - 2 x_1 x_1^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 \\&\quad + 2 x_1 x_2^{(6)} \xi^1 \xi_{(6)}^2 + 4 x_1 x_2^{(6)} \xi_{(1)}^1 \xi_{(5)}^2 + x_1 x_2^{(6)} \xi_{(2)}^1 \xi_{(4)}^2 + x_1 x_2^{(6)} \xi_{(3)}^1 \xi_{(3)}^1 + 2 x_1 x_3^{(6)} \xi^1 \xi_{(6)}^3 - 2 x_1 x_3^{(6)} \xi^2 \xi_{(6)}^4 \\&\quad - x_1 x_3^{(6)} \xi_{(1)}^2 \xi_{(5)}^3 - 2 x_1 x_3^{(6)} \xi_{(2)}^1 \xi_{(4)}^2 - x_1 x_3^{(6)} \xi_{(3)}^1 \xi_{(3)}^1 + x_1 x_4^{(6)} \xi^1 \xi_{(6)}^4 - 2 x_2 x_1^{(6)} \xi^2 \xi_{(6)}^1 - x_2 x_2^{(6)} \xi^2 \xi_{(6)}^2 \\&\quad + 2 x_2 x_2^{(6)} \xi_{(1)}^2 \xi_{(5)}^2 + x_2 x_2^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 - x_2 x_3^{(6)} \xi^2 \xi_{(6)}^3 - x_2 x_3^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 - 8 x_1 x_1^{(6)} \xi^2 \xi_{(1)}^1 \xi_{(4)}^1 \\&\quad + 4 x_1 x_1^{(6)} \xi^2 \xi_{(1)}^2 \xi_{(4)}^2 + 2 x_1 x_1^{(6)} \xi^2 \xi_{(2)}^1 \xi_{(3)}^1 + x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(3)}^1 - 2 x_1 x_3^{(6)} \xi^2 \xi_{(1)}^1 \xi_{(4)}^2 + 4 x_1 x_1^{(6)} \xi^2 \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(2)}^1, \end{align*} \begin{align*} \pi_8 &= - 3 x_1 x_1^{(7)} \xi^1 \xi_{(7)}^1 + 4 x_1 x_1^{(7)} \xi^2 \xi_{(7)}^3 + 2 x_1 x_1^{(7)} \xi^2 \xi_{(7)}^4 + 6 x_1 x_1^{(7)} \xi_{(1)}^1 \xi_{(6)}^1 - 2 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(6)}^3 + 4 x_1 x_1^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 \\&\quad + 2 x_1 x_1^{(7)} \xi_{(2)}^1 \xi_{(5)}^2 - 2 x_1 x_1^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 - 3 x_1 x_2^{(7)} \xi^1 \xi_{(7)}^2 + 2 x_1 x_2^{(7)} \xi^2 \xi_{(7)}^4 - x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(6)}^2 - x_1 x_2^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 \\&\quad - x_1 x_2^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 - 2 x_1 x_3^{(7)} \xi^1 \xi_{(7)}^3 + 2 x_1 x_3^{(7)} \xi^2 \xi_{(7)}^5 + 4 x_1 x_3^{(7)} \xi_{(1)}^1 \xi_{(6)}^3 - x_1 x_3^{(7)} \xi_{(1)}^2 \xi_{(6)}^4 + 3 x_1 x_3^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 \\&\quad - 2 x_1 x_4^{(7)} \xi^1 \xi_{(7)}^4 - x_1 x_4^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 - x_1 x_4^{(7)} \xi_{(3)}^1 \xi_{(4)}^2 - x_1 x_5^{(7)} \xi^1 \xi_{(7)}^5 + 2 x_1 x_5^{(7)} \xi_{(1)}^1 \xi_{(6)}^4 + 3 x_2 x_1^{(7)} \xi^2 \xi_{(7)}^1 \\&\quad + 6 x_2 
x_1^{(7)} \xi^2 \xi_{(7)}^2 + 3 x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(6)}^1 + x_2 x_2^{(7)} \xi^2 \xi_{(7)}^2 + 2 x_2 x_3^{(7)} \xi^2 \xi_{(7)}^3 + 4 x_2 x_3^{(7)} \xi^2 \xi_{(7)}^4 + 2 x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(6)}^3 \\&\quad + 2 x_2 x_3^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 + 4 x_2 x_3^{(7)} \xi_{(2)}^1 \xi_{(5)}^2 + 2 x_2 x_3^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 + x_2 x_5^{(7)} \xi^2 \xi_{(7)}^5 + x_2 x_5^{(7)} \xi_{(1)}^2 \xi_{(6)}^4 \\&\quad + 3 x_2 x_5^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 + 3 x_2 x_5^{(7)} \xi_{(3)}^1 \xi_{(4)}^2 + 12 x_1 x_1^{(7)} \xi^2 \xi_{(1)}^1 \xi_{(5)}^1 + 12 x_1 x_1^{(7)} \xi^2 \xi_{(1)}^1 \xi_{(5)}^2 - 4 x_1 x_1^{(7)} \xi^2 \xi_{(1)}^2 \xi_{(5)}^3 \\&\quad - 4 x_1 x_1^{(7)} \xi^2 \xi_{(3)}^1 \xi_{(3)}^1 + x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^2 - 2 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 + 4 x_1 x_2^{(7)} \xi^2 \xi_{(1)}^1 \xi_{(5)}^2 \\&\quad + x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 + 4 x_1 x_3^{(7)} \xi^2 \xi_{(1)}^1 \xi_{(5)}^3 + 4 x_1 x_3^{(7)} \xi_{(1)}^1 \xi_{(2)}^1 \xi_{(3)}^1 + 3 x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^1 \\&\quad + x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^2 + 2 x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 - 8 x_1 x_1^{(7)} \xi^2 \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(3)}^1 + 2 x_1 x_1^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 \\&\quad + x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1. \end{align*} Here $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ as well as all $\mathcal A_{\mu\nu}^\lambda$ are zero. We observe that none of the terms in the above list are in $\mathfrak a^2$. We do not know if that is a general feature of this example. \subsubsection{Diagonal $Z_{i\mu}^\nu$} With the choices according to Proposition \ref{prop:diag} we get the following. 
\begin{align*} \pi_1 &= x_1 x_2 \xi^1 \xi^2, \\ \pi_2 &= - x_1 x_2^{(1)} \xi^1 \xi_{(1)}^2 + 2 x_2 x_1^{(1)} \xi^2 \xi_{(1)}^1 + x_2 x_2^{(1)} \xi^2 \xi_{(1)}^2, \\ \pi_3 &= x_1 x_1^{(2)} \xi^1 \xi_{(2)}^1 + 2 x_1 x_1^{(2)} \xi_{(1)}^1 \xi_{(1)}^2 - 2 x_2 x_1^{(2)} \xi^2 \xi_{(2)}^1, \\ \pi_4 &= - x_1 x_1^{(3)} \xi^1 \xi_{(3)}^1 + 3 x_2 x_1^{(3)} \xi^2 \xi_{(3)}^1 + x_2 x_1^{(3)} \xi_{(1)}^2 \xi_{(2)}^1, \\ \pi_5 &= 2 x_1 x_1^{(4)} \xi^1 \xi_{(4)}^1 - 3 x_1 x_1^{(4)} \xi_{(1)}^2 \xi_{(3)}^1 + x_1 x_2^{(4)} \xi^1 \xi_{(4)}^2 + 2 x_1 x_2^{(4)} \xi_{(1)}^1 \xi_{(3)}^1 - 3 x_2 x_1^{(4)} \xi^2 \xi_{(4)}^1 - 4 x_2 x_2^{(4)} \xi^2 \xi_{(4)}^2 \\&\quad - 2 x_1 x_1^{(4)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2, \\ \pi_6 &= - 2 x_1 x_1^{(5)} \xi^1 \xi_{(5)}^1 + 4 x_1 x_1^{(5)} \xi_{(1)}^1 \xi_{(4)}^1 + 3 x_1 x_1^{(5)} \xi_{(1)}^2 \xi_{(4)}^2 + 3 x_1 x_1^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 - 2 x_1 x_2^{(5)} \xi^1 \xi_{(5)}^2 \\&\quad - 2 x_1 x_2^{(5)} \xi_{(1)}^1 \xi_{(4)}^1 - 2 x_1 x_2^{(5)} \xi_{(1)}^2 \xi_{(4)}^2 - x_1 x_2^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 - x_1 x_3^{(5)} \xi^1 \xi_{(5)}^3 + 4 x_2 x_1^{(5)} \xi^2 \xi_{(5)}^1 \\&\quad + 2 x_2 x_1^{(5)} \xi_{(1)}^2 \xi_{(4)}^1 + 4 x_2 x_2^{(5)} \xi^2 \xi_{(5)}^2 + 5 x_2 x_3^{(5)} \xi^2 \xi_{(5)}^3 + 2 x_2 x_3^{(5)} \xi_{(1)}^1 \xi_{(4)}^1 + x_2 x_3^{(5)} \xi_{(1)}^2 \xi_{(4)}^2 \\&\quad + 4 x_2 x_3^{(5)} \xi_{(2)}^1 \xi_{(3)}^1 + 4 x_1 x_1^{(5)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(2)}^1 - 2 x_1 x_2^{(5)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(2)}^1 + x_2 x_1^{(5)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 \\&\quad + 2 x_2 x_3^{(5)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(2)}^1, \\ \pi_7 &= 3 x_1 x_1^{(6)} \xi^1 \xi_{(6)}^1 - 4 x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(5)}^1 - 6 x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(5)}^2 - 2 x_1 x_1^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 + 2 x_1 x_2^{(6)} \xi^1 \xi_{(6)}^2 \\&\quad + 2 x_1 x_2^{(6)} \xi_{(1)}^1 \xi_{(5)}^1 + 4 x_1 x_2^{(6)} \xi_{(1)}^1 \xi_{(5)}^2 + 2 x_1 x_2^{(6)} \xi_{(1)}^2 \xi_{(5)}^3 - x_1 x_2^{(6)} \xi_{(2)}^1 \xi_{(4)}^2 + x_1 x_2^{(6)} \xi_{(3)}^1 \xi_{(3)}^1 \\&\quad + 2 x_1 x_3^{(6)} \xi^1 \xi_{(6)}^3 - 3 x_1 x_3^{(6)} \xi_{(1)}^2 \xi_{(5)}^3 - 3 x_1 x_3^{(6)} \xi_{(3)}^1 \xi_{(3)}^1 + x_1 x_4^{(6)} \xi^1 \xi_{(6)}^4 + 2 x_1 x_4^{(6)} \xi_{(1)}^1 \xi_{(5)}^3 \\&\quad - 4 x_2 x_1^{(6)} \xi^2 \xi_{(6)}^1 - 5 x_2 x_2^{(6)} \xi^2 \xi_{(6)}^2 + 2 x_2 x_2^{(6)} \xi_{(1)}^2 \xi_{(5)}^2 + 3 x_2 x_2^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 - 5 x_2 x_3^{(6)} \xi^2 \xi_{(6)}^3 \\&\quad - 3 x_2 x_3^{(6)} \xi_{(2)}^1 \xi_{(4)}^1 - 6 x_2 x_4^{(6)} \xi^2 \xi_{(6)}^4 + 4 x_2 x_4^{(6)} \xi_{(1)}^1 \xi_{(5)}^2 - 4 x_2 x_4^{(6)} \xi_{(2)}^1 \xi_{(4)}^2 + 5 x_1 x_1^{(6)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(3)}^1 \\&\quad + 2 x_1 x_2^{(6)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(3)}^1 - 4 x_1 x_3^{(6)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(3)}^1 + 2 x_1 x_1^{(6)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(1)}^2, \end{align*} \begin{align*} \pi_8 &= - 3 x_1 x_1^{(7)} \xi^1 \xi_{(7)}^1 + 6 x_1 x_1^{(7)} \xi_{(1)}^1 \xi_{(6)}^1 + 6 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(6)}^2 + 4 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(6)}^3 + 6 x_1 x_1^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 \\&\quad + 6 x_1 x_1^{(7)} \xi_{(2)}^1 \xi_{(5)}^2 - 2 x_1 x_1^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 - 3 x_1 x_2^{(7)} \xi^1 \xi_{(7)}^2 - 2 x_1 x_2^{(7)} \xi_{(1)}^1 \xi_{(6)}^1 - 3 x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(6)}^2 \\&\quad - 2 x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(6)}^3 - x_1 x_2^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 + x_1 x_2^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 - 2 x_1 x_3^{(7)} \xi^1 \xi_{(7)}^3 + 4 x_1 x_3^{(7)} \xi_{(1)}^1 \xi_{(6)}^3 \\&\quad + 3 x_1 x_3^{(7)} \xi_{(1)}^2 \xi_{(6)}^4 + 
3 x_1 x_3^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 + 6 x_1 x_3^{(7)} \xi_{(3)}^1 \xi_{(4)}^2 - 2 x_1 x_4^{(7)} \xi^1 \xi_{(7)}^4 - 2 x_1 x_4^{(7)} \xi_{(1)}^1 \xi_{(6)}^3 \\&\quad - 2 x_1 x_4^{(7)} \xi_{(1)}^2 \xi_{(6)}^4 - x_1 x_4^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 - 3 x_1 x_4^{(7)} \xi_{(3)}^1 \xi_{(4)}^2 - x_1 x_5^{(7)} \xi^1 \xi_{(7)}^5 + 5 x_2 x_1^{(7)} \xi^2 \xi_{(7)}^1 \\&\quad + 3 x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(6)}^1 + 5 x_2 x_2^{(7)} \xi^2 \xi_{(7)}^2 + 6 x_2 x_3^{(7)} \xi^2 \xi_{(7)}^3 + 2 x_2 x_3^{(7)} \xi_{(1)}^1 \xi_{(6)}^1 + 2 x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(6)}^3 \\&\quad + 4 x_2 x_3^{(7)} \xi_{(2)}^1 \xi_{(5)}^1 + 2 x_2 x_3^{(7)} \xi_{(3)}^1 \xi_{(4)}^1 + 6 x_2 x_4^{(7)} \xi^2 \xi_{(7)}^4 + 4 x_2 x_4^{(7)} \xi_{(2)}^1 \xi_{(5)}^2 + 7 x_2 x_5^{(7)} \xi^2 \xi_{(7)}^5 \\&\quad - 2 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(6)}^2 + 4 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(6)}^3 + x_2 x_5^{(7)} \xi_{(1)}^2 \xi_{(6)}^4 + 9 x_2 x_5^{(7)} \xi_{(2)}^1 \xi_{(5)}^3 + 5 x_2 x_5^{(7)} \xi_{(3)}^1 \xi_{(4)}^2 \\&\quad - 5 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^2 - 10 x_1 x_1^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 + 2 x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^2 + 3 x_1 x_2^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 \\&\quad + 4 x_1 x_3^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(4)}^2 + 4 x_1 x_3^{(7)} \xi_{(1)}^1 \xi_{(2)}^1 \xi_{(3)}^1 - 2 x_1 x_4^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(4)}^2 - 2 x_1 x_4^{(7)} \xi_{(1)}^1 \xi_{(2)}^1 \xi_{(3)}^1 \\&\quad + 3 x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^1 + 4 x_2 x_3^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(4)}^1 + x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(4)}^2 + 2 x_2 x_3^{(7)} \xi_{(1)}^2 \xi_{(2)}^1 \xi_{(3)}^1 \\&\quad + 4 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(1)}^1 \xi_{(4)}^1 + 4 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(4)}^2 + 10 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(2)}^1 \xi_{(3)}^1 - 6 x_1 x_1^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 \\&\quad + 2 x_1 x_2^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 + x_2 x_1^{(7)} \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 + 2 x_2 x_3^{(7)} \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(1)}^2 \xi_{(2)}^1 \\&\quad + 4 x_2 x_5^{(7)} \xi_{(1)}^1 \xi_{(1)}^1 \xi_{(1)}^2 \xi_{(2)}^1. \end{align*} None of the terms above is in $\mathfrak a^2$. For $m\ge 3$ the tensor appears to have nondiagonal terms. \subsection{A complete intersection given by two polynomials} Let us consider the ideal in $\boldsymbol k[x_1,x_2,x_3,x_4]$ generated by the two quadratic monomials $f_1 = x_1x_2$ and $f_2 = x_3 x_4$ with diagonal Poisson bracket $\{x_i,x_j\}=x_ix_j$ for $1\le i<j\le 4$. It turns out that \begin{align*} \pi_0 &= x_1 x_2 \xi_{(1)}^1 + x_3 x_4 \xi_{(1)}^2,\\ \pi_1 &= x_1 x_2 \xi^1 \xi^2 + x_2 x_3 \xi^2 \xi^3 + x_3 x_4 \xi^3 \xi^4, \\ \pi_2 &= - x_1 x^{(1)}_1 \xi^1 \xi_{(1)}^1 + x_2 x^{(1)}_1 \xi^2 \xi_{(1)}^1 - x_2 x^{(1)}_2 \xi^2 \xi_{(1)}^2 + x_3 x^{(1)}_1 \xi^3 \xi_{(1)}^1 - x_3 x^{(1)}_2 \xi^3 \xi_{(1)}^2 + x_4 x^{(1)}_2 \xi^4 \xi_{(1)}^2, \\ \pi_3 &= x^{(1)}_1 x^{(1)}_2 \xi_{(1)}^1 \xi_{(1)}^2, \\ \pi_m &= 0\quad \mbox{for }m\ge 4. \end{align*} In other words, the Koszul complex here is a dg Poisson algebra. As there are only two indices $i=1,2$, any cubic expression in the $x^{(1)}_i$ has to vanish. This explains the vanishing of $\pi_m$ for $m\ge 4$. More generally, we observe that the Koszul complex of a regular sequence of length $2$ with vanishing $\delta_{\operatorname{Poiss}} Z-[Z,Z]$ has the structure of a dg Poisson algebra. We note that the $\pi_m$ have a diagonal shape for $m\ge 1$. 
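Unwinding the notational conventions of Section \ref{sec:homotopystuff} (namely $\pi_2=-\sum_{i,\mu}\{x_i,y_\mu\}_2\xi^i\eta^\mu$ and $\pi_3^{11}=\sum_{\mu,\nu}{1\over 2}\{y_\mu,y_\nu\}_2\eta^\mu\eta^\nu$ with $y_\mu=x_\mu^{(1)}$ and $\eta^\mu=\xi_{(1)}^\mu$), the binary brackets involving the level one variables read off the above are \begin{align*} &\{x_1,y_1\}_2=x_1y_1, &&\{x_2,y_1\}_2=-x_2y_1, &&\{x_3,y_1\}_2=-x_3y_1,\\ &\{x_2,y_2\}_2=x_2y_2, &&\{x_3,y_2\}_2=x_3y_2, &&\{x_4,y_2\}_2=-x_4y_2, \end{align*} together with $\{y_1,y_2\}_2=y_1y_2$, while $\{x_1,y_2\}_2=0=\{x_4,y_1\}_2$.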
Up to now, all examples of Poisson complete intersections that we are aware of are monomial, Casimir, or hypersurfaces. \subsection{An ideal generated by two cubic Casimirs} We consider the ideal of $\boldsymbol k[x_1, x_2, x_3, x_4]$ generated by the two monomials $f_1 = x_1 x_2 x_3$ and $f_2 = x_2 x_3 x_4$. As the Poisson bracket we take that of Theorem \ref{thm:detbracket} with derivations $\partial/\partial x_1$, $\partial/\partial x_2$, $\partial/\partial x_3$, $\partial/\partial x_4$ and $g=(x_2x_3)^{-1}$ (it can also be seen as a diagonal bracket). To calculate the first terms of $\pi_0$ we used \emph{Macaulay2}: \begin{align*} \pi_0^{\le 11} &= x_1 x_2 x_3 \xi_{(1)}^1 + x_2 x_3 x_4 \xi_{(1)}^2 + x_1 x^{(1)}_2 \xi_{(2)}^1 - x_4 x^{(1)}_1 \xi_{(2)}^1+ x_2 x_3 x^{(2)}_1 \xi_{(3)}^1 - x^{(1)}_1 x^{(1)}_2 \xi_{(3)}^1\\ &\quad+ x_1 x^{(3)}_1 \xi_{(4)}^2 + x_4 x^{(3)}_1 \xi_{(4)}^1 - x^{(1)}_1 x^{(2)}_1 \xi_{(4)}^2 - x^{(1)}_2 x^{(2)}_1 \xi_{(4)}^1 \\ &\quad+ 2 x_1 x^{(4)}_1 \xi_{(5)}^1 - 2 x_4 x^{(4)}_2 \xi_{(5)}^1 + x_2 x_3 x^{(4)}_1 \xi_{(5)}^2 + x_2 x_3 x^{(4)}_2 \xi_{(5)}^3 - x^{(1)}_1 x^{(3)}_1 \xi_{(5)}^3 - x^{(1)}_2 x^{(3)}_1 \xi_{(5)}^2 + x^{(2)}_1 x^{(2)}_1 \xi_{(5)}^1 \\ &\quad+ x_1 x_2^{(5)} \xi_{(6)}^2 + x_1 x_3^{(5)} \xi_{(6)}^4 - x_1^{(1)} x_1^{(4)} \xi_{(6)}^2 - 2 x_1^{(1)} x_1^{(4)} \xi_{(6)}^3 - x_1^{(1)} x_2^{(4)} \xi_{(6)}^4 - x_1^{(2)} x_1^{(3)} \xi_{(6)}^3 - x_2^{(1)} x_1^{(4)} \xi_{(6)}^1 \\&\quad - x_2^{(1)} x_2^{(4)} \xi_{(6)}^2 - x_2^{(1)} x_2^{(4)} \xi_{(6)}^3 + x_4 x_2^{(5)} \xi_{(6)}^1 + x_4 x_3^{(5)} \xi_{(6)}^2 + 3 x_4 x_3^{(5)} \xi_{(6)}^3 + x_2 x_3 x_1^{(5)} \xi_{(6)}^3 \\ &\quad + 3 x_1 x_1^{(6)} \xi_{(7)}^1 + x_1 x_3^{(6)} \xi_{(7)}^2 - x_1^{(1)} x_1^{(5)} \xi_{(7)}^2 - x_1^{(1)} x_2^{(5)} \xi_{(7)}^4 - x_1^{(1)} x_3^{(5)} \xi_{(7)}^5 + x_1^{(2)} x_1^{(4)} \xi_{(7)}^1 + x_1^{(2)} x_2^{(4)} \xi_{(7)}^2 \\&\quad - x_2^{(1)} x_1^{(5)} \xi_{(7)}^1 - x_2^{(1)} x_2^{(5)} \xi_{(7)}^3 - x_2^{(1)} x_3^{(5)} \xi_{(7)}^4 - 3 x_4 x_2^{(6)} \xi_{(7)}^1 + x_4 x_3^{(6)} \xi_{(7)}^1 - 3 x_4 x_4^{(6)} \xi_{(7)}^2 \\&\quad + x_2 x_3 x_1^{(6)} \xi_{(7)}^3 + x_2 x_3 x_2^{(6)} \xi_{(7)}^4 + x_2 x_3 x_4^{(6)} \xi_{(7)}^5 \\ &\quad +x_1 x_3^{(7)} \xi_{(8)}^4 + x_1 x_4^{(7)} \xi_{(8)}^6 + x_1 x_5^{(7)} \xi_{(8)}^8 - 3 x_1^{(1)} x_1^{(6)} \xi_{(8)}^1 - x_1^{(1)} x_1^{(6)} \xi_{(8)}^4 - x_1^{(1)} x_2^{(6)} \xi_{(8)}^6 - x_1^{(1)} x_3^{(6)} \xi_{(8)}^2 \\&\quad - x_1^{(1)} x_4^{(6)} \xi_{(8)}^8 + 3 x_1^{(2)} x_2^{(5)} \xi_{(8)}^1 + x_1^{(2)} x_2^{(5)} \xi_{(8)}^4 - x_1^{(2)} x_2^{(5)} \xi_{(8)}^5 + 3 x_1^{(2)} x_3^{(5)} \xi_{(8)}^2 + x_1^{(2)} x_3^{(5)} \xi_{(8)}^6 - x_1^{(2)} x_3^{(5)} \xi_{(8)}^7 \\&\quad - 4 x_1^{(3)} x_1^{(4)} \xi_{(8)}^1 - x_1^{(3)} x_1^{(4)} \xi_{(8)}^4 + x_1^{(3)} x_1^{(4)} \xi_{(8)}^5 - 4 x_1^{(3)} x_2^{(4)} \xi_{(8)}^2 - x_1^{(3)} x_2^{(4)} \xi_{(8)}^6 + x_1^{(3)} x_2^{(4)} \xi_{(8)}^7 - x_2^{(1)} x_1^{(6)} \xi_{(8)}^3 \\&\quad + 3 x_2^{(1)} x_2^{(6)} \xi_{(8)}^1 - x_2^{(1)} x_2^{(6)} \xi_{(8)}^5 - x_2^{(1)} x_3^{(6)} \xi_{(8)}^1 + 3 x_2^{(1)} x_4^{(6)} \xi_{(8)}^2 - x_2^{(1)} x_4^{(6)} \xi_{(8)}^7 + x_4 x_3^{(7)} \xi_{(8)}^3 + x_4 x_4^{(7)} \xi_{(8)}^5 \\&\quad + x_4 x_5^{(7)} \xi_{(8)}^7 + x_2 x_3 x_1^{(7)} \xi_{(8)}^1 + x_2 x_3 x_2^{(7)} \xi_{(8)}^2\\ &\quad +x_1 x_1^{(8)} \xi_{(9)}^2 + x_1 x_2^{(8)} \xi_{(9)}^4 + 2 x_1 x_3^{(8)} \xi_{(9)}^5 + x_1 x_5^{(8)} \xi_{(9)}^6 + 2 x_1 x_7^{(8)} \xi_{(9)}^7 - x_1^{(1)} x_1^{(7)} \xi_{(9)}^2 - x_1^{(1)} x_2^{(7)} \xi_{(9)}^4 \\&\quad - x_1^{(1)} x_3^{(7)} \xi_{(9)}^9 - x_1^{(1)} x_4^{(7)} \xi_{(9)}^{11} - x_1^{(1)}
x_5^{(7)} \xi_{(9)}^{13} - 3 x_1^{(2)} x_1^{(6)} \xi_{(9)}^1 + 2 x_1^{(2)} x_1^{(6)} \xi_{(9)}^5 - 3 x_1^{(2)} x_2^{(6)} \xi_{(9)}^2 + x_1^{(2)} x_2^{(6)} \xi_{(9)}^6 \\&\quad + x_1^{(2)} x_3^{(6)} \xi_{(9)}^2 - x_1^{(2)} x_3^{(6)} \xi_{(9)}^3 - 3 x_1^{(2)} x_4^{(6)} \xi_{(9)}^4 + 2 x_1^{(2)} x_4^{(6)} \xi_{(9)}^7 - x_1^{(3)} x_1^{(5)} \xi_{(9)}^2 + x_1^{(3)} x_1^{(5)} \xi_{(9)}^3 - x_1^{(3)} x_2^{(5)} \xi_{(9)}^9 \\&\quad + x_1^{(3)} x_2^{(5)} \xi_{(9)}^{10} - x_1^{(3)} x_3^{(5)} \xi_{(9)}^{11} + x_1^{(3)} x_3^{(5)} \xi_{(9)}^{12} + 2 x_1^{(4)} x_1^{(4)} \xi_{(9)}^1 - x_1^{(4)} x_1^{(4)} \xi_{(9)}^5 + 2 x_1^{(4)} x_2^{(4)} \xi_{(9)}^2 + 2 x_1^{(4)} x_2^{(4)} \xi_{(9)}^3 \\&\quad - x_1^{(4)} x_2^{(4)} \xi_{(9)}^6 - x_2^{(1)} x_1^{(7)} \xi_{(9)}^1 - x_2^{(1)} x_2^{(7)} \xi_{(9)}^3 - x_2^{(1)} x_3^{(7)} \xi_{(9)}^8 - x_2^{(1)} x_4^{(7)} \xi_{(9)}^{10} - x_2^{(1)} x_5^{(7)} \xi_{(9)}^{12} + 2 x_2^{(4)} x_2^{(4)} \xi_{(9)}^4 \\&\quad - x_2^{(4)} x_2^{(4)} \xi_{(9)}^7 + x_4 x_1^{(8)} \xi_{(9)}^1 + x_4 x_2^{(8)} \xi_{(9)}^3 - 2 x_4 x_4^{(8)} \xi_{(9)}^5 - x_4 x_6^{(8)} \xi_{(9)}^6 - 2 x_4 x_8^{(8)} \xi_{(9)}^7 + x_2 x_3 x_3^{(8)} \xi_{(9)}^8 \\&\quad + x_2 x_3 x_4^{(8)} \xi_{(9)}^9 + x_2 x_3 x_5^{(8)} \xi_{(9)}^{10} + x_2 x_3 x_6^{(8)} \xi_{(9)}^{11} + x_2 x_3 x_7^{(8)} \xi_{(9)}^{12} + x_2 x_3 x_8^{(8)} \xi_{(9)}^{13}\\ &\quad+ x_1 x_1^{(9)} \xi_{(10)}^1 + x_1 x_3^{(9)} \xi_{(10)}^2 + 2 x_1 x_8^{(9)} \xi_{(10)}^5 + 3 x_1 x_9^{(9)} \xi_{(10)}^{10} + x_1 x_{10}^{(9)} \xi_{(10)}^{11} + x_1 x_{11}^{(9)} \xi_{(10)}^{15} \\&\quad + x_1 x_{12}^{(9)} \xi_{(10)}^{16} + x_1 x_{13}^{(9)} \xi_{(10)}^{18} - 3 x_1^{(1)} x_1^{(8)} \xi_{(10)}^8 - x_1^{(1)} x_2^{(8)} \xi_{(10)}^{14} - 2 x_1^{(1)} x_3^{(8)} \xi_{(10)}^5 - 4 x_1^{(1)} x_3^{(8)} \xi_{(10)}^6 \\&\quad - 3 x_1^{(1)} x_4^{(8)} \xi_{(10)}^{10} - x_1^{(1)} x_5^{(8)} \xi_{(10)}^{11} - 3 x_1^{(1)} x_5^{(8)} \xi_{(10)}^{12} - x_1^{(1)} x_6^{(8)} \xi_{(10)}^{15} - x_1^{(1)} x_7^{(8)} \xi_{(10)}^{16} \\&\quad - 2 x_1^{(1)} x_7^{(8)} \xi_{(10)}^{17} - x_1^{(1)} x_8^{(8)} \xi_{(10)}^{18} + x_1^{(2)} x_1^{(7)} \xi_{(10)}^1 + x_1^{(2)} x_2^{(7)} \xi_{(10)}^2 + 5 x_1^{(2)} x_3^{(7)} \xi_{(10)}^4 + x_1^{(2)} x_3^{(7)} \xi_{(10)}^5 \\&\quad - x_1^{(2)} x_3^{(7)} \xi_{(10)}^6 - x_1^{(2)} x_3^{(7)} \xi_{(10)}^7 + 5 x_1^{(2)} x_4^{(7)} \xi_{(10)}^8 + 5 x_1^{(2)} x_4^{(7)} \xi_{(10)}^9 + x_1^{(2)} x_4^{(7)} \xi_{(10)}^{10} - 2 x_1^{(2)} x_4^{(7)} \xi_{(10)}^{12} \\&\quad - x_1^{(2)} x_4^{(7)} \xi_{(10)}^{13} + 5 x_1^{(2)} x_5^{(7)} \xi_{(10)}^{14} + x_1^{(2)} x_5^{(7)} \xi_{(10)}^{15} - x_1^{(2)} x_5^{(7)} \xi_{(10)}^{16} - 3 x_1^{(2)} x_5^{(7)} \xi_{(10)}^{17} + x_1^{(3)} x_1^{(6)} \xi_{(10)}^4 \\&\quad - x_1^{(3)} x_1^{(6)} \xi_{(10)}^5 - 3 x_1^{(3)} x_1^{(6)} \xi_{(10)}^6 + x_1^{(3)} x_1^{(6)} \xi_{(10)}^7 + 4 x_1^{(3)} x_2^{(6)} \xi_{(10)}^8 - 5 x_1^{(3)} x_2^{(6)} \xi_{(10)}^9 - x_1^{(3)} x_2^{(6)} \xi_{(10)}^{10} \\&\quad - x_1^{(3)} x_2^{(6)} \xi_{(10)}^{12} + x_1^{(3)} x_2^{(6)} \xi_{(10)}^{13} - 3 x_1^{(3)} x_3^{(6)} \xi_{(10)}^8 + 3 x_1^{(3)} x_3^{(6)} \xi_{(10)}^9 - 2 x_1^{(3)} x_4^{(6)} \xi_{(10)}^{14} - x_1^{(3)} x_4^{(6)} \xi_{(10)}^{15} \\&\quad + x_1^{(3)} x_4^{(6)} \xi_{(10)}^{16} + x_1^{(3)} x_4^{(6)} \xi_{(10)}^{17} - x_1^{(4)} x_1^{(5)} \xi_{(10)}^1 - 4 x_1^{(4)} x_2^{(5)} \xi_{(10)}^4 + 2 x_1^{(4)} x_2^{(5)} \xi_{(10)}^6 - 10 x_1^{(4)} x_3^{(5)} \xi_{(10)}^8 \\&\quad - x_1^{(4)} x_3^{(5)} \xi_{(10)}^9 - 2 x_1^{(4)} x_3^{(5)} \xi_{(10)}^{10} + x_1^{(4)} x_3^{(5)} \xi_{(10)}^{11} + 4 x_1^{(4)} x_3^{(5)} \xi_{(10)}^{12} - x_1^{(4)} x_3^{(5)} 
\xi_{(10)}^{13} - 2 x_2^{(1)} x_1^{(8)} \xi_{(10)}^4 \\&\quad - 3 x_2^{(1)} x_2^{(8)} \xi_{(10)}^9 - x_2^{(1)} x_3^{(8)} \xi_{(10)}^3 + 5 x_2^{(1)} x_4^{(8)} \xi_{(10)}^4 - x_2^{(1)} x_4^{(8)} \xi_{(10)}^5 - x_2^{(1)} x_4^{(8)} \xi_{(10)}^6 - x_2^{(1)} x_4^{(8)} \xi_{(10)}^7 \\&\quad - 2 x_2^{(1)} x_5^{(8)} \xi_{(10)}^7 + 5 x_2^{(1)} x_6^{(8)} \xi_{(10)}^8 + 5 x_2^{(1)} x_6^{(8)} \xi_{(10)}^9 + x_2^{(1)} x_6^{(8)} \xi_{(10)}^{10} - x_2^{(1)} x_6^{(8)} \xi_{(10)}^{11} - 2 x_2^{(1)} x_6^{(8)} \xi_{(10)}^{12} \\&\quad - x_2^{(1)} x_6^{(8)} \xi_{(10)}^{13} - 3 x_2^{(1)} x_7^{(8)} \xi_{(10)}^{13} + 5 x_2^{(1)} x_8^{(8)} \xi_{(10)}^{14} + x_2^{(1)} x_8^{(8)} \xi_{(10)}^{15} - 2 x_2^{(1)} x_8^{(8)} \xi_{(10)}^{16} - 3 x_2^{(1)} x_8^{(8)} \xi_{(10)}^{17} \\&\quad - x_2^{(4)} x_1^{(5)} \xi_{(10)}^2 + 4 x_2^{(4)} x_2^{(5)} \xi_{(10)}^8 - 5 x_2^{(4)} x_2^{(5)} \xi_{(10)}^9 + 2 x_2^{(4)} x_2^{(5)} \xi_{(10)}^{10} - x_2^{(4)} x_2^{(5)} \xi_{(10)}^{11} - x_2^{(4)} x_2^{(5)} \xi_{(10)}^{12} \\&\quad + x_2^{(4)} x_2^{(5)} \xi_{(10)}^{13} - 2 x_2^{(4)} x_3^{(5)} \xi_{(10)}^{14} + x_2^{(4)} x_3^{(5)} \xi_{(10)}^{17} - x_4 x_2^{(9)} \xi_{(10)}^1 - x_4 x_4^{(9)} \xi_{(10)}^2 + x_4 x_8^{(9)} \xi_{(10)}^3 - 5 x_4 x_9^{(9)} \xi_{(10)}^4 \\&\quad + x_4 x_9^{(9)} \xi_{(10)}^5 + 5 x_4 x_9^{(9)} \xi_{(10)}^6 + x_4 x_9^{(9)} \xi_{(10)}^7 + 2 x_4 x_{10}^{(9)} \xi_{(10)}^7 - 5 x_4 x_{11}^{(9)} \xi_{(10)}^8 - 5 x_4 x_{11}^{(9)} \xi_{(10)}^9 - x_4 x_{11}^{(9)} \xi_{(10)}^{10} \\&\quad + x_4 x_{11}^{(9)} \xi_{(10)}^{11} + 5 x_4 x_{11}^{(9)} \xi_{(10)}^{12} + x_4 x_{11}^{(9)} \xi_{(10)}^{13} + 3 x_4 x_{12}^{(9)} \xi_{(10)}^{13} - 5 x_4 x_{13}^{(9)} \xi_{(10)}^{14} - x_4 x_{13}^{(9)} \xi_{(10)}^{15} \\&\quad + 2 x_4 x_{13}^{(9)} \xi_{(10)}^{16} + 5 x_4 x_{13}^{(9)} \xi_{(10)}^{17} + 2 x_2 x_3 x_1^{(9)} \xi_{(10)}^4 + 3 x_2 x_3 x_2^{(9)} \xi_{(10)}^8 + 3 x_2 x_3 x_3^{(9)} \xi_{(10)}^9 + x_2 x_3 x_4^{(9)} \xi_{(10)}^{14} \\&\quad + 2 x_2 x_3 x_5^{(9)} \xi_{(10)}^6 + 3 x_2 x_3 x_6^{(9)} \xi_{(10)}^{12} + x_2 x_3 x_7^{(9)} \xi_{(10)}^{17}\\ &\quad +10 x_1 x_3^{(10)} \xi_{(11)}^3 + 3 x_1 x_4^{(10)} \xi_{(11)}^7 + 3 x_1 x_6^{(10)} \xi_{(11)}^6 + 3 x_1 x_7^{(10)} \xi_{(11)}^8 + x_1 x_8^{(10)} \xi_{(11)}^{11} + x_1 x_9^{(10)} \xi_{(11)}^{12} + x_1 x_{12}^{(10)} \xi_{(11)}^{10} \\&\quad + x_1 x_{13}^{(10)} \xi_{(11)}^{13} + x_1 x_{14}^{(10)} \xi_{(11)}^{16} + x_1 x_{17}^{(10)} \xi_{(11)}^{15} - x_1^{(1)} x_1^{(9)} \xi_{(11)}^1 - 6 x_1^{(1)} x_1^{(9)} \xi_{(11)}^7 - 3 x_1^{(1)} x_2^{(9)} \xi_{(11)}^{11} \\&\quad - x_1^{(1)} x_3^{(9)} \xi_{(11)}^2 - 3 x_1^{(1)} x_3^{(9)} \xi_{(11)}^{12} - x_1^{(1)} x_4^{(9)} \xi_{(11)}^{16} - 6 x_1^{(1)} x_5^{(9)} \xi_{(11)}^6 - 3 x_1^{(1)} x_6^{(9)} \xi_{(11)}^{10} - x_1^{(1)} x_7^{(9)} \xi_{(11)}^{15} \\&\quad - 2 x_1^{(1)} x_8^{(9)} \xi_{(11)}^{18} - 3 x_1^{(1)} x_9^{(9)} \xi_{(11)}^{20} - x_1^{(1)} x_{10}^{(9)} \xi_{(11)}^{21} - x_1^{(1)} x_{11}^{(9)} \xi_{(11)}^{23} - x_1^{(1)} x_{12}^{(9)} \xi_{(11)}^{24} \\&\quad - x_1^{(1)} x_{13}^{(9)} \xi_{(11)}^{25} + x_1^{(2)} x_1^{(8)} \xi_{(11)}^1 + 3 x_1^{(2)} x_1^{(8)} \xi_{(11)}^5 - 3 x_1^{(2)} x_1^{(8)} \xi_{(11)}^6 + 9 x_1^{(2)} x_1^{(8)} \xi_{(11)}^7 + 3 x_1^{(2)} x_1^{(8)} \xi_{(11)}^8 \\&\quad - 3 x_1^{(2)} x_1^{(8)} \xi_{(11)}^9 + x_1^{(2)} x_2^{(8)} \xi_{(11)}^2 - x_1^{(2)} x_2^{(8)} \xi_{(11)}^{10} + x_1^{(2)} x_2^{(8)} \xi_{(11)}^{11} + 4 x_1^{(2)} x_2^{(8)} \xi_{(11)}^{12} + x_1^{(2)} x_2^{(8)} \xi_{(11)}^{13} \\&\quad - x_1^{(2)} x_2^{(8)} \xi_{(11)}^{14} + 10 x_1^{(2)} x_3^{(8)} \xi_{(11)}^3 - 4 x_1^{(2)} x_3^{(8)} \xi_{(11)}^4 + 3 
x_1^{(2)} x_4^{(8)} \xi_{(11)}^6 - 15 x_1^{(2)} x_4^{(8)} \xi_{(11)}^7 + 3 x_1^{(2)} x_4^{(8)} \xi_{(11)}^8 \\&\quad + 6 x_1^{(2)} x_5^{(8)} \xi_{(11)}^8 - 3 x_1^{(2)} x_5^{(8)} \xi_{(11)}^9 + 2 x_1^{(2)} x_6^{(8)} \xi_{(11)}^{10} - 5 x_1^{(2)} x_6^{(8)} \xi_{(11)}^{11} - 5 x_1^{(2)} x_6^{(8)} \xi_{(11)}^{12} + x_1^{(2)} x_6^{(8)} \xi_{(11)}^{13} \\&\quad + 3 x_1^{(2)} x_7^{(8)} \xi_{(11)}^{13} - 2 x_1^{(2)} x_7^{(8)} \xi_{(11)}^{14} + 3 x_1^{(2)} x_8^{(8)} \xi_{(11)}^{15} - 5 x_1^{(2)} x_8^{(8)} \xi_{(11)}^{16} - 2 x_1^{(3)} x_1^{(7)} \xi_{(11)}^1 \\&\quad - 3 x_1^{(3)} x_1^{(7)} \xi_{(11)}^5 + 3 x_1^{(3)} x_1^{(7)} \xi_{(11)}^6 - 9 x_1^{(3)} x_1^{(7)} \xi_{(11)}^7 - 3 x_1^{(3)} x_1^{(7)} \xi_{(11)}^8 + 3 x_1^{(3)} x_1^{(7)} \xi_{(11)}^9 - 2 x_1^{(3)} x_2^{(7)} \xi_{(11)}^2 \\&\quad + x_1^{(3)} x_2^{(7)} \xi_{(11)}^{10} - x_1^{(3)} x_2^{(7)} \xi_{(11)}^{11} - 4 x_1^{(3)} x_2^{(7)} \xi_{(11)}^{12} - x_1^{(3)} x_2^{(7)} \xi_{(11)}^{13} + x_1^{(3)} x_2^{(7)} \xi_{(11)}^{14} - x_1^{(3)} x_3^{(7)} \xi_{(11)}^{18} \\&\quad + x_1^{(3)} x_3^{(7)} \xi_{(11)}^{19} - x_1^{(3)} x_4^{(7)} \xi_{(11)}^{20} + x_1^{(3)} x_4^{(7)} \xi_{(11)}^{22} - x_1^{(3)} x_5^{(7)} \xi_{(11)}^{23} + x_1^{(3)} x_5^{(7)} \xi_{(11)}^{24} - 4 x_1^{(4)} x_1^{(6)} \xi_{(11)}^3 \\&\quad + 2 x_1^{(4)} x_1^{(6)} \xi_{(11)}^4 + 6 x_1^{(4)} x_2^{(6)} \xi_{(11)}^1 + 18 x_1^{(4)} x_2^{(6)} \xi_{(11)}^5 - 18 x_1^{(4)} x_2^{(6)} \xi_{(11)}^6 + 36 x_1^{(4)} x_2^{(6)} \xi_{(11)}^7 + 12 x_1^{(4)} x_2^{(6)} \xi_{(11)}^8 \\&\quad - 12 x_1^{(4)} x_2^{(6)} \xi_{(11)}^9 - 2 x_1^{(4)} x_3^{(6)} \xi_{(11)}^1 - 9 x_1^{(4)} x_3^{(6)} \xi_{(11)}^5 + 6 x_1^{(4)} x_3^{(6)} \xi_{(11)}^6 - 12 x_1^{(4)} x_3^{(6)} \xi_{(11)}^7 - 6 x_1^{(4)} x_3^{(6)} \xi_{(11)}^8 \\&\quad + 6 x_1^{(4)} x_3^{(6)} \xi_{(11)}^9 + 6 x_1^{(4)} x_4^{(6)} \xi_{(11)}^2 - 6 x_1^{(4)} x_4^{(6)} \xi_{(11)}^{10} + 6 x_1^{(4)} x_4^{(6)} \xi_{(11)}^{11} + 15 x_1^{(4)} x_4^{(6)} \xi_{(11)}^{12} + 3 x_1^{(4)} x_4^{(6)} \xi_{(11)}^{13} \\&\quad - 2 x_1^{(4)} x_4^{(6)} \xi_{(11)}^{14} - 3 x_1^{(5)} x_2^{(5)} \xi_{(11)}^1 - 9 x_1^{(5)} x_2^{(5)} \xi_{(11)}^5 + 6 x_1^{(5)} x_2^{(5)} \xi_{(11)}^6 - 12 x_1^{(5)} x_2^{(5)} \xi_{(11)}^7 - 6 x_1^{(5)} x_2^{(5)} \xi_{(11)}^8 \\&\quad + 6 x_1^{(5)} x_2^{(5)} \xi_{(11)}^9 - 3 x_1^{(5)} x_3^{(5)} \xi_{(11)}^2 + x_1^{(5)} x_3^{(5)} \xi_{(11)}^{10} + 2 x_1^{(5)} x_3^{(5)} \xi_{(11)}^{11} - 7 x_1^{(5)} x_3^{(5)} \xi_{(11)}^{12} - x_1^{(5)} x_3^{(5)} \xi_{(11)}^{13} \\&\quad + x_1^{(5)} x_3^{(5)} \xi_{(11)}^{14} + 2 x_2^{(1)} x_1^{(9)} \xi_{(11)}^3 - 2 x_2^{(1)} x_1^{(9)} \xi_{(11)}^4 + x_2^{(1)} x_2^{(9)} \xi_{(11)}^1 + 3 x_2^{(1)} x_2^{(9)} \xi_{(11)}^5 - 3 x_2^{(1)} x_2^{(9)} \xi_{(11)}^6 \\&\quad + 3 x_2^{(1)} x_2^{(9)} \xi_{(11)}^7 + 3 x_2^{(1)} x_2^{(9)} \xi_{(11)}^8 - 3 x_2^{(1)} x_2^{(9)} \xi_{(11)}^9 - 3 x_2^{(1)} x_3^{(9)} \xi_{(11)}^5 + x_2^{(1)} x_4^{(9)} \xi_{(11)}^2 - x_2^{(1)} x_4^{(9)} \xi_{(11)}^{10} \\&\quad + x_2^{(1)} x_4^{(9)} \xi_{(11)}^{11} + x_2^{(1)} x_4^{(9)} \xi_{(11)}^{12} + x_2^{(1)} x_4^{(9)} \xi_{(11)}^{13} - x_2^{(1)} x_4^{(9)} \xi_{(11)}^{14} - 2 x_2^{(1)} x_5^{(9)} \xi_{(11)}^4 - 3 x_2^{(1)} x_6^{(9)} \xi_{(11)}^9 \\&\quad - x_2^{(1)} x_7^{(9)} \xi_{(11)}^{14} - x_2^{(1)} x_8^{(9)} \xi_{(11)}^{17} - x_2^{(1)} x_9^{(9)} \xi_{(11)}^{18} - x_2^{(1)} x_9^{(9)} \xi_{(11)}^{19} - 2 x_2^{(1)} x_{10}^{(9)} \xi_{(11)}^{19} + x_2^{(1)} x_{11}^{(9)} \xi_{(11)}^{20} \\&\quad - x_2^{(1)} x_{11}^{(9)} \xi_{(11)}^{21} - x_2^{(1)} x_{11}^{(9)} \xi_{(11)}^{22} - 3 x_2^{(1)} x_{12}^{(9)} \xi_{(11)}^{22} + x_2^{(1)} x_{13}^{(9)} 
\xi_{(11)}^{23} - 2 x_2^{(1)} x_{13}^{(9)} \xi_{(11)}^{24} \\&\quad - 6 x_2^{(4)} x_1^{(6)} \xi_{(11)}^1 - 9 x_2^{(4)} x_1^{(6)} \xi_{(11)}^5 + 18 x_2^{(4)} x_1^{(6)} \xi_{(11)}^6 - 30 x_2^{(4)} x_1^{(6)} \xi_{(11)}^7 - 12 x_2^{(4)} x_1^{(6)} \xi_{(11)}^8 + 9 x_2^{(4)} x_1^{(6)} \xi_{(11)}^9\\ &\quad + x_2^{(4)} x_2^{(6)} \xi_{(11)}^{10} - 4 x_2^{(4)} x_2^{(6)} \xi_{(11)}^{11} + 5 x_2^{(4)} x_2^{(6)} \xi_{(11)}^{12} - x_2^{(4)} x_2^{(6)} \xi_{(11)}^{13} - 2 x_2^{(4)} x_3^{(6)} \xi_{(11)}^2 + x_2^{(4)} x_3^{(6)} \xi_{(11)}^{10} \\&\quad + 2 x_2^{(4)} x_3^{(6)} \xi_{(11)}^{11} - 7 x_2^{(4)} x_3^{(6)} \xi_{(11)}^{12} - x_2^{(4)} x_3^{(6)} \xi_{(11)}^{13} + x_2^{(4)} x_3^{(6)} \xi_{(11)}^{14} - x_2^{(4)} x_4^{(6)} \xi_{(11)}^{15} + 2 x_2^{(4)} x_4^{(6)} \xi_{(11)}^{16} \\&\quad + 2 x_2^{(5)} x_3^{(5)} \xi_{(11)}^{20} - x_2^{(5)} x_3^{(5)} \xi_{(11)}^{21} + x_2^{(5)} x_3^{(5)} \xi_{(11)}^{22} - x_4 x_4^{(10)} \xi_{(11)}^3 + x_4 x_4^{(10)} \xi_{(11)}^4 - 5 x_4 x_5^{(10)} \xi_{(11)}^3 + x_4 x_6^{(10)} \xi_{(11)}^4 \\&\quad - x_4 x_8^{(10)} \xi_{(11)}^5 + x_4 x_8^{(10)} \xi_{(11)}^6 - x_4 x_8^{(10)} \xi_{(11)}^7 - x_4 x_8^{(10)} \xi_{(11)}^8 + x_4 x_8^{(10)} \xi_{(11)}^9 + x_4 x_9^{(10)} \xi_{(11)}^5 - 5 x_4 x_{10}^{(10)} \xi_{(11)}^6 \\&\quad + 5 x_4 x_{10}^{(10)} \xi_{(11)}^7 - x_4 x_{10}^{(10)} \xi_{(11)}^8 - 6 x_4 x_{11}^{(10)} \xi_{(11)}^8 + x_4 x_{12}^{(10)} \xi_{(11)}^9 + x_4 x_{14}^{(10)} \xi_{(11)}^{10} - x_4 x_{14}^{(10)} \xi_{(11)}^{11} \\&\quad - x_4 x_{14}^{(10)} \xi_{(11)}^{12} - x_4 x_{14}^{(10)} \xi_{(11)}^{13} + x_4 x_{14}^{(10)} \xi_{(11)}^{14} - 5 x_4 x_{15}^{(10)} \xi_{(11)}^{10} + 5 x_4 x_{15}^{(10)} \xi_{(11)}^{11} + 5 x_4 x_{15}^{(10)} \xi_{(11)}^{12} \\&\quad - x_4 x_{15}^{(10)} \xi_{(11)}^{13} - 3 x_4 x_{16}^{(10)} \xi_{(11)}^{13} + x_4 x_{17}^{(10)} \xi_{(11)}^{14} - 5 x_4 x_{18}^{(10)} \xi_{(11)}^{15} + 5 x_4 x_{18}^{(10)} \xi_{(11)}^{16} + x_2 x_3 x_1^{(10)} \xi_{(11)}^1 \\&\quad + x_2 x_3 x_2^{(10)} \xi_{(11)}^2 + x_2 x_3 x_3^{(10)} \xi_{(11)}^{17} + x_2 x_3 x_5^{(10)} \xi_{(11)}^{18} + x_2 x_3 x_7^{(10)} \xi_{(11)}^{19} + x_2 x_3 x_{10}^{(10)} \xi_{(11)}^{20} + x_2 x_3 x_{11}^{(10)} \xi_{(11)}^{21} \\&\quad + x_2 x_3 x_{13}^{(10)} \xi_{(11)}^{22} + x_2 x_3 x_{15}^{(10)} \xi_{(11)}^{23} + x_2 x_3 x_{16}^{(10)} \xi_{(11)}^{24} + x_2 x_3 x_{18}^{(10)} \xi_{(11)}^{25}. 
\end{align*} We were able to determine $\pi_m$ for $1\le m\le 12$: \begin{align*} \pi_1 &= x_1 x_2 \xi^1 \xi^2 - x_1 x_3 \xi^1 \xi^3 + x_2 x_3 \xi^2 \xi^3 - x_2 x_4 \xi^2 \xi^4 + x_3 x_4 \xi^3 \xi^4, \quad \pi_2 = 0, \quad \pi_3 = x_3 x_1^{(2)} \xi^3 \xi_{(2)}^1 - x_2 x_1^{(2)} \xi^2 \xi_{(2)}^1, \\ \pi_4 &= 0,\quad \pi_5 = - x_2 x_1^{(4)} \xi^2 \xi_{(4)}^1 - x_2 x_2^{(4)} \xi^2 \xi_{(4)}^2 + x_3 x_1^{(4)} \xi^3 \xi_{(4)}^1 + x_3 x_2^{(4)} \xi^3 \xi_{(4)}^2, \quad \pi_6 = 2 x_2 x_1^{(5)} \xi^2 \xi_{(5)}^1 - 2 x_3 x_1^{(5)} \xi^3 \xi_{(5)}^1, \\ \pi_7 &= - x_2 x_1^{(6)} \xi^2 \xi_{(6)}^1 - x_2 x_2^{(6)} \xi^2 \xi_{(6)}^2 - x_2 x_3^{(6)} \xi^2 \xi_{(6)}^3 - x_2 x_4^{(6)} \xi^2 \xi_{(6)}^4 + x_3 x_1^{(6)} \xi^3 \xi_{(6)}^1 + x_3 x_2^{(6)} \xi^3 \xi_{(6)}^2 \\&\quad + x_3 x_3^{(6)} \xi^3 \xi_{(6)}^3 + x_3 x_4^{(6)} \xi^3 \xi_{(6)}^4, \\ \pi_8 &= 2 x_2 x_1^{(7)} \xi^2 \xi_{(7)}^1 + 2 x_2 x_2^{(7)} \xi^2 \xi_{(7)}^2 - 2 x_3 x_1^{(7)} \xi^3 \xi_{(7)}^1 - 2 x_3 x_2^{(7)} \xi^3 \xi_{(7)}^2, \\ \pi_9 &= - x_2 x_1^{(8)} \xi^2 \xi_{(8)}^1 - x_2 x_2^{(8)} \xi^2 \xi_{(8)}^2 - x_2 x_3^{(8)} \xi^2 \xi_{(8)}^3 - x_2 x_4^{(8)} \xi^2 \xi_{(8)}^4 - x_2 x_5^{(8)} \xi^2 \xi_{(8)}^5 - x_2 x_6^{(8)} \xi^2 \xi_{(8)}^6 \\&\quad - x_2 x_7^{(8)} \xi^2 \xi_{(8)}^7 - x_2 x_8^{(8)} \xi^2 \xi_{(8)}^8 + x_3 x_1^{(8)} \xi^3 \xi_{(8)}^1 + x_3 x_2^{(8)} \xi^3 \xi_{(8)}^2 + x_3 x_3^{(8)} \xi^3 \xi_{(8)}^3 + x_3 x_4^{(8)} \xi^3 \xi_{(8)}^4 \\&\quad + x_3 x_5^{(8)} \xi^3 \xi_{(8)}^5 + x_3 x_6^{(8)} \xi^3 \xi_{(8)}^6 + x_3 x_7^{(8)} \xi^3 \xi_{(8)}^7 + x_3 x_8^{(8)} \xi^3 \xi_{(8)}^8, \\ \pi_{10} &= 2 x_2 x_1^{(9)} \xi^2 \xi_{(9)}^1 + 2 x_2 x_2^{(9)} \xi^2 \xi_{(9)}^2 + 2 x_2 x_3^{(9)} \xi^2 \xi_{(9)}^3 + 2 x_2 x_4^{(9)} \xi^2 \xi_{(9)}^4 + 2 x_2 x_5^{(9)} \xi^2 \xi_{(9)}^5 \\&\quad + 2 x_2 x_6^{(9)} \xi^2 \xi_{(9)}^6 + 2 x_2 x_7^{(9)} \xi^2 \xi_{(9)}^7 - 2 x_3 x_1^{(9)} \xi^3 \xi_{(9)}^1 - 2 x_3 x_2^{(9)} \xi^3 \xi_{(9)}^2 - 2 x_3 x_3^{(9)} \xi^3 \xi_{(9)}^3 - 2 x_3 x_4^{(9)} \xi^3 \xi_{(9)}^4 \\&\quad - 2 x_3 x_5^{(9)} \xi^3 \xi_{(9)}^5 - 2 x_3 x_6^{(9)} \xi^3 \xi_{(9)}^6 - 2 x_3 x_7^{(9)} \xi^3 \xi_{(9)}^7, \\ \pi_{11} &= - 3 x_2 x_1^{(10)} \xi^2 \xi_{(10)}^1 - 3 x_2 x_2^{(10)} \xi^2 \xi_{(10)}^2 - x_2 x_3^{(10)} \xi^2 \xi_{(10)}^3 - x_2 x_4^{(10)} \xi^2 \xi_{(10)}^4 - x_2 x_5^{(10)} \xi^2 \xi_{(10)}^5 \\&\quad - x_2 x_6^{(10)} \xi^2 \xi_{(10)}^6 - x_2 x_7^{(10)} \xi^2 \xi_{(10)}^7 - x_2 x_8^{(10)} \xi^2 \xi_{(10)}^8 - x_2 x_9^{(10)} \xi^2 \xi_{(10)}^9 - x_2 x_{10}^{(10)} \xi^2 \xi_{(10)}^{10} - x_2 x_{11}^{(10)} \xi^2 \xi_{(10)}^{11} \\&\quad - x_2 x_{12}^{(10)} \xi^2 \xi_{(10)}^{12} - x_2 x_{13}^{(10)} \xi^2 \xi_{(10)}^{13} - x_2 x_{14}^{(10)} \xi^2 \xi_{(10)}^{14} - x_2 x_{15}^{(10)} \xi^2 \xi_{(10)}^{15} - x_2 x_{16}^{(10)} \xi^2 \xi_{(10)}^{16} \\&\quad - x_2 x_{17}^{(10)} \xi^2 \xi_{(10)}^{17} - x_2 x_{18}^{(10)} \xi^2 \xi_{(10)}^{18} + 3 x_3 x_1^{(10)} \xi^3 \xi_{(10)}^1 + 3 x_3 x_2^{(10)} \xi^3 \xi_{(10)}^2 + x_3 x_3^{(10)} \xi^3 \xi_{(10)}^3 \\&\quad + x_3 x_4^{(10)} \xi^3 \xi_{(10)}^4 + x_3 x_5^{(10)} \xi^3 \xi_{(10)}^5 + x_3 x_6^{(10)} \xi^3 \xi_{(10)}^6 + x_3 x_7^{(10)} \xi^3 \xi_{(10)}^7 + x_3 x_8^{(10)} \xi^3 \xi_{(10)}^8 + x_3 x_9^{(10)} \xi^3 \xi_{(10)}^9 \\&\quad + x_3 x_{10}^{(10)} \xi^3 \xi_{(10)}^{10} + x_3 x_{11}^{(10)} \xi^3 \xi_{(10)}^{11} + x_3 x_{12}^{(10)} \xi^3 \xi_{(10)}^{12} + x_3 x_{13}^{(10)} \xi^3 \xi_{(10)}^{13} + x_3 x_{14}^{(10)} \xi^3 \xi_{(10)}^{14} \\&\quad + x_3 x_{15}^{(10)} \xi^3 \xi_{(10)}^{15} + x_3 x_{16}^{(10)} \xi^3 \xi_{(10)}^{16} + x_3 x_{17}^{(10)} \xi^3 \xi_{(10)}^{17} + x_3 
x_{18}^{(10)} \xi^3 \xi_{(10)}^{18}, \\ \pi_{12} &= 2 x_2 x_1^{(11)} \xi^2 \xi_{(11)}^1 + 2 x_2 x_2^{(11)} \xi^2 \xi_{(11)}^2 + 2 x_2 x_3^{(11)} \xi^2 \xi_{(11)}^3 + 2 x_2 x_4^{(11)} \xi^2 \xi_{(11)}^4 + 2 x_2 x_5^{(11)} \xi^2 \xi_{(11)}^5 \\&\quad + 2 x_2 x_6^{(11)} \xi^2 \xi_{(11)}^6 + 2 x_2 x_7^{(11)} \xi^2 \xi_{(11)}^7 + 2 x_2 x_8^{(11)} \xi^2 \xi_{(11)}^8 + 2 x_2 x_9^{(11)} \xi^2 \xi_{(11)}^9 + 2 x_2 x_{10}^{(11)} \xi^2 \xi_{(11)}^{10} \\&\quad + 2 x_2 x_{11}^{(11)} \xi^2 \xi_{(11)}^{11} + 2 x_2 x_{12}^{(11)} \xi^2 \xi_{(11)}^{12} + 2 x_2 x_{13}^{(11)} \xi^2 \xi_{(11)}^{13} + 2 x_2 x_{14}^{(11)} \xi^2 \xi_{(11)}^{14} + 2 x_2 x_{15}^{(11)} \xi^2 \xi_{(11)}^{15} \\&\quad + 2 x_2 x_{16}^{(11)} \xi^2 \xi_{(11)}^{16} - 2 x_3 x_1^{(11)} \xi^3 \xi_{(11)}^1 - 2 x_3 x_2^{(11)} \xi^3 \xi_{(11)}^2 - 2 x_3 x_3^{(11)} \xi^3 \xi_{(11)}^3 - 2 x_3 x_4^{(11)} \xi^3 \xi_{(11)}^4 \\&\quad - 2 x_3 x_5^{(11)} \xi^3 \xi_{(11)}^5 - 2 x_3 x_6^{(11)} \xi^3 \xi_{(11)}^6 - 2 x_3 x_7^{(11)} \xi^3 \xi_{(11)}^7 - 2 x_3 x_8^{(11)} \xi^3 \xi_{(11)}^8 - 2 x_3 x_9^{(11)} \xi^3 \xi_{(11)}^9 - 2 x_3 x_{10}^{(11)} \xi^3 \xi_{(11)}^{10} \\&\quad - 2 x_3 x_{11}^{(11)} \xi^3 \xi_{(11)}^{11} - 2 x_3 x_{12}^{(11)} \xi^3 \xi_{(11)}^{12} - 2 x_3 x_{13}^{(11)} \xi^3 \xi_{(11)}^{13} - 2 x_3 x_{14}^{(11)} \xi^3 \xi_{(11)}^{14} - 2 x_3 x_{15}^{(11)} \xi^3 \xi_{(11)}^{15} \\&\quad - 2 x_3 x_{16}^{(11)} \xi^3 \xi_{(11)}^{16}. \end{align*} \endgroup In this example the $\pi_m$'s appear to have a diagonal shape and are remarkably simple in comparison to the coefficients of $\pi_0$. We observe that $\llbracket \pi_i,\pi_j\rrbracket=0$ for $1\le i,j \le 12$. Even though $S/I$ is not Koszul, the $\pi_m$ are homogeneous of $x$-degree $2$ for $1\le m\le 12$. Note that $\pi_0^{\le 11}$ is not of homogeneous $x$-degree. \subsection{A summary of the data for more contrived examples}\label{subsec:contrived} For more interesting examples the calculations quickly become overwhelming, and for lack of space we cannot present the details of our findings here; they are available upon request from the authors. Let us just record how far we could get with the examples of Section \ref{sec:affine}. \begin{align*} \begin{tabular}{c||c|c|c} Example & $\dim S$ & $|\mathcal I_m|$, $m=1,2,\dots$ & higher brack. computed\\\hline\hline $(-1,1,1)$-circle quotient (see Subsec. \ref{subsubsec:-1,1,1}) & $8$& $9,16,45,\dots$ & $\pi_1$, $\pi_2$, $\pi_3$, $\pi_4$\\\hline $2$ particles with zero ang. momentum (see Subsec. \ref{subsubsec:angmom}) & $10$ & $11,10,10,\dots$ & $\pi_1$, $\pi_2$, $\pi_3^{02}$, $\pi_3^{011}$\\\hline deg. $2$ harm. polynomials of $3$ var. (see Subsec. \ref{subsubsec:harmonic}) & $3$ & $5,5,10,24,55,\dots$ & $\pi_1$, $\pi_2$, $\pi_3$, $\pi_4$, $\pi_5$\\\hline $2\times 2$-minors of a $3\times 3$-matrix (see Subsec. \ref{subsubsec:detideal}) & $9$& $9,16,45,\dots$ & $\pi_1$, $\pi_2$, $\pi_3$, $\pi_4$ \end{tabular} \end{align*} The example of two particles in dimension three (see Subsec. \ref{subsubsec:angmom}) is not Koszul, and $\pi_2$ and $\pi_3$ are not of homogeneous $x$-degree.
1,116,691,500,533
arxiv
\section{Introduction}\label{sec:introduction} Supervised learning is the mainstream approach in pattern recognition, computer vision and natural language processing nowadays due to the great success of deep learning. On one hand, the performance of a learning system should improve as the number of training samples increases. On the other hand, some learning systems may benefit more than others from a large number of training samples. For example, deep neural networks (DNNs) often work better than classical learning systems that consist of two stages: feature extraction and classification. How the quantity of labeled samples affects the performance of learning systems is an important question in the data-driven era. Is it possible to design a supervision-scalable learning system? We attempt to shed light on these questions by choosing the image classification problem as an illustrative example in this work. Strong supervision is costly in practice since data labeling demands a lot of time and resources. Besides, it is unlikely that one can collect and label the desired training samples in all possible scenarios. Even with a huge amount of labeled data in place, it may still be substantially less than the need. Weak supervision can appear in different forms, e.g., inexact supervision, inaccurate supervision, and incomplete supervision. Labels are provided at a coarse grain (instead of the instance level) in inexact supervision. One example is multi-instance learning \cite{foulds2010review, wei2016scalable}. In inaccurate supervision, the provided labels may suffer from labeling errors, leading to the noisy label problem in supervised learning \cite{angluin1988learning,frenay2013classification}. Only a limited number of labeled samples is available to the training process in incomplete supervision \cite{zhou2018brief,zhang2021learning}. Here, we consider the scenario of incomplete supervision. To improve learning performance under incomplete supervision, solutions such as semi-supervised learning and active learning have been developed. In semi-supervised learning, both labeled and unlabeled data are utilized to achieve better performance \cite{chapelle2009semi,zhu2009introduction,zhou2010semi}. It is built upon several assumptions such as the smoothness, low-density, and manifold assumptions \cite{van2020survey}. Active learning attempts to expand the labeled data set by identifying the important unlabeled instances that will help boost the learning performance most~\cite{settles2009active,haussmann2020scalable}. Another related technology is few-shot learning (FSL) \cite{fink2004object}, which learns from a very limited number of labeled samples without the help of unlabeled data. For example, an \textit{N}-way-\textit{K}-shot classification task refers to a labeled set with \textit{K} samples from each of the \textit{N} classes. Meta learning is often used to solve the FSL problem \cite{sun2019meta,chen2021meta}. Humans can learn effectively in a weakly supervised setting. In contrast, deep learning networks often need more labeled data to achieve good performance. What makes weak supervision and strong supervision different? There is little study on the design of supervision-scalable learning systems. In this work, we show the design of two learning systems that demonstrate excellent scalable performance with respect to various supervision degrees. The first one adopts the classical histogram of oriented gradients (HOG) \cite{dalal2005histograms} features while the second one uses successive-subspace-learning (SSL) features. 
We discuss ways to adjust each module so that their designs are more robust against the number of training samples. To illustrate their robust performance, we compare with the performance of LeNet-5, which is an end-to-end optimized neural network, on the MNIST and Fashion-MNIST datasets. The number of training samples per image class goes from the extremely weak supervision condition (i.e., 1 labeled sample per class) to the strong supervision condition (i.e., 4096 labeled samples per class) with a gradual transition in between (i.e., $2^n$, $n=0, 1, \cdots, 12$). Experimental results show that the two families of modularized learning systems have more robust performance than LeNet-5. They both outperform LeNet-5 by a large margin for small $n$ and have performance comparable with that of LeNet-5 for large $n$. The rest of the paper is organized as follows. The design of HOG-based learning systems is examined in Sec.~\ref{sec:method-HOG}, where two methods, called HOG-I and HOG-II, are proposed. The design of SSL-based learning systems is investigated in Sec.~\ref{sec:method}, where two methods, called IPHop-I and IPHop-II, are presented. Performance benchmarking of HOG-I, HOG-II, IPHop-I, IPHop-II and LeNet-5 is conducted in Sec.~\ref{sec:experiments}. Discussion on experimental results is given in Sec.~\ref{sec:discussion}. Finally, concluding remarks and future work are given in Sec.~\ref{sec:conclusion}. \begin{figure*}[tbp] \centerline{\includegraphics[width=1.0\linewidth]{figures/hog_pipeline.png}} \caption{An overview of the HOG-based learning system, where the input image is of size $28 \times 28$.}\label{fig:hog_pipe} \end{figure*} \section{Design of Learning Systems with HOG Features}\label{sec:method-HOG} Classical pattern recognition methods consist of two steps: feature extraction and classification. One well-known feature extraction method is the Histogram of Oriented Gradients (HOG) \cite{dalal2005histograms}. Before the big data era, most datasets were small in terms of the numbers of training and test samples. As a result, HOG-based solutions were typically applied to small datasets. To make HOG-based solutions scalable to larger datasets, some modifications have to be made. In this section, we propose two HOG-based learning systems, HOG-I and HOG-II. They are suitable for small and large training sizes, respectively. \subsection{Design of Three Modular Components}\label{sec:3_modules} As mentioned earlier, we focus on the design of a modularized system that can be decomposed into three modules: representation learning, feature learning and decision learning. We will examine them one by one below. \subsubsection{Representation Learning}\label{sec:hog_module1} HOG was originally proposed for human detection in \cite{dalal2005histograms}. It measures the oriented gradient distribution in different orientation bins evenly spaced over 360 degrees at each local region of the input image. Modifications are made to make the HOG representation more powerful for multi-class recognition. Images in the MNIST and the Fashion-MNIST datasets have resolution $28\times 28$ without padding. The proposed HOG representation scheme for them is illustrated in Fig.~\ref{fig:hog_pipe}. As shown in the figure, both spatial and spectral HOG representations are considered. Hyper-parameters used in the experiments are specified below. First, we decompose an input image into $7\times 7$ overlapping patches with stride $3$, leading to $8 \times 8=64$ patches. 
HOG is computed within each patch and the number of orientation bins is set to $8$. For each orientation bin, there are $8\times 8$ responses in the spatial domain. Thus, each image has a 512-D spatial HOG representation vector. It is called the spatial HOG representation since each element in the vector captures the probability of a certain oriented gradient in a local region. Next, for each bin, we apply the 2D discrete cosine transform (DCT) to the $8 \times 8$ spatial responses to derive the spectral representation. The DCT converts 64 spatial responses to 64 spectral responses. This is called the spectral HOG representation. Each image has a 512-D spectral HOG representation vector, too. We combine spatial and spectral HOG representations to yield a vector of 1024 dimensions as the joint spatial/spectral HOG features.
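To make the representation concrete, the following is our own minimal sketch (not the authors' released code) of the joint spatial/spectral HOG computation described above, using \texttt{numpy} and \texttt{scipy}; details such as the gradient operator and the magnitude-weighted binning are our assumptions and may differ from the exact implementation.
\begin{verbatim}
import numpy as np
from scipy.fft import dctn

def joint_hog_features(img):
    """Joint spatial/spectral HOG of a 28x28 grayscale image:
    7x7 patches with stride 3 -> an 8x8 patch grid; 8 orientation
    bins per patch -> 512-D spatial HOG; a 2D DCT over the 8x8
    spatial grid of each bin -> 512-D spectral HOG."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8

    spatial = np.zeros((8, 8, 8))                  # (row, col, bin)
    for i in range(8):                             # patch grid, stride 3
        for j in range(8):
            pm = mag[3*i:3*i+7, 3*j:3*j+7]         # 7x7 patch
            pb = bins[3*i:3*i+7, 3*j:3*j+7]
            for b in range(8):                     # magnitude-weighted bins
                spatial[i, j, b] = pm[pb == b].sum()

    # spectral HOG: 2D DCT of the 8x8 spatial responses of each bin
    spectral = np.stack([dctn(spatial[:, :, b], norm='ortho')
                         for b in range(8)], axis=-1)
    return np.concatenate([spatial.ravel(), spectral.ravel()])  # 1024-D
\end{verbatim}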
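The following is a minimal sketch of the DFT scoring defined in Eqs. (\ref{eq:entropy}) and (\ref{eq:optimized_entropy}); the candidate partition set $T$ (uniformly spaced points over the feature range) is our own assumption, not necessarily the original paper's choice.
\begin{verbatim}
import numpy as np

def dft_scores(X, y, n_candidates=32):
    """DFT loss per feature: the lower the weighted entropy, the
    more discriminant the feature. X: (n_samples, n_features),
    y: integer class labels."""
    def entropy(labels):
        if labels.size == 0:
            return 0.0
        p = np.bincount(labels) / labels.size
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    n, d = X.shape
    scores = np.empty(d)
    for i in range(d):
        f = X[:, i]
        # candidate thresholds T: uniformly spaced interior points
        ts = np.linspace(f.min(), f.max(), n_candidates + 1)[1:-1]
        best = np.inf
        for t in ts:
            left, right = y[f <= t], y[f > t]
            h = (left.size * entropy(left)
                 + right.size * entropy(right)) / n
            best = min(best, h)
        scores[i] = best
    return scores

# keep the top-K most discriminant features (smallest DFT loss):
# selected = np.argsort(dft_scores(X_train, y_train))[:K]
\end{verbatim}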
\subsubsection{Decision Learning}\label{sec:hog_module3} We consider two classifiers: the k-nearest-neighbor (KNN) classifier and the eXtreme Gradient Boosting (XGBoost~\cite{xgb}) classifier. In a weakly supervised setting with a small number of training samples, the options are very limited and a distance-based classifier is a reasonable choice. When the training set becomes larger, we can use a more powerful supervised classifier to yield better classification performance. The XGBoost classifier is a representative one. \begin{figure*}[tbp] \centering \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/hog_combination_mnist.png} \caption{MNIST Dataset} \end{subfigure} \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/hog_combination_fashion.png} \caption{Fashion-MNIST Dataset} \end{subfigure} \caption{Performance comparison of HOG-based learning systems on MNIST and Fashion-MNIST datasets under four different combinations among two feature learning methods (variance thresholding and DFT) and two classifiers (KNN and XGBoost) as a function of the training sample number per class in the log scale.}\label{fig:hog_combination} \end{figure*} \subsection{HOG-I and HOG-II}\label{sec:hog_12} Based on the three modules introduced in Sec. \ref{sec:3_modules}, we propose two HOG-based learning systems below. \begin{enumerate} \item HOG-I \begin{itemize} \item Objective: targeting weaker supervision \item Representation Learning: HOG features \item Feature Learning: variance thresholding \item Decision Learning: KNN \end{itemize} \item HOG-II \begin{itemize} \item Objective: targeting stronger supervision \item Representation Learning: HOG features \item Feature Learning: DFT \item Decision Learning: XGBoost \end{itemize} \end{enumerate} \begin{figure*}[tbp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/hog_FeatAblation_mnist_1.png} \caption{Weak Supervision with HOG-I} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/hog_FeatAblation_mnist_2.png} \caption{Strong Supervision with HOG-II} \end{subfigure} \caption{Performance comparison of spatial, spectral and joint HOG features on MNIST under weak and strong supervision conditions.}\label{fig:hog_mnist} \end{figure*} \begin{figure*}[tbp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/hog_FeatAblation_fashion_1.png} \caption{Weak Supervision with HOG-I} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/hog_FeatAblation_fashion_2.png} \caption{Strong Supervision with HOG-II} \end{subfigure} \caption{Performance comparison of spatial, spectral and joint HOG features on Fashion-MNIST under weak and strong supervision conditions.}\label{fig:hog_fashion} \end{figure*} To justify these two designs, we conduct experiments on MNIST and Fashion-MNIST to gain more insights. By adopting the joint HOG features, we consider four combinations of two feature learning methods (variance thresholding and DFT) and two classifiers (KNN and XGBoost). The test classification accuracy on MNIST and Fashion-MNIST, as a function of the number of training samples per class, $N_c$, is compared in Fig. \ref{fig:hog_combination}, whose x-axis is in the unit of $n_c=\log_2 N_c$. The primary factor in the performance is the choice of classifier. When $N_c \le 2^3=8$, the KNN classifier outperforms the XGBoost classifier by an obvious margin. If $N_c \ge 2^6=64$, the XGBoost classifier offers better performance. There is a crossover region between $N_c=2^4=16$ and $N_c=2^5=32$. Furthermore, by zooming into the weak supervision region with $N_c \le 8$, we see that the variance thresholding feature selection method is better than the DFT feature selection method. 
On the other hand, in the stronger supervision region with $N_c \ge 32$, the DFT feature selection method is better than the variance thresholding feature selection method. For this reason, we use the combination of variance thresholding and KNN in HOG-I and adopt the combination of DFT and XGBoost in HOG-II. They target the weaker and stronger supervision cases, respectively. The phenomenon observed in Fig. \ref{fig:hog_combination} can be explained as follows. When the supervision degree is weak, it is difficult to build meaningful data models (e.g., low-dimensional manifolds in a high-dimensional representation space). Variance thresholding and KNN are classical feature selection and classification methods derived from the Euclidean distance, respectively. When the supervision degree becomes stronger, it is feasible to build more meaningful data models. The Euclidean distance measure is too simple to capture the data manifold information. Instead, DFT and XGBoost can leverage the manifold structure for better feature selection and decision making, respectively. Next, we compare the performance of spatial, spectral and joint HOG features for HOG-I and HOG-II under their preferred supervision ranges for MNIST and Fashion-MNIST. Their results are shown in Figs. \ref{fig:hog_mnist} and \ref{fig:hog_fashion}, respectively. We have the following observations. First, the performance gap between spatial and spectral HOG features is small under weak supervision (up to $2^4=16$ training samples per class). The performance gap becomes larger if the training sample number per class is greater than or equal to $2^5=32$. Second, the joint HOG features provide the best overall performance. This is not a surprise since the set of joint HOG features contains the spatial and spectral HOG features as its two subsets. Here, we would like to point out that the performance gap between the joint HOG features and the spatial HOG features is larger for smaller $N_c$. It means that the spectral HOG features do complement the spatial HOG features and contribute to the performance gain. The value of spectral HOG features diminishes when $N_c$ is sufficiently large. We may give the following explanation for the phenomena observed in Figs. \ref{fig:hog_mnist} and \ref{fig:hog_fashion}. First, by performing the DCT on the histogram of each bin over the $8 \times 8$ patches, the values of low-frequency DCT coefficients are larger due to energy compaction. Their values for the same object class are relatively stable regardless of the supervision degree. In contrast, the spatial HOG features are distributed over the whole image. They are more sensitive to the local variation of each individual sample. Thus, when the supervision is weak, HOG-I can benefit from spectral HOG features. Second, as the supervision becomes stronger, the situation is different. Although the HOG of a single patch provides only local information, we can obtain both local and global information by concatenating spatial HOG features across all patches. The HOG at the same patch location could be noisy (i.e., varying from one sample to another). Yet, the variation can be filtered out by DFT and XGBoost. On the other hand, the values of high-frequency DCT coefficients are small and many of them are close to zero because of energy compaction. Thus, spectral HOG features are not as discriminant as spatial HOG features under strong supervision. 
\begin{figure*}[tbp] \centerline{\includegraphics[width=1.0\linewidth]{figures/iphop_pipeline.png}} \caption{An overview of the SSL-based learning system, where the input image is of size $32 \times 32$.}\label{fig:pipeline} \end{figure*} \section{Design of Learning Systems with SSL Features}\label{sec:method} Successive subspace learning (SSL) was recently introduced in \cite{kuo2016understanding, kuo2017cnn, kuo2018data,kuo2019interpretable}. The technique has been applied to many applications such as point cloud classification, segmentation and registration \cite{zhang2020pointhop, zhang2020pointhop++, zhang2020unsupervised, kadam2020unsupervised, kadam2022r}, face recognition \cite{rouhsedaghat2020facehop, rouhsedaghat2021successive}, deepfake detection \cite{chen2021defakehop}, anomaly detection \cite{zhang2021anomalyhop}, etc. SSL-based object classification work can be found in~\cite{chen2020pixelhop, chen2020pixelhop++,Yang2021EPixelHopAE}. We propose two improved PixelHop (IPHop) learning systems and name them IPHop-I and IPHop-II in this section. The system diagram of IPHop-I/II is shown in the left subfigure of Fig.~\ref{fig:pipeline}. It consists of three modules: 1) unsupervised representation learning based on SSL features, 2) semi-supervised feature learning, and 3) supervised decision learning. Since its modules 2 and 3 are basically the same as those in the HOG-based learning systems, we will primarily focus on the representation learning in Sec. \ref{sec:iphop_module1}. Afterwards, we compare the performance of IPHop-I and IPHop-II under weak and strong supervision scenarios in Sec.~\ref{sec:iphop2}. \subsection{SSL-based Representation Learning}\label{sec:iphop_module1} We describe the processing procedure in Module 1 of the left subfigure of Fig.~\ref{fig:pipeline} below. The input is a tiny image of spatial resolution $32 \times 32$. The processing procedure can be decomposed into two cascaded units, called Hop-1 and Hop-2, respectively. We first extract the spatial Saab features at Hop-1 and Hop-2. For each hop unit, we apply filters of spatial size $5\times5$. At Hop-1, a neighborhood of size $5\times5$ centered at each of the interior $28\times28$ pixels is constructed. The Saab transform is conducted at each neighborhood to yield $K_1=25$ channel responses at each pixel. Afterwards, a $2\times2$ absolute max-pooling is applied to each channel. It reduces the spatial resolution from $28\times28$ to $14\times14$. As a result, the input to Hop-2 is $14 \times 14 \times 25$. Similarly, we apply the channel-wise Saab transform with $K_2$ filters to the interior $10\times10$ points to get $K_2$ responses for each point. Here, we set $K_2=256$ and $K_2=204$ for MNIST and Fashion-MNIST, respectively, based on the energy thresholding criterion introduced in \cite{chen2020pixelhop++}. The above design is basically the standard PixelHop++ pipeline as described in \cite{chen2020pixelhop++}. The only modification in IPHop is that we change the max-pooling in PixelHop++ to absolute max-pooling. Note that the responses from Hop-1 can be either positive or negative since no nonlinear activation is implemented at Hop-1. Instead of clipping negative values to zero, we find that it is advantageous to take the absolute value of the response first and then conduct the maximum pooling operation. 
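As a rough illustration of this module, the sketch below implements a simplified one-stage Saab-like transform (a DC kernel plus PCA-derived AC kernels) and the absolute max-pooling used in IPHop. The bias term, energy thresholding, and channel-wise bookkeeping of PixelHop++ are omitted, so this is an assumption-laden sketch rather than the authors' implementation.
\begin{verbatim}
import numpy as np

def fit_saab_kernels(patches, num_kernels):
    """patches: (n, d) flattened 5x5 neighborhoods of one channel.
    Returns (num_kernels, d) kernels: one DC (mean) kernel plus
    PCA-derived AC kernels."""
    d = patches.shape[1]
    dc = np.ones((1, d)) / np.sqrt(d)                  # DC kernel
    ac = patches - patches.mean(axis=1, keepdims=True) # remove patch DC
    ac = ac - ac.mean(axis=0)                          # center for PCA
    _, _, vt = np.linalg.svd(ac, full_matrices=False)
    return np.vstack([dc, vt[:num_kernels - 1]])

def abs_max_pool2x2(x):
    """Absolute max-pooling: take |.| first, then 2x2 max
    (the IPHop change relative to plain max-pooling)."""
    a = np.abs(x)
    h, w, c = a.shape
    return a.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

# usage sketch: responses = patches @ fit_saab_kernels(patches, 25).T
\end{verbatim}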
The spatial filter responses extracted at Hop-1 and Hop-2 only have a local view on the object due to the limited receptive field. They are not discriminant enough for semantic-level understanding. Since there exist correlations among these local filter responses, we can conduct another Saab transform across all local responses of each individual channel. Such a processing step provides the global spectral Saab features at Hop-1 and Hop-2, as shown in the right subfigure of Fig.~\ref{fig:pipeline}. To explain the procedure in detail, we use Hop-1 as an example. For each of the $K_1=25$ channels, the $14 \times 14=196$ spatial Saab features are flattened and then passed through a one-stage Saab transform. All responses are kept without truncation. Thus, the dimension of the output spectral Saab features is 196 for each channel. As compared to features learned by gradually enlarging the neighborhood range, the spectral Saab features capture the long-range information at a finer scale. Finally, the spatial and spectral Saab features are concatenated at Hop-1 and Hop-2 to form the joint spatial/spectral Saab features. \subsection{IPHop-I and IPHop-II}\label{sec:iphop2} Since the two hops have different combinations of spatial and spectral information, it is desired to treat them differently. For this reason, we partition IPHop features into two sets: \begin{itemize} \item Feature Set no. 1: spatial and spectral features of Hop-1, \item Feature Set no. 2: spatial and spectral features of Hop-2. \end{itemize} Feature learning is used to select the subset of discriminant features from the raw representation. By following HOG-I and HOG-II, we consider two choices, variance thresholding and DFT, apply them to feature sets no. 1 and no. 2, and select the same number of optimal features from each set individually. Furthermore, the same two classifiers are used for decision learning: KNN and XGBoost. For KNN, we concatenate the optimal features from feature sets no. 1 and no. 2, and compute the distance in this joint feature space. For XGBoost, we apply it to feature sets no. 1 and no. 2 and make a soft decision for each hop separately. Afterwards, we average the two soft decisions and use the maximum likelihood principle to yield the final decision (a minimal sketch of this fusion is given at the end of this subsection). We propose two SSL-based learning systems below. \begin{enumerate} \item IPHop-I \begin{itemize} \item Objective: targeting weaker supervision \item Representation Learning: Joint SSL features (i.e. both feature set nos. 1 and 2) \item Feature Learning: variance thresholding \item Decision Learning: KNN \end{itemize} \item IPHop-II \begin{itemize} \item Objective: targeting stronger supervision \item Representation Learning: Joint SSL features (i.e. both feature set nos. 1 and 2) \item Feature Learning: DFT \item Decision Learning: XGBoost \end{itemize} \end{enumerate} To justify these two designs, we consider all four possible combinations of feature and decision learning choices and compare their performance in Fig. \ref{fig:iphop_combination}. We use the Fashion-MNIST dataset as an example in the following discussion. We see from Fig. \ref{fig:iphop_combination}(b) that KNN outperforms XGBoost under weak supervision (i.e., training sample number per class $N_c \le 4$). On the other hand, XGBoost outperforms KNN under stronger supervision ($N_c \ge 16$). There is a transition point at $N_c=8$. For the weak supervision scenario, variance thresholding feature selection is better than DFT. This is particularly obvious when $N_c=1$, where the performance gap is around 25\%. Thus, we use the combination of variance thresholding and KNN in IPHop-I for weaker supervision. 
For the stronger supervision case, DFT is slightly better than variance thresholding with both KNN and XGBoost. Therefore, we use the combination of DFT and XGBoost in IPHop-II for stronger supervision. Next, we conduct an ablation study on different representations for IPHop-I and IPHop-II in their preferred operating ranges to understand the impact of each feature type. Fig. \ref{fig:iphop_mnist} compares the test accuracy with the individual spatial and spectral features of hop-1 and hop-2, and with the joint feature set, for MNIST under different supervision levels. Under weak supervision, we see from Fig. \ref{fig:iphop_mnist}(a) that spectral features are more powerful than spatial features, while the spectral features of hop-2 are slightly better than those of hop-1. Under stronger supervision, we see from Fig. \ref{fig:iphop_mnist}(b) that spatial features are more powerful than spectral features since spatial features can capture more detailed information without energy compaction, and the detailed information does help the classification performance as the number of labeled samples increases. Furthermore, features of hop-2 are more useful than those of hop-1. The main differences between hop-1 and hop-2 features lie in two factors: \begin{itemize} \item spatial features are determined by the receptive field of Saab filters, \item spectral features are determined by spatial aggregation of Saab responses over the entire set of grid points. \end{itemize} For the former, the cascaded filters in hop-2 offer a larger receptive field, which has stronger discriminant power than hop-1. For the latter, hop-1 has $28\times28$ grid points while hop-2 has only $14\times14$ grid points. The content in hop-1 has larger diversity than that in hop-2. Although the spatial Saab transform can achieve energy compaction, the percentages of stable and discriminant spectral features in hop-1 tend to be lower than those in hop-2. Yet, hop-1 and hop-2 do provide complementary features so that the joint feature set gives the best performance. Finally, we show the test accuracy with the individual spatial and spectral features of hop-1 and hop-2 and the joint feature sets under different supervision levels for Fashion-MNIST in Fig. \ref{fig:iphop_fashion}. The same observations and discussion apply to Fashion-MNIST. 
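To make the two-hop decision fusion mentioned above concrete, here is a minimal sketch; it assumes the \texttt{xgboost} Python package, and the hyper-parameters are library defaults rather than the authors' settings.
\begin{verbatim}
from xgboost import XGBClassifier  # assumes the xgboost package

def fit_two_hop(F1, F2, y):
    """One XGBoost per feature set (hop-1 and hop-2)."""
    return XGBClassifier().fit(F1, y), XGBClassifier().fit(F2, y)

def predict_two_hop(clf1, clf2, F1, F2):
    """Average the two soft decisions, then take the class of
    maximum likelihood."""
    prob = 0.5 * (clf1.predict_proba(F1) + clf2.predict_proba(F2))
    return prob.argmax(axis=1)
\end{verbatim}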
\begin{figure*}[tbp] \centering \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/iphop_combination_mnist.png} \caption{MNIST Dataset} \end{subfigure} \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/iphop_combination_fashion.png} \caption{Fashion-MNIST Dataset} \end{subfigure} \caption{Performance comparison of SSL-based learning systems on MNIST and Fashion-MNIST datasets under four different combinations among two feature learning methods (variance thresholding and DFT) and two classifiers (KNN and XGBoost) as a function of $n_c=\log_2(N_c)$.} \label{fig:iphop_combination} \end{figure*} \begin{figure*}[tbp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/iphop_FeatAblation_mnist_1.png} \caption{Weak Supervision with IPHop-I} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/iphop_FeatAblation_mnist_2.png} \caption{Strong Supervision with IPHop-II} \end{subfigure} \caption{Performance comparison of spatial, spectral and joint SSL features on MNIST under weak and strong supervision conditions.}\label{fig:iphop_mnist} \end{figure*} \begin{figure*}[tbp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/iphop_FeatAblation_fashion_1.png} \caption{Weak Supervision with IPHop-I} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/iphop_FeatAblation_fashion_2.png} \caption{Strong Supervision with IPHop-II} \end{subfigure} \caption{Performance comparison of spatial, spectral and joint SSL features on Fashion-MNIST under weak and strong supervision conditions.}\label{fig:iphop_fashion} \end{figure*} \section{Experiments}\label{sec:experiments} Experiments are conducted on the MNIST~\cite{lecun1998gradient} and Fashion-MNIST~\cite{xiao2017fashion} datasets for performance benchmarking of the HOG-I/II, IPHop-I/II and LeNet-5 learning systems against a wide range of supervision levels. For HOG and IPHop, we also introduce a hybrid solution. That is, type I is used when $N_c \le 8$ while type II is used for $N_c \ge 16$. They are called hybrid HOG and hybrid IPHop, respectively. \subsection{Experimental Setup}\label{sec:setting} The Adam optimizer is used for backpropagation in the training of the LeNet-5 network. The number of epochs is set to 50 for all $N_c$ values. Both MNIST and Fashion-MNIST contain grayscale images of resolution $28\times28$, with 60K training and 10K test images. MNIST contains 10 hand-written digit classes (from 0 to 9) while Fashion-MNIST has 10 fashion classes. The training sample number per class is around 6K. Among the 6K training samples per class, we randomly choose a subset of size $N_c=2^{n_c}$, $n_c=0, 1, \cdots, 12$, as the training set. All classes have the same training sample number. In other words, we go from the extremely weak supervision condition with 1 labeled sample per class to the strong supervision condition with 4,096 labeled samples per class, with a gradual transition in between. Experiments with random training sample selection are performed with multiple runs. 
\subsection{Performance Benchmarking}\label{sec:benchmarking} We conduct performance benchmarking of HOG-I, HOG-II, IPHop-I, IPHop-II, and LeNet-5 in this subsection. The mean test accuracy and standard deviation values for MNIST and Fashion-MNIST under different supervision levels are reported in Table~\ref{tab:mnist_compare} and Table~\ref{tab:fashion_compare}, respectively, based on results from 10 runs. We have the following observations. \begin{itemize} \item Under weak supervision with $N_c=1, \, 2, \, 4, \, 8$, HOG-I and IPHop-I outperform LeNet-5 by a large margin. Specifically, when there is only one labeled image per class ($N_c=1$), i.e., 10 labeled samples for the whole dataset, HOG-I and IPHop-I can still reach an accuracy of around $50\%$ for both datasets. For MNIST, HOG-I and IPHop-I surpass LeNet-5 by 12.51\% and 10.67\%, respectively. For Fashion-MNIST, the performance gains of HOG-I and IPHop-I are 8.62\% and 5.55\%, respectively. This shows that the performance of the HOG-based and IPHop-based learning systems is more robust as the number of labeled samples decreases. \item Under middle supervision with $N_c=16, \, 32, \, 64, \, 128$, HOG-I, HOG-II, IPHop-I and IPHop-II still outperform LeNet-5. Furthermore, we start to see the advantage of HOG-II over HOG-I and the advantage of IPHop-II over IPHop-I. Besides the comparison of mean accuracy scores, we see that the standard deviation of LeNet-5 is significantly higher than that of HOG-I, HOG-II, IPHop-I and IPHop-II under the weak and middle supervision levels. \item Under strong supervision with $N_c \ge 1,024$ (i.e., a total training sample number of at least 10,240 since there are 10 classes), the advantage of LeNet-5 starts to show up. Yet, IPHop-II still outperforms LeNet-5 on Fashion-MNIST, while the performance difference between IPHop-II and LeNet-5 on MNIST is very small. \item When the full training dataset (i.e., $N_c=6K$) is used, each of HOG-I, HOG-II, IPHop-I and IPHop-II has a single test accuracy value and the standard deviation is zero since the training set is the same. In contrast, even with the same input, LeNet-5 can yield different accuracy values due to the stochastic optimization nature of backpropagation. \end{itemize} It is natural to consider hybrid HOG and IPHop schemes, where type I is adopted when $N_c \le 8$ and type II is adopted for $N_c \ge 16$. For ease of visual comparison, we plot the mean accuracy curves as well as the standard deviation values (indicated by vertical bars) of hybrid HOG, hybrid IPHop and LeNet-5 as a function of $N_c$ in Fig. \ref{fig:hybrid_compare}. Clearly, hybrid IPHop provides the best overall performance among the three. Hybrid IPHop outperforms LeNet-5 by a significant margin when $N_c \le 128$ on MNIST and throughout the whole range of $N_c$ on Fashion-MNIST. As to hybrid HOG, it outperforms LeNet-5 with $N_c \le 128$, underperforms LeNet-5 with $N_c \ge 512$, and has a crossover point with LeNet-5 at $N_c=256$ on MNIST. On Fashion-MNIST, hybrid HOG has higher accuracy than LeNet-5 when $N_c \le 1,024$, while its performance is comparable with that of LeNet-5 when $N_c = 2,048$ or $4,096$. 
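To make the setup concrete, a minimal sketch of the per-class random subset sampling and the supervision sweep follows; the commented \texttt{train\_and\_eval} call is a hypothetical placeholder for whichever learning system is being benchmarked, not a function from this paper.
\begin{verbatim}
import numpy as np

def sample_subset(X, y, n_c, rng):
    """Randomly draw N_c labeled samples from each class."""
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_c, replace=False)
        for c in np.unique(y)])
    return X[idx], y[idx]

def supervision_sweep(X_train, y_train, X_test, y_test, runs=10):
    """Accuracy versus N_c = 2^0, ..., 2^12 over multiple runs."""
    rng = np.random.default_rng(0)
    results = {}
    for n_c in [2 ** k for k in range(13)]:
        accs = []
        for _ in range(runs):
            X_sub, y_sub = sample_subset(X_train, y_train, n_c, rng)
            # accs.append(train_and_eval(X_sub, y_sub, X_test, y_test))
        results[n_c] = accs
    return results
\end{verbatim}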
\begin{table*}[tbp] \small \centering \caption{Comparison of the mean test accuracy (\%) and standard deviation on MNIST under weak, middle and strong Supervision degree, where the best performance is highlighted in bold.} \label{tab:mnist_compare} \begin{tabular}{|c|l|c|cc|cc|} \hline \multicolumn{1}{|l|}{Supervision} & N\_c & LeNet-5 & HOG-I & HOG-II & IPHop-I & IPHop-II \\ \hline \multirow{4}{*}{Weak} & 1 &40.07 ($\pm$5.78) &\textbf{52.58} ($\pm$3.89) &9.80 ($\pm$0.00) & 50.74 ($\pm$4.13) & 9.80 ($\pm$0.00)\\ & 2 &54.43 ($\pm$6.62) &58.94 ($\pm$3.33) &38.40 ($\pm$2.14) & \textbf{59.96} ($\pm$3.42) & 45.82 ($\pm$3.23) \\ & 4 &63.19 ($\pm$3.52) &66.55 ($\pm$2.42) &43.48 ($\pm$4.18) & \textbf{71.28} ($\pm$2.22) & 56.00 ($\pm$3.98) \\ & 8 &72.41 ($\pm$2.50) &74.12 ($\pm$1.50) &63.39 ($\pm$3.21) & \textbf{79.40} ($\pm$1.18) & 73.90 ($\pm$2.62) \\ \hline \multirow{4}{*}{Middle} & 16 &73.38 ($\pm$3.69) &78.61 ($\pm$1.04) &77.35 ($\pm$1.80) & 84.78 ($\pm$0.87) & \textbf{85.47} ($\pm$1.02)\\ & 32 &82.51 ($\pm$3.62) &82.87 ($\pm$0.40) &85.60 ($\pm$0.99) & 88.87 ($\pm$0.44) & \textbf{90.69} ($\pm$0.85)\\ & 64 &83.92 ($\pm$5.94) &86.01 ($\pm$0.72) &90.47 ($\pm$0.34) & 91.60 ($\pm$0.32) & \textbf{93.60} ($\pm$0.21)\\ & 128 &90.92 ($\pm$5.52) &88.34 ($\pm$0.30) &93.14 ($\pm$0.43) & 93.49 ($\pm$0.32) & \textbf{95.27} ($\pm$0.24)\\ \hline \multirow{6}{*}{Strong} & 256 &94.87 ($\pm$2.61) &90.29 ($\pm$0.23) &95.09 ($\pm$0.26) & 94.99 ($\pm$0.23) & \textbf{96.49} ($\pm$0.08)\\ & 512 &97.17 ($\pm$0.26) &91.77 ($\pm$0.20) &96.16 ($\pm$0.15) & 95.93 ($\pm$0.14) & \textbf{97.44} ($\pm$0.09)\\ & 1024 &\textbf{98.18} ($\pm$0.16) &93.02 ($\pm$0.12) &97.04 ($\pm$0.14) & 96.59 ($\pm$0.08) & 98.04 ($\pm$0.07)\\ & 2048 &\textbf{98.64} ($\pm$0.17) &93.95 ($\pm$0.12) &97.68 ($\pm$0.04) & 97.23 ($\pm$0.08) & 98.55 ($\pm$0.07)\\ & 4096 &\textbf{98.95} ($\pm$0.09) &94.70 ($\pm$0.13) &98.08 ($\pm$0.04) & 97.66 ($\pm$0.06) & 98.90 ($\pm$0.06)\\ & Full &\textbf{99.07} ($\pm$0.07) &95.03 &98.20 &98.08 &99.04 \\ \hline \end{tabular} \end{table*} \begin{table*}[tbp] \small \centering \caption{ Comparison of the mean test accuracy (\%) and standard deviation on Fashion-MNIST under weak, middle and strong Supervision degree, where the best performance is highlighted in bold.} \label{tab:fashion_compare} \begin{tabular}{|c|l|c|cc|cc|} \hline \multicolumn{1}{|l|}{Supervision} & N\_c & LeNet-5 & HOG-I & HOG-II & IPHop-I & IPHop-II \\ \hline \multirow{4}{*}{Weak} & 1 &41.18 ($\pm$5.06) &\textbf{49.80} ($\pm$ 4.29) &10.00 ($\pm$ 0.00) &46.73 ($\pm$4.87) &10.00 ($\pm$0.00) \\ & 2 &50.65 ($\pm$5.36) &54.43 ($\pm$ 4.42) &39.85 ($\pm$ 2.07) &\textbf{56.57} ($\pm$2.16) &47.17 ($\pm$2.42) \\ & 4 &56.22 ($\pm$4.23) &\textbf{60.42} ($\pm$ 1.99) &41.53 ($\pm$ 2.21) &59.21 ($\pm$3.39) &52.48 ($\pm$2.73) \\ & 8 &60.54 ($\pm$3.85) &\textbf{64.25} ($\pm$ 1.58) &54.29 ($\pm$ 2.28) &62.90 ($\pm$1.91) &65.44 ($\pm$1.10) \\ \hline \multirow{4}{*}{Middle} & 16 &61.34 ($\pm$3.17) &68.22 ($\pm$ 1.71) &66.38 ($\pm$ 1.33) &69.37 ($\pm$1.47) &\textbf{73.93} ($\pm$1.41) \\ & 32 &67.49 ($\pm$2.88) &71.60 ($\pm$ 0.75) &73.02 ($\pm$ 0.74) &71.47 ($\pm$0.89) &\textbf{77.86} ($\pm$0.88) \\ & 64 &71.58 ($\pm$2.33) &73.33 ($\pm$ 0.48) &77.60 ($\pm$ 0.66) &74.44 ($\pm$0.52) &\textbf{80.88} ($\pm$0.89) \\ & 128 &75.04 ($\pm$1.65) &75.49 ($\pm$ 0.68) &80.12 ($\pm$ 0.57) &76.81 ($\pm$0.28) &\textbf{83.54} ($\pm$0.37) \\ \hline \multirow{6}{*}{Strong} & 256 &76.81 ($\pm$3.05) &77.57 ($\pm$ 0.52) &82.78 ($\pm$ 0.41) &78.74 ($\pm$0.37) &\textbf{85.59} ($\pm$0.33) \\ & 
512 &82.38 ($\pm$2.19) &79.05 ($\pm$ 0.37) &84.19 ($\pm$ 0.17) &80.29 ($\pm$0.31) &\textbf{87.41} ($\pm$0.25) \\ & 1024 &84.51 ($\pm$2.34) &80.56 ($\pm$ 0.38) &86.00 ($\pm$ 0.25) &81.99 ($\pm$0.26) &\textbf{88.81} ($\pm$0.20) \\ & 2048 &87.13 ($\pm$2.06) &81.91 ($\pm$ 0.24) &87.14 ($\pm$ 0.17) &83.59 ($\pm$0.26) &\textbf{89.93} ($\pm$0.13) \\ & 4096 &88.97 ($\pm$0.56) &83.06 ($\pm$ 0.18) &88.35 ($\pm$ 0.08) &85.01 ($\pm$0.11) &\textbf{91.03} ($\pm$0.16) \\ & Full &89.54 ($\pm$0.33) &83.52 &88.84 &85.77 &\textbf{91.37} \\ \hline \end{tabular} \end{table*} \begin{figure*}[tbp] \centering \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/all_methods_merge_mnist.png} \caption{MNIST} \end{subfigure} \begin{subfigure}{0.8\textwidth} \centering \includegraphics[width=0.93\linewidth]{figures/all_methods_merge_fashion.png} \caption{Fashion-MNIST} \end{subfigure} \caption{Comparison of test accuracy between hybrid HOG, hybrid IPHop, and LeNet-5 for MNIST and Fashion-MNIST. For hybrid HOG and IPHop, type I is adopted when $N_c \le 8$ and type II is adopted for $N_c \ge 16$.} \label{fig:hybrid_compare} \end{figure*} \begin{figure*}[tbp] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/mnist_feat_LS_hop1.png} \caption{Local Saab filters in Hop-1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/mnist_feat_LS_hop2.png} \caption{Local Saab filters in Hop-2} \end{subfigure}\\ \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/mnist_feat_GS_hop1.png} \caption{Global Saab filters in Hop-1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/mnist_feat_GS_hop2.png} \caption{Global Saab filters in Hop-2} \end{subfigure} \caption{The plot of Frobenius norms of difference matrices between the covariance matrices learned under different supervision levels and the one learned from the full set for MNIST.} \label{fig:fro_1} \end{figure*} \section{Discussion} \label{sec:discussion} The superiority of hybrid IPHop under both weak and strong supervision conditions is clearly demonstrated in Fig. \ref{fig:hybrid_compare}. We would like to provide some explanations in this section. \begin{itemize} \item Robustness in Representation Learning \\ The IPHop representation is determined by Saab filters. Saab filters are obtained by PCA, which is an eigen-analysis of the covariance matrix of input vectors. If the covariance matrix converges fast as the training sample number increases, then IPHop's representation learning is robust with respect to the supervision degree. We show the Frobenius norm of the difference matrix between the covariance matrix derived from $N_c$ training images per class and that derived from the full training set in Fig.~\ref{fig:fro_1}. There are four cases; namely, the local and global Saab filters in Hop-1 and Hop-2, respectively. The results are averaged over 5 runs. We see that the Frobenius norm of the difference matrices is already small even for $N_c=1$. This is because one image contains many small patches, which contribute to a robust covariance matrix. \item Robustness in Feature Learning \\ To demonstrate the robustness of DFT, we measure the overlap between the feature set selected based on $N_c$ training samples per class and that selected based on the full training size ($N_c=6K$). These two sets are denoted by $\{F\}_{N_c}$ and $\{F\}_{full}$, respectively. 
We define an intersection-over-union (IoU) score as \begin{equation}\label{eq:iou} IoU_{N_c} = \frac{\left | \{F\}_{full} \bigcap \{F\}_{N_c} \right |}{\left | \{F\}_{full} \bigcup \{F\}_{N_c} \right |}, \end{equation} where the numerator is the number of features shared by the two sets, while the denominator is the number of features selected by at least one of the two sets. For each $N_c$, there exists randomness in selecting a subset of labeled samples. To reduce the effect of this randomness, we average the IoU values over 10 runs. The IoU values of selecting the top 200-D and 400-D from the 1024-D HOG features for MNIST and Fashion-MNIST, respectively, are shown in Fig.~\ref{fig:dft_1}. We see that, as the number of labeled samples increases, the IoU score increases. With a small $N_c$ value (say, 32), the IoU score can already reach 90\%. It clearly shows that DFT is a semi-supervised feature selection tool that can work well under very weak supervision conditions (a minimal sketch of this IoU computation is given right after this list). \item Robustness in Decision Learning \\ The KNN classifier is used when the training set is small. It is an exemplar-based classifier. Instead of minimizing the loss using labeled data, it finds the most similar training sample based on the Euclidean distance in the feature space. However, it cannot capture the data manifold that often lies in a higher dimensional feature space. As the number of labeled samples increases, XGBoost is more powerful. It minimizes the cross-entropy loss with a gradient boosting technique. XGBoost is a decision tool based on ensemble learning, which explains its robust decision behavior. \end{itemize} 
On the other hand, it is desired to understand the behavior of LeNet-5 under different supervision levels. We show the learning curves of LeNet-5 on MNIST using $N_c=2^{n_c}$, $n_c=0, 1, \cdots, 12$, in Fig.~\ref{fig:lenet_learning_curve}, which are expressed as the cross-entropy loss as a function of the epoch number. The batch size in each epoch is set to the total number of labeled images if $N_c \le 16$ and 128 if $N_c>16$. The loss curves are averaged over 5 random runs. The loss decreases slowly and converges at a higher loss value when $N_c$ is small. In contrast, it decreases faster and converges at a lower loss value when $N_c$ is larger. Clearly, the learning performance of LeNet-5, in both convergence rate and converged loss value, is highly dependent on the number of labeled samples. \begin{figure*}[tbp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/mnist_hog_iou_avgrun_to6000.png} \caption{MNIST} \label{fig:dft_1a} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/fashion_hog_iou_avgrun_to6000.png} \caption{Fashion-MNIST} \label{fig:dft_1b} \end{subfigure} \caption{IoU scores between the feature sets selected using full training size and using $N_c$ on MNIST and Fashion-MNIST.} \label{fig:dft_1} \end{figure*} \begin{figure*}[tbp] \centerline{\includegraphics[width=0.6\linewidth]{figures/mnist_lr_curve.png}} \caption{Learning curve using LeNet-5 on MNIST dataset with selected supervision levels.} \label{fig:lenet_learning_curve} \end{figure*} \section{Conclusion and Future Work}\label{sec:conclusion} In this work, we compared the supervision-scalability of three learning systems; namely, the HOG-based and IPHop-based learning systems and LeNet-5, which is a representative deep-learning system. Both HOG-based and IPHop-based learning systems work better than LeNet-5 under weak supervision. As the supervision degree goes higher, the performance gap narrows. Yet, IPHop-II still outperforms LeNet-5 on Fashion-MNIST under strong supervision. It is well known that a sufficient amount of labeled data is essential for deep learning systems to work properly. Data augmentation and the adoption of pre-trained networks are two commonly used techniques to overcome the problem of insufficient training data, at the cost of larger model sizes and higher computational cost. Our performance benchmarking study is only preliminary. In the future, we would like to conduct further investigation by jointly considering supervision scalability and the tradeoff among accuracy, model size, and computational complexity. \section*{Acknowledgement}\label{sec:acknowledgement} The authors acknowledge the Center for Advanced Research Computing (CARC) at the University of Southern California for providing computing resources that have contributed to the research results reported within this publication. \bibliographystyle{ieeetr}
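The IoU score referenced in the feature-learning item above is a simple set computation; a minimal sketch follows, where the inputs are the indices of the selected features.
\begin{verbatim}
def feature_iou(selected_full, selected_nc):
    """IoU between the feature subset selected with N_c samples
    per class and the subset selected with the full training set."""
    a, b = set(selected_full), set(selected_nc)
    return len(a & b) / len(a | b)

# averaged over 10 random draws of the N_c-sample training subset:
# iou = sum(feature_iou(idx_full, r) for r in runs) / len(runs)
\end{verbatim}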
1,116,691,500,534
arxiv
\section{Introduction} Autism is a pervasive developmental disorder of early childhood. No parents want their children to have any problem, but when it comes to autism, serious attention must be given. Children with autism face immense struggles when interacting with their typically developing peers and also in their learning process. Both their mental and physical health should be carefully looked after in every situation. Many people assume that children with autism ought to spend less time playing compared with non-disabled children. In reality, we should pay more attention to autistic children \cite{item1}. Although ASD is not something a child simply `grows out of', there are numerous treatments that can help children acquire new skills and overcome a wide variety of developmental difficulties. From free government services to in-home behavioral therapy and school-based programs, help is available to meet a child's unique needs. With the correct treatment plan, and a considerable amount of love and support, a child can learn, develop, and thrive. The earliest indications of autism are the absence of typical behaviors and the presence of unusual ones, so they can be difficult to recognize. The earliest symptoms in autistic children may appear as being quiet, independent, and undemanding. Although autism is difficult to diagnose before 24 months, symptoms often surface between 12 and 18 months. If signs are detected by 18 months of age, intensive treatment may help rewire the brain and reverse the symptoms. Children with ASD have a hard time applying what they have learned in one session with a specialist to the home or other settings. Interacting with a special child in a consistent way and sticking to a schedule works best for them. Positive reinforcement can go a long way with children: we should praise them when they act appropriately or learn a new skill and give some reward for their performance. Creating a comfort zone at home helps them a lot. A decade of research has shown that computer-based interventions can provide effective strategies to help children with special needs in many ways. Autism is the most commonly found neuro-developmental disorder, with core deficits in three domains: social interaction, communication, and repetitive or stereotypic behavior. It has been estimated that 1\% of the world's population suffers from an autism spectrum disorder. Many developing countries, such as Bangladesh, have no data to estimate how many children or adults suffer from this lifelong neurological condition. In recent years, Bangladesh has been developing facilities for special children with autism \cite{item3,item4}. In this paper, we address the issue of improving learning capacity through our designed applications \cite{item2}. Our work focuses on smartphone-based applications to improve the learning skills of children with Down syndrome through playing simple mini games at several levels. \section{Understanding Down Syndrome} The human body is nothing less than a miracle. How a human body develops, looks, and works depends on its genes. Genes are the reason behind every characteristic of a human body. They are also responsible for any abnormalities in a human body. People are normally born with 23 pairs of chromosomes, but people with Down syndrome are born with an extra copy of one chromosome (chromosome 21) in their bodies. 
Chromosomes are packages of genes; in Down syndrome, the extra chromosome causes issues that affect a person for their entire life. Although Down syndrome is a lifelong condition, with the help of modern science and treatment, doctors are now able to help these patients considerably. With proper care and education, the challenges of Down syndrome can be managed much better. \subsection{Effects of Down Syndrome} Down syndrome occurs in about one per 1,000 babies born each year all over the world. Its characteristics vary and it manifests differently from person to person: some may struggle with comprehension, while others may struggle with interacting with their surroundings. With proper care, these issues can be handled. People with Down syndrome have some physical features in common, for example flat noses, small ears, and straight hair. They learn skills gradually but face problems in daily activities such as walking, talking, and developing social skills. \subsection{Causes of Down Syndrome} The widely accepted main cause of Down syndrome is the presence of one extra chromosome in the body. Women aged 35 and older, and women who already have a child with Down syndrome, are more likely to have another child with the condition. It is possible to pass Down syndrome from parent to child; however, parents without Down syndrome can also have a child with Down syndrome, since the parents carry the correct number of chromosomes while their child does not. \subsection{Life of a Down Syndrome child} Individuals with Down syndrome usually have cognitive development profiles that suggest mild to moderate intellectual disability. However, cognitive development and intellectual ability are highly variable. Children with Down syndrome often reach developmental milestones later than their peers. There may be a delay in acquiring speech, and a child may need speech therapy to help them gain expressive language. Fine motor skills may also be delayed and can take time to develop after gross motor skills have been acquired.\\ On average, a child with Down syndrome will sit at 11 months, crawl at 17 months and walk at 26 months. There may also be problems with attention, a tendency to make poor judgments, and impulsive behavior. However, people with Down syndrome can attend school and become active, working members of the community.\\ Sometimes, there are general health problems that can affect any organ system or bodily function. Around half of all people with Down syndrome have a congenital heart defect. There may also be a higher risk of respiratory problems, hearing difficulties, Alzheimer's disease, childhood leukemia, epilepsy, and thyroid conditions. However, there also appears to be a lower risk of hardening of the arteries, diabetic retinopathy, and most kinds of cancer. \subsection{Signs and Symptoms} Down syndrome comes in various forms, and the disorder cannot be described easily \cite{item5}. It is easy to fall into thinking that everyone with Down syndrome looks similar, but in reality, Down syndrome affects people both physically and mentally very differently, and there is no predicting how it will affect anybody in the long run. For some, the effects are mild and they can lead a relatively easy lifestyle; for others, it is impossible to carry out daily activities without someone else's assistance.
\subsubsection{Physical and Mental Symptoms} Some common physical features seen in children with Down syndrome are flatter faces, almond-shaped eyes, small ears, small hands and feet, a short neck, and a small head. Common medical symptoms seen in children with Down syndrome are: hearing loss, heart problems, obstructive sleep apnea, eyesight problems, and a tendency to develop several blood conditions and infections. \subsection{Treatment Strategies} \subsubsection{Applied Behavior Analysis (ABA)} Applied Behavior Analysis (ABA) applies the principles of learning and motivation from behavior analysis, and the strategies and technology derived from those principles, to problems of social significance. The motive of this technique is progress: it improves many skills, for example communication, imagination, self-control, and self-monitoring. It is a structured and regular practice process that helps children improve over the long run. Every child responds differently to any method, so behind every child there must be individual therapy given by the instructor. A package of ABA methods provides many benefits in Comprehensive, Individualized, and Intensive Early Intervention programs for special children. Comprehensive refers to day-to-day life skills together with self-control and motivation; Early Intervention means the entire program begins before age 4; Intensive refers to a total program of 30--40 hours per week over 1--3 years \cite{item6}. \subsubsection{Discrete Trial Training (DTT)} Discrete Trial Training (DTT) is a method of teaching in which the adult uses adult-directed instruction, reinforcers chosen for their strength, and clear contingencies and repetition to teach new skills. DTT is a particularly strong technique for motivating a new response to a stimulus. Its limitation is a lack of reinforcement of learner spontaneity. A learner taught with DTT follows nine steps. In the first step, the teacher or instructor decides which objectives can be taught using DTT and summarizes the results. In the second, the teacher completes a task analysis, listing what the student must do and how the student can do that task. The third step is setting up the data collection. In the fourth, the mentor designs the location where teaching takes place. In the fifth, the teacher gathers the materials that will be used in practice. In the sixth, the teacher assists the learner and gathers their attention. In the seventh, the trials are massed. In the eighth, the teacher conducts the trials with the learner, and lastly, in the ninth, the mentor reviews and modifies the results \cite{item7}. \subsubsection{Functional Communication Training (FCT)} Functional Communication Training (FCT) is an effective treatment for behavior problems. First described by Carr and Durand (1985), the method relies on differential reinforcement. People who show strong aggression, who hurt themselves, or who have vocal problems or stereotypy are candidates for this therapy. A variety of responses are targeted in FCT, including vocal responses, picture exchanges, sign language, gestures, and activation of voice or text output devices \cite{item8}. \subsubsection{Incidental Teaching} Incidental teaching creates an environment for special children built around the children's own interests; the method is used to add fun to a child's life. Six principles underlie this method.
First, early intervention is essential, and the proper time to apply this method is between the ages of 15 and 30 months. Second, improvement does not show in one day; the child should be engaged for a minimum of 30 hours per week. Third, home and the company of parents are the most important parts of a child's upbringing. Fourth, children should interact socially, and this interaction needs to be planned. Fifth, children need to learn to speak in discrete trials. Finally, teaching depends upon incidental teaching procedures within the environment \cite{item9}. \section{Related Work} In past decades, much research has been conducted on autism and on how technology can help address it. On the internet we can find numerous serious games, casual games, and programs made to help autistic children. Hanan M. Zakari analyzed a large number of these games and categorized them based on their primary goal \cite{item10}. This categorization helped us to define the specific sector, Down syndrome, that we want to improve. From our research, not all of these games are highly effective; in fact, only a few of them are effective in real-life scenarios. One of the most common problems of children with Down syndrome is delayed language and speech development. Some children cannot pronounce a word or sentence correctly and have trouble expressing their thoughts. Rahman and his team tried to solve this problem by having the children play an e-learning iterative game. The teacher chooses pictures of different things and passes those pictures through a LAN, and the autistic children receive them on their computers. The pictures appear at the left of the screen and begin to move toward the right. During this time, the children pronounce the meaning of those pictures loudly and clearly so that the system can record it. The system then analyzes the speech, and if it is correct, the child earns a score \cite{item11}. Related to the previous reference, another interesting game was made to help children speak properly. Many autistic children speak so fast that other people cannot understand them. To control their speaking, M. Goodwin and his team created a turtle race serious game. Children have to speak at the proper speed, loudness, and clarity to win the race: they can speak neither very fast nor very slowly, but have to maintain a moderate speed of speaking \cite{item12}. Another major difficulty autistic children face is poor recognition of other people's emotions. LIFEisGAME is a serious game that helps children understand a person's facial expressions across different levels. Tech giant Microsoft formed a group with specialists from Portugal and the University of Texas to develop this game. The game has multiple stages. The very first one is $``$Recognize the Expression$"$: a 3D cartoon model is shown on the screen, and children have to recognize the emotion of that model by reading its facial expression. In the next stage, $``$Build a Face$"$, children have to rebuild a face on the basis of the emotions on given cards. The final stage of this game is $``$Live the Story$"$, where the children play a short story of a cartoon character. The main goal of this story is to express emotions correctly in certain scenarios \cite{item13,item14}. Modern technology opened a door to solving the problems of autism more efficiently. Virtual Reality (VR) is one of the most promising among these technologies and can help us greatly in solving many of these problems.
Marco Simões and his team developed an interesting game to help not only children but also young adults. With the help of VR, they created a game that gives ASD patients the experience of a bus journey. They staged a small scenario where the player has to complete a bus journey, performing small tasks like buying the ticket, waiting for the bus, choosing a seat on the bus, and getting off at the right stop. With a VR headset, this game can give them almost the real feel of a true bus journey, which helps them deal with real-life scenarios \cite{item15}. \section{Our Method} {\methodName} is a cross-platform serious game developed with the intention of teaching its users in a fun way to develop a certain skill set. The game has been developed for both the Android and iOS platforms to reach the maximum number of users. \subsection{Game Architecture} {\methodName} is a mobile application that has been specially designed and developed for autistic children. The game consists of two mini games: $``$Balloon Pop$"$ helps develop the skill of fast letter recognition, while $``$Match Making$"$ challenges the player's memory with different board puzzles. Our adaptive learning AI challenges the user based on their performance, and the performance records are stored in our Firebase cloud database for further research and performance analysis. The user explores the learning material in a comfortable environment, and the engine ensures that the player is not only engaged with the game they perform worst at, but also plays the game they perform best at, in order to keep the $``$fun vs. challenge$"$ curve consistent. At the center of the game architecture is a game engine that controls three processes: i) game generation, ii) storage of information, and iii) user management. The basic architecture of {\methodName} is shown in Figure~\ref{figArch}. The game generation is based on learning analytics, and user records are kept via online and local storage. A new user willing to play the game has to log in to the system. For a user who has not yet played the game frequently, the games are generated by a basic gameplay module. Playing each game refines the performance record, and the analytics is applied after at least three game plays; from then on, the analytics generates each game based on the previous performances. After each play, the performance is stored in the cloud if the user is online; when offline, the information is stored locally, with an option to be synced with the online storage when the user is online again. A flow chart of the game engine is shown in Figure~\ref{figFlow}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{dia1} \end{center} \caption{Basic Game Architecture. \label{figArch}} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{dia2} \caption{A simplified flowchart of the game. \label{figFlow}} \end{figure}
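To make the engine's behavior concrete, the following Python sketch illustrates the adaptive game selection and the offline/online storage logic described above. It is only a minimal illustration: the function names, the scoring scheme, and the 3:1 weighting are hypothetical assumptions rather than the actual implementation, and the \texttt{cloud} list merely stands in for a Firebase write.
\begin{verbatim}
import random

MIN_PLAYS = 3  # the learning analytics takes over after three plays

def next_game(history):
    # Before MIN_PLAYS records exist, the basic gameplay module picks a
    # mini game at random; afterwards the weaker skill is practised more
    # often, while the stronger game is still served occasionally to
    # keep the "fun vs. challenge" curve consistent.
    games = ("balloon_pop", "match_making")
    if len(history) < MIN_PLAYS:
        return random.choice(games)
    scores = {g: sum(r["score"] for r in history if r["game"] == g)
              for g in games}
    weaker = min(scores, key=scores.get)
    stronger = max(scores, key=scores.get)
    return random.choices([weaker, stronger], weights=[3, 1])[0]

def store_result(record, online, cloud, local_queue):
    # Store to the cloud when online (a stand-in for a Firebase write);
    # otherwise queue locally and sync once the user is online again.
    if online:
        cloud.extend(local_queue)
        local_queue.clear()
        cloud.append(record)
    else:
        local_queue.append(record)
\end{verbatim}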
\subsection{The Gameplay} After installing and opening the application, the user has to log in with an email ID so that their performance reports can be tracked. The user can log out or switch between accounts with a different email. After the login process completes, the user simply has to tap the $``$Play Button$"$, and the game starts with one of the available mini games. If the game is $``$Balloon Pop$"$, the user has to tap the balloons carrying the targeted $``$Letter$"$ displayed on screen. If the targeted letter is $``$A$"$, the user has to tap the balloons containing the letter $``$A$"$ and avoid tapping other balloons. At the end of a $``$Balloon Pop$"$ game, the performance, measured by accuracy, is stored in the Firebase cloud database. If the game is $``$Matchmaking$"$, the user has to match pairs of cards with the same letter in order to complete the board. So if the targeted letter is $``$A$"$, the user has to match all pairs of $``$A$"$ on the board in order to complete the level. At the end of a $``$Matchmaking$"$ game, the performance, measured by completion time, is stored in the Firebase cloud database. Screenshots of the gameplays are shown in Figure~\ref{figPlay}. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{ss2}&\includegraphics[width=0.3\textwidth]{ss3}&\includegraphics[width=0.3\textwidth]{ss4}\\ (a)&(b)&(c)\\ \end{tabular} \caption{Screenshots of the game play: (a) main menu, (b) balloon pop and (c) match making. \label{figPlay}} \end{figure} \subsection{Implementation} We used $``$Unity$"$ as a cross-platform game engine to develop our game, in order to reach the maximum number of users on different platforms. We used $``$Firebase$"$ to store our user data, which handles both the offline and online states of our application. We also deployed $``$Vuforia$"$, an AR SDK, for both the Android and iOS platforms. \section{Results and Discussion} We experimented with our game with special permission from the authorities of the Smiling Special Children School for Special children at Badda, Dhaka. We gathered gameplay experience with the autistic children and their mentors, and we made sure to maintain the ethical and privacy-related standards while experimenting with our application and conducting the survey. Ten mentors or teachers from the special school were involved in the survey part. Seven of them were female and three were male. Their ages were between 21 and 40, with eight of them under thirty years of age. The goal of our session with the school was twofold: first, to have the children play the game using the app so that we could gather real-time feedback; second, to have the mentors rate the application and their experiences with the children. The survey contained several questions for the teachers and mentors, in three groups: i) experience related, ii) responsibility related, and iii) procedure related. The experience-related section contained the four questions listed below: \begin{enumerate} \item How many years have you been teaching autistic children? \item What strategies/materials do you use when teaching students with autism? \item In what environmental setting do you most often teach? \item What subject area(s) are you currently teaching? \end{enumerate} In response to these questions, we found that 60\% of the participants have been teaching for around 4 to 6 years, 30\% for around 1 to 3 years, and 10\% for around 7 to 9 years. Hence, most of the teachers have been teaching for 1--6 years and have significant experience in teaching autistic children. All of them agreed on providing practical treatment so that the children can develop more easily. The third question was multiple choice, with options such as traditional classroom, virtual classroom, playground, and others.
Among all participants, 100\% preferred the virtual classroom, the general educational classroom, and the playground, which they consider the most suitable settings. In addition, 30\% of them preferred outing programs, which include shopping, picnics, tours, etc. Most of the teachers were engaged in teaching mathematics and science, while a few taught dancing, pre-writing, nonverbal activities, and dotted training for writing the alphabet. The second section was on responsibility, and the following questions were asked: \begin{enumerate} \item With whom do the children share a good relationship? \item How many students are you responsible for on a typical day? \item Do the children have any specific talent? \end{enumerate} Among all the participants, 80\% answered that the children are very close to their parents and teachers, 10\% said that the children's relationships with other autistic children are also good, 10\% said that some children are nearly happy with everyone, and 10\% said that some children do not share a good relationship with anyone and are attention seekers. Furthermore, 90\% of the participants said that they are responsible for around 1--25 autistic children, and 10\% said 26--50 children; in most cases, they are engaged in one-to-one teaching. The teachers answered that 70\% of the children have some specific talent, while 30\% of the children have no specific talent. In the last section, on teaching procedure, everyone was asked whether they had any formal training on autism and how strongly autism affects a student's life. Among all the participants, 50\% of the teachers answered that autism has a huge impact on a student's life, because some children cannot even get through day-to-day life without help. 40\% answered that there is some impact, because those children can lead day-to-day life but have problems learning new things. 10\% answered that there is no impact, because those children are doing well and improving. Among all participants, 100\% read books and articles about autism, 90\% have completed an autism-specific undergraduate class, about 50\% surfed the internet to gather knowledge, 20\% have taken organizational training at the school where they teach, and 10\% have taken sports training from another institution. We also engaged with the mentors after the children's game-playing session. The game was mostly received with positive feedback, and its initial promise encouraged us to work further in the future. \begin{figure} \centering \includegraphics[width=\columnwidth]{fwork} \caption{Applying VR to detect alphabets based on objects. \label{figVR}} \end{figure} \section{Conclusion} In this paper, we presented {\methodName}, a learning game application for special children. From our experiences with the children, we came to the conclusion that they readily accept digital media, as they find it more interesting than their real-life toys and exercise materials. Engaging with digital media benefits their learning curve, but it reduces their motivation for social communication, as they feel more comfortable being alone. The teachers made an excellent point: if the digital media could also communicate, it would encourage them in social communication as well.
We have already deployed $``$Vuforia$"$ (see Figure~\ref{figVR}) to encourage users to detect letters in the real world, for example in books or on stickers, making them more interactive with their environment. In the future, we plan to add virtual agents to help them with speech exercises.
1,116,691,500,535
arxiv
\section{\label{sec:Introduction}Introduction} Neural networks have emerged as the current disruptive computational concept. When cascading multiple network layers, these systems set the benchmark in multiple challenging tasks \cite{LeCun2015}. In such \textit{deep} neural networks, layers are dedicated to highlighting specific aspects of the input information, and previous layers commonly serve as input of consecutive layers. Such a hierarchical arrangement is crucial for boosting the computational performance. In deep convolutional neural networks (CNN), layers convolve their input with spatial filters. By increasing filter width and step size, deeper layers focus on more general features, while local features are highlighted in earlier layers \cite{Krizhevsky2012ImageNetNetworks}. In the wake of deep neural networks' success, it was realized that their emulation on Turing / von Neumann machines is highly inefficient. This stimulated strong interest in the realization of neural networks in physical substrates whose architecture conforms to the networks' topology. Particularly photonic systems, which offer key advantages for parallelization, are considered a promising future alternative. However, directly mapping the complex topology of a deep neural network onto a hardware substrate presents a significant challenge. Of essential importance are therefore concepts which strike a balance between architectural complexity and hardware implementation simplicity. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{Scheme.pdf} \caption{\label{fig:Scheme} Schematic of cascaded nonlinear oscillators acting as a deep network, here consisting of two layers. Two coupled nonlinear delay systems with states $x_{1}(t)$ and $x_{2}(t)$ implement individual time delay reservoir layers. Information is injected into the first system, and nonlinear nodes are coupled instantaneously according to weights $w_{1,2}$ and $w_{2,1}$. The readout layer has access to all layers.} \vspace{-0.7cm} \end{figure} Among the various neural network architectures, reservoir computers \cite{Jaeger2004} have emerged as especially interesting theoretical model-systems \cite{Lu2018,Inubushi2017,Marzen2017} and promising candidates for hardware implementations. A reservoir computer is a complex recurrent neural network and conceptually corresponds to a high-dimensional nonlinear dynamical system. Training is restricted to the connections between the reservoir and its output, and hence the nonlinear dynamical system's topology remains constant. This strongly assists implementations in physical substrates, resulting in a large number of realizations in nonlinear photonic \cite{VanderSande2017} and other physical systems \cite{Tanaka2019RecentReview}. Yet, precisely this simplicity raised fundamental concerns regarding deep reservoirs. Recently it was found that, comparable to deep convolutional networks, a continuous change of \textit{spatial} frequency in the response of consecutive layers appears beneficial \cite{Gallicchio2016,Gallicchio2017}. The workhorse of the field has been nonlinear delay systems implementing time delay reservoirs (TDRs) \cite{VanderSande2017,Larger2017, Brunner2018}. These offer a compromise between good computing performance and exceptional ease of hardware implementation and serve as model-systems for more complex hardware substrates \cite{Shen2016, Bueno2018, Lin2018}.
We report on a deep reservoir scheme comprising hierarchically coupled nonlinear delay oscillators exhibiting dynamics on multiple timescales. Crucially, coupling between different layers is constant and training remains limited to the readout weights, in contrast to a proposed deep hardware TDR \cite{Nakajima2018}. This is an essential simplification as it adheres to the conceptual-simplicity motive, which strongly fosters hardware implementation. We find that cascading significantly and qualitatively improves computational performance when compared to a single-layer reservoir of identical size. Crucially, our architectural simplicity curbs the challenges particular to physically implementing complex and large networks. In Fig. \ref{fig:Scheme}, we schematically illustrate our deep TDR concept. Dynamics are governed by the following set of equations: \begin{align} &\tau_{i} \dot{x}_{i}(t) = -x_{i}(t) - \delta_{i}y_{i}(t) + \beta_{i}\sin^{2}[ d_{i}(t) + b_{i} ] \label{eq:DDIE} \\ &\dot{y}_{i}(t) = x_{i}(t) \label{eq:Int} \\ &d_{i}(t) = x_{i}(t-\tau_{Di}) + \sum_{p=\pm 1} w_{i+p,i}x_{i+p}(t) + \rho_{i} u(t) \label{eq:Arg} \\ &\rho_{i\neq 1} = 0, \delta_{1} = 0. \label{eq:input} \end{align} The state of the delay-coupled node in layer $i\in \{ 1, \cdots, I \}$ is given by $x_{i}(t)$, and we use the $\sin^{2}$-nonlinearity often employed in photonic TDRs \cite{Larger2012,Paquot2012}. Due to inertia, dynamics generally experience low-pass (LP) filtering according to a fast time constant $\tau_{i}$, which can be extended to band-pass (BP) filtering when a slow time constant $\delta_{i}$ is added \cite{Larger2013}. Each layer's nonlinearity is weighted by the bifurcation parameter $\beta_{i}$, and the nonlinearity's argument contains a constant bias $b_{i}$ and a time-dependent drive $d_{i}(t)$, see Eq. \eqref{eq:Arg}. Drive $d_{i}(t)$ features self-feedback delayed by $\tau_{Di}$ and potentially bidirectional coupling to adjacent layers according to coefficients $w_{i\pm 1, i}$. Only the first layer is coupled to $u(t)$, see Eq. \eqref{eq:input}. External drive $u(t)$ encodes the information to be processed, $s(t)$, according to the temporal masking procedure, which implements a linear matrix multiplication \cite{Appeltant2011, Brunner2018}. We have employed a de-synchronized information injection procedure in which each value of $s(t)$ is kept for an input-masking length of $0.8\cdot\tau_{Di}$. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{Spatio-temporal.pdf} \caption{\label{fig:Spatio-temporal} Neuron responses ($x^{\sigma_{i}}_{i}(n)$) found in layers $i=1$ (a), $i=2$ (b) and $i=3$ (c), illustrated in a spatio-temporal ($\sigma_{i},n$) representation. The spatial frequency along virtual space $\sigma_{i}$ continuously decreases for the higher layers. Comparable functionality is implemented in deep convolutional networks or deep reservoirs.} \vspace{-0.75cm} \end{figure} According to Eq. \eqref{eq:Arg}, layer $i$ is coupled to layer $i+1$ ($i-1$) according to the fixed connection weight $w_{i+1,i}$ ($w_{i - 1,i}$), and coupling to $i - 1 = 0$ is unphysical and hence eliminated. Therefore, a recurrent layer simply consists of one hardware nonlinearity, one linear delay line, and their fixed connections to the previous and consecutive layers. This has multiple consequences. First, inter-layer coupling is instantaneous and constant in time.
Training of the inter-layer connections, a long-time open question for deep reservoirs \cite{Gallicchio2017} and a significant challenge for full hardware integration \cite{Antonik2016}, is therefore avoided. Second, such a minimal complexity architecture \cite{Soriano2015} can readily be implemented in hardware \cite{Tanaka2019RecentReview}. Finally, it allows establishing a clear mapping from deep TDRs onto deep convolutional neural networks. The fact that TDR-layers can be termed \textit{convolutional} originates from a nonlinear dynamical node's response to perturbations. The state of a nonlinear node in layer $i$ is given by the convolution between its impulse response function $h_{i}(t)$ and its drive $d_{i}(t)$. Combined with a normalization of continuous time $t$ by feedback delay $\tau_{Di}$, one can express the dynamical evolution by \begin{align} \frac{t}{\tau_{Di}} &= n + \sigma_{i} / N_{i}, \sigma_{i}\in \{1, N_{i}\}, n\in \{ 1, 2, \dots \}, \label{eq:time} \\ x_{i}^{\sigma_{i}}(n) &= \int_{-\infty}^{n+\sigma_{i}}h_{i}\left(n+\sigma_{i}-\xi\right) \sin^{2}\left[d_{i}(\xi-1) + b_{i}\right] d\xi \label{eq:convolution} , \end{align} with $N_{i}$ as the number of neurons in layer $i$, see Fig. \ref{fig:Scheme}. Firstly, Eqs. \eqref{eq:time} and \eqref{eq:convolution} map continuous time $t$ onto discrete time $n$ and node $\sigma_{i}$'s position relative to delay time $\tau_{Di}$. Details of this temporal embedding technique can be found in \cite{Arecchi1992, Larger2017,Brunner2018}. Secondly, expressing the dynamical evolution via the convolution operation shows that a node's impulse response function corresponds to the convolution kernels of a CNN-layer. Crucially, coupling created with such a dynamical convolution can directly be translated to the convolution kernel of spatio-temporal networks \cite{Brunner2018,Hart2018}. \begin{figure*}[ht] \includegraphics[width=0.85\textwidth]{Results.pdf} \caption{\label{fig:ResultsMG} Coupling strongly enhances the network's performance for predicting the chaotic Mackey-Glass timeseries by $\Delta{n}=34$ steps into the future. (a) The uncoupled system (both systems receive the input sequence $u(t)$) ($w_{1,2}=w_{2,1}=0$) achieves NMSE=$8.3\cdot10^{-6}$. (b) Bidirectional coupling ($w_{1,2}=0.7$, $w_{2,1}=0.6$) results in no improvement (NMSE=$8.8\cdot10^{-6}$). (c) The decisively best performing architecture is the unidirectional coupling between the recurrent layers, i.e. feed-forward connections ($w_{1,2}=1.4$, $w_{2,1}=0$): NMSE=$1.3\cdot10^{-6}$.} \vspace{-0.5cm} \end{figure*} The analogy between cascaded TDRs and deep convolutional networks goes further. Layers of a CNN commonly feature convolution kernels whose width increases the further back in the cascaded hierarchy a layer is located \cite{Krizhevsky2012ImageNetNetworks}. This operation is often associated with generalization: convolution with wider filters reduces the importance of local features in their input, while more general aspects are highlighted. The cascaded arrangement of layers in CNNs therefore produces layers which accentuate different input information features. In TDRs, increasing the convolution kernel's width corresponds to widening $h_{i}(t)$, see \cite{Supp}. Here this is realized by an additional low-frequency cut-off according to timescale $\delta_{i}$ in Eqs. \eqref{eq:DDIE} and \eqref{eq:Int}, and we enforce widening kernels. In Fig. \ref{fig:Spatio-temporal} we show the response of a three-layer deep TDR driven by the chaotic Mackey-Glass sequence.
Each sample corresponds to $\delta t=1$ time-step of the Mackey-Glass system, for which we used the same parameters as in \cite{Jaeger2004}. Parameters are $\beta_{2,3}=1.1$, $\tau_{1}=6\cdot{10}^{-3}$, $\tau_{2,3}=7\cdot{10}^{-3}$, $\tau_{Di}=17.85$, $\Phi_0=0.2$, $\delta_{2,3}=0.01$, $w_{1,2}=0.7$, $w_{2,3}=0.8$, $w_{2,1}=w_{3,2}=0$. Responses are plotted in spatio-temporal representation \cite{Arecchi1992,Marino2018}, where nodes are arranged along $\sigma_{i}$ and the temporal evolution is along discrete time $n$, with $n$ typically close to a system's delay $\tau_D$ \cite{Brunner2018}. As we move into higher layers, from (a) to (c) in Fig. \ref{fig:Spatio-temporal}, the dynamics highlight different spatio-temporal scales. Our deep TDR therefore hosts features much like those taken into consideration in the design of CNNs. Creating a computational result requires connecting the deep TDR to an output via weights adjusted during learning. Our readout layer has access to all virtual nodes of all network layers, and the system's output is created according to \begin{equation} y_{j}^{out}(n) = \sum_i^{I} \sum_{\sigma_{i}}^{N_{i}} W_{i,\sigma_{i}, j}^{out} x_{i}^{\sigma_{i}}(n). \end{equation} Here, $j$ is the dimension of the system's output, which depends on the particular task. Common methods to obtain $\mathbf{W}^{out}$ are based on linear or ridge regression, and $W^{out}$ is optimized using a representative set of training data \cite{Jaeger2004,Brunner2018}. In experimental systems, these methods can be implemented in auxiliary hardware like field-programmable gate arrays \cite{Hermans2016}, or can to a degree be replaced by Boolean learning algorithms \cite{Bueno2018}. Recurrent neural networks are primarily relevant for processing temporal information. We therefore task the system to predict chaotic sequences $\Delta{n}$ timesteps into the future. Training optimizes $W^{out}$ for $y^{out}(n)$ to approximate target $y^{T}(n) = s(n + \Delta{n}), n\in \{1, n^{T} \}$, where $n^{T}=5000$ is the number of samples used for training. We quantify the prediction's quality for $n>n^{T}$, hence on testing data not used for training the system, according to the normalized mean square error $NMSE = 1 / n^{T} \sum_{n=1\dots n^{T}} (y^{T}(n) - y^{out}(n))^{2} / (\sigma^{T})^{2}$, where $\sigma^{T}$ is the target-signal's standard deviation. First, we predict the chaotic Mackey-Glass delay equation, which features a delay of 17 timesteps. Predicting ahead by twice this delay ($\Delta{n}=34$), we target long-term prediction. We establish a systematic interpretation by cascading only two TDR layers ($N_{i}$=600) and display the performance dependence on the exhaustively scanned system parameters in Fig. \ref{fig:ResultsMG}. We keep $\tau_{1}=0.6\cdot{10}^{-3}$, $\tau_{2}=0.6\cdot{10}^{-3}$, $\tau_{D1,2}=12$, $b_{1,2}=0.2$, $\rho_{1}=8$ and $\delta_{2}=0.01$ constant, with their values mostly based on empirical observations. In order to provide a baseline-reference for other topologies, we evaluate uncoupled layers ($w_{1,2}=w_{2,1}=0$) and scan the bifurcation parameter-plane ($\beta_{1}, \beta_{2}$), see Fig. \ref{fig:ResultsMG}(a). Importantly, for this test we set $\rho_{2}=\rho_{1}$ and hence couple the BP-layer to the same input as the LP layer. We find a clear optimum for $\beta_{1}$, while the performance dependence on $\beta_{2}$ is less pronounced. The lowest error (NMSE=$8.3\cdot10^{-6}$) is obtained at $\beta_{1}=1.4$ and $\beta_{2}=1.2$.
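For concreteness, the ridge-regression training of $W^{out}$ and the NMSE used in these scans can be sketched in a few lines of Python. This is only an illustration under assumptions: the array shapes, the function names, and the regularization strength are ours, not those of the actual implementation.
\begin{verbatim}
import numpy as np

def train_readout(states, target, ridge=1e-8):
    # states: (T, N_total) array collecting the virtual-node responses
    # x_i^sigma(n) of all layers; target: y^T(n) = s(n + Delta_n).
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                           S.T @ target)

def nmse(y_out, y_target):
    # Normalized mean square error as defined in the text.
    return np.mean((y_target - y_out) ** 2) / np.var(y_target)

# e.g. with Delta_n = 34 and n^T = 5000 training samples:
# W_out = train_readout(states[:5000], s[34:5034])
# error = nmse(states[5000:-34] @ W_out, s[5034:])
\end{verbatim}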
We now turn to different coupling topologies and disconnect the second layer from the system's input information ($\rho_{2}=0$, $w_{1,2}=0.7$, $w_{2,1}=0.6$). Figure \ref{fig:ResultsMG}(b) shows that bidirectional coupling significantly alters the optimal bifurcation parameters and results in an equally pronounced $\beta_{2}$ dependency. We obtain NMSE=$8.8\cdot10^{-6}$ at $\beta_{1}=1.4$ and $\beta_{2}=1.2$, and the performance benefit of bidirectional coupling is negligible. Continuing with the optimized value of $\beta_{i}$, we focus on the coupling topology by exhaustively scanning $w_{1,2}$ and $w_{2,1}$, see Fig. \ref{fig:ResultsMG}(c). The NMSE reveals some performance sensitivity to the coupling strength from the first to the second layer. The most important finding is, however, that there is a systematic dependency upon $w_{2,1}$: the clear global performance optimum is found for unidirectional coupling with $w_{2,1}=0$. The achieved prediction error (NMSE=$1.3\cdot10^{-6}$) is $\sim{3}$ times smaller than for the bidirectional and the uncoupled systems, confirming the benefit of the hierarchical arrangement between consecutive network layers also for TDRs. To further generalize our finding, we turn to predicting the chaotic Lorenz system. The Lorenz system is a three-dimensional set of ordinary differential equations. Each sample corresponds to $\delta t=$0.02 time-steps, and we used the same parameters as in \cite{Lu2017}. The input information was the Lorenz system's first dimension $x(n)$, and the prediction target was $y^{T}(n) = x(n + 1)$, hence $\Delta{n}=1$. Results are listed in Tab. \ref{tab:Lorenz}. Prediction performance is again enhanced by the addition of two layers in a unidirectional configuration. However, at first glance the positive benefit appears to be smaller. Until now, prediction only evaluated the system via predicting ahead by distance $\Delta{n}$. A more suited approach to determine the capacity of approximating a chaotic system's behavior is based on the so-called teacher forcing \cite{Jaeger2004}. After training using $\Delta{n}=1$, the system's input becomes its own output, $s(\tilde{n}) = y^{out}(\tilde{n}-1), \tilde{n} = n - n^{T}, n>n^{T}$. The TDR becomes an autonomous predictor of the learned system \cite{Jaeger2004}, and the autonomous evolution enables comparison to the original chaotic sequence over long intervals. Crucially, this corresponds to predicting until $\tilde{n}$ while relying only on information of the original signal at $n^{T}$; the prediction autonomously advanced from there. This reveals how well the chaotic system as a whole is approximated by the neural network. Figure \ref{fig:FBFOrcing} (a) and (c) show the autonomous evolution for Lorenz and Mackey-Glass prediction using three cascaded TDR-layers with unidirectional coupling. The prediction targets are the black solid data. The positive impact of deep (red dashed data) over single-layer (blue dotted data) TDRs is apparent, and particularly striking when predicting the Lorenz system, see Fig. \ref{fig:FBFOrcing}(a). Rather than chaotic excursions along an attractor, the autonomous single-layer TDR quickly converges to a dynamical state resembling a limit-cycle and therefore fails to reproduce its target system. Only with the three layers coupled in a deep, uni-directional topology is the network capable of an excellent approximation of Lorenz chaos.
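The closed-loop operation itself can be sketched as follows; here \texttt{step} is a hypothetical stand-in for integrating Eqs. \eqref{eq:DDIE}--\eqref{eq:input} over one input sample and returning the concatenated virtual-node states, and $W^{out}$ is the readout trained with $\Delta{n}=1$.
\begin{verbatim}
import numpy as np

def autonomous_run(step, W_out, u0, n_steps):
    # After training with Delta_n = 1, each prediction is fed back
    # as the next input s(n), making the evolution autonomous.
    u, outputs = u0, []
    for _ in range(n_steps):
        states = step(u)           # drive the reservoir with input u
        u = float(states @ W_out)  # the prediction becomes the input
        outputs.append(u)
    return np.array(outputs)
\end{verbatim}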
This improvement is also visible from the temporal divergence, measured as the Euclidean distance between the Takens-reconstructed attractors of $\mathbf{y}^{out}(\tilde{n})$ and $\mathbf{y}^{T}(\tilde{n})$, see Fig. \ref{fig:FBFOrcing}(b) and (d). The solid black lines indicate the divergence according to the maximum Lyapunov exponent (Mackey-Glass: $\lambda_{max}=5.8\cdot10^{-3}$, Lorenz: $\lambda_{max}=0.91$). Cascading layers improves prediction by a factor of 20 and 10.5 for Lorenz and Mackey-Glass prediction, respectively. The substantial improvement and fundamental importance of the cascaded, 3-layer deep TDR architecture can be further appreciated by inspection of the resulting return maps, see \cite{Supp}. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|} \hline Nodes per layer & Coupling strength & LZ NMSE\tabularnewline \hline \hline 1200 lp & -- & $7.6\cdot10^{-7}$\tabularnewline \hline 600 lp, 600 bp & $w_{\ensuremath{1,2}}=1.1$ & $5.7\cdot10^{-7}$\tabularnewline \hline 400 lp, 400 bp, 400 bp & $w_{\ensuremath{1,2}}=w_{\ensuremath{2,3}}=1.1$ & $2.5\cdot10^{-7}$\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:Lorenz} Comparison of different architectures with an identical total number of neurons $N=1200$. Layers are lp=low-pass, bp=band-pass. LZ: Lorenz chaotic time series one-step prediction; parameters: $\tau_{1}=$ 0.006, $\tau_{2}=\tau_{3}=0.007$, $\delta_{2}=\delta_{3}=0.01$, $\beta_{1}=1.5$, $\beta_{2}=\beta_{3}=1.2$.} \vspace{-0.5cm} \end{table} We shall finish our investigation by also discussing the limitations of our approach. The range of possible kernel shapes is limited by physical constraints and has not been optimized during training, though this is possible in principle. Also, deep TDRs do not yet reach the accuracy of the original spatio-temporal reservoir \cite{Jaeger2004}. Predicting the Mackey-Glass timeseries 84 steps into the future results in NMSE=$10^{-4.4}$ with our deep TDR, while the original reservoir achieves NMSE=$10^{-8.4}$ \cite{Jaeger2004}. However, multiple simple additions to the current concept could still significantly improve performance \cite{Martinenghi2012, Grigoryeva2014}. Using current high-performance hardware \cite{Jouppi2017}, CNNs still run five times slower than TDRs \cite{Larger2017}. However, CNNs are optimized via back-propagation, which will certainly result in lower errors than deep TDRs. Whether error back-propagation can be realized in deep hardware networks remains questionable, while training of our system retains the simplicity and elegance of reservoir computing. To conclude, we have introduced an elegant scheme for deep convolutional networks in a simple architecture of coupled nonlinear oscillators with delay. Information processing conditions conceptually comparable to deep convolutional neural networks with widening convolution kernels are achieved by cascading TDRs with increasingly longer internal timescales. Intra- and inter-layer connectivity can be adjusted via the oscillators' time scales, providing a practical control mechanism for hardware realizations. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{SelfForced.pdf} \caption{\label{fig:FBFOrcing} (Colour online) When connecting the system to its own predicted output at $\tilde{n}=1$, its dynamical evolution becomes autonomous from the original chaotic timeseries. The top x-axis is in units of the inverse Lyapunov exponent.
Long-term prediction performance for the Lorenz (a) and Mackey-Glass (c) systems, with the connected (not connected) system shown as red dashed (blue dotted) data. The divergence between the predicted and the original attractors is shown in (b) and (d) for Lorenz and Mackey-Glass, respectively. The solid line indicates divergence according to the largest Lyapunov exponent.} \vspace{-0.5cm} \end{figure} Applied to both Mackey-Glass and Lorenz chaos prediction, our concept significantly improves the quality of long-term predictions and proves essential in the case of Lorenz forecasting. Recently, reservoirs have been demonstrated to infer a chaotic oscillator's hidden degrees of freedom \cite{Lu2017} and to predict the evolution of chaotic spatio-temporal systems far into the future \cite{Pathak2018}. Temporal structure found in the divergence between prediction and target, such as in Fig. \ref{fig:FBFOrcing}(d), could be addressed via further optimizing timescales $\tau_{i}$ and $\delta_{i}$. Finally, we would like to point out the large variety of possible hierarchical TDR networks. Hybrid systems, where self-feedback is removed for some or all layers, would incorporate feed-forward architectures \cite{Ortin2015}. Layers featuring excitable solitons can potentially create long-term memory \cite{Romeira2016} and, when combined with the reported LP and BP layers, physically implement long short-term memory networks \cite{Hochreiter1997}. This opens possibilities in new domains like natural language processing and sequence generation. This work was supported by the EUR EIPHI program (Contract No. ANR-17-EURE-0002), by the BiPhoProc ANR project (No. ANR-14-OHRI-0002-02), by the Volkswagen Foundation NeuroQNet project and the ENERGETIC project of Bourgogne Franche-Comt\'{e}. X.P. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 713694 (MULTIPLY).
1,116,691,500,536
arxiv
\section{Method}\label{algorithm} We now introduce our feature selection scheme. We call this method \emph{FOS}~({\it F}ast and {\it O}ptimal {\it S}election) for convenient reference in the paper. A heuristic version of the method is stated in Algorithm~\ref{heuristics}. The algorithm contains three main parts: First, for a given tuning parameter, optimization steps are applied until the computational precision is sufficient. Second, a test determines the optimal stopping point along the tuning parameter path. Third, an estimate of the support is computed by thresholding the current regression vector. On a high level, our method is simply a roughly computed Lasso estimate with a subsequent thresholding of the elements. However, the difficult tasks are to answer the three following questions that correspond to the three parts of the heuristic: What computational precision is required, and how can we check that it is reached? How can we test for an optimal stopping point along the tuning parameter path? What is the optimal value for thresholding? None of these questions is answered in the literature. Algorithm~\ref{algo} contains the precise version of the method. We now show how it solves the three tasks above. For this, we first disentangle two different kinds of quantities: We identify with hats~$\hat{}$ quantities that refer to statistical estimators. Typically, these are merely theoretical quantities; for example, in finite time, we cannot exactly compute a Lasso solution $\ensuremath{\hat\beta^\tuningparameter}$. On the other hand, we identify with tildes~$\tilde{}$ quantities that result from algorithms. These are the quantities of most practical relevance. We use the concept of the duality gap to ensure that the computational precision is sufficient. For a given tuning parameter $\ensuremath{r}$, a dual formulation of the Lasso problem~(\ref{eq:lasso}) reads \cite{BJMO11} \begin{equation}\label{eq:dual_lasso} \ensuremath{\hat\nu^\tuningparameter}:= ~\operatornamewithlimits{argmax}_{\nu\in\ensuremath{\R^n}} ~~ D(\nu,\ensuremath{r}) \text{~~~~~subject to}\; \normsup{X^\top\nu} \leq 1~, \end{equation} where $D(\nu,\ensuremath{r}) := -\ensuremath{r}^2\normtwo{\nu+2Y/\ensuremath{r}}^2/4+\normtwo{Y}^2\,.$ For any regression vector $\ensuremath{\tilde\beta}$ and any dual feasible variable $\ensuremath{\tilde\nu}$ of (\ref{eq:dual_lasso}), the duality gap is $\dgap{\ensuremath{\tilde\beta}}{\ensuremath{\tilde\nu}}:=f(\ensuremath{\tilde\beta},\ensuremath{r})-D(\ensuremath{\tilde\nu},\ensuremath{r}),$ and it is well known~\cite{Borwein2010} that $\dgap{\ensuremath{\tilde\beta}}{\ensuremath{\tilde\nu}}$ is an upper bound on $f(\ensuremath{\tilde\beta},\ensuremath{r})-f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})$. This upper bound ensures that the required precision is reached. Importantly, we do not need to solve the dual problem of the Lasso; instead, we only require a dual point, which can be found with an explicit expression. We refer to Appendix~B for details.
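For concreteness, such a gap computation can be sketched in a few lines of Python (a minimal sketch, not our \texttt{MATLAB}\textsuperscript{\textregistered} implementation): it assumes the Lasso objective $f(\ensuremath{\tilde\beta},\ensuremath{r})=\normtwo{Y-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}$ and uses the explicit rescaling of the residual derived in Appendix~B; variable names are illustrative.
\begin{verbatim}
import numpy as np

def duality_gap(X, Y, beta, r):
    # Duality gap G(beta, nu) = f(beta, r) - D(nu, r) for the Lasso,
    # with nu = 2*s*(X beta - Y)/r rescaled into the feasible set
    # {nu : ||X^T nu||_inf <= 1}.
    res = X @ beta - Y
    grad_inf = np.max(np.abs(2.0 * X.T @ res))
    s = np.clip(-(Y @ res) / (res @ res), -r / grad_inf, r / grad_inf)
    nu = 2.0 * s * res / r
    primal = res @ res + r * np.sum(np.abs(beta))   # f(beta, r)
    dual = -(r ** 2 / 4.0) * np.sum((nu + 2.0 * Y / r) ** 2) + Y @ Y
    return primal - dual                            # f - D >= 0
\end{verbatim}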
The stopping point on the tuning parameter path is determined via \ensuremath{\operatorname{AV}_\infty}-tests~\cite{Chichignoud_Lederer_Wainwright14}. Theorem~\ref{thm:optimality} in the following section (more precisely, its bounds in sup-norm and its bound on the resulting tuning parameter) provides a proper value for the thresholding. Note that no similar results are known for Cross-Validation, BIC, AIC, or other standard calibration schemes, so that a theoretically justified thresholding procedure is not available for these methods. \begin{algorithm}[!ht]\vspace{1mm} \SetKwInOut{Input}{Inputs}\SetKwInOut{Output}{Output} \Input{$Y\in\ensuremath{\R^n}$; $X\in \ensuremath{\R^{n\times p}}$; $\ensuremath{r}_1=\ensuremath{r}_{\max} > \ensuremath{r}_2 > \ldots > \ensuremath{r}_M>0$; $\re>0$; $\ensuremath{c}>0$} \Output{$\ensuremath{\tilde S}$} \BlankLine \textbf{Initialization :} \texttt{statsCont:=true}; \texttt{statsIt:=}1; $\ensuremath{\tilde\beta}^{\ensuremath{r}_1}:=0$; $\widetilde\ensuremath{r}:=\ensuremath{r}_M$\; \While{\texttt{statsCont==true} \texttt{AND} $\texttt{statsIt}\,<M$}{ \texttt{statsIt:=statsIt+1}\; \texttt{stopCrit:=false}\; \texttt{betaOld:=}$\ensuremath{\tilde\beta}^{\ensuremath{r}_{\texttt{statsIt}-1}}$\; \While{\texttt{stopCrit==false}}{ Compute a dual feasible point $\ensuremath{\tilde\nu}^{\ensuremath{r}_\texttt{statsIt}}$ of Problem (\ref{eq:dual_lasso})\; Compute the duality gap $\dgap{\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}}}{\ensuremath{\tilde\nu}^{\ensuremath{r}_\texttt{statsIt}}}$\ \eIf{$\dgap{\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}}}{\ensuremath{\tilde\nu}^{\ensuremath{r}_\texttt{statsIt}}}\leq \,\re\ensuremath{c}^2\ensuremath{r}_{\texttt{statsIt}}^2/\ensuremath{n}$}{ $\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}}$\texttt{:=betaOld}\; \texttt{stopCrit:=true}\; }{ $\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}}\texttt{:=}\ensuremath{\mathcal{T}}_{\ensuremath{r}_\texttt{statsIt}/L}\left(\texttt{betaOld} - 2 X^\top(X\cdot\texttt{betaOld}-Y)/L \right)$\; \texttt{betaOld:=}$\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}}$\; } } \texttt{statsCont:=}$\prod_{k=1}^{\texttt{statsIt}} \mathds{1}\left\{\normsup{\ensuremath{\tilde\beta}^{\ensuremath{r}_\texttt{statsIt}} - \ensuremath{\tilde\beta}^{\ensuremath{r}_k}}/(\ensuremath{r}_\texttt{statsIt} + \ensuremath{r}_k) - \ensuremath{c}/\ensuremath{n} \leq 0\right\}$\; } \If{\texttt{statsCont==false}}{ $\widetilde{\ensuremath{r}}:=\ensuremath{r}_{\texttt{statsIt}-1}$\; } $\ensuremath{\tilde S}$:=$\{j\in\ensuremath{\{1,\dots,p\}}\,:\,|\ensuremath{\tilde\beta}_j^{\widetilde\ensuremath{r}}|\geq 6\ensuremath{c}{\widetilde\ensuremath{r}}/\ensuremath{n}\}$ \caption{FOS Scheme for Feature Selection in Linear Regression}\label{algo} \end{algorithm} \clearpage As initialization, we choose the all-zeros vector in $\ensuremath{\R^p},$ reflecting our assumption that many entries of the true regression vector are close to zero. As the optimization algorithm, one could select proximal gradient descent, coordinate descent, or other techniques. We opted for the first one; the corresponding updates in Line~13 then read \begin{equation*} \ensuremath{\tilde\beta} \mapsto \ensuremath{\mathcal{T}}_{\frac{\ensuremath{r}}{L}}(\ensuremath{\tilde\beta} - \frac{2}{L}X^\top(X\ensuremath{\tilde\beta}-Y))\,, \end{equation*} where $\ensuremath{\mathcal{T}}$ is the elementwise soft-threshold operator defined by $\ensuremath{\mathcal{T}}_b(a)_j := \sign(a_j)\max(|a_j|-b,0)$ for $j\in\ensuremath{\{1,\dots,p\}}\,,$ and where $L>0$ is the step size determined by backtracking. In our data examples, we use the FISTA implementation~\cite{Beck09} in \texttt{MATLAB}\textsuperscript{\textregistered}.
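For illustration, a single update of Line~13, together with a crude usage example, can be written as follows. This is only a sketch: a fixed step size $L$ replaces the backtracking of our implementation, and the data are synthetic.
\begin{verbatim}
import numpy as np

def soft_threshold(a, b):
    # Elementwise T_b(a)_j = sign(a_j) * max(|a_j| - b, 0).
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def ista_step(X, Y, beta, r, L):
    # One update of Line 13: T_{r/L}(beta - 2 X^T (X beta - Y)/L).
    return soft_threshold(beta - (2.0 / L) * (X.T @ (X @ beta - Y)),
                          r / L)

# Crude usage on synthetic data; a fixed L >= 2*||X||_2^2 replaces
# the backtracking used in the actual implementation.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
Y = X[:, 0] + 0.1 * rng.standard_normal(50)
L = 2.0 * np.linalg.norm(X, 2) ** 2
beta = np.zeros(200)
for _ in range(100):
    beta = ista_step(X, Y, beta, r=2.0, L=L)
\end{verbatim}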
Since we are limited to finitely many computations in practice, we consider finite sequences $r_1=r_{\max}>r_2>\dots>\ensuremath{r}_M=\ensuremath{r}_{\min}>0.$ The concrete choice follows the ones used in standard implementations~\cite{Hastie10}: we use a logarithmically spaced grid of size $M=100,$ set $\ensuremath{r}_{\max} := 2\normsup{X^\top Y}$ to the smallest tuning parameter such that $\ensuremath{\hat\beta^\tuningparameter}=0$, and define $\ensuremath{r}_{\min} := \ensuremath{r}_{\max}/u$ as a fraction of $\ensuremath{r}_{\max}$. Standard choices for $u$ range from $100$ to $10'000.$ On a very high level, $\ensuremath{r}_{\max}/\ensuremath{r}_{\alpha}\approx \normsup{X^\top Y}/\normsup{X^\top \varepsilon}\approx n\normone{\ensuremath{\beta}}/\sqrt{n}\approx \sqrt{n}.$ To ensure that $\ensuremath{r}_{\min}< \ensuremath{r}_{\alpha}$ on our data sets, we thus select $u:=1000.$ Finally, note that our theoretical results hold for any type of grid (also for continuous ranges of \ensuremath{r}), and because of the warm starts and the early stopping, the computational complexity of FOS depends only very mildly on~\mbox{$M$ and $\ensuremath{r}_{\min}.$} Theory finally provides precise guidance on the constants $\ensuremath{c}$ and $\re$. Indeed, recall that $\ensuremath{c}$ and $\re$ are {\it not} tuning parameters, but instead dimensionless constants that specify the model assumptions. Therefore, for all practical purposes here, it is sufficient to hardcode the values $\ensuremath{c}=0.75$ and $\re=1$ that correspond to orthogonal design. (Note that in any case, feature selection needs the design to be nearly orthogonal~\cite{BiYuConsistLasso}.) However, for theoretical purposes, it is important to keep track of these constants. For example, estimators other than the Lasso might verify Condition~\ref{boundcon} with a different constant~$\ensuremath{c}$. Our results can then be transferred directly using this value of~$\ensuremath{c}$ in the algorithm.
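As an illustration, the grid construction and the early-stopping test of Algorithm~\ref{algo} can be sketched as follows; the function names are ours, and the test simply mirrors the product of indicators in the algorithm.
\begin{verbatim}
import numpy as np

def tuning_grid(X, Y, M=100, u=1000.0):
    # Logarithmically spaced grid from r_max = 2*||X^T Y||_inf
    # down to r_min = r_max/u.
    r_max = 2.0 * np.max(np.abs(X.T @ Y))
    return np.logspace(np.log10(r_max), np.log10(r_max / u), M)

def keep_going(betas, rs, t, c, n):
    # Pairwise sup-norm test: continue along the path while
    # ||beta^{r_t} - beta^{r_k}||_inf / (r_t + r_k) <= c/n for all k.
    return all(np.max(np.abs(betas[t] - betas[k])) / (rs[t] + rs[k])
               <= c / n for k in range(t + 1))
\end{verbatim}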
Computationally, FOS has two advantages: First, only a part of the tuning parameter path needs to be computed, more precisely, only the part with large and moderate tuning parameters. Second, only very rough computations are required; in particular, since a large tolerance can be accepted for large tuning parameters, only very small numbers of optimization cycles (in practice, often zero to five) are required per tuning parameter. Statistically, our scheme has optimal accuracy for variable selection. Indeed, we prove that FOS provides the same statistical guarantees (up to a small factor) as if we computed the Lasso to convergence with the optimal, but in practice unknown, tuning parameter. In the following section, we establish the statistical optimality of our approach, see Theorem~\ref{thm:optimality}. After that, we demonstrate its empirical performance. \section{Discussion}\label{discussion} In view of the theoretical and empirical evidence provided above, FOS is a competitive approach to feature selection with large and high-dimensional data. The underlying theoretical results are precise guarantees for\vspace{-4mm} \begin{itemize} \item the computational accuracy needed along the tuning parameter path;\vspace{-4mm} \item the stopping criterion for the tuning parameter selection;\vspace{-4mm} \item the thresholding value.\vspace{-4mm} \end{itemize} For standard methods, no comparable guarantees are available. Of interest for further research are screening rules for FOS. For the Lasso path, a variety of such rules have been developed and included in popular software such as~\texttt{glmnet}~\cite{Ghaoui2010,Fercoq2015,Tibshirani12,Xiang2014}. Another direction for further research is the application to other ``base'' estimators. In this study, we have focused on the Lasso as the starting point. However, other methods, such as SCAD or MCP, satisfy similar $\ell_\infty$-bounds~\cite{Negahban12} and thus could also fit our framework. Finally, it would be of interest to study further the hypotheses needed for the theoretical results. Currently, the theory relies on strict assumptions on the design matrix. These assumptions are due to our focus on $\ell_\infty$-estimation and support recovery, which are possible only under strict conditions~\cite{BiYuConsistLasso}. However, it would be interesting to consider extensions to other tasks, such as $\ell_2$-estimation and prediction, that are possible under weaker assumptions on the design. Moreover, since the correlations are assumed small in our context, our theoretical and empirical results allow us to set~$\ensuremath{c}=0.75$ and $\re=1$. At this point, however, it is unclear whether these are still appropriate choices if the correlations are large. \section{Underlying optimization results} Our approach estimates the computational accuracies of iterative optimization steps. For this, we invoke Fenchel duality~\cite{Borwein2010}, which is used for many learning tasks in which the objective function can be split into a convex, differentiable fitting term and a convex, possibly non-smooth regularization term~\cite{Rifkin2007}. The Fenchel conjugate, which is central to Fenchel duality, is defined by \begin{equation*} g^*(\eta) := \sup_{\omega\in\ensuremath{\R^n}} \left\{\eta^\top\omega -g(\omega)\right\} \end{equation*} for any function $g:\ensuremath{\R^n}\rightarrow[-\infty,\infty]$. \begin{theorem}\label{lemma:duality_gap} Let $\ensuremath{\tilde\beta}\in\ensuremath{\R^p}$ be an estimate of $\ensuremath{\hat\beta^\tuningparameter}$. (i) A feasible point of the dual problem of the Lasso is \begin{equation*}\label{eq:dual_feasible} \ensuremath{\tilde\nu} = \frac{2 s}{\ensuremath{r}}(X\ensuremath{\tilde\beta}-Y) \end{equation*} where \begin{equation*} s = \min\left\{\max\left\{\frac{-\ensuremath{r}}{\normsup{2X^\top(X\ensuremath{\tilde\beta}-Y)}},\frac{-Y^\top(X\ensuremath{\tilde\beta}-Y)}{\normtwo{Y-X\ensuremath{\tilde\beta}}^2}\right\},\frac{\ensuremath{r}}{\normsup{2X^\top(X\ensuremath{\tilde\beta}-Y)}}\right\}\,. \end{equation*} Moreover, (ii) a duality gap is \begin{equation*}\label{eq:duality_gap} \dgap{\ensuremath{\tilde\beta}}{\ensuremath{\tilde\nu}}=\normtwo{Y-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}} + \frac{\ensuremath{r}^2}{4}\normtwo{\ensuremath{\tilde\nu}+\frac{2Y}{\ensuremath{r}}}^2-\normtwo{Y}^2\,. \end{equation*} \end{theorem} \begin{proof}[Proof of Theorem~\ref{lemma:duality_gap}] The Lasso problem is equivalent to the following primal optimization problem \begin{equation*} \begin{aligned} \ensuremath{\hat\beta^\tuningparameter}\in& \mathop{\mathrm{arg\,min}}_{\ensuremath{\theta}\in\ensuremath{\R^p}} & &\hspace{-0.5cm} \{g(\omega) + \ensuremath{r}\normone{\ensuremath{\theta}}\} \\ & \text{subject to} & & \omega = X\ensuremath{\theta}\,, \end{aligned} \end{equation*} where $g(\omega):=\normtwo{Y-\omega}^2$ with Fenchel conjugate $g^*(\eta)= \normtwo{\eta+2Y}^2/4-\normtwo{Y}^2$.
The corresponding Fenchel dual problem then reads \cite{BJMO11} \begin{equation*} \begin{aligned} \ensuremath{\hat\nu^\tuningparameter}:=& \operatornamewithlimits{argmax}_{\nu\in\ensuremath{\R^n}} & & \hspace{-0.5cm} -g^*(\ensuremath{r}\nu) \\ & \text{subject to} & & \normsup{X^\top\nu} \leq 1\,, \end{aligned} \end{equation*} where $g^*(\ensuremath{r}\nu) = \ensuremath{r}^2\normtwo{\nu+2Y/\ensuremath{r}}^2/4-\normtwo{Y}^2$. A primal solution of the Lasso and the dual solution are linked by the relation \begin{equation*} 2(X\ensuremath{\hat\beta^\tuningparameter}-Y) = \ensuremath{r}\ensuremath{\hat\nu^\tuningparameter}\,. \end{equation*} Given a current primal estimate $\ensuremath{\tilde\beta}$, a dual feasible variable $\ensuremath{\tilde\nu}$ can be chosen as the closest (in $\ell_2$-norm) point to $-2Y/\ensuremath{r}$ proportional to $2(X\ensuremath{\tilde\beta}-Y)/\ensuremath{r}$. This yields $\ensuremath{\tilde\nu}=2 s (X\ensuremath{\tilde\beta}-Y)/\ensuremath{r}$\,, where $s$ is given by~\cite{Ghaoui2010} \begin{equation*} s = \min\left\{\max\left\{\frac{-\ensuremath{r}}{\normsup{2X^\top(X\ensuremath{\tilde\beta}-Y)}},\frac{-Y^\top(X\ensuremath{\tilde\beta}-Y)}{\normtwo{Y-X\ensuremath{\tilde\beta}}^2}\right\},\frac{\ensuremath{r}}{\normsup{2X^\top(X\ensuremath{\tilde\beta}-Y)}}\right\}\,. \end{equation*} This concludes the first part of the proof. Moreover, a duality gap for the Lasso problem is \begin{equation*} \dgap{\ensuremath{\tilde\beta}}{\ensuremath{\tilde\nu}}= g(X\ensuremath{\tilde\beta})+\ensuremath{r}\normone{\ensuremath{\tilde\beta}} + g^*(\ensuremath{r}\ensuremath{\tilde\nu}) = \normtwo{Y-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}} + \frac{\ensuremath{r}^2}{4}\normtwo{\ensuremath{\tilde\nu}+\frac{2Y}{\ensuremath{r}}}^2-\normtwo{Y}^2\,. \end{equation*} This concludes the second part of the proof. \end{proof} \section{Proof of Theorem 3.1} Having derived the oracle inequality stated in Theorem~\ref{thm:sup_norm_vdiff_main} and the computational accuracy stated in Theorem~\ref{lemma:duality_gap}, we are now ready to prove our main result. \begin{proof}[Proof of Theorem 3.1] Let $\ensuremath{r}\geq 2\ensuremath{r}_\alpha$, and let $(\ensuremath{\tilde\beta},\ensuremath{\tilde\nu})$ be any pair of Lasso primal-dual variables that satisfies $\dgap{\ensuremath{\tilde\beta}}{\ensuremath{\tilde\nu}}\leq\re\ensuremath{c}^2\ensuremath{r}^2/\ensuremath{n}$. According to our results above, it holds with probability at least $1-\alpha$ that \begin{equation*} \normsup{\ensuremath{\beta}-\ensuremath{\tilde\beta}}\leq \frac{2\ensuremath{c}\ensuremath{r}}{\ensuremath{n}}\,. \end{equation*} Since our scheme invokes the \ensuremath{\operatorname{AV}_\infty}-tests introduced in~\cite{Chichignoud_Lederer_Wainwright14}, we can now prove Theorem~3.1 along the same lines as~\cite[Theorem~1]{Chichignoud_Lederer_Wainwright14}. \end{proof} \section{Empirical performance}\label{numerics} We now demonstrate the computational efficiency and the empirical accuracy of FOS. To obtain a comprehensive overview, we consider a variety of settings, including synthetic data as well as biological and financial applications. In Section~\ref{finance}, we show the scalability of our method by analyzing a financial data set with more than 150'000 parameters. 
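Throughout the experiments below, the inner loop of FOS monitors the duality gap of Theorem~\ref{lemma:duality_gap} and stops as soon as the gap falls below the accuracy $\re\ensuremath{c}^2\ensuremath{r}^2/\ensuremath{n}$ used in the proof above. As a point of reference, the following \texttt{numpy} sketch spells out this computation. It is a schematic with illustrative function names (\texttt{dual\_point}, \texttt{fos\_inner\_loop}) and a plain proximal-gradient update, not the \texttt{SPAMS}-based implementation used in our experiments, and it assumes a nonzero residual $X\tilde\beta-Y$.
\begin{verbatim}
import numpy as np

def dual_point(X, Y, r, beta):
    """Dual feasible point of the duality-gap theorem for a candidate beta."""
    z = X @ beta - Y                              # residual X*beta - Y (assumed nonzero)
    denom = np.max(np.abs(2.0 * X.T @ z))         # ||2 X^T (X beta - Y)||_inf
    s = np.clip(-Y @ z / (z @ z), -r / denom, r / denom)
    return 2.0 * s / r * z

def duality_gap(X, Y, r, beta, nu):
    """Duality gap; it vanishes exactly at a Lasso solution for r."""
    z = Y - X @ beta
    return (z @ z + r * np.sum(np.abs(beta))
            + r**2 / 4.0 * np.sum((nu + 2.0 * Y / r)**2) - Y @ Y)

def fos_inner_loop(X, Y, r, beta0, c=0.75, re=1.0, max_iter=10_000):
    """Proximal-gradient (ISTA) updates on f(beta, r), stopped as soon as the
    gap drops below re * c^2 * r^2 / n (the accuracy used for Theorem 3.1).
    Schematic stand-in for the SPAMS-based optimizer."""
    n = X.shape[0]
    step = 1.0 / (2.0 * np.linalg.norm(X, 2)**2)  # 1/L for the gradient of ||Y-Xb||^2
    beta = beta0.copy()
    for _ in range(max_iter):
        nu = dual_point(X, Y, r, beta)
        if duality_gap(X, Y, r, beta, nu) <= re * c**2 * r**2 / n:
            break
        w = beta - step * (-2.0 * X.T @ (Y - X @ beta))   # gradient step
        beta = np.sign(w) * np.maximum(np.abs(w) - step * r, 0.0)  # soft-thresholding
    return beta
\end{verbatim}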
The application to even larger data is currently limited by the memory restrictions in \texttt{MATLAB}\textsuperscript{\textregistered}; a future \texttt{C/Fortran} implementation could remove this limitation. Equally important, however, is that the complexity often stems not from the size of a single data set but instead from the need to analyze a large number of data sets. In Section~\ref{lung}, for example, we learn a biological network with a neighborhood selection scheme. Each of the corresponding regressions comprises only 500 samples and 1000 parameters. However, since 1000 such regressions are needed, the computational complexity can easily render standard methods infeasible. Therefore, the efficiency in regressions with moderately sized data is also a main concern; we analyze such settings in Section~\ref{synthetic}, where we invoke synthetic data with up to 10'000 samples and parameters. We mainly compare FOS to the Lasso with Cross-Validation (LassoCV), because the latter is currently the most popular method. However, we found the same conclusions when comparing to other Lasso-based approaches, such as the Lasso combined with AIC or BIC. In particular, we rival or outmatch the feature selection performance of the Lasso combined with BIC, a method specifically designed for this task. We refer to the Appendix for the corresponding simulation results. On the other hand, non-convex approaches are presently not suited for our purposes. There has been recent progress regarding the computation of non-convex methods. For example, Wang et al.~\cite{Wang14} show that path-following algorithms can lead to more efficient computations of SCAD, MCP, and other non-convex methods, and Bien et al.~\cite{Lederer16} show how to solve a specific non-convex problem as a sequence of convex problems. However, despite such progress, non-convex methods remain computationally intractable for our data applications. All computations are conducted with \texttt{MATLAB}\textsuperscript{\textregistered} and are run on an E5-2640 \texttt{Intel}\textsuperscript{\textregistered} \texttt{Xeon}\textsuperscript{\textregistered} CPU (2.50GHz). FOS is implemented using the \texttt{SPAMS} package~\cite{BJMO11} coded in \texttt{C++}. We compare with two LassoCV implementations: First, we implement LassoCV analogously to FOS using the \texttt{SPAMS} package and a 10-fold Cross-Validation with warm starts. This implementation, called \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ in the following, is the most appropriate one for comparisons with the FOS implementation. However, much work has gone into efficient implementations of LassoCV. Therefore, we also use the well-known \texttt{glmnet} package~\cite{Hastie10} and call the corresponding implementation~\ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}. However, these results must be treated with some reservation, because \texttt{glmnet} cannot be calibrated to the same convergence criterion as our implementation. More precisely, the convergence criterion in \texttt{glmnet} needs to be specified in terms of the maximum change in the objective, which coincides neither with the criterion in our algorithm nor with the convergence of the estimator itself as required by the theory. One could also argue that comparing FOS with \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}\ is not fair in any case, because \texttt{glmnet} exploits additional geometric properties of the Lasso (such as screening rules). 
These additional properties could also be used in our scheme, but their implementation is deferred to future work. However, we demonstrate that even in its current version, FOS can outperform both \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ and~\ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}. \subsection{Synthetic data}\label{synthetic} In this section, we use synthetic data to demonstrate the empirical performance of FOS. To this end, we generate data of two different sizes from linear regression models with $\ensuremath{n}=500$ and $p=1000$ and with $\ensuremath{n}=5000$ and $p=10'000$, respectively. More specifically, we sample each row of the design matrix $X\in\mathbb{R}^{n\times p}$ independently from a $p$-dimensional normal distribution with mean~$0$ and covariance matrix $(1-\rho)\operatorname{I}_{p\times p}+\rho{\rm 1}\mskip -4,5mu{\rm l} _{p \times p},$ where $\operatorname{I}_{p\times p}$ is the identity matrix, ${\rm 1}\mskip -4,5mu{\rm l} _{p \times p}$ is the matrix of ones, and $\rho=0.3$ is the correlation among the variables. We then normalize the matrix $X$ so that its columns have Euclidean norm equal to $\sqrt n.$ The entries of the noise $\varepsilon\in\ensuremath{\R^n}$ are generated according to a one-dimensional standard normal distribution. The entries of $\ensuremath{\beta}$ are first set to $0$, except for $10$ entries, chosen uniformly at random, that are each set to $1$ or $-1$ with equal probability. The entire vector $\ensuremath{\beta}$ is then rescaled such that the signal-to-noise ratio ${\|X \ensuremath{\beta}\|_2^2/\ensuremath{n}}$ is equal to 5. We summarize the results in Table~\ref{tab:time_hamming}. The computational efficiency is measured as the average run time in seconds; the statistical accuracy is measured as the average Hamming distance, which is the sum of the number of false positives and the number of false negatives. We observe that FOS outperforms \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ and \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}\ both in computational efficiency and in statistical accuracy. While we restrict the presentation to two settings, we found the same conclusion for a wide spectrum of parameters; we refer to the Appendix for additional results. \begin{table}[!hb] \caption{Average run times (in seconds) and average Hamming distances for \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}, \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}, and FOS. For the larger data set, \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ timed out on our machine, which means that it took more than one hour on average. \label{tab:time_hamming}} \centering \vspace{.5\baselineskip} \begin{tabular}{lllll} \toprule & \multicolumn{2}{c}{$\ensuremath{n}=500,\,p=1000$} & \multicolumn{2}{c}{$\ensuremath{n}=5000,\,p=10'000$} \\ \cmidrule{2-5} Method & Timing & Hamming distance & Timing & Hamming distance\\ \midrule \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}} & $137.15\pm 9.33$ & $56.00\pm18.37$ & NA &NA\\ \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}} & $~~~2.08\pm 0.32$ & $44.90\pm 16.07$ & $111.47\pm 1.46$ & $56.50\pm 23.93$ \\ FOS & $~~~0.10\pm 0.06$ & $~\,8.60\pm~\,3.10$ & $~~~4.81\pm 3.60$ & $~\,2.30\pm~\,4.16$ \\ \bottomrule \end{tabular} \end{table} An illustration of the two computational benefits of FOS is given in Figure~\ref{fig:iterations}. First, we observe that even with warm starts, \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ requires a large number of iterations to converge. 
In contrast, FOS allows for early stopping, in particular, for large tuning parameters (recall that the required precision for FOS is proportional to the tuning parameter, whereas the required precision for other methods is unknown). Moreover, Cross-Validation, BIC, AIC, and similar calibration schemes are based on the entire Lasso path, while only a part of the path is required for FOS. \input{sections/fig_iterations} Another feature of FOS is the theoretically justified threshold. In contrast, there is no sound guidance for how to threshold LassoCV. This leads to many (typically small) false positives for LassoCV. Also, note that the results for \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ and \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}\ differ, since their implementations are based on different algorithms and different stopping criteria. \subsection{Financial data}\label{finance} We now consider a large data set to demonstrate the scalability of FOS. The data~\cite{Kogan09} comprises $\ensuremath{n}=16'087$ samples and $p=150'348$ predictors and is publicly available.\footnote{see \url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html}} The goal is to use financial reports of companies to predict the volatility of stock returns. The feature representation of the financial reports is based on the calculation of TF-IDF (term frequency and inverse document frequency) of unigrams. Since no ground truth is available for this data set, we report only the computational times: \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}\ requires $\mathbf{153.02}$s, while FOS requires $\mathbf{1.49}$s. Instead, \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ timed out on our machine. \subsection{Biological data}\label{lung} FOS can also be applied to structure learning problems by estimating the local neighborhood of each node via high-dimensional regressions. In this specific application~\cite{Guyon08}, the goal is to understand the interaction network of $p=1000$ genes in lung cancer patients from $\ensuremath{n}=500$ expression profiles. We do neighborhood selection with the ``or-rule''~\cite{Meinshausen06} based on FOS and LassoCV and compare the estimated graphs with the available gold standard~\cite{Statnikov15}. The results are summarized in Figure~\ref{fig:reged}. Note that here, the Hamming distance is the sum of the falsely included edges and the falsely omitted edges.\vspace{3mm} \begin{figure}[!ht] \begin{minipage}[c]{.45\linewidth} \begin{bchart}[min=0, max=4000, step=1000, width=0.9\linewidth] \bcbar[text={\ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}},color=orange,plain]{3.7656e+03} \bcbar[text={FOS},color=blue!25,plain]{1.7306e+03} \bcxlabel{{\scriptsize Run time (seconds)}} \end{bchart} \end{minipage}\hfill \begin{minipage}[c]{.5\linewidth} \begin{bchart}[min=0, max=4, step=1, width=0.8\linewidth] \bcbar[text={\ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}},color=orange,plain]{3.5772} \bcbar[text={FOS},color=blue!25,plain]{0.1387} \bcxlabel{{\scriptsize Hamming distance (\% of total number of possible edges)}} \end{bchart} \end{minipage} \caption{Run times (in seconds) and Hamming distances (in \% of the total number of possible edges) for \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}\ and for FOS on the lung cancer data set. The implementation \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}\ timed out on our machine. 
\label{fig:reged}} \vspace{0.5\baselineskip} \end{figure} \section{Introduction} Feature selection is one of the most common statistical techniques to learn from data. In Systems Biology, just to name one field of application, feature selection is used to learn about the microbiological environment in the human gut from stool samples~\cite{AmGutProj,Kurtz15}. The advent of Big Data provides large data sets that are expected to render feature selection possible at unprecedented resolutions. For this, however, major computational and statistical challenges yet need to be resolved. A widely-used framework for feature selection is linear regression. Beyond being of interest by itself, it can serve as a basis for the analysis of network data~\cite{Meinshausen06} and data with instrumental variables~\cite{Belloni11b}. Many contemporary data sets are ``big'' in the sense that both the number of parameters and the number of samples are large, and they are ``high-dimensional'' in the sense that the number of parameters can rival or even exceed the number of samples. On the other hand, one can often make the additional assumption that the true underlying model is ``sparse'' in the sense that there is a suitable model that is based on only a small number of parameters. For such data, the first choice is the family of penalized estimators, among them the Lasso~\cite{Tibshirani96}, MCP~\cite{Zhang10}, SCAD~\cite{Fan_Li01}, the Scaled Lasso~\cite{ScaledLasso11} and Square-Root Lasso~\cite{Belloni11}, and TREX~\cite{Lederer15}. However, these methods entail two main difficulties: First, even convex methods based on the Lasso or Scaled/Square-Root Lasso become infeasible in the presence of hundreds of thousands or even millions of samples and parameters, and much more so non-convex methods, such as MCP, SCAD, or TREX. Second, feature selection with penalized methods is hardly understood from a theoretical perspective. For example, the Lasso, MCP, and SCAD involve one or more tuning parameters, but the calibration of these tuning parameters with standard schemes such as Cross-Validation, AIC, or BIC lacks any finite sample guarantees. On the other hand, methods such as the Square-Root/Scaled Lasso or TREX that aim at making tuning parameter selection superfluous either require knowledge of model parameters that are inaccessible in practice, such as the noise distribution, or they lack a theoretical framework altogether. In this paper, we introduce a novel approach to feature selection with large and high-dimensional data. Its two main properties are: \begin{itemize} \item It is equipped with sharp finite sample guarantees for arbitrary distributions of the noise and provides accurate feature selection in practice. \item It is computationally more efficient than standard approaches based on the Lasso, MCP, SCAD, or Scaled/Square-Root Lasso, allowing for the analysis of larger data sets. \end{itemize} After setting the framework and objective below, we introduce our approach in the form of an algorithm in Section~\ref{algorithm}. Two of its main ingredients are optimization steps of the Lasso objective function and \ensuremath{\operatorname{AV}_\infty}-tests~\cite{Chichignoud_Lederer_Wainwright14} for calibration. The statistical guarantees are stated in Section~\ref{theory}, and the empirical performance of the method both in simulations and on real data is demonstrated in Section~\ref{numerics}. We finally conclude with a discussion in Section~\ref{discussion}. 
The proofs and further simulation results are deferred to the Appendix. \subsection*{Framework and objective} We assume the linear regression model \begin{equation*} Y = X\ensuremath{\beta} + \varepsilon\,, \end{equation*} where $Y\in\ensuremath{\R^n}$ is the outcome, $X\in\ensuremath{\R^{n\times p}}$ the design matrix, $\ensuremath{\beta}\in\ensuremath{\R^p}$ the regression vector, and $\varepsilon\in\ensuremath{\R^n}$ random noise. For simplicity, we assume that the design matrix is standardized, that is, for all $j\in\ensuremath{\{1,\dots,p\}}$\,, we assume $\sum_{i=1}^\ensuremath{n} X_{ij} = 0$ and $\sum_{i=1}^\ensuremath{n} X_{ij}^2=\ensuremath{n}$\,. Importantly, however, we allow for {\it arbitrary} noise distributions; in particular, we allow for correlated, heavy-tailed $\varepsilon\,.$ Our goal is feature selection, that is, to estimate the support \mbox{$\ensuremath{S}:=\operatorname{supp}(\ensuremath{\beta}):=\{j:\ensuremath{\beta}_j\neq 0\}.$} It is well known that the Lasso with optimal tuning parameter provides accurate feature selection under certain model assumptions~\cite{Buhlmann11,Hastie15}. For vectors $\ensuremath{\theta}\in\ensuremath{\R^p}$ and tuning parameters $\ensuremath{r}>0$\,, we thus consider the Lasso objective function~\cite{Tibshirani96} \begin{equation*} f(\ensuremath{\theta},\ensuremath{r}):=\normtwo{Y-X\ensuremath{\theta}}^2+\ensuremath{r}\normone{\ensuremath{\theta}}\,. \end{equation*} For given $r,$ this provides the estimator $\ensuremath{\hat S^\tuningparameter}:=\operatorname{supp}(\ensuremath{\hat\beta^\tuningparameter})\,,$ where \begin{equation}\label{eq:lasso} \ensuremath{\hat\beta^\tuningparameter}\in\mathop{\mathrm{arg\,min}}_{\ensuremath{\theta}\in\ensuremath{\R^p}}f(\ensuremath{\theta},\ensuremath{r})\,. \end{equation} In view of its theoretical performance (in a sense optimal, see below), we consider the Lasso {\it with optimal tuning parameter} as our gold standard for feature selection. To be able to use this benchmark, we recall the model assumptions needed for successful feature selection with the Lasso. Recall that irrespective of the method, feature selection is feasible only if the correlations in the design matrix~$X$ are sufficiently small. Here, we impose two standard conditions on~$X$. We first impose a sup-norm bound for the Lasso: \begin{condition}\label{boundcon} Given any $\alpha>0\,,$ there exist a constant $\ensuremath{c}>0$ and an (in general unknown) number~$\ensuremath{r}_\alpha\,,$ the latter depending on $\alpha\,,$ such that for all $\ensuremath{r}\geq \ensuremath{r}_\alpha\,,$ the following two conditions are met with probability at least $1-\alpha$ \begin{enumerate}[(i)] \item $\ensuremath{\hat S^\tuningparameter}\subset\ensuremath{S}\,;$ \item $\normsup{\ensuremath{\beta}-\ensuremath{\hat\beta^\tuningparameter}}\leq\ensuremath{c}\ensuremath{r}/n\,.$ \end{enumerate} \end{condition} \noindent This condition is a condition on $X$ in disguise: for example, Theorem~11.3 in the book~\cite{Hastie15} uses the primal-dual witness approach to derive a result of the above form under the irrepresentability condition on $X$. Further results of the above form with different assumptions on~$X$ can be found in \cite{BuneaEN,Chichignoud_Lederer_Wainwright14,Karim08}. Imposing Condition~\ref{boundcon} allows us to conveniently encompass all these results. 
We also impose a version of the restricted eigenvalues condition itself: \begin{condition}[Restricted eigenvalues]\label{RE} For given $\ensuremath{d}\geq 0$, there is a constant $\re>0$ such that \begin{equation*} \normtwo{X\ensuremath{\delta}}^2\geq n\re\normtwo{\ensuremath{\delta}}^2 \end{equation*} for all $\ensuremath{\delta}\in\ensuremath{\R^p}$ that satisfy $\normone{\ensuremath{\vdiff_{S^c}}}\leq 3\normone{\ensuremath{\vdiff_{S}}} + 2\ensuremath{d}/\ensuremath{r}$. \end{condition} \noindent The restricted eigenvalues condition is standard in the statistics literature; see~\cite{Buhlmann11,Hastie15} and references therein. The origin of the additive term $2\ensuremath{d}/\ensuremath{r}$ will become apparent later; in general, it is a small additive term since $d$ is small and $r$ will be of the order $1$. We also refer to a related formulation in~\cite{Negahban12}, which contains additive terms that originate in variations of the objective function. Finally, we note that the assumptions on $X$ used to prove Condition~\ref{boundcon}, such as mutual coherence or irrepresentability, imply restricted eigenvalues~\cite{Sara09}; however, we impose Condition~\ref{RE} separately both for ease of presentation and to obtain sharper results. Two important facts are that (i)~the optimal tuning parameter is unknown in practice and (ii)~Lasso computations can be very time-consuming or even infeasible for very large data sets. Hence, we seek a method for feature selection that has the same statistical guarantees as the (in practice unknown) set $\ensuremath{\hat S}^{\ensuremath{r}_\alpha}$ and that is computationally efficient. \section{Underlying statistical results} In this section, we derive two properties of all vectors close to a Lasso solution: In Lemma~\ref{cones_cond}, we show that their error belongs to a cone; in Theorem~\ref{thm:sup_norm_vdiff_main}, we show that they satisfy an oracle inequality in $\ell_\infty$-norm. \begin{lemma}\label{cones_cond} Let $\ensuremath{d}\geq 0$ be a constant, $\ensuremath{r}\geq 2\ensuremath{r}_\alpha$ a tuning parameter, and $\ensuremath{\tilde\beta}\in\ensuremath{\R^p}$ any vector that satisfies $f(\ensuremath{\tilde\beta},\ensuremath{r})\leq f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})+\ensuremath{d}$. Then $\ensuremath{\delta}:=\ensuremath{\hat\beta^\tuningparameter}-\ensuremath{\tilde\beta}$ belongs to the cone \begin{equation*} \ensuremath{\mathcal{C}(\sets)}:= \left\{\nu\in\mathbb{R}^p\,:\,\normone{\nu_{\ensuremath{S^c}}}\leq 3\normone{\nu_{\ensuremath{S}}} + \ensuremath{\frac{2\dif}{\tuningparameter}} \right\}. \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{cones_cond}] Since $f(\ensuremath{\tilde\beta},\ensuremath{r})\leq f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})+\ensuremath{d} \leq f(\ensuremath{\hat\beta^{\tuningparameter/2}},\ensuremath{r})+\ensuremath{d}$\,, we find the basic inequality \begin{equation*} \normtwo{Y-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}\leq \normtwo{Y-X\ensuremath{\hat\beta^{\tuningparameter/2}}}^2+\ensuremath{r}\normone{\ensuremath{\hat\beta^{\tuningparameter/2}}}+\ensuremath{d}\,. 
\end{equation*} We can now rewrite $\normtwo{Y-X\ensuremath{\tilde\beta}}^2$ as follows: \begin{align*} \normtwo{Y-X\ensuremath{\tilde\beta}}^2 &= \normtwo{Y-X\ensuremath{\hat\beta^{\tuningparameter/2}}+X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}^2\\ &= \normtwo{Y-X\ensuremath{\hat\beta^{\tuningparameter/2}}}^2 + 2\inprod{Y-X\ensuremath{\hat\beta^{\tuningparameter/2}}}{X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}+ \normtwo{X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}^2\\ &= \normtwo{Y-X\ensuremath{\hat\beta^{\tuningparameter/2}}}^2 + 2\inprod{X^\top(Y-X\ensuremath{\hat\beta^{\tuningparameter/2}})}{\ensuremath{\hat\beta^{\tuningparameter/2}}-\ensuremath{\tilde\beta}}+ \normtwo{X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}^2\,. \end{align*} Combining the two displays yields \begin{align*} 2\inprod{X^\top(Y-X\ensuremath{\hat\beta^{\tuningparameter/2}})}{\ensuremath{\hat\beta^{\tuningparameter/2}}-\ensuremath{\tilde\beta}}+ \normtwo{X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}} &\leq \ensuremath{r}\normone{\ensuremath{\hat\beta^{\tuningparameter/2}}}+\ensuremath{d}\,, \end{align*} and, by rearranging and using that $\normtwo{X\ensuremath{\hat\beta^{\tuningparameter/2}}-X\ensuremath{\tilde\beta}}^2\geq 0$, \begin{align*} \ensuremath{r}\normone{\ensuremath{\tilde\beta}} &\leq \inprod{2X^\top(Y-X\ensuremath{\hat\beta^{\tuningparameter/2}})}{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^{\tuningparameter/2}}} + \ensuremath{r}\normone{\ensuremath{\hat\beta^{\tuningparameter/2}}}+\ensuremath{d}\,. \end{align*} Invoking H\"older's inequality and the KKT conditions for $\ensuremath{\hat\beta^{\tuningparameter/2}}$ provides us with \begin{align*} 2\inprod{X^\top(Y-X\ensuremath{\hat\beta^{\tuningparameter/2}})}{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^{\tuningparameter/2}}} &\leq \normsup{2X^\top(Y-X\ensuremath{\hat\beta^{\tuningparameter/2}})}\normone{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^{\tuningparameter/2}}}\\ &\leq\frac{\ensuremath{r}}{2}\normone{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^{\tuningparameter/2}}}\,. \end{align*} Combining the two displays yields \begin{align*} 2\normone{\ensuremath{\tilde\beta}} &\leq \normone{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^{\tuningparameter/2}}}+2\normone{\ensuremath{\hat\beta^{\tuningparameter/2}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\,. 
\end{align*} Hence, \begin{align*} 2\normone{\ensuremath{\cm_{\hat S}}}+2\normone{\ensuremath{\cm_{\hat S^c}}} &\leq \normone{\ensuremath{\cm_{\hat S}}-\ensuremath{\tmdt_{\hat S}}}+\normone{\ensuremath{\cm_{\hat S^c}}-\ensuremath{\tmdt_{\hat S^c}}}+2\normone{\ensuremath{\tmdt_{\hat S}}}+2\normone{\ensuremath{\tmdt_{\hat S^c}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\\ &=\normone{\ensuremath{\cm_{\hat S}}-\ensuremath{\tmdt_{\hat S}}}+\normone{\ensuremath{\cm_{\hat S^c}}}+2\normone{\ensuremath{\tmdt_{\hat S}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\,, \end{align*} where $\ensuremath{\hat S}:=\operatorname{supp}(\ensuremath{\hat\beta^{\tuningparameter/2}}).$ This is equivalent to \begin{align*} \normone{\ensuremath{\cm_{\hat S^c}}} &\leq \normone{\ensuremath{\cm_{\hat S}}-\ensuremath{\tmdt_{\hat S}}}+2\normone{\ensuremath{\tmdt_{\hat S}}}-2\normone{\ensuremath{\cm_{\hat S}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\,, \end{align*} so that with the reverse triangle inequality \begin{align*} \normone{\ensuremath{\cm_{\hat S^c}}} &\leq 3\normone{\ensuremath{\cm_{\hat S}}-\ensuremath{\tmdt_{\hat S}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\,. \end{align*} Finally, setting $\ensuremath{\delta}:=\ensuremath{\hat\beta^\tuningparameter}-\ensuremath{\tilde\beta},$ we get \begin{align*} \normone{\ensuremath{\vdiff_{\hat S^c}}} &\leq 3\normone{\ensuremath{\vdiff_{\hat S}}}+\frac{2\ensuremath{d}}{\ensuremath{r}}\,. \end{align*} Now, since $\ensuremath{r}/2\geq \ensuremath{r}_\alpha,\,$ Condition~1.1~(i) entails $\normone{\ensuremath{\vdiff_{\hat S}}} \leq \normone{\ensuremath{\vdiff_{S}}}$ and $\normone{\ensuremath{\vdiff_{S^c}}} \leq \normone{\ensuremath{\vdiff_{\hat S^c}}}$. Combining these two inequalities with the above display yields \begin{align*} \normone{\ensuremath{\vdiff_{S^c}}} &\leq \normone{\ensuremath{\vdiff_{\hat S^c}}} \leq 3\normone{\ensuremath{\vdiff_{\hat S}}} + \ensuremath{\frac{2\dif}{\tuningparameter}} \leq 3\normone{\ensuremath{\vdiff_{S}}} + \ensuremath{\frac{2\dif}{\tuningparameter}} \end{align*} as desired. \end{proof} \begin{theorem}\label{thm:sup_norm_vdiff_main} Suppose that $\alpha>0$ and $\ensuremath{r}\geq 2\ensuremath{r}_\alpha$. Then, for any $\ensuremath{d}\geq 0$ and any $\ensuremath{\tilde\beta}\in\ensuremath{\R^p}$ with $f(\ensuremath{\tilde\beta},\ensuremath{r})\leq f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})+\ensuremath{d},$ it holds with probability at least $1-\alpha$ that \begin{equation*} \normsup{\ensuremath{\beta}-\ensuremath{\tilde\beta}}\leq \frac{\ensuremath{c}\ensuremath{r}}{ \ensuremath{n}}+\sqrt{\frac{\ensuremath{d}}{\ensuremath{n}\re}}\,. \end{equation*} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:sup_norm_vdiff_main}] In view of Condition~1.1~(ii), it is sufficient to show that \begin{equation*} \normsup{\ensuremath{\hat\beta^\tuningparameter}-\ensuremath{\tilde\beta}}\leq \sqrt{\frac{\ensuremath{d}}{\ensuremath{n}\re}}\,. \end{equation*} Indeed, the desired claim follows directly from this via the triangle inequality for norms. Let us prove the above inequality. 
Since $f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})\leq f(\ensuremath{\tilde\beta},\ensuremath{r})\leq f(\ensuremath{\hat\beta^\tuningparameter},\ensuremath{r})+\ensuremath{d},\,$ we find the basic equality \begin{align*} \normtwo{Y-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}= \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2+\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\,, \end{align*} for some $\ensuremath{d'}\in[0,\ensuremath{d}].$ This equation is equivalent to \begin{align*} \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}+X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}= \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2 +\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\,, \end{align*} and hence to \begin{align*} \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2+ 2\inprod{Y-X\ensuremath{\hat\beta^\tuningparameter}}{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}+ \normtwo{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}= \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2 +\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\,, \end{align*} and finally \begin{align*} \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2+ \inprod{2X^\top(Y-X\ensuremath{\hat\beta^\tuningparameter})}{\ensuremath{\hat\beta^\tuningparameter}-\ensuremath{\tilde\beta}}+ \normtwo{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2+\ensuremath{r}\normone{\ensuremath{\tilde\beta}}= \normtwo{Y-X\ensuremath{\hat\beta^\tuningparameter}}^2 +\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\,. \end{align*} Rearranging yields \begin{align*} \normtwo{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2=\inprod{2X^\top(Y-X\ensuremath{\hat\beta^\tuningparameter})}{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^\tuningparameter}}-\ensuremath{r}\normone{\ensuremath{\tilde\beta}}+\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\,. \end{align*} We now recall that the KKT conditions for the objective function $f$ read \begin{equation*} -2X^\top(Y-X\ensuremath{\hat\beta^\tuningparameter})+\ensuremath{r}\ensuremath{\hat\kappa}=0 \end{equation*} for some vector $\ensuremath{\hat\kappa}$ that satisfies $\normsup{\ensuremath{\hat\kappa}}\leq 1$ and $\ensuremath{\hat\kappa}^\top\ensuremath{\hat\beta^\tuningparameter} = \normone{\ensuremath{\hat\beta^\tuningparameter}}$. Plugging this into the above display yields \begin{align*} \normtwo{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2 &= \inprod{\ensuremath{r}\ensuremath{\hat\kappa}}{\ensuremath{\tilde\beta}-\ensuremath{\hat\beta^\tuningparameter}}-\ensuremath{r}\normone{\ensuremath{\tilde\beta}}+\ensuremath{r}\normone{\ensuremath{\hat\beta^\tuningparameter}}+\ensuremath{d'}\\ &= \ensuremath{r}(\ensuremath{\hat\kappa}^\top\ensuremath{\tilde\beta} - \normone{\ensuremath{\tilde\beta}})+\ensuremath{d'}\,, \end{align*} and by H\"older's inequality, $\ensuremath{\hat\kappa}^\top\ensuremath{\tilde\beta}\leq \normsup{\ensuremath{\hat\kappa}}\normone{\ensuremath{\tilde\beta}} \leq \normone{\ensuremath{\tilde\beta}}$\,. 
Therefore, \begin{align*} \normtwo{X\ensuremath{\hat\beta^\tuningparameter}-X\ensuremath{\tilde\beta}}^2\leq \ensuremath{d'}\leq \ensuremath{d}\,. \end{align*} We finally get, setting $\ensuremath{\delta}:=\ensuremath{\hat\beta^\tuningparameter}-\ensuremath{\tilde\beta},$ \begin{align*} \normtwo{X\ensuremath{\delta}}^2 &\leq \ensuremath{d}\,. \end{align*} Moreover, Lemma~\ref{cones_cond} guarantees that $\ensuremath{\delta}\in\ensuremath{\mathcal{C}(\sets)}$ and thus allows us to apply Condition~2.1. So we also find \begin{equation*} \normtwo{X\ensuremath{\delta}}^2 \geq \ensuremath{n}\re\normtwo{\ensuremath{\delta}}^2\,. \end{equation*} Combining these two inequalities gives us \begin{equation*} \normtwo{\ensuremath{\delta}}^2 \leq \frac{\ensuremath{d}}{\ensuremath{n}\re}, \end{equation*} which implies \begin{equation*} \normsup{\ensuremath{\delta}}\leq \sqrt{\frac{\ensuremath{d}}{\ensuremath{n}\re}} \end{equation*} as desired. \end{proof} \section{Additional simulation results} We provide additional simulation results to complement the empirical studies in the main part of the paper. We consider the $n=500$ and $p=1000$ setting with the specifications as described in the main part of the paper, except for the stated differences. In Table~\ref{tabadd}, we add the results for the Lasso calibrated with AIC and BIC. In Table~\ref{tabS}, we consider different support sizes. In Table~\ref{tabr}, we vary the correlation. The BIC and AIC criteria are defined as~\cite{Zou07} \begin{align*} \text{BIC}(\ensuremath{r}) &= \frac{\normtwo{Y-X\ensuremath{\tilde\beta}^{\ensuremath{r}}}^2}{\ensuremath{n}\sigma^2_\varepsilon}+\frac{\log(\ensuremath{n})|\ensuremath{\tilde S}^{\ensuremath{r}}|}{\ensuremath{n}}\\ \text{AIC}(\ensuremath{r}) &= \frac{\normtwo{Y-X\ensuremath{\tilde\beta}^{\ensuremath{r}}}^2}{\ensuremath{n}\sigma^2_\varepsilon}+\frac{2|\ensuremath{\tilde S}^{\ensuremath{r}}|}{\ensuremath{n}}\,, \end{align*} where $\sigma^2_\varepsilon$ is the variance of the noise $\varepsilon$. In our experiments, $\sigma^2_\varepsilon=1$. To establish valid comparisons, we again implement these schemes with the \texttt{SPAMS} package. 
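The two criteria are straightforward to evaluate along a path of estimates; the following \texttt{numpy} sketch spells this out, where the function name and interface are ours, for illustration only:
\begin{verbatim}
import numpy as np

def information_criteria(X, Y, betas, sigma2=1.0):
    """BIC and AIC along a path; betas holds one estimate per tuning
    parameter, and sigma2 is the (here known) noise variance."""
    n = X.shape[0]
    bic, aic = [], []
    for beta in betas:
        rss = np.sum((Y - X @ beta)**2)
        size = np.count_nonzero(beta)          # |supp(beta)|
        bic.append(rss / (n * sigma2) + np.log(n) * size / n)
        aic.append(rss / (n * sigma2) + 2.0 * size / n)
    return np.array(bic), np.array(aic)
\end{verbatim}
The tuning parameter is then selected by minimizing the respective criterion over the grid.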
\begin{table}[!ht] \caption{Average run times (in seconds) and average Hamming distances for \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}, \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}, \ensuremath{\text{LassoBIC}{}_{\texttt{SPAMS}}}, \ensuremath{\text{LassoAIC}{}_{\texttt{SPAMS}}}, and FOS.} \label{tabadd} \centering \vspace{.5\baselineskip} \begin{tabular}{lll} \toprule Method & Timing & Hamming distance \\ \midrule \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}} & $137.15\pm 9.33$ & $56.00\pm 18.37$ \\ \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}} & $~~~2.08\pm 0.32$ & $44.90\pm 16.07$ \\ \ensuremath{\text{LassoBIC}{}_{\texttt{SPAMS}}} & $~\,20.12\pm 5.98$ & $10.40\pm ~\,7.15$ \\ \ensuremath{\text{LassoAIC}{}_{\texttt{SPAMS}}} & $~\,20.12\pm 5.98$ & $45.30\pm21.40$\\ FOS & $~~\,\,0.10\pm 0.06$ & $~\,8.60\pm~\,3.10$\\ \bottomrule \end{tabular} \end{table} \begin{table}[!ht] \caption{Average run times (in seconds) and average Hamming distances for \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}, \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}, and FOS for two different sizes of the true support: $|\ensuremath{S}|\in\{5,30\}$.} \label{tabS} \centering \vspace{.5\baselineskip} \begin{tabular}{lllll} \toprule & \multicolumn{2}{c}{$|\ensuremath{S}|=5$} & \multicolumn{2}{c}{$|\ensuremath{S}|=30$} \\ \cmidrule{2-5} Method & Timing & Hamming distance & Timing & Hamming distance\\ \midrule \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}} & $283.19\pm 75.41$ & $43.50\pm 19.53$ & $224.92\pm 41.77$ &$117.50\pm 22.49$\\ \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}} & $~~~2.59\pm~\,0.40$ & $29.90\pm 12.66$ & $~~~2.52\pm~~0.25$ & $~\,98.00\pm 35.50$ \\ FOS & $~~~0.76\pm~\,2.13$ & $~\,4.40\pm~\,1.58$ & $~~~2.23\pm~~2.01$ & $~\,20.60\pm 11.98$ \\ \bottomrule \end{tabular} \end{table} \begin{table}[!ht] \caption{Average run times (in seconds) and average Hamming distances for \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}}, \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}}, and FOS for two different strengths of the pairwise correlations: $\rho\in\{0,0.4\}$.} \label{tabr} \centering \vspace{.5\baselineskip} \begin{tabular}{lllll} \toprule & \multicolumn{2}{c}{$\rho=0$} & \multicolumn{2}{c}{$\rho=0.4$} \\ \cmidrule{2-5} Method & Timing & Hamming distance & Timing & Hamming distance\\ \midrule \ensuremath{\text{LassoCV}{}_{\texttt{SPAMS}}} & $168.03\pm 25.28$ & $56.50\pm 22.45$ & $177.79\pm 48.11$ & $58.30\pm 20.67$\\ \ensuremath{\text{LassoCV}{}_{\texttt{glmnet}}} & $~~~2.46\pm~\,0.24$ & $40.40\pm 13.17$ & $~~~2.36\pm~\,0.21$ & $44.50\pm 18.12$ \\ FOS & $~~~1.78\pm~\,0.73$ & $~\,1.10\pm~\,1.29$ & $~~~0.12\pm~\,0.06$ & $~\,9.70\pm~\,0.95$ \\ \bottomrule \end{tabular} \end{table} \section{Statistical guarantees}\label{theory} Feature selection is a common statistical technique; yet, standard approaches based on the Lasso, SCAD, or MCP lack finite sample guarantees. The reason is that these approaches involve one or more tuning parameters, and the corresponding calibration schemes, such as Cross-Validation, are not equipped with finite sample theory. Similarly, bounds for feature selection with methods such as the Square-Root Lasso, see~\cite{Yoyo13}, presume knowledge of inaccessible model parameters such as the noise distribution. In strong contrast, our main result states that FOS is, up to a small constant, equipped with the same statistical guarantees as the Lasso with optimal, but in practice unknown, tuning parameter. Indeed, we have the following. 
\begin{theorem}\label{thm:optimality} For any $\alpha>0,$ it holds with probability at least $1-\alpha$ that \begin{equation*} \normsup{\ensuremath{\beta}-\tilde\beta^{\tilde r}}\leq \frac{12\ensuremath{c}\ensuremath{r}_\alpha}{ \ensuremath{n}}\,. \end{equation*} Moreover, it holds with probability at least $1-\alpha$ that $\tilde r\leq 2r_\alpha$ and, if $\min_{j\in\ensuremath{S}}|\ensuremath{\beta}_j|>12\ensuremath{c}\ensuremath{r}_{\alpha}/\ensuremath{n},$ that \begin{equation*} \ensuremath{\tilde S}\supset \ensuremath{S}\,. \end{equation*} \end{theorem} \noindent This result proves two features of FOS: (1)~False negative control: FOS recovers all sufficiently large coefficients. (2)~Entry-wise control: the coefficient estimates associated with FOS are accurate; in particular, any coefficients corresponding to false positives are small. Note that the result holds for any $\alpha>0,$ while $\alpha$ does {\it not} need to be specified in FOS. Note also that, as opposed to the Square-Root Lasso/Scaled Lasso approach, the noise distribution does {\it not} need to be known beforehand. The optimality of the scheme can be seen when comparing the $\ell_\infty$-bound with the corresponding bound in Condition~\ref{boundcon}. Indeed, our scheme satisfies the same bound (up to the factor 12) as the Lasso with optimal tuning parameter. We defer the proof of Theorem~\ref{thm:optimality} to the Appendix. It consists of three parts: First, we derive an oracle inequality for vectors that are close to a Lasso solution. Second, we use Fenchel duality~\cite{Borwein2010,Rifkin2007} to show that our scheme is sufficiently accurate. Finally, we show that our application of the \ensuremath{\operatorname{AV}_\infty}-testing scheme~\cite{Chichignoud_Lederer_Wainwright14} selects an optimal tuning parameter.
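To give a flavor of the last ingredient, the sketch below implements an \ensuremath{\operatorname{AV}_\infty}-type selection rule in the spirit of~\cite{Chichignoud_Lederer_Wainwright14}: it walks down a decreasing grid of tuning parameters and stops once the pairwise sup-norm differences of the corresponding estimates are no longer compatible with the bound $\ensuremath{c}(\ensuremath{r}'+\ensuremath{r}'')/\ensuremath{n}$. We stress that this is a schematic illustration under our own simplifying assumptions; the exact test statistics, constants, and the thresholding step used by FOS are those specified in the algorithm of Section~\ref{algorithm}.
\begin{verbatim}
import numpy as np

def av_select(r_grid, betas, n, c=0.75):
    """Schematic AV_inf-type rule: r_grid is decreasing, betas[i] is an
    (approximate) Lasso estimate at r_grid[i]; keep walking down the grid
    as long as all pairwise sup-norm differences satisfy the triangle-
    inequality bound c*(r' + r'')/n, and return the last tuning
    parameter before the first violation."""
    for i in range(1, len(r_grid)):
        ok = all(
            np.max(np.abs(betas[j] - betas[k]))
            <= c * (r_grid[j] + r_grid[k]) / n
            for j in range(i + 1) for k in range(j + 1, i + 1))
        if not ok:
            return r_grid[i - 1], betas[i - 1]
    return r_grid[-1], betas[-1]
\end{verbatim}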
\section{Introduction} While the study of nonlinear partial differential equations (PDEs) captures the lion's share of modeling efforts in physical, chemical and biological problems, the study of nonlinear dynamical lattices has received growing attention over the last decades~\cite{Aubry06,Flach:2008,FPUreview,pgk:2011}. To some degree, this interest stems from the consideration of discretization methods for simulating PDEs. However, arguably, the most appealing aspect of such lattice problems is that they naturally emerge as the suitable model in systems where there is a degree of ``granularity''/lattice structure. This may stem from waveguides and their arrays in nonlinear optics~\cite{moti}, from experiments on micromechanical oscillator arrays~\cite{sievers}, as well as from lattice nonlinear electrical circuits~\cite{remoissenet}. It may arise in material science systems~\cite{yuli_book,granularBook}, in antiferromagnetic~\cite{lars3}, or more generally anharmonic~\cite{ST,page} crystals, in superconducting settings of Josephson-junction ladders~\cite{alex,alex2} or in biological models of DNA base pairs~\cite{Peybi,Yomosa1983}. Such lattice models may also be effective ones, emulating the periodic variation of optical lattices in atomic condensates~\cite{Morsch}. On the other hand, over the past few years there has been an explosion of interest in the use of data-driven techniques towards the study of physical phenomena and the development, as well as identification of relevant models~\cite{karniadakis2021physics}. Among the most dominant methodologies for the solution of both inverse and forward problems in PDEs have been PINNs (Physics-Informed Neural Networks)~\cite{raissi_physics-informed_2019} and their subsequent extension DeepXDE~\cite{lu2021deepxde}, as well as the SINDy (sparse identification of nonlinear dynamics) method of~\cite{brunton_discovering_2016}, sparse optimization~\cite{schaeffer2017learning}, meta-learning~\cite{feliu2020meta}, and neural operators~\cite{li2021fourier}. There have been numerous variations and extensions of these approaches in a wide range of problems (a small subset of which, e.g., contains~\cite{sirignano2018dgm,weinan2018deep,gu2021selectnet,shin2020error,luo2020two}). Yet, it can be argued that these approaches have been, by and large, limited to continuum PDE problems, and the emergent aspect of nonlinear dynamical lattices has been somewhat overlooked. Here, we build on the earlier work of some of the present authors~\cite{zhu_neural_2022}, which aimed to build more of the physical structure of the underlying problem into the neural networks (in that case through symmetries; other authors have also enforced, e.g., the symplectic structure of potential underlying systems~\cite{george_again}). Our emphasis in the present work is to adapt methods of the above type, most notably PINNs, to nonlinear dynamical lattices. In particular, we will select a sequence of progressively more complex yet physically relevant examples and seek to leverage the above computational methodology, albeit now in an inherently discrete, high-dimensional setting. By high-dimensional here, we refer to the number of degrees of freedom (and not the spatial dimension of the problem). Our aim will be to solve the inverse problem of the identification of linear and nonlinear coefficients of the models, building progressively from simpler to more complex ones. 
We will start from a real $\phi^4$ discrete nonlinear Klein-Gordon system~\cite{p4book} and subsequently move to the complex variant of the model, namely the discrete nonlinear Schr{\"o}dinger (DNLS) model~\cite{kevrekid_dnls_book}. We will subsequently explore an example bearing a different type of complexity where the nonlinearity is not a pure power law, but rather a sinusoidal one in the form of the Frenkel-Kontorova~\cite{braun1998} or discrete sine-Gordon~\cite{SGbook} nonlinearity. This will serve to showcase some of the challenges and limitations of the approach. Finally, we will extend considerations beyond the Hamiltonian class of examples to the discrete variant~\cite{PhysRevE.67.026606,GL2} of the Ginzburg-Landau equation~\cite{RevModPhys.74.99}, a topic that continues to be of wide interest in its own right as evidenced in the recent review of~\cite{Salerno2022}. Our presentation will be structured as follows. In Section 2, we will provide some of the mathematical background of the problem, both at the level of the dynamical models under consideration (Sec.~\ref{sec:discrete_systems}) and as concerns the PINN approach (Sec.~\ref{sec:pinns}). Then, upon explaining how to adapt the discovery of the governing equation to nonlinear dynamical lattices in Sec.~\ref{sec:disc_pinns}, we present our numerical experiments in Sec.~\ref{sec:num_exp}. Finally, in Sec.~\ref{sec:concl}, we summarize our findings and present a number of possibilities for future studies. \section{Background} \subsection{Discrete nonlinear lattices} \label{sec:discrete_systems} In this work, we consider a variety of 1D discrete nonlinear lattices consisting of $N$ nodes. In all the cases that we will focus on hereafter, $u_{n}(t)$ (which can be real or complex, depending on the model) will correspond to a dynamical variable with $n=1,\dots,N$. We start our presentation of the models by considering first the discrete $\phi^{4}$ model~\cite{p4book} \begin{align} \ddot{u}_{n} = C(u_{n+1} + u_{n-1} - 2u_n) + 2(u_n - u_n^3), \quad u_{n}\in\mathbb{R}, \label{dphi4} \end{align} where the overdot stands for the temporal derivative of $u_{n}$, and $C=1/h^{2}>0$ effectively represents the coupling constant, with $h$ representing the lattice spacing between adjacent nodes. It should be further noted that neighboring sites in Eq.~\eqref{dphi4} are coupled due to the presence of the $(u_{n+1}+u_{n-1}-2u_n)$ discrete Laplacian term therein, and the strength of the coupling is dictated by the magnitude of $C$. That is, a large value of $C$, i.e., $C\gg 1$ or equivalently $h\ll 1$, signifies that Eq.~\eqref{dphi4} is close to the continuum $\phi^4$ limit, whereas a small value of $C$ will result in a highly discrete system. Moreover, this coupling term (involving $C$), which will be ubiquitous in all of the models that we consider herein, emanates from the discretization of the Laplacian operator in 1D by using a centered, second-order accurate finite difference scheme. This provides a vein along which the model can interpolate between the so-called anti-continuum limit~\cite{Aubry06} of $C=0$ and the continuum limit of the respective PDE. Eq.~\eqref{dphi4} is a model, variants of which have been useful towards understanding solitary wave dynamics in a simpler, real nonlinear lattice system~\cite{PhysRevE.76.026601,PhysRevE.72.035602}. We thus use it as a preamble to studying the complex DNLS variant of the model. 
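For concreteness, the right-hand side of Eq.~\eqref{dphi4} on a finite lattice can be evaluated along the lines of the following minimal \texttt{numpy} sketch (here with the free boundary conditions that we specify at the end of this section):
\begin{verbatim}
import numpy as np

def phi4_rhs(u, C):
    """Acceleration of the discrete phi^4 lattice at each node.
    Free boundaries: u_0 = u_1 and u_{N+1} = u_N (ghost nodes)."""
    upad = np.concatenate(([u[0]], u, [u[-1]]))
    lap = upad[2:] - 2.0 * u + upad[:-2]   # discrete Laplacian
    return C * lap + 2.0 * (u - u**3)
\end{verbatim}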
Indeed, we subsequently focus on the well-known yet physically relevant discrete nonlinear Schr\"odinger (DNLS) equation~\cite{kevrekid_dnls_book} with a focusing (cubic) nonlinearity: \begin{align} i\dot u_n = -C(u_{n+1}+u_{n-1}-2u_n) -|u_n|^2u_n, \quad u_{n}\in\mathbb{C}. \label{dnls} \end{align} Here, we allow the relevant field representing, e.g., the envelope of the electric field along an optical waveguide array~\cite{moti} or the quantum-mechanical wavefunction along the nodes of a deep optical lattice~\cite{Morsch}, to be complex. Another intriguing example consists of the discrete sine-Gordon (DsG)~\cite{SGbook}, also known in dislocation theory as the Frenkel-Kontorova model~\cite{braun1998}: \begin{align} \ddot{u}_{n} = C(u_{n+1} + u_{n-1} - 2u_n) -\sin{(u_n)}, \quad u_{n}\in\mathbb{R}. \label{dsG} \end{align} This model, similarly to Eq.~\eqref{dphi4}, admits kink-like solutions. However, it also has a key feature distinguishing it from the former. Namely, it bears a transcendental nonlinear function, one that cannot be expressed as a simple power law. Indeed, the difficulty of representing such a simple pendulum nonlinearity (unless further structure of the problem, e.g., Hamiltonian structure, is built into the sparse identification approach) has been previously documented, e.g., for SINDy in~\cite{pmlr-v190-lee22a}. Finally, the other fundamental model of interest in the present work is the discrete, complex Ginzburg-Landau (DCGL) equation: \begin{align} \dot u_n = (1+i)C(u_{n+1} + u_{n-1} - 2u_n) - (1-i) |u_n|^2u_n + u_n, \quad u_{n}\in\mathbb{C}, \label{dcgl} \end{align} with a cubic nonlinearity~\cite{PhysRevE.67.026606}. The DCGL can be considered as a (dissipative) perturbation of the DNLS [cf. Eq.~\eqref{dnls}]. Such settings are of interest in the same contexts as the DNLS when dissipative perturbations are present, as, e.g., in experimental studies such as that of~\cite{doi:10.1126/science.abf6873} in optics or the one of~\cite{doi:10.1126/sciadv.aat6539} in atomic Bose-Einstein condensates. \begin{table}[pt!] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline Model & Equation & IC($u$) \\ \hline DNLS & $\displaystyle{i\dot u_n = -C(u_{n+1}+u_{n-1}-2u_n) -|u_n|^2u_n}$ & $\displaystyle{e^{-x_{n}^2}}$ \\ \hline DCGL & $\displaystyle{\dot{u}_n = (1+i)C(u_{n+1} + u_{n-1} - 2u_n) - (1-i) |u_n|^2u_n + u_n}$ & $\displaystyle{\tanh{\!x_{n}}\exp{(i\ln{(\cosh{x_{n}}}))}}$ \\ \hline Discrete $\phi^4$ & $\ddot{u}_{n} = C(u_{n+1} + u_{n-1} - 2u_n) + 2(u_n - u_n^3)$ & $\tanh{\left(\frac{x_{n}}{\sqrt{1-v^2}}\right)}$\\ \hline DsG & $\displaystyle{\ddot{u}_{n} = C(u_{n+1} + u_{n-1} - 2u_n) -\sin{(u_n)}}$ & $\displaystyle{4\arctan{\left(\exp{\left(\frac{x_{n}}{\sqrt{1-v^2}}\right)}\right)}}$ \\ \hline \end{tabular} } \caption{Discrete nonlinear lattices that are considered in this work together with the respective initial conditions. Note that $x_{n}$ are grid points with $n=1,\dots,N$, taken uniformly from the interval $\left[-\frac{N}{2\sqrt{C}},\frac{N}{2\sqrt{C}}\right]$. } \label{our_models} \end{table} A recap of the principal models of interest can be found in Table~\ref{our_models}. The table contains not only the mathematical form of each of the models but also the initial conditions (ICs) that are used therein in order to perform the model training (cf.~Section~\ref{sec:data_generation}). We conclude this section by mentioning the boundary conditions (BCs) that we will employ for all the above models. 
In particular, we impose free BCs at both ends of the lattice, i.e., $u_{0}=u_{1}$ and $u_{N+1}=u_{N}$. These BCs can be thought of as the discrete analogues of zero Neumann BCs in the continuum limit, and emanate from the discretization of the latter via first-order accurate forward and backward finite difference formulas, respectively. Having discussed the models of interest herein, we now turn to a brief overview of Physics-Informed Neural Networks (PINNs). \subsection{Physics-Informed Neural Networks} \label{sec:pinns} Since their introduction by Raissi et al. \cite{raissi_physics-informed_2019}, PINNs have garnered growing attention from the scientific machine learning community due to their flexible and gridless design in data-driven modeling of forward and inverse problems. Consider, for instance, the following parametrized PDE: \begin{align} \label{eq:general-pde} \left\{ \begin{aligned} & u_t = \mathcal{N}(u; \lambda), \quad && \bm{x}\in\Omega, t\in [0, T],\\ & u(\bm{x}, 0) = g(\bm{x}), \quad && \bm{x}\in\Omega,\\ & \mathcal{B}u(\bm{x}, t) = h(\bm{x},t), \quad && \bm{x}\in\partial\Omega, t\in [0, T], \end{aligned}\right. \end{align} where $u(\bm{x}, t)$ is the unknown, $\mathcal{N}(\cdot;\lambda)$ is a (spatial) nonlinear differential operator parametrized by $\lambda$, and $\mathcal{B}$ is an operator associated with a specific BC. In \textit{forward problems}, i.e., when the model parameter $\lambda$ is fixed and given, one aims to derive the (numerical) solution $u(\bm{x}, t)$ of Eq.~\eqref{eq:general-pde} with the specified initial and boundary conditions. A PINN for Eq.~\eqref{eq:general-pde} in this setting is a neural network ansatz $\hat{u}(\bm{x}, t;\bm{\theta})$ that serves as a surrogate of the solution $u(\bm{x}, t)$, where $\bm{\theta}$ is the collection of all trainable parameters of the neural network, e.g., weights and biases of a fully-connected feed-forward PINN. The optimal solution $\hat{u}(\bm{x}, t;\bm{\theta}^*)$ is sought such that the constraints imposed by the PDE and the initial/boundary conditions are (approximately) satisfied. 
More specifically, let $\mathcal{T}_{\mathcal{N}}\subset \Omega\times [0, T]$, $\mathcal{T}_{g}\subset \Omega\times \{t=0\}$ and $\mathcal{T}_h\subset \partial\Omega\times[0, T]$ be three finite collections of scattered ``training'' points sampled from their corresponding regions. The discrepancy between $\hat{u}(\bm{x}, t;\bm{\theta})$ and the constraints in Eq.~\eqref{eq:general-pde} is measured through the following loss function $\mathcal{L}(\bm{\theta}; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h)$ defined as a weighted sum of the discrete $l^2$ norms of the residuals for the PDE and the initial/boundary conditions: \begin{align} \label{eq:pinn-res-min-forward} \mathcal{L}(\bm{\theta}; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h) \coloneqq w_\mathcal{N} \mathcal{L}_\mathcal{N}(\bm{\theta}; \mathcal{T}_\mathcal{N}) + w_g \mathcal{L}_g(\bm{\theta}; \mathcal{T}_g) + w_h \mathcal{L}_h(\bm{\theta}; \mathcal{T}_h), \end{align} where \begin{align} & \mathcal{L}_\mathcal{N}(\bm{\theta}; \mathcal{T}_\mathcal{N}) = \frac{1}{|\mathcal{T}_\mathcal{N}|}\sum_{(\bm{x}, t)\in\mathcal{T}_\mathcal{N}}\left|\hat{u}_t(\bm{x}, t;\bm{\theta})-\mathcal{N}(\hat{u};\lambda)(\bm{x}, t;\bm{\theta}) \right|^2,\\ & \mathcal{L}_g(\bm{\theta}; \mathcal{T}_g) = \frac{1}{|\mathcal{T}_g|}\sum_{(\bm{x}, 0)\in \mathcal{T}_g}\left|\hat{u}(\bm{x}, 0;\bm{\theta})-g(\bm{x})\right|^2,\\ & \mathcal{L}_h(\bm{\theta}; \mathcal{T}_h) = \frac{1}{|\mathcal{T}_h|}\sum_{(\bm{x}, t)\in \mathcal{T}_h}\left|\mathcal{B}\hat{u}(\bm{x}, t;\bm{\theta})-h(\bm{x},t )\right|^2, \end{align} $|\mathcal{T}_\mathcal{N}|,|\mathcal{T}_g|,|\mathcal{T}_h|$ are the cardinalities of the sets $\mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h$, and $w_\mathcal{N}, w_g, w_h > 0$ are the weights. The differential operators in the loss function $\mathcal{L}(\bm{\theta}; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h)$ are obtained through automatic differentiation \cite{baydin2018automatic}, and $\bm{\theta}^* = \arg\min_{\bm{\theta}} \mathcal{L}(\bm{\theta}; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h)$ is typically computed with gradient-based optimization methods (such as ADAM \cite{DBLP:journals/corr/KingmaB14} or L-BFGS \cite{liu1989limited}). In \textit{inverse problems}, the parameter $\lambda$ in Eq.~\eqref{eq:general-pde} is not known, and the objective is to infer the unknown parameter $\lambda$ from additional measurements of the system beyond the PDE and the initial/boundary conditions. For instance, let $\mathcal{T}_{f}\subset \Omega\times [0, T]$, and assume that the values of the solution $u(\bm{x}, t)$ are known for $(\bm{x}, t)\in\mathcal{T}_f$: \begin{align} \label{eq:inverse_observation} u(\bm{x}, t) = f(\bm{x}, t), \quad \forall (\bm{x}, t)\in\mathcal{T}_f\subset \Omega\times [0, T]. 
\end{align} In this setting, the loss function $\mathcal{L}(\bm{\theta}, \lambda; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h, \mathcal{T}_f)$ will have an extra term corresponding to the additional information of the system given by Eq.~\eqref{eq:inverse_observation}: \begin{align} \label{eq:pinn-res-min-inverse} \mathcal{L}(\bm{\theta}, \lambda; \mathcal{T}_\mathcal{N}, \mathcal{T}_g, \mathcal{T}_h, \mathcal{T}_f) \coloneqq w_\mathcal{N} \mathcal{L}_\mathcal{N}(\bm{\theta}, \lambda; \mathcal{T}_\mathcal{N}) + w_g \mathcal{L}_g(\bm{\theta}, \lambda; \mathcal{T}_g) + w_h \mathcal{L}_h(\bm{\theta}, \lambda; \mathcal{T}_h) + w_f \mathcal{L}_f(\bm{\theta}, \lambda; \mathcal{T}_f), \end{align} where \begin{align} & \mathcal{L}_f(\bm{\theta}, \lambda; \mathcal{T}_f) = \frac{1}{|\mathcal{T}_f|}\sum_{(\bm{x}, t)\in\mathcal{T}_f}\left|\hat{u}(\bm{x}, t;\bm{\theta})-f(\bm{x}, t) \right|^2. \end{align} Another notable change from Eq.~\eqref{eq:pinn-res-min-forward} to Eq.~\eqref{eq:pinn-res-min-inverse} is that the unknown $\lambda$ also becomes a trainable parameter of the model, and it is jointly learned with $\bm{\theta}$ by minimizing Eq.~\eqref{eq:pinn-res-min-inverse}. \section{Discovering governing equations in discrete systems} \label{sec:disc_pinns} The problem of interest to us herein concerns the data-driven discovery of governing equations, and in particular, the ones corresponding to the nonlinear dynamical lattices discussed in Section~\ref{sec:discrete_systems}. To do so, consider a 1D lattice of $N$ nodes with a dynamical variable $u_n(t)\in\mathbb{R}$ (or $\mathbb{C}$) associated to each node $n=1, 2, \dots, N$. Assume that the evolution of $\bm{u} = (u_1, u_2, \dots, u_N)$ is governed by the following nonlinear dynamics \begin{align} \label{eq:general-ode} \dot{\bm{u}} = (\dot{u}_1, \dot{u}_2, \dots, \dot{u}_N) = \mathcal{N}(u_1, \dots, u_N), \end{align} where $\mathcal{N}:\mathbb{R}^N\to\mathbb{R}^N$ (or $\mathbb{C}^N\to\mathbb{C}^N$) is an operator composed of inter-site couplings between nearest neighbors, but the explicit form of $\mathcal{N}(u_1, \dots, u_N)$ is unknown. Our objective is to learn the governing equation $\mathcal{N}$ from sparse (temporal) observations of the nonlinear dynamics $\bm{u}(t)=\bm{f}(t)$ at times $t\in \mathcal{T}_{\bm{f}}\subset [0, T]$, where $T>0$ is the terminal time of the system. The differences between our setting and the PDE inverse problem explained in Section~\ref{sec:pinns} are twofold. First, even though the inter-site couplings in $\mathcal{N}$ between nearest neighbors can sometimes be viewed as finite differences, bearing resemblance to their continuous counterparts of (spatial) differential operators (cf.~Section~\ref{sec:discrete_systems}), the system described by Eq.~\eqref{eq:general-ode} is intrinsically discrete. Second, unlike the parametric nonlinear operator $\mathcal{N}(u;\lambda)$ in Eq.~\eqref{eq:general-pde}, whose explicit dependence on $\lambda$ is given, the governing equation [cf. Eq.~\eqref{eq:general-ode}] is generally unknown, except for the prior knowledge that the right-hand side $\mathcal{N}(u_1, \dots, u_N)$ involves only a shift-invariant coupling to nearby sites. We thus make the following modifications to the PINN model of Eq.~\eqref{eq:pinn-res-min-inverse}. 
For systems with real dynamical variables $\bm{u}(t)\in \mathbb{R}^N$, the PINN $\hat{\bm{u}}: \mathbb{R}\to\mathbb{R}^N$ takes only time $t$ as the input, which is mapped through an $L$-layer fully-connected neural network to the output corresponding to an $N$-dimensional vector $\hat{\bm{u}}(t) = (\hat{u}_1(t),\hat{u}_2(t),\dots, \hat{u}_N(t))\in\mathbb{R}^N$. Since the form of $\mathcal{N}(u_1, \dots, u_N)$ is not known, we build an overcomplete library $\mathrm{Lib}= \{D_\alpha\}_{\alpha\in A}$ of shift-invariant discrete spatial operators modeling the linear inter-site couplings between nearest neighbors, as well as different types of nonlinear contributions. For instance, one dictionary element that is included in many of our numerical experiments is the discrete Laplacian \begin{align} (D_2 \bm{u})_n = u_{n-1} - 2u_n + u_{n+1}. \end{align} The unknown operator $\mathcal{N}:\mathbb{R}^N\to\mathbb{R}^N$ is then modeled as a linear combination $\mathcal{N} = \sum_{\alpha\in A}\lambda_\alpha D_\alpha$ of elements $D_\alpha$ in the library, and the expansion coefficients $\bm{\lambda} = (\lambda_\alpha)_{\alpha\in A}$ are learned by minimizing the loss function \begin{align} \mathcal{L}(\bm{\theta}, \bm{\lambda}; \mathcal{T}_\mathcal{N}, \mathcal{T}_{\bm{f}}) \coloneqq w_\mathcal{N}\mathcal{L}_\mathcal{N}(\bm{\theta}, \bm{\lambda};\mathcal{T}_\mathcal{N}) + w_{\bm{f}}\mathcal{L}_{\bm{f}}(\bm{\theta}, \bm{\lambda};\mathcal{T}_{\bm{f}}), \end{align} where \begin{align} & \mathcal{L}_\mathcal{N}(\bm{\theta}, \bm{\lambda}; \mathcal{T}_\mathcal{N}) = \frac{1}{|\mathcal{T}_\mathcal{N}|}\sum_{t\in\mathcal{T}_\mathcal{N}}\left|\dot{\hat{\bm{u}}}(t;\bm{\theta})-\sum_{\alpha\in A}\lambda_\alpha D_\alpha \hat{\bm{u}}(t; \bm{\theta}) \right|^2,\\ & \mathcal{L}_{\bm{f}}(\bm{\theta}, \bm{\lambda}; \mathcal{T}_{\bm{f}}) = \frac{1}{|\mathcal{T}_{\bm{f}}|}\sum_{t\in\mathcal{T}_{\bm{f}}}\left|\hat{\bm{u}}(t;\bm{\theta})-\bm{f}(t) \right|^2, \end{align} and $\mathcal{T}_\mathcal{N}$, $\mathcal{T}_{\bm{f}}$, respectively, are subsets of $[0,T]$ corresponding to the training collocation points at which the ODE residual and the discrepancy between $\hat{\bm{u}}$ and the observed $\bm{f}$ are minimized. It should be noted that although the notation $D_2$ prompts one to think of derivative operators, the symbols $D_\alpha$ more generally denote elements of the full operator, some of which will, by necessity, reflect the nonlinearity of the model (so they should generally be thought of as nonlinear operators). In concluding this section, it is worth pointing out that when the dynamical variables are complex, i.e., $\bm{u}(t)\in \mathbb{C}^N$, they can be decomposed into real and imaginary parts, i.e., $\bm{u}^{\mathrm{(R)}}$ and $\bm{u}^{\mathrm{(I)}}$, respectively, thus rendering the dynamical variable a mapping of the form $\bm{u}:[0,T]\to\mathbb{R}^{2N}$. This way, the PINN becomes a map $\hat{\bm{u}}:\mathbb{R}\to\mathbb{R}^{2N}$, in which time is again passed through a fully-connected neural network, now with the $2N$-dimensional output $\hat{\bm{u}}(t) = (\hat{u}_1^{\mathrm{(R)}}(t),\hat{u}_2^{\mathrm{(R)}}(t),\dots, \hat{u}_N^{\mathrm{(R)}}(t), \hat{u}_1^{\mathrm{(I)}}(t),\hat{u}_2^{\mathrm{(I)}}(t),\dots, \hat{u}_N^{\mathrm{(I)}}(t))\in\mathbb{R}^{2N}$. Having set up the stage of our computations, we are now ready to turn to the details of our numerical experiments; a schematic implementation of the above construction is sketched first.
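To fix ideas, the following minimal sketch shows how the joint minimization over $(\bm{\theta},\bm{\lambda})$ can be assembled in PyTorch (DeepXDE, which we use in Section~\ref{sec:num_exp}, automates much of this bookkeeping). The network size, the three-term library, the loss weights, and the placeholder observations are illustrative assumptions, not our exact configuration.
\begin{verbatim}
import torch

N = 21                                    # lattice sites (illustrative)
net = torch.nn.Sequential(                # t -> (u_1(t), ..., u_N(t))
    torch.nn.Linear(1, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, 40), torch.nn.Tanh(),
    torch.nn.Linear(40, N))

# shift-invariant candidate terms D_alpha, evaluated on interior nodes only
def lap(u):   return u[:, 2:] - 2.0*u[:, 1:-1] + u[:, :-2]
def lin(u):   return u[:, 1:-1]
def cubic(u): return u[:, 1:-1]**3
library = [lap, lin, cubic]
lam = torch.zeros(len(library), requires_grad=True)   # lambda_alpha

def u_and_dudt(t):
    u = net(t)                            # shape (batch, N)
    cols = [torch.autograd.grad(u[:, n].sum(), t, create_graph=True)[0]
            for n in range(N)]            # du_n/dt via automatic differentiation
    return u, torch.cat(cols, dim=1)

def loss(t_col, t_obs, f_obs, w_res=1.0, w_dat=1.0):
    u_c, ut_c = u_and_dudt(t_col)
    rhs = sum(l*D(u_c) for l, D in zip(lam, library))
    res = ((ut_c[:, 1:-1] - rhs)**2).mean()           # ODE residual loss
    dat = ((net(t_obs) - f_obs)**2).mean()            # data-misfit loss
    return w_res*res + w_dat*dat

t_col = (10.0*torch.rand(200, 1)).requires_grad_(True)  # collocation in [0, 10]
t_obs = torch.linspace(0.0, 10.0, 50).reshape(-1, 1)    # 50 samples (Sec. 5.1)
f_obs = torch.zeros(50, N)                # placeholder for the measured f(t)

opt = torch.optim.Adam(list(net.parameters()) + [lam], lr=1e-3)
for epoch in range(5000):
    opt.zero_grad()
    loss(t_col, t_obs, f_obs).backward()
    opt.step()
\end{verbatim}
In practice, the observed snapshots $\bm{f}(t)$ replace the placeholder array, and the learned entries of \texttt{lam} are monitored as a function of the training epoch, as in the figures that follow.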
\section{Numerical Experiments} \label{sec:num_exp} In all the numerical experiments that we discuss below, we consider the models summarized in Table~\ref{our_models}. Moreover, we take the number of lattice sites $N$ to be in the range $20$ to $31$. In our experiments we find that changing the size of the lattice does not change the results in any dramatic way. On the other hand, our experiments suggest that learning is faster when more lattice sites are involved in the dynamics than when the dynamics is local to only a few sites. We made this observation while trying different initial conditions (ICs) for the $\phi^4$ model, where ICs that led to dynamical behavior involving a larger number of lattice nodes led to faster learning and convergence. \subsection{Data generation} \label{sec:data_generation} At first, we solve the initial-value problems (IVPs) consisting of the models of Table~\ref{our_models} and the specific ICs in order to obtain spatio-temporal data that will be used for training our PINNs. To do so, we employ temporal integration. In particular, we use a fourth-order Runge-Kutta (RK4) method for the discrete $\phi^4$ and DsG examples, and an implicit backward difference scheme~\cite{hairer_wanner_I} for the DNLS and DCGL examples. The ICs for our data generation (see Table~\ref{our_models}) are inspired by the exact solutions of the continuous analogues of our models, although they are not always exact solutions of the discrete models themselves. For example, for the discrete $\phi^4$ and DsG cases, we use a traveling kink solution of the respective continuum models. On the other hand, we use a Gaussian pulse in lieu of a bright soliton for the DNLS, in order to observe an example of dynamics not necessarily proximal to a solitonic equilibrium. Finally, in the DCGL case, we use a form of the so-called Nozaki-Bekki holes \cite{RevModPhys.74.99}. We solve the respective IVPs from $t=0$ to $t=10$ with a time step of $dt = 10^{-3}$. We then extract $50$ samples from the simulated data at equal time intervals ($\Delta t = 0.2$), and use these samples to train our neural network; a sketch of this data-generation step is given at the end of this section. \subsection{Neural Network setup} We conducted all of our experiments, both those presented here and additional ones, using the DeepXDE library~\cite{lu2021deepxde}. Our neural networks take as input only time ($t$); for first-order systems they output $u_n(t)$, $\forall n \in \{1, \dots, N\}$, while for second-order systems they output $u_n(t)$ and $v_n(t) = \dot u_n(t)$, $\forall n \in \{1, \dots, N\}$. Furthermore, as was already mentioned in Section~\ref{sec:disc_pinns}, in the cases where we have complex data (such as in the DNLS and DCGL cases), we split the data into real and imaginary parts and learn the two simultaneously, i.e., our neural network outputs $u_n^{\mathrm{(R)}}(t)$ and $u_{n}^{\mathrm{(I)}}(t)$, $\forall n \in \{1, \dots, N\}$. In the residual losses we construct, we only consider the interior nodes (i.e., nodes having both nearest neighbors). This eliminates the need to know the BCs that govern the underlying data. Furthermore, for the second-order systems, we consider residual losses on both displacements and velocities $(u_n,v_n)$, as discussed above. Lastly, the neural network architecture used in all the experiments involves fully-connected networks consisting of three hidden layers with 40 neurons each, and each layer uses the $\tanh$ activation function.
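For illustration, the following is a minimal data-generation sketch for the discrete $\phi^4$ lattice. It assumes, for concreteness, the common normalization $\ddot{u}_n = C(u_{n+1}-2u_n+u_{n-1}) + u_n - u_n^3$ with fixed ends and a static kink-shaped IC; our actual runs used the traveling-kink IC of Table~\ref{our_models}, so the specific coefficients and IC here are illustrative only.
\begin{verbatim}
import numpy as np

# Discrete phi^4 lattice, assumed form:
#   u'' = C*(u_{n+1} - 2u_n + u_{n-1}) + u_n - u_n**3, fixed ends
C, N = 2.0, 21
x = np.arange(N) - N // 2
u0 = np.tanh(x / np.sqrt(2.0 * C))    # kink-like IC from the continuum model
v0 = np.zeros(N)

def rhs(y):
    u, v = y[:N], y[N:]
    acc = np.zeros(N)
    acc[1:-1] = C*(u[2:] - 2*u[1:-1] + u[:-2]) + u[1:-1] - u[1:-1]**3
    return np.concatenate([v, acc])

def rk4(y, dt):                        # classical fourth-order Runge-Kutta step
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    return y + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

dt, T = 1e-3, 10.0                     # step and horizon used in the text
y = np.concatenate([u0, v0])
traj = [y.copy()]
for _ in range(int(T/dt)):
    y = rk4(y, dt)
    traj.append(y.copy())
traj = np.array(traj)

keep = np.arange(1, 51) * int(0.2/dt)  # 50 snapshots at Delta t = 0.2
samples = traj[keep]
\end{verbatim}
The implicit integrator used for the DNLS and DCGL cases replaces the \texttt{rk4} step, while the sampling of the snapshots remains unchanged.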
\subsection{Results and Discussion} We start the presentation of our results with the arguably simplest among our selected models, namely the (real) discrete $\phi^{4}$ model given by Eq.~\eqref{dphi4}. We consider first the following library of terms \begin{align} \mathrm{Lib}^{(1)}=\Big\{\left(u_{n+1}+u_{n-1}-2u_n\right),\left(u_{n+1}-u_{n-1}\right)/2,u_n,u_n^3\Big\}, \label{dphi4_lib1} \end{align} which contains the discrete Laplacian operator as well as the finite-difference representation of the first derivative (i.e., the second element in Eq.~\eqref{dphi4_lib1}, corresponding to a centered, second-order accurate finite-difference operator for $u_{x}$) alongside linear and cubic terms in $u_{n}$. One can argue that this is a library inspired by the continuum analogue of the model and its respective (potential) derivative term inclusions. Our results for $C=2$ in this case are depicted in Fig.~\ref{fig:exps_dphi4}(a), where the solid red, blue, green and yellow lines correspond to the discrete representation of $u_{xx}$, $u_{x}$, $u$, and $u^{3}$, respectively. Indeed, the PINN learns the correct coefficients, and most importantly, it learns that there is no $u_{x}$ (namely, its discrete version) present in the governing equation for our data. In the experiments that are shown in Figs.~\ref{fig:exps_dphi4}(b)-(d), we take a more ``inherently discrete'' approach to the relevant problem. More specifically, motivated by the discrete coupling between neighboring sites (rather than by combinations suggestive of derivatives), we disaggregate the (discrete) operator $(u_{n+1}+u_{n-1}-2u_n)$ into its constituents. Namely, instead of trying to learn the particular form of the inter-component difference, we learn the dependence of the governing equation on each of the sets of nearest neighbors. In particular, the panels (b), (c), and (d) in the figure consider respectively the following libraries: \begin{align} \mathrm{Lib}^{(2)}=\Big\{u_{n+1},u_{n-1},u_{n},u_{n}^{3}\Big\}, \label{dphi4_lib2} \end{align} \begin{align} \mathrm{Lib}^{(3)}=\Big\{\mathrm{Lib}^{(2)},u_{n+2},u_{n-2}\Big\}, \label{dphi4_lib3} \end{align} and {\small \begin{align} \mathrm{Lib}^{(4)}=\Big\{\mathrm{Lib}^{(2)}, u_{n+1}^2u_n, u_{n-1}^2u_n, u_{n+1}u_{n-1}u_n, u_{n+1}^2u_{n-1}, u_{n-1}^2u_{n+1}, u_n^2u_{n+1}, u_n^2u_{n-1}, u_{n+1}^3, u_{n-1}^3\Big\}. \label{dphi4_lib4} \end{align} } $\mathrm{Lib}^{(2)}$ is the simplest example containing the main ingredients of the original model. Hence, one would like to check whether the methodology can ``disentangle'' the role of these ingredients from other similar contributions to both the linear and nonlinear terms. With that in mind, $\mathrm{Lib}^{(3)}$ is essentially an augmentation of $\mathrm{Lib}^{(2)}$ in which the next-nearest neighbors, i.e., $u_{n\pm2}$, are appended. Moreover, $\mathrm{Lib}^{(4)}$ contains several possibilities for the cubic nonlinearity of the model. Indeed, in the latter case all possible combinations involving nearest-neighbor contributions to a cubic nonlinearity have been incorporated; see, e.g., Ref.~\cite{Dmitriev2009} where also the complex variant of such terms is discussed in the context of the DNLS model. In Fig.~\ref{fig:exps_dphi4}(b), we observe that our gradient optimization method converges to the correct coefficients, thus recovering the correct nonvanishing prefactors for each of the relevant term contributions.
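As a bookkeeping aid, the disaggregated representation and the coefficient values it should recover can be tabulated programmatically. The sketch below assumes the normalization $\ddot{u}_n = C\,(u_{n+1}-2u_n+u_{n-1}) + u_n - u_n^3$ used in the data-generation sketch above; the dashed reference lines in Fig.~\ref{fig:exps_dphi4} correspond to the analogous values implied by Eq.~\eqref{dphi4}.
\begin{verbatim}
# Lib^(2) terms evaluated on the interior sites of a snapshot u; additional
# entries (u_{n+2}, cross terms, ...) extend this dictionary to Lib^(3,4)
def lib2(u):
    return {"u_{n+1}": u[2:], "u_{n-1}": u[:-2],
            "u_n": u[1:-1], "u_n^3": u[1:-1]**3}

# coefficients these terms should converge to if the data obey the
# assumed form u'' = C*(u_{n+1} - 2*u_n + u_{n-1}) + u_n - u_n**3
def lib2_targets(C):
    return {"u_{n+1}": C, "u_{n-1}": C, "u_n": 1.0 - 2.0*C, "u_n^3": -1.0}

print(lib2_targets(2.0))
# {'u_{n+1}': 2.0, 'u_{n-1}': 2.0, 'u_n': -3.0, 'u_n^3': -1.0}
\end{verbatim}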
Next, in the numerical experiments presented in Figs.~\ref{fig:exps_dphi4}(c)-(d), we consider the libraries of Eqs.~\eqref{dphi4_lib3} and~\eqref{dphi4_lib4}, respectively, and try to find the dependence on next-to-next neighbors of each node. Here, we expect that solely the relevant ``ingredient'' terms will be selected, while the prefactor of extraneous contributions will converge to zero. However, it is important to note, as a limitation of the method, that for libraries that contained even-ordered terms (in particular, quadratic and quartic terms in our experiments), the model had difficulty learning the correct coefficients, and only by using data augmentation were we able to get the model to learn the correct coefficients. More specifically, we accomplished data augmentation by using the fact that if $u$ is a solution to our system, so is $-u$, i.e., leveraging the relevant invariance of the model under this parity transformation of the field. This is in line with the earlier work of~\cite{zhu_neural_2022}, where the symmetries of the model were leveraged to enhance the network's ability to solve the inverse problem. We thus made the model learn both solutions simultaneously, and as expected, the model learned that the governing equations do not have any even-order terms in them. In that sense, we have ensured (results not shown here for brevity) that, using both $u$ and $-u$, even-ordered terms in the library do not alter the findings presented in Fig.~\ref{fig:exps_dphi4}. \begin{figure}[!ph] \centering \begin{overpic}[width=0.4\textwidth]{phi41.jpg} \put(50,-4){$(a)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{phi43.jpg} \put(50,-4){$(b)$} \end{overpic} \vskip 0.5cm \begin{overpic}[width=0.4\textwidth]{phi42.jpg} \put(50,-4){$(c)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{phi44.jpg} \put(50,-4){$(d)$} \end{overpic} \vspace{0.2cm} \caption{Numerical results for the discrete $\phi^{4}$ model [cf.~Eq.~\eqref{dphi4}] with $C=2$. In panel (a), the library $\mathrm{Lib}^{(1)}$ of Eq.~\eqref{dphi4_lib1} was considered, where the solid blue, red, green and yellow lines correspond to the discrete representation of $u_{xx}$, $u_{x}$, $u$, and $u^{3}$, respectively. The numerical results obtained by using the library $\mathrm{Lib}^{(2)}$ [cf.~Eq.~\eqref{dphi4_lib2}] are presented in panel (b), where solid blue, red, green, and yellow depict the $u_{n+1}$, $u_{n-1}$, $u_{n}$, and $u_{n}^{3}$ terms, respectively. The same line-coloring-to-terms correspondence is used in panels (c) and (d), utilizing the libraries of Eqs.~\eqref{dphi4_lib3} and~\eqref{dphi4_lib4}, respectively. The solid black lines therein correspond (c) to the terms $u_{n\pm2}$, and (d) to all the other cubic terms. In all the panels, the dashed lines serve as reference values for the actual values of the coefficients.} \label{fig:exps_dphi4} \end{figure} \begin{figure}[!pt] \centering \begin{overpic}[width=0.4\textwidth]{DNLSLib1C2.jpg} \put(50,-4){$(a)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{DNLSLib1C05.jpg} \put(50,-4){$(b)$} \end{overpic} \vspace{0.2cm} \caption{Numerical results for the DNLS [cf.~Eq.~\eqref{dnls}] with (a) $C=2$ and (b) $C=1/2$ (see also Eq.~\eqref{dnls_lib_1}). The dashed lines are reference values corresponding to the actual values of the coefficients (see text). The solid lines (see the legends of each panel) with colors other than black correspond to the imaginary parts of the trained coefficients $b_{1}$ (blue), $b_{2}$ (red), $b_{3}$ (green), and $b_{4}$ (orange).
The solid black lines depict the real parts of the trained coefficients ($a_{k}$).} \label{fig:exps1_2_dnls} \end{figure} Our next example involves the complex variant of the model, namely the DNLS [cf.~Eq.~\eqref{dnls}], which enables considerable additional richness in terms of the available nonlinear terms (see, e.g., Eq.~(16.11) in~\cite{Dmitriev2009}). It should be noted again that the PINN models consider real coefficients, and thus we split both the coefficients and the state variables into real and imaginary parts. As a result, we construct separate losses for the real and imaginary parts, and set up the PINN to learn the respective coefficients simultaneously. Indeed, the panels (a) and (b) in Fig.~\ref{fig:exps1_2_dnls} summarize our results herein for $C=2$ (a setting more proximal to the continuum limit) and $C=1/2$ (i.e., a rather discrete case), respectively. For this numerical experiment, we consider a library of the form: \begin{align} \dot{u}_n = \alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n+ \alpha_4 |u_n|^2u_n, \,\,\, \alpha_k = a_k+ib_k\in \mathbb{C}, \,\,\, a_{k},b_{k}\in\mathbb{R}, \,\,\, k=\{1,2,3,4\}, \label{dnls_lib_1} \end{align} for both cases (i.e., Figs.~\ref{fig:exps1_2_dnls}(a)-(b)). It can be discerned from both panels of Fig.~\ref{fig:exps1_2_dnls} that the PINN learns purely imaginary coefficients (i.e., only the $b_{k}$ are nonzero), as expected (see the red, blue, green, and yellow lines therein). That is, in this case, the scheme detects the effectively conservative nature of the model, since real coefficients would be tantamount to gain or loss terms. Consequently, here the real parts $a_{k}$, denoted by solid black lines, converge to zero. For convenience, in both panels we include the correct values of the coefficients for comparison. \begin{figure}[!pt] \centering \begin{overpic}[width=0.4\textwidth]{DNLSLib2C05.jpg} \put(50,-4){$(a)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{DNLSLib3C05.jpg} \put(50,-4){$(b)$} \end{overpic} \vspace{0.2cm} \caption{Numerical results for the DNLS [cf.~Eq.~\eqref{dnls}], both with $C=1/2$, using the library of (a) Eq.~\eqref{dnls_lib_3} and (b) Eq.~\eqref{dnls_lib_4}. Similar to Fig.~\ref{fig:exps1_2_dnls}, the dashed lines are reference values corresponding to the actual values of the coefficients. For the line coloring, see the legends of each panel.} \label{fig:exps3_4_dnls} \end{figure} We further performed numerical experiments on the DNLS with other libraries, motivated by the general cubic nonlinearity form presented in~\cite{Dmitriev2009}, and found that the PINN is capable of learning the coefficients of the DNLS correctly.
Indicatively, we demonstrate in Figs.~\ref{fig:exps3_4_dnls}(a)-(b) two cases with $C=1/2$ that, respectively, consider the following libraries: \begin{align} \dot u_n &=\alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n + \alpha_4 |u_n|^2u_n + \alpha_5 |u_n|^2 + \alpha_6 u_n^2 + \alpha_7 (u_n^{*})^2+\alpha_8 \frac{u_{n+1} + u_{n-1}}{2}u_n \nonumber \\ &+\alpha_9 |u_n|^2(u_{n+1}+u_{n-1}) + \alpha_{10} u_n(|u_{n+1}|^2 + |u_{n-1}|^2), \,\,\,\alpha_k = a_k+ib_k\in \mathbb{C},\,\,\, k=\{1,\dots,10\}, \label{dnls_lib_3} \end{align} and \begin{align} \dot u_n &= \alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n + \alpha_4 |u_n|^2u_n + \alpha_5 |u_n|^2 + \alpha_6 |u_{n+1}|^2 + \alpha_7 |u_{n-1}|^2+ \alpha_8 |u_{n+1}|^2u_{n+1} \nonumber \\ & + \alpha_9 |u_{n-1}|^2u_{n-1}, \,\,\,\alpha_k = a_k+ib_k\in \mathbb{C},\,\,\, k=\{1,\dots,9\}, \label{dnls_lib_4} \end{align} where both $a_{k}$ and $b_{k}$ are real as before. Notice that once again, similarly to the $\phi^4$ case discussed above, we have included terms that are quadratic in nature, and, as in that case, we needed to use data augmentation in order to retrieve the correct coefficients in the presence of such quadratic terms. Despite the generality of the above libraries, which contain various quadratic and cubic nonlinearities, the PINN model correctly learned the (purely imaginary) coefficients, as can be discerned from panels (a) and (b) of Fig.~\ref{fig:exps3_4_dnls}. We mention in passing that we tried other values for the coupling constant $C$ as well as other libraries (alongside the ones presented so far), and found that in all the cases considered the PINN discovered the correct coefficients. \begin{figure}[!pt] \centering \begin{overpic}[width=0.4\textwidth]{CGLELib1C2.jpg} \put(50,-4){$(a)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{CGLELib1C05.jpg} \put(50,-4){$(b)$} \end{overpic} \vspace{0.2cm} \caption{Numerical results for the DCGL [cf.~Eq.~\eqref{dcgl}] with (a) $C=2$ and (b) $C=1/2$ (see also Eq.~\eqref{dnls_lib_1}). Same as before, the dashed lines are reference values corresponding to the actual values of the coefficients. Note that $\mathrm{Re}(\alpha_{i})=a_{i}$ and $\mathrm{Im}(\alpha_{i})=b_{i}$ (see also the legend for the coloring-to-coefficients mapping).} \label{fig:exps1_dcgl} \end{figure} Having discussed the DNLS, we turn our focus to the DCGL model [cf.~Eq.~\eqref{dcgl}]. Our motivation in doing so was to investigate whether PINNs can learn complex coefficients, when such coefficients are relevant for the (general) libraries considered herein. At first, we consider the prototypical library of Eq.~\eqref{dnls_lib_1}, where we expect to discover the complex prefactor of the discrete Laplacian term, including the equal (complex) coefficients of the $u_{n \pm 1}$ terms, as well as that of the linear term $\propto u_n$. Moreover, the PINN method is able to capture equally accurately not only the above linear terms, but also the complex ($-(1-i)$) prefactor of the cubic nonlinearity. As is clearly shown in Fig.~\ref{fig:exps1_dcgl}, all the relevant terms are accurately identified, while the prefactors of additional, irrelevant terms in the library converge to vanishing values beyond a suitably large number of epochs. We found this to be true for a variety of libraries, including ones with even-ordered terms and next-to-next neighbors.
We even performed experiments using the libraries of Eqs.~\eqref{dnls_lib_3} and~\eqref{dnls_lib_4} and found that our models learned the correct coefficients. Finally, we choose the discrete sine-Gordon (DsG) model of Eq.~\eqref{dsG} as an intriguing example because it contains the $\sin\left(u_{n}\right)$ term. The latter can be expanded in a Taylor series, yet it cannot be fully approximated by means of a power-law library. It is presumably for this reason that relevant attempts at the inverse problem of the pendulum~\cite{pmlr-v190-lee22a} or the double pendulum~\cite{kaheman} involve libraries containing trigonometric terms (rather than purely power-law ones). This key difference of the present model from the earlier ones inspired us to use sine-series-based libraries, polynomial libraries, and mixed libraries in order to explore what the PINN model would learn in each case and what the limitations of each case example may be. For all the numerical experiments that we discuss here, we picked $C=1/2$, and the respective results are shown in Fig.~\ref{fig:exps_dsG}. In particular, Fig.~\ref{fig:exps_dsG}(a) considers the library (once again, effectively, building in the $u_n \rightarrow -u_n$ invariance of the model): {\small \begin{align} \ddot{u}_n = \alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n + \alpha_4 \sin{(u_n)} + \alpha_5 \sin{(2u_n)} + \alpha_6 \sin{(3u_n)} + \alpha_7 \sin{(4u_n)} + \alpha_8 \sin{(5u_n)}. \label{dsG_lib1} \end{align} } Here, the PINN model is able to learn the correct sine term (notice that the solid black lines in the figure correspond to $\alpha_{k}=0,\,\,k=5,6,7,8$, upon convergence). Next, the results presented in Fig.~\ref{fig:exps_dsG}(b) explore the case of a power-law-based library of functions. Indeed, in this case, we consider a library that contains three terms of the Taylor series expansion of the sine function: \begin{align} \ddot{u}_n = \alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n + \alpha_4 u_n^3 + \alpha_5 u_n^5. \label{dsG_lib2} \end{align} It can be discerned from panel (b) of the figure that the model tries to learn a (truncated) Taylor series expansion of the sine term. However, we should mention that the model seems to be very sensitive when it comes to polynomial libraries, while, at the same time, the number of terms in the library has a considerable impact on what the model learns (even when all the terms are odd powers). Indeed, in this case, for instance, the yellow curve associated with coefficient $a_4$ converges to a finite value which is clearly distinct from, e.g., the theoretical Taylor-expansion prediction of $1/5!$. We can thus observe the relevant limitation of the approach, in that a polynomial-based library is unable to fully capture the effects of a sinusoidal nonlinearity. We conclude our series of experiments by discussing Fig.~\ref{fig:exps_dsG}(c), which employs the library: \begin{align} \ddot{u}_n = \alpha_1 u_{n+1} + \alpha_2 u_{n-1} + \alpha_3 u_n + \alpha_4 u_n^3 + \alpha_5 u_n^2 + \alpha_6 u_n^5 + \alpha_7 \sin{(u_n)} + \alpha_8 \sin{(2u_n)}, \label{dsG_lib3} \end{align} i.e., a setting containing both polynomial and sinusoidal terms. Our numerical results suggest that there is a competition between the polynomial and sine terms in trying to describe the nonlinearity of the model.
It should be noted, however, that when trying to learn the coefficients for this model, the choice of ICs may have a significant impact, especially in cases of this sort, with different competing terms contributing at the same order. We briefly report that the choice of IC interacts with the library: using the exact solution of the continuous sine-Gordon equation seemed to work better for libraries with sine terms in them, while using the exact solution of the continuous $\phi^4$ model seemed to work better for libraries with polynomial terms corresponding to the Taylor expansion of the $\sin$ nonlinearity. \begin{figure}[!pt] \centering \begin{overpic}[width=0.4\textwidth]{SGLib1C05.jpg} \put(50,-4){$(a)$} \end{overpic} \begin{overpic}[width=0.4\textwidth]{SGLib2C05.jpg} \put(50,-4){$(b)$} \end{overpic} \vskip 0.5cm \begin{overpic}[width=0.4\textwidth]{SGLib3C05.jpg} \put(50,-4){$(c)$} \end{overpic} \vspace{0.2cm} \caption{Numerical results for the DsG model [cf.~Eq.~\eqref{dsG}] with $C=1/2$. In panel (a), the library of Eq.~\eqref{dsG_lib1} was considered, whereas panel (b) utilized the library of Eq.~\eqref{dsG_lib2}. The numerical results obtained while using the library of Eq.~\eqref{dsG_lib3} are depicted in panel (c). The legends in each of these panels offer the line-coloring-to-terms correspondence, and the dashed lines serve as reference values for the actual values of the coefficients.} \label{fig:exps_dsG} \end{figure} \section{Conclusions and Future Challenges} \label{sec:concl} In the present work, we have explored the methodology of Physics-Informed Neural Networks (PINNs) and how PINNs perform when attempting to solve the inverse problem in the context of nonlinear dynamical lattices with many degrees of freedom. We argued herein that, in addition to the relevant problem for PDEs, such lattice models are of particular interest in their own right for various physical contexts ranging from optics to atomic physics to materials science. Hence, a detailed understanding of the solution of the inverse problem of coefficient identification is of particular relevance in this context as well. Indeed, we envision the rather mature experimental observation and data-acquisition techniques in such settings to be of value in the near future, not only for machine-learning-based classification tasks, as, e.g., has recently been realized in~\cite{Guo_2021}, but also for data-driven modeling efforts. We started with a simpler, real-variable case example in the form of the $\phi^4$ model. Here, we were able to identify the coefficients, although a relevant limitation was that, in order to rule out spurious quadratic terms, we had to use the dynamics of both $u$ and $-u$, thereby building in the invariance of the model under this transformation. Both in the real case of the $\phi^4$ and in its complex analogue of the discrete nonlinear Schr{\"o}dinger lattice, we considered a wide variety of nonlinear terms. We thus confirmed that the additional nonlinearities bear prefactors that eventually (for sufficiently many epochs) tend to vanishing values, thereby recovering the models of interest. We did not restrict our considerations to purely real (or purely imaginary) coefficients, but rather extended them to models with complex ones, such as the discrete complex Ginzburg-Landau equation.
Finally, we considered cases beyond the setting of purely power-law nonlinearities, such as the sine-Gordon lattice. Here, too, we explored some of the limitations of the PINNs, such as their inability to fully capture sinusoidal effects with power-law-based libraries, but also the potential sensitivity to which the concurrent presence of trigonometric and power-law terms may lead. Naturally, the field of such inverse (and forward) problems in the realm of nonlinear dynamical lattices is still at a particularly early stage, and further studies are certainly warranted. Among the numerous points meriting further exploration, we note the case example of nonlinearities beyond nearest neighbors (and indeed the case of longer-range kernels). Moreover, here we have restricted considerations to $(1+1)$-dimensional problems, yet the examination of higher-dimensional settings is of particular interest in its own right. Beyond these examples, a progressively deeper understanding of the limitations of PINN- (or SINDy-)type approaches, and of how further inclusion of the model structure (conservation laws, symmetries, symplectic nature, etc.) of the underlying system may facilitate convergence, are, in our view, questions of importance for further studies. Such efforts are presently in progress and will be reported in future publications. \bibliographystyle{unsrt}
\section{Introduction} Most hot subdwarf-B (sdB) stars belong to the population of extreme-horizontal-branch (EHB) stars. The HB designation implies that they have ignited helium through the core-helium flash, and therefore have a core mass close to the helium-flash mass of $\sim$0.46\,\ensuremath{\rm{M}_{\odot}}. But in order to reach B-star effective temperatures, they must have shed almost their entire hydrogen-rich envelope close to the tip of the red-giant branch. Several binary scenarios have been identified that can produce EHB stars, including either common-envelope ejection or stable Roche-lobe overflow \citep[see][for a detailed review]{heber09}. Like the main-sequence B stars, the sdBs pulsate with short ($p$\,modes) and long ($g$\,modes) periods because of the opacity bump of the iron-group elements ($\kappa$ mechanism), but at much shorter periods than their main-sequence counterparts. A key element in the driving of pulsations in these stars is the competition between radiative levitation and gravitational settling, which causes a local overabundance of iron in the driving zone \citep{charpinet97,fontaine03}. The hotter short-period pulsators are known as V361-Hya stars after the prototype \citep{kilkenny97}, and equivalently the cooler long-period pulsators are known as V1093-Her stars \citep{green03}, and they are collectively referred to as sdBV stars. Most stars are predominantly one or the other type, but a few stars in the middle of the temperature range are hybrids that show both types of pulsations in equal measure \citep[][and references therein]{ostensen09}. The {\em Kepler}\ spacecraft spent four years monitoring a 105\,deg$^2$ field in the Cygnus--Lyrae region, with the primary goal of detecting transiting planets \citep{borucki11}. The high-quality lightcurves obtained by the spacecraft reveal a host of variable stars, providing a treasure trove for asteroseismic studies \citep{gilliland10a}. In the first four quarters of the {\em Kepler Mission}, a survey for pulsating stars was made, and a total of 113 compact-pulsator candidates were checked for variability by {\O}stensen et al.~(\citealt{ostensen10b}, \citealt{ostensen11b}\,=\,Paper\,{\sc i}). This very successful survey revealed one clear V361-Hya pulsator \citep{kawaler10a}, one other transient short-period pulsator, and a total of thirteen V1093-Her stars \citep[][Paper\,{\sc ii}]{reed10a,kawaler10b,baran11b}, including an sdB+dM eclipsing binary in which the hot primary shows an exceptionally rich pulsation spectrum \citep{2m1938}. Another three V1093-Her pulsators have been identified in the open cluster NGC\,6791 \citep{pablo11,reed12b}, bringing the total number of sdBV stars in the {\em Kepler}\ field to eighteen. A closely related object was discovered in {\em Kepler}-archive data by \citet{ostensen12b}: a BHB pulsator showing pulsation properties similar to the V1093-Her pulsators -- the first such object discovered. \begin{figure*}[t!] \centering \includegraphics[width=14cm]{Field.eps} \caption{ Field images for KIC\,10553698. The NOT/ALFOSC acquisition image (left) covers 2.5$\times$2.5 arc minutes, and the corresponding section from a {\em Kepler}\ full-frame image is also shown (right). The images are aligned so that north is up and east to the left. The target is the bright star in the centre of the field, and the four central pixels that are used for photometry for a typical quarter are outlined in the {\em Kepler}\ image.
} \label{fig:field} \end{figure*} Since the series of early papers based on one-month datasets obtained during the survey phase of the {\em Kepler Mission}, only seven of the $g$-mode pulsators have been subjected to detailed analysis based on many months of near-continuous data that the {\em Kepler}\ spacecraft gathered during the long-term monitoring phase. First, \citet{charpinet11b} analysed one year of data on \object{KIC\,5807616}, revealing long-period periodicities that may be signatures of planets in very close orbit around the otherwise single sdB star. \citet{baran12b} analysed nine months of data on \object{KIC\,2697388}, and \citet{telting12a} provided a detailed study of the 10-day sdBV+WD binary \object{KIC\,11558725}, based on 15 months of data. The study of \citet{baran12c} analysed 27 months of data on KIC\,2991403, KIC\,2438324, and KIC\,11179657. Most recently, \citet{reed14} have analysed the full 2.75 year dataset of \object{KIC\,10670103}, providing mode identifications for 178 of 278 detected frequencies. While the papers by \citet{VanGrootel10} and \citet{charpinet11a} matched asteroseismic models to observed frequencies based on survey data, no paper has yet attempted such forward modelling on a full two- to three-year {\em Kepler}\ dataset. The target presented in this work, KIC\,10553698, was included in the original survey and identified as a $g$-mode pulsator in \citetalias{ostensen11b}. The one-month discovery run was examined in \citetalias{baran11b} where 43 pulsation frequencies were identified. Thirty-seven of those frequencies were clearly in the $g$-mode region between 104 and 493\,\textmu Hz, four were found in the intermediate region between 750 and 809\,\textmu Hz, and two were in the high-frequency $p$-mode region at 3074 and 4070\,\textmu Hz. \citet[][Paper\,{\sc iii}]{reed11c} analysed the period spacing in this star along with thirteen other $g$-mode pulsators and made a first estimate of the mean period spacings for the non-radial $g$ modes of degree \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2. Since those early {\em Kepler}\ papers, KIC\,10553698\ was monitored by {\em Kepler}\ throughout quarters 8 to 17, but skipping Q11 and Q15 when the object fell on CCD Module \#3, which failed in January 2010. It is one of the brighter V1093-Her stars in the sample, but it suffers some contamination in the {\em Kepler}\ photometry due to crowding from nearby stars, as illustrated in Fig.\,\ref{fig:field}. Here we analyse all the available quarters of data from Q8 through to the end of the mission in Q17, and we provide a detailed frequency analysis with identification of the modes detected in the frequency spectrum. We also present new spectroscopic observations that demonstrate that KIC\,10553698\ is a single-lined spectroscopic binary with a 3.6\,d orbital period. We refer to the pulsator as \object{\target A} and the invisible companion as \object{\target B}. \section{Spectroscopic observations} We observed KIC\,10553698\ as part of our observing campaign dedicated to investigating the binary status of the hot subdwarfs in the {\em Kepler}\ field \citep{telting12b,telting13}. From 2010 to 2012 we obtained 42 radial-velocity (RV) measurements of KIC\,10553698, as listed in Table~\ref{tbl:rvs}. The first seven observations were made over six consecutive nights between August 25 and 30, 2010, using the ISIS long-slit spectrograph on the William Herschel Telescope (WHT), equipped with the R600B grating and using a 1.0" slit.
It was already clear from this initial set of high signal-to-noise (S/N) observations that the target changes its RV with a peak-to-peak amplitude of $\sim$130\,km/s with a period of around three days. Additional spectra were collected with the ALFOSC spectrograph at the Nordic Optical Telescope (NOT) between May 2011 and October 2012. In total 24 spectra were collected over 18 nights using a 0.5" slit with grism\,\#16. The final four of these were obtained using the spectrograph in vertical slit mode for fast readout. We found these observations to be systematically shifted by $\sim$30\,km/s with respect to the other observations, indicating a problem with the wavelength calibration in this setup, and we discarded these points rather than attempt to correct for the unexpected offset. A final set of ten spectra was obtained between September 29 and October 1, 2012, with the Kitt Peak 4-m Mayall telescope (KPNO) with RC-Spec/F3KB, the kpc-22b grating and a 2" slit. The WHT dataset is clearly the best, with a S/N close to 100 in each spectrum. The NOT data have variable quality between S/N\,$\sim$\,20 and 80 depending on observing conditions, and the KPNO data have S/N\,$\sim$\,50. All spectra were processed and extracted using standard {\scriptsize IRAF}\footnote{ {\sc iraf} is distributed by the National Optical Astronomy Observatory; see http://iraf.noao.edu/.} tasks. Radial velocities were computed with {\tt fxcor}, by cross-correlating with a synthetic template derived from a fit to a mean spectrum of the target, and using the $H\gamma$, $H\delta$, $H\zeta$, and $H\eta$ lines. For the ALFOSC data, the final RVs were adjusted for the position of the target on the slit, judged from slit images taken just before and after the spectra. Table~\ref{tbl:rvs} lists the observations with their mid-exposure dates and RV measurements with the {\tt fxcor} error ({\sc verr}), as well as the observatory, tabulated in the last column. For the final determination of the RV amplitude we used the orbital period and phase as determined from the {\em Kepler}\ photometry (see Section\,\ref{sect:orbit}, below), since these can be determined much more accurately than from the sparse spectroscopic observations. Fitting the amplitude and systemic velocity, we find \begin{eqnarray*} K_1 & = & 64.8 \pm 2.2\,\mathrm{km/s} \\ \gamma & = & 52.1 \pm 1.5\,\mathrm{km/s}. \end{eqnarray*} The phase-folded RV measurements are plotted together with the photometric orbital signal in Fig.\,\ref{fig:foldplot}. The mass function is then \begin{equation}\label{eq:fm} f(m) = \frac{(M_2 \sin i)^3}{(M_1 + M_2)^2} = 1.036\cdot10^{-7}\, K_1^3\, P\ \ensuremath{\rm{M}_{\odot}} = 0.095\,\ensuremath{\rm{M}_{\odot}}, \end{equation} where $K_1$ is in km/s and $P$ is in days. For a canonical 0.46\,\ensuremath{\rm{M}_{\odot}}\ primary, this provides a minimum mass for the secondary of 0.42\,\ensuremath{\rm{M}_{\odot}}. Thus, unless the orbital inclination is less than 29$^\circ$, \object{\target B}\ must be a white dwarf (WD). For a canonical 0.6\,\ensuremath{\rm{M}_{\odot}}\ WD, the inclination angle is $\sim52^\circ$.
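These numbers are straightforward to verify; a short Python cross-check of Eq.~\eqref{eq:fm} (a sketch, assuming {\tt scipy} is available; the variable names are ours) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

K1, P = 64.8, 3.38743            # km/s, days
fm = 1.036e-7 * K1**3 * P        # mass function in solar masses
print(f"f(m) = {fm:.3f} Msun")   # ~0.095

# minimum companion mass (i = 90 deg) for a canonical M1 = 0.46 Msun
M1 = 0.46
g = lambda M2, sini=1.0: (M2*sini)**3/(M1 + M2)**2 - fm
print(f"M2,min = {brentq(g, 0.01, 5.0):.2f} Msun")        # ~0.42

# inclination at which a canonical 0.6 Msun WD matches f(m)
sini = lambda i: np.sin(np.radians(i))
i_wd = brentq(lambda i: g(0.6, sini(i)), 1.0, 90.0)
print(f"i(0.6 Msun) ~ {i_wd:.0f} deg")                    # ~52
\end{verbatim}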
\begin{table}[b]\small\rm \caption[]{Physical parameters derived from the detrended mean spectra.} \label{tbl:physpar} \centering \begin{tabular}{llll} \hline\hline \noalign{\smallskip} Spectrum & \ensuremath{T_{\rm{eff}}} & \ensuremath{\log g} & \ensuremath{\log \left(N_{\mathrm{He}}/N_{\mathrm{H}}\right)} \\ & [K] & [dex] & [dex] \\ \noalign{\smallskip} \hline \noalign{\smallskip} WHT & 27413\,$\pm$\,\ 67 & 5.461\,$\pm$\,0.011 & --2.838\,$\pm$\,0.024 \\ NOT1 & 27007\,$\pm$\,136 & 5.404\,$\pm$\,0.021 & --2.809\,$\pm$\,0.017 \\ KPNO & 27712\,$\pm$\,\ 90 & 5.425\,$\pm$\,0.016 & --2.792\,$\pm$\,0.024 \\ \noalign{\smallskip} \hline \noalign{\smallskip} Adopted & 27423\,$\pm$\,293 & 5.436\,$\pm$\,0.024 & --2.813\,$\pm$\,0.019 \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=\hsize]{splot.eps} \caption{ Mean WHT spectrum of KIC\,10553698 after correcting for the orbital velocity. The S/N in this mean spectrum peaks at $\sim$200, and many weak metal lines can be distinguished in addition to the strong Balmer lines and \ion{He}{i} lines at 4472 and 4026\,\AA. The continuum of the normalised spectrum was sampled individually in 100\,\AA\ sections. Shifted up by 0.2 is the model fit computed with {\sc tlusty}/{\sc XTgrid}. Line identifications are given for lines stronger than 30\,m\AA\ in the model. The final parameters for this model fit are given in Table\,\ref{tbl:abund}. } \label{fig:wht_sp} \end{figure} \subsection{Atmospheric properties of the sdB} We determined \ensuremath{T_{\rm{eff}}}\ and \ensuremath{\log g}\ from each of three mean spectra, co-added after correcting for the radial-velocity variation of the orbit. We redetermined the physical parameters of \object{\target A}, using the H/He LTE grid of \citet{heber00} for consistency with \citetalias{ostensen11b}. We used all the Balmer lines from $H\beta$ to $H\kappa$ (excluding only the $H\epsilon$ line due to contamination with the \ion{Ca}{ii}-H line) and the five strongest \ion{He}{i} lines for the fit. The results are listed in Table~\ref{tbl:physpar}, with the error-weighted mean in the bottom row. The errors listed on the measurements are the formal errors of the fit, which reflect the S/N of each mean, while the errors on the adopted values are the rms of the three measurements, which reflect the systematics of using different spectrographs more than the quality of the observations. These values and errors are relative to the LTE model grid and do not reflect any systematic effects caused by the assumptions underlying those models. \begin{table}[b!] 
\caption[]{Fitted lines with equivalent widths larger than 50\,m\AA.} \label{tbl:lines} \centering \begin{tabular}{lcr|lcr} \hline\hline \noalign{\smallskip} Ion & Wavelength & \multicolumn{1}{c}{$W_\lambda$} & Ion & Wavelength & \multicolumn{1}{c}{$W_\lambda$} \\ & [\AA] & [m\AA] & & [\AA] & [m\AA] \\ \noalign{\smallskip} \hline \noalign{\smallskip} He {\sc i} & 3819.60 & 67.7 & N {\sc ii} & 4447.03 & 68.5 \\ He {\sc i} & 3888.60 & 57.6 & N {\sc ii} & 4530.41 & 80.5 \\ He {\sc i} & 3888.65 & 228.1 & N {\sc ii} & 4607.15 & 53.6 \\ He {\sc i} & 3964.73 & 72.2 & N {\sc ii} & 4621.39 & 65.7 \\ He {\sc i} & 4026.19 & 232.0 & N {\sc ii} & 4630.54 & 95.6 \\ He {\sc i} & 4387.93 & 98.8 & N {\sc ii} & 5001.13 & 63.3 \\ He {\sc i} & 4471.47 & 295.2 & N {\sc ii} & 5001.47 & 83.2 \\ He {\sc i} & 4471.49 & 258.9 & N {\sc ii} & 5005.15 & 96.5 \\ He {\sc i} & 4471.68 & 79.0 & N {\sc ii} & 5045.10 & 80.7 \\ He {\sc i} & 4713.14 & 50.0 & O {\sc ii} & 4349.43 & 63.8 \\ He {\sc i} & 4921.93 & 154.4 & O {\sc ii} & 4641.81 & 55.9 \\ He {\sc i} & 5015.68 & 96.2 & O {\sc ii} & 4649.14 & 55.3 \\ N {\sc ii} & 3994.99 & 95.2 & Si {\sc iii} & 3806.53 & 54.6 \\ N {\sc ii} & 4041.31 & 68.6 & Si {\sc iii} & 4552.62 & 73.7 \\ N {\sc ii} & 4043.53 & 59.4 & Si {\sc iii} & 4567.84 & 65.3 \\ N {\sc ii} & 4237.05 & 71.1 & Fe {\sc iii} & 4137.76 & 50.6 \\ N {\sc ii} & 4241.79 & 53.0 & Fe {\sc iii} & 4164.73 & 54.1 \\ N {\sc ii} & 4432.74 & 62.6 & Fe {\sc iii} & 4419.60 & 51.6 \\ \noalign{\smallskip} \hline \end{tabular}\end{table} \begin{table}[b] \caption[]{NLTE atmospheric parameters for the fit shown in Fig.\,\ref{fig:wht_sp}; abundances relative to the solar values from \citet{grevesse98} are provided for comparison.} \label{tbl:abund} \centering \begin{tabular}{llrrrl} \hline\hline \noalign{\smallskip} Parameter & Value & $+1 \sigma$ & $-1 \sigma$ & $\times$\,Solar & Unit\\ \noalign{\smallskip} \hline \noalign{\smallskip} \ensuremath{T_{\rm{eff}}}\ & 27750 & 130 & 70 & & K\\ \ensuremath{\log g}\ & 5.452 & 0.020 & 0.008 & & dex \\ $\log n(\mathrm{He})/n(\mathrm{H})$\ &--2.74 & 0.03 & 0.11 & 0.018 & dex \\ $\log n(\mathrm{C})/n(\mathrm{H}) $\ & $<$--6.1 & & & 0.001 & dex \\ $\log n(\mathrm{N})/n(\mathrm{H}) $\ &--4.45 & 0.13 & 0.23 & 0.427 & dex \\ $\log n(\mathrm{O})/n(\mathrm{H}) $\ &--4.63 & 0.31 & 0.18 & 0.035 & dex \\ $\log n(\mathrm{Si})/n(\mathrm{H})$\ &--5.65 & 0.24 & 0.46 & 0.063 & dex \\ $\log n(\mathrm{Fe})/n(\mathrm{H})$\ &--4.30 & 0.36 & 0.05 & 1.580 & dex \\ \noalign{\smallskip} \hline \end{tabular} \end{table} We also fitted the mean WHT spectrum with the NLTE model atmosphere code {\sc tlusty}\ \citep{tlusty}. Spectral synthesis was done with {\sc synspec}\,{\small 49}. Our models included H, He, C, N, O, Si and Fe opacities consistently in the calculations for atmospheric structure and synthetic spectra. The fit to the observation was done by the {\sc XTgrid}\ fitting program \citep{nemeth12}. This procedure is a standard $\chi^2$-minimisation technique, which starts from a detailed model and converges on a solution by successive approximations along the steepest gradient of the $\chi^2$. Instead of individual lines, the procedure fits the entire spectrum so as to account for line blanketing. However, the fit is still driven by the dominant Balmer lines with contributions from the strongest metal lines (listed in Table~\ref{tbl:lines}). The best fit was found with \ensuremath{T_{\rm{eff}}}\,=\,27750\,K and \ensuremath{\log g}\,=\,5.45\,dex, using the Stark broadening tables of \citet{tremblay09}.
Errors and abundances for those elements that were found to be significant are listed in Table\,\ref{tbl:abund}. When using the VCS Stark broadening tables for hydrogen \citep{lemke97}, we found a systematically lower surface temperature and gravity, by 800\,K and 0.06\,dex. Parameter errors were determined by changing the model in one dimension until the critical $\chi^2$-value associated with the probability level at the given number of free parameters was reached. The resulting fit is shown together with the mean spectrum in Fig.\,\ref{fig:wht_sp}. We used a resolution of $\Delta\lambda=1.7$ \AA\ and assumed a non-rotating sdB star. Our NLTE model provides parameters consistent with the LTE analysis, indicating that NLTE effects are negligible in the atmosphere of \object{\target A}. The abundances show that iron is supersolar and nitrogen is about half solar, whereas the other elements are significantly depleted with respect to their solar abundances. This fits the typical abundance pattern of sdB stars \citep[see e.g.][]{geier13a}. \begin{figure*}[t!] \centering \includegraphics[width=\hsize]{Hamfast_frq.eps} \caption{ FT of the full {\em Kepler}\ dataset of KIC\,10553698. The ordinate axis has been truncated at 300\,ppm to show sufficient detail, even though some peaks in the FT exceed this value. The 5-$\sigma$ level is indicated by a continuous line. } \label{fig:kepft} \end{figure*} \section{Photometry and frequency analysis} KIC\,10553698\ has a magnitude in the {\em Kepler}\ photometric passband of {\em Kp}\,=\,15.134 and colours $g-r$\,=\,--0.395 and $g-i$\,=\,--0.694\footnote{The {\em Kepler}\ Input Catalog does not provide errors on the magnitudes.}. It also appears in the {\sc 2mass}\ catalogue as 2MASS J19530839+4743002\ with $J$\,=\,15.45(5), $H$\,=\,15.54(9) but is below the detection limit in $K_\mathrm{s}$. Its close proximity to several fainter stars makes the {\em Kepler}\ photometry suffer from contamination that varies slightly from quarter to quarter, depending on the positioning of the instrument's 4"-sized pixels. Figure\,\ref{fig:field} shows a 150"-sized section of an ALFOSC target acquisition frame with the corresponding section of a {\em Kepler}\ full-frame image. For the frequency analysis, we used the optimally extracted lightcurves provided by the MAST\footnote{The Mikulski Archive for Space Telescopes is hosted by the Space Telescope Science Institute (STScI) at http://archive.stsci.edu/.}. These were detrended using low-order polynomials for each continuous lightcurve segment, removing only trends on month-long timescales. We experimented with using the pixel data in order to retain more flux from the target, but no significant improvement was achieved, so all the data presented here are based on the standard extraction provided by the archive pipeline. \begin{figure}[t!] \includegraphics[width=\hsize]{foldplot.eps} \caption{ Top: the 855.6\,d\ {\em Kepler}\ lightcurve folded on the orbital period and binned into 50 bins. Bottom: radial velocity measurements folded on the same ephemeris as the lightcurve. (Red triangles: WHT; blue circles: NOT; black squares: KPNO) } \label{fig:foldplot} \end{figure} \begin{figure*}[t!] \includegraphics[width=\hsize]{Hamfast_run.eps} \caption{ Sliding FT for the region with the most significant pulsations in KIC\,10553698.
} \label{fig:running} \end{figure*} \subsection{The orbital signal}\label{sect:orbit} In the Fourier transform (FT) shown in Fig.\,\ref{fig:kepft}, the first significant peak is found at 3.41678\,\textmu Hz, which corresponds to a period of 3.38743\,d. Since the span of the {\em Kepler}\ dataset is 855.6\,d, the frequency resolution is 0.014\,\textmu Hz, which coincidentally corresponds to a precision in period of 0.014\,d. Thus, assuming a circular orbit, we find an ephemeris for the system \begin{eqnarray*} P_\mathrm{orb} & = & 3.387 \pm 0.014\ \mathrm{d},\\ T_0 & = & 55436.468 \pm 0.014\ \mathrm{d}, \end{eqnarray*} where $T_0$ is the time corresponding to zero phase (where the subdwarf is at the closest point to the observer) for the first epoch of observations. Since the orbital signal is too weak for single minima to be detected in the point-to-point scatter, the uncertainty on the ephemeris phase is the same as that of the Fourier analysis. Figure\,\ref{fig:foldplot} shows the lightcurve folded on this ephemeris and binned into 50 points. The error bars reflect the rms noise in these bins, which may be boosted by the pulsation signal. The photometric orbital signal seen in the {\em Kepler}\ lightcurve is caused by the Doppler-beaming effect, as described in detail for the 0.4-d eclipsing sdB+WD binary \object{KPD\,1946+4330} by \citet{bloemen11} and the 10-d sdBV+WD binary \object{KIC\,11558725} by \citet{telting12a}. KIC\,10553698\ is similar to the latter in that it only displays the beaming signal and no ellipsoidal deformation, which would be present at $P_\mathrm{orb}$/2, as is clearly seen in \object{KPD\,1946+4330} and also in \object{KIC\,6614501} \citep{silvotti12}. For plain Doppler beaming, the observed flux from the target is related to the orbital velocity along the line of sight, $v_r$, as \begin{equation} F = F_0 \left ( 1 - B \frac{v_r}{c} \right ) \end{equation} where $F_0$ is the intrinsic flux of the star in the observed passband, $B$ the beaming factor, and $v_r/c$ the ratio of the orbital velocity to the speed of light. The beaming factor has several terms, some geometrical, and a part that depends on the spectrum of the radiating star. Since \object{\target A}\ has almost exactly the same physical parameters as \object{KIC\,11558725}, we simply adopt the beaming factor computed by \citet{telting12a}, $B$\,=\,1.403(5). Using the measured RV amplitude we can then predict a beaming amplitude of $B K_1/c$\,=\,303\,$\pm$\,10\,ppm. The observed beaming amplitude is 274\,$\pm$\,6\,ppm in the optimally extracted lightcurve, after applying minimal polynomial fits to remove long-term trends. After correcting for the crowding fraction indicated in the {\em Kepler}\ dataset, which states that the ratio of target flux to total flux is $\sim$0.91 (see Table~\ref{tbl:pixpars}), the contamination-corrected beaming amplitude is 299\,ppm, which is perfectly consistent with the predicted value.
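This arithmetic is compactly verified in a few lines of Python (a sketch; the effective flux ratio of 0.916 below is our reading of the $\sim$0.91 crowding quoted above, and the variable names are ours):
\begin{verbatim}
c = 299792.458                 # speed of light in km/s
B, dB = 1.403, 0.005           # adopted beaming factor
K1, dK1 = 64.8, 2.2            # RV amplitude in km/s

A = 1e6 * B * K1 / c           # predicted beaming amplitude in ppm
dA = A * ((dB/B)**2 + (dK1/K1)**2)**0.5
print(f"predicted: {A:.0f} +/- {dA:.0f} ppm")   # ~303 +/- 10

A_obs = 274.0                  # observed amplitude in ppm
crowd = 0.916                  # effective target-flux fraction (~0.91)
print(f"corrected: {A_obs/crowd:.0f} ppm")      # ~299
\end{verbatim}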
\begin{table}[bt] \caption[]{{\em Kepler}\ pixel-data parameters.} \label{tbl:pixpars} \centering \begin{tabular}{lccc} \hline\hline \noalign{\smallskip} Quarter & $N_\mathrm{pix}$ & CROWDSAP \tablefootmark{a} & FLFRCSAP \tablefootmark{b} \\ \noalign{\smallskip} \hline \noalign{\smallskip} Q8, Q12, Q16 & 4 & 0.9079 & 0.7305 \\ Q9, Q13 & 5 & 0.9176 & 0.8128 \\ Q10, Q14 & 6 & 0.9026 & 0.8212 \\ Q17 & 5 & 0.9070 & 0.8087 \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ {\em Kepler}\ FITS data file header keywords, indicating:\\ \tablefoottext{a}{Ratio of target flux to total flux in optimal aperture}\\ \tablefoottext{b}{Fraction of target flux within the optimal aperture}\\ No errors are provided on these values. } \end{table} \subsection{The sliding FT} In Fig.\,\ref{fig:running} we show a sliding FT (sFT) of the same dataset as was used to generate Fig.\,\ref{fig:kepft}. The data were chopped into segments of 12-d length and stepped at 4-d intervals, and the resulting FTs are stacked with time running in the y-direction to visualise the time variability of the modes. The black bands indicate the data gaps, most significantly the Q11 and Q15 gaps, when KIC\,10553698\ fell on the defunct Module \#3. The thin black lines indicate the regular monthly data-downlink gaps of typically one-day duration, with slightly thicker lines indicating other events that caused interruptions to the observations for various reasons. A spacecraft artefact is seen close to $f_{33}$ at $\sim$370\,\textmu Hz\ in Q8, recurring every year, as described by \citet{baran13b}. It is easy to see from the sFT that some frequencies are single and very stable (e.g.~$f_{33}$, $f_{69}$, and $f_{123}$), and some produce stable beat patterns caused by doublets and triplets that are unresolved in the 12\,d chunks (e.g.~\ensuremath{f_{79-81}}, $f_{43-45}$, $f_{36-37}$, and $f_{63-64}$). At higher frequencies, $f_{21-23}$, $f_{24-26}$ and $f_{28-31}$ form broader, more complex patterns that also appear to be completely stable throughout the duration of the observations. In the low-frequency range, the modes seem less stable, sometimes appearing or disappearing completely. For instance, the mode labelled $f_{108-110}$ appears to have a beat period similar to that of the strongest mode, \ensuremath{f_{79-81}}, but with an amplitude that increases throughout the run duration. \begin{figure*}[t!] \centering \includegraphics[width=\hsize]{periodogram.eps} \caption{ \label{fig:kepper} Periodogram for KIC\,10553698. This is the same FT as in Fig.\,\ref{fig:kepft}, but with period on the abscissa. The first panel starts where the last panel in Fig.\,\ref{fig:kepft} ends, and frequencies lower than the triplet at 86\,\textmu Hz\ have been truncated. Radial order according to the asymptotic relation (Eq.~\ref{eq:asym}) is indicated on the top axis, with inward tick marks counting the \ensuremath{\ell}\,=\,1\ spacing and outward tick marks counting the \ensuremath{\ell}\,=\,2\ spacing. } \end{figure*} \begin{figure}[t!] \includegraphics[width=8.5cm]{Hamfast_ft1.eps} \caption{ \label{fig:mainpk} The eleven highest-amplitude frequencies in the KIC\,10553698\ Fourier spectrum. The bars indicate positions for the central \ensuremath{m}\,=\,0\ component (black), \ensuremath{\ell}\,=\,1, \ensuremath{m}\,=\,\ensuremath{\pm}1\ (blue), and \ensuremath{\ell}\,=\,2, \ensuremath{m}\,=\,\ensuremath{\pm}1,2 (magenta).
} \end{figure} \subsection{The Fourier spectrum} A careful analysis of the peaks in the FT of KIC\,10553698\ reveals 162\ significant features. What can be considered significant is always a matter of interpretation in such analyses, especially when amplitude variability is present. Here we have only retained frequencies that appear well separated, rather than trying to include every peak that appears in a cluster, since many of them are likely to be caused by splittings produced by amplitude variability. We analysed both the FT of the full SC dataset, including all eight available quarters, as plotted in Fig.\,\ref{fig:kepft}, and the FTs of the two long runs from Q8-10 and Q12-14 separately. We set the detection limit to five times the mean level in the FT, \ensuremath{\sigma_{\mathrm{FT}}}, which translates to 25.7\,ppm for the 8-Q dataset and $\sim$40\,ppm for the 3-Q datasets. To be retained, we required every frequency to be above the respective 5\ensuremath{\sigma_{\mathrm{FT}}}\ limit in at least one of these three sets. The full list of frequencies is provided in the Appendix, Table~\ref{tbl:freqs}. The table lists the frequency, period, and amplitude of each detected mode, together with a tentative mode ID provided as a non-radial degree number $\ell$ and a radial order $n$, where one could be estimated based on the analysis of the \'echelle diagram discussed in Section\,\ref{sect:modeID}, below. Also given is a `State' description indicating the stability of the mode. This is given as `stable', `rising', or `dropping' if the mode is present in both 3-Q datasets within $\pm$0.05\,\textmu Hz\ of the frequency detected in the 8-Q dataset, with `stable' indicating that the amplitude does not change by more than 20\%\ between the two 3-Q sets relative to the amplitude of the 8-Q set. Modes that are not detected in one of the 3-Q datasets are classified as `appearing' or `disappearing', and modes that are only significant in the 8-Q dataset are labelled `noisy'. A few modes labelled `messy' are significant only in one of the 3-Q datasets, but because of cancellation effects, they make only a broad, non-significant peak in the 8-Q set. \subsection{Rotationally split triplets} In Fig.\,\ref{fig:mainpk} we show the eleven strongest peaks together with the window function. It is quite clear from this figure already that many peaks appear in groups with a common spacing, as would be expected for rotationally split multiplets. Still, the picture is not as clean as one might have wished. The strongest triplet, \ensuremath{f_{79-81}}, shows slightly uneven splittings of $\delta\nu$\,=\,0.168 and 0.155\,\textmu Hz. The triplet $f_{57-59}$ is perfectly even with a splitting of 0.159\,\textmu Hz. Both are somewhat lopsided in amplitude, leaning either to the $m$\,=\,+1 or the $m$\,=\,--1 side. The three symmetric triplets $f_{43-45}$, $f_{52-54}$, and $f_{36-37}$ have splittings of 0.132, 0.141, and 0.132\,\textmu Hz, respectively. The difference in rotational period that can be inferred from these splittings is quite significant (when using a simple model with a Ledoux constant, $C_{n\ell}$, of 0.5, as expected for high-order $g$ modes of degree \ensuremath{\ell}\,=\,1), ranging from 34.5 to 44.2\,d. To indicate the rotational splitting in Figures 7, 9, and B.1, we have used an average of the splittings measured in the three consecutive triplets identified as $n$\,=\,11, 12, and 13, $\delta\nu$\,=\,0.135\,\textmu Hz, implying a rotational period of 42.9\,d.
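For reference, the conversion from a measured splitting to a rotation period, $P_\mathrm{rot} = (1-C_{n\ell})/\delta\nu$, is easily scripted; the sketch below uses the asymptotic Ledoux constant $C_{n\ell}\simeq 1/[\ell(\ell+1)]$ and the \ensuremath{\ell}\,=\,1\ splittings quoted above (the helper name is ours):
\begin{verbatim}
def p_rot_days(dnu_uHz, ell):
    """Rotation period (d) from a splitting (uHz), asymptotic C_nl."""
    C = 1.0/(ell*(ell + 1))                 # C_nl ~ 0.5 for l = 1
    return (1.0 - C)/(dnu_uHz*1e-6)/86400.0

for dnu in (0.168, 0.155, 0.159, 0.141, 0.135, 0.132):  # l=1 splittings
    print(f"dnu = {dnu} uHz -> P_rot = {p_rot_days(dnu, 1):.1f} d")
# about 34-44 d; dnu = 0.135 uHz reproduces the adopted 42.9 d
\end{verbatim}
The same helper with \ensuremath{\ell}\,=\,2\ (i.e., $C_{n\ell}\simeq 1/6$) applies to the multiplets discussed next.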
\subsection{Quintuplets and the pulsation axis}

Another spectacular sequence of multiplets can be found at higher frequencies. The four peaks labelled $f_{28-31}$ at 393\,\textmu Hz\ form a perfectly even quintuplet with the middle peak missing and a splitting of 0.235\,\textmu Hz. The sequence $f_{24-26}$ at 419\,\textmu Hz\ appears as three components of a quintuplet with splitting of 0.247\,\textmu Hz, and $f_{21-23}$ likewise matches three components of a quintuplet with splitting of 0.258\,\textmu Hz. In all three cases there are indications of the missing components close to the 5\ensuremath{\sigma_{\mathrm{FT}}}\ limit. The additional fact that this sequence of multiplets appears with a spacing of $\sim$150\,s\,=\,260\,s/$\sqrt{3}$ (see below) makes it very clear that this is a sequence of consecutive \ensuremath{\ell}\,=\,2\ modes. The rotational splitting for high-order $g$-modes in a uniformly rotating star is given by \citet{asteroseismology} \begin{equation} \delta\nu = m \Omega\,(1 - C_{n\ell}) \simeq \frac{m}{P_\mathrm{rot}}\left( 1 - \frac{1}{\ell(\ell+1)} \right), \end{equation} so that the observed splittings of the three consecutive \ensuremath{\ell}\,=\,2\ multiplets translate to a rotation period of 41.0, 39.0, and 37.4\,d, close to the middle of the range seen for the \ensuremath{\ell}\,=\,1\ modes. It is interesting that the middle \ensuremath{m}\,=\,0\ component is suppressed in this sequence of multiplets. This is very different from the quintuplet structure seen in \object{KIC\,10670103}~\citep{reed14} where the middle component is the strongest one. Geometric cancellation of $\ell,m$\,=\,2,0 occurs only when viewing a pulsator within a few degrees of $i$\,=\,55\degr. At this angle there is no significant suppression of any particular \ensuremath{\ell}\,=\,1\ component, consistent with what we have seen. From the spectroscopic observations, we found that the mass function implies that the companion is consistent with a white dwarf for all orbital inclinations higher than 29\degr. If the pulsation axis is aligned with the orbital axis, the mass of the white dwarf must be close to 0.6\,\ensuremath{\rm{M}_{\odot}}, which is the typical value for white dwarfs that are remnants of intermediate-mass stars after normal uninterrupted evolution.

\begin{figure}[t] \includegraphics[width=\hsize]{Hamfast_KS.eps} \caption{\label{fig:kstest} KS test statistic for the full frequency list and for the high-amplitude ($A$\,$>$\,1000\,ppm) modes, respectively. } \end{figure}

\begin{figure*}[t!] \includegraphics[width=\hsize]{echelle.eps} \caption{\label{fig:echelle} \'Echelle diagram for \ensuremath{\ell}\,=\,1\ (left) and \ensuremath{\ell}\,=\,2\ (right). Detected modes are marked according to their period on the abscissa, with a cyclic folding on the asymptotic period on the ordinate axis. The right-hand axis gives the order $n$ of the mode, according to the asymptotic relation (Eq.\,\ref{eq:asym}). Blue circles mark modes identified as \ensuremath{\ell}\,=\,1, with outlined cyan circles indicating those that appear in multiplets. Red squares mark \ensuremath{\ell}\,=\,2\ modes, again with outlined points indicating multiplets. Cyan diamonds mark trapped \ensuremath{\ell}\,=\,1\ modes, red diamonds mark trapped \ensuremath{\ell}\,=\,2\ modes, and green diamonds mark modes that do not fit either sequence. Outlined cyan squares indicate modes that can be either \ensuremath{\ell}\,=\,1\ or \ensuremath{\ell}\,=\,2.
The high-amplitude peaks ($A$\,$>$\,250\,ppm) in the full dataset are marked with enlarged symbols. } \end{figure*}

\subsection{The asymptotic period spacing}

In the asymptotic limit of stellar pulsation theory, consecutive $g$ modes follow the relation \begin{equation}\label{eq:asym} P_{\ell,n} = \frac{\mit\Pi_0}{L} n + \epsilon_\ell \end{equation} where $L$\,=\,$\sqrt{\ell(\ell+1)}$, $\mit\Pi_0$ is the reduced period spacing in the asymptotic limit, and $\epsilon_\ell$ is a constant offset for each $\ell$. A hallmark of the V1093\,Her stars revealed by {\em Kepler}\ is that the asymptotic period relation is readily detectable by the period spacings, \ensuremath{\Delta P}, of the modes \citep{reed11c,reed12a}. The favoured method for determining the mean spacing is the Kolmogorov-Smirnov (KS) test, which produces high negative values at the most frequently observed spacing in a dataset. Figure\,\ref{fig:kstest} shows the KS statistic for two period lists, the full set listed in Table\,\ref{tbl:freqs} and a list truncated to contain only modes with amplitudes higher than 1000\,ppm. A clear minimum is seen around 260\,s in both sets. Only for the list including the low-amplitude modes does the KS test show a second minimum at \ensuremath{\Delta P}\,=\,150\,s. The 1/$\sqrt{3}$ relationship between these two peaks is the signature of the period difference between \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ modes in the asymptotic approximation. By plotting the $P$ modulo \ensuremath{\Delta P}\ versus $P$ for the two period spacings, one can construct an \'echelle diagram for $g$-mode pulsators. Unlike the \'echelle diagram used for $p$-mode pulsators, which are evenly spaced in frequency and have the same spacing, $\Delta\nu$, for different degrees $\ell$, the $g$-mode \'echelle diagram must be folded on a different \ensuremath{\Delta P}\ for each $\ell$. Figure\,\ref{fig:echelle} shows the \'echelle diagrams for \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ for the 162\ detected peaks in \object{\target A}. After starting with the \ensuremath{\Delta P}\ detected by the KS test, we made some iterations of identifying peaks and adjusting the spacing slightly until a reasonable picture emerged. In the \ensuremath{\ell}\,=\,1\ \'echelle diagram, one can clearly see a ridge of modes that meanders around the vertical lines drawn at $\epsilon$\,=\,180\,s. The vertical lines curve to indicate how the frequency splitting for \ensuremath{m}\,=\,\ensuremath{\pm}1\ translates into period space. Two lines are drawn for each $m$ to indicate the frequency resolution of the full dataset. The \ensuremath{\ell}\,=\,1\ ridge includes almost all the most powerful modes, but a few are significantly off either sequence.

\subsection{Assigning modes to observed periods}\label{sect:modeID}

While some mode identifications are obvious based on multiplet structures, the deviations from a clear asymptotic sequence imply substantial ambiguities in the labelling. The identifications listed in Table~\ref{tbl:freqs} should therefore not be taken as anything more than what the authors consider to be the most likely ones. Also, some frequencies listed are clearly spurious, caused by amplitude variability, since in several cases groups of four, or even five, peaks are listed as belonging to one \ensuremath{\ell}\,=\,1\ triplet. In some cases the sequence for \ensuremath{\ell}\,=\,1\ overlaps with the one of \ensuremath{\ell}\,=\,2, causing further ambiguity.
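Returning to the construction of the \'echelle diagrams in Fig.\,\ref{fig:echelle}: the folding that underlies them can be stated as a short sketch (purely illustrative Python; the function name and the example periods are ours, while the spacing and offset values are simply those quoted above):

\begin{verbatim}
import numpy as np

def gmode_echelle(periods, dP, eps=0.0):
    """Fold g-mode periods on an assumed asymptotic spacing dP.
    Returns the abscissa P, the ordinate (P - eps) mod dP, and
    the implied asymptotic order n of each mode."""
    P = np.sort(np.asarray(periods, dtype=float))
    folded = np.mod(P - eps, dP)
    n = np.rint((P - eps - folded) / dP).astype(int)
    return P, folded, n

# KS-test spacings: ~260 s for l = 1 and ~260/sqrt(3) ~ 150 s
# for l = 2; the input periods here are illustrative only.
P, folded, n = gmode_echelle([2860.0, 3125.0, 3385.0], dP=260.0)
\end{verbatim}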
For instance, in the case of $f_{152-155}$ the two complex peaks (see Fig.\,\ref{fig:mainpk2}) are likely to be $\ell,n$\,=\,1,35 and 2,61, with the \ensuremath{\ell}\,=\,2\ mode at $\sim$107\,\textmu Hz. Several other complex groups in Fig.\,\ref{fig:mainpk2} are identified as superpositions of modes of different orders. In some cases, a particular group might be either \ensuremath{\ell}\,=\,1\ or \ensuremath{\ell}\,=\,2, but has been identified based on the frequency splitting or because the identification for a given $\ell,n$ has already been assigned to a suitable mode. While the majority of frequencies in Table~\ref{tbl:freqs} can be identified in this scheme, some clearly fall well off either sequence. If these were all low-amplitude modes, we could dismiss them as $\ell$\,=\,3 or higher, but two of them appear among the highest amplitude peaks plotted in Fig.\,\ref{fig:mainpk}. The third strongest peak is the single and stable $f_{123}$ at 140.5\,\textmu Hz, which sits way out on the left edge in Fig.\,\ref{fig:echelle}. It falls between the relatively low-amplitude multiplets assigned IDs $\ell,n$\,=\,1,26 and 1,27, which both match the \ensuremath{\ell}\,=\,1\ sequence well, and the nearest \ensuremath{\ell}\,=\,2\ mode, $n$\,=\,47, is also occupied. A similar problem appears with the pair $f_{48-49}$, which is ranked as the 8$^{\mathrm{th}}$ highest mode in amplitude. The pair appears with a splitting of 0.171\,\textmu Hz, which is just slightly wider than that of the main \ensuremath{f_{79-81}}\ triplet. But it falls between two other higher-amplitude triplets assigned $\ell,n$\,=\,1,12 and 1,13. The spacing between those triplets is 302\,s, somewhat higher than the average period spacing, but much too tight to permit another \ensuremath{\ell}\,=\,1\ mode to squeeze in. It is also sandwiched between two low-amplitude pairs assigned $\ell,n$\,=\,2,22 and 2,23. The remaining peaks that defy assignment in the asymptotic interpretation are all low amplitude and might well be $\ell$\,$>$\,2.

\subsection{The trapped modes}

The most plausible interpretation of the off-sequence, high-amplitude peaks is that they are trapped modes, which are produced mostly by the H/He transition in the stratified envelope as predicted by classical sdB models \citep{charpinet00}. To visualise the trapping signature, theoretical papers often show a period-spacing diagram where the period differences between consecutive modes are plotted against period. When reduced period, $\mit\Pi$\,=\,$PL$, is used, modes with the same radial order, $k$, should overlap in this diagram. But to compute the required period differences, \ensuremath{\Delta\mit\Pi}\,=\,$\mit\Pi_{k}$\,--\,$\mit\Pi_{k-1}$, we must have completely uninterrupted sequences. Note that when mode trapping occurs, extra modes are inserted into the asymptotic sequence, so that the real number of radial nodes in the star, $k$, is higher than the asymptotic order $n$.

\begin{figure}[t!] \includegraphics[width=\hsize]{pspacing.eps} \caption{ Period difference between consecutive modes of the \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ sequences, after converting to reduced periods. The asymptotic order of the modes, $n$, is indicated on the upper axis. } \label{fig:spacing} \end{figure}

Inspecting the sequences of \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ modes listed in Table~\ref{tbl:freqs}, including the trapped modes that are marked as $n$\,=\,t in the table, we see that just a few modes are missing.
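The conversion to reduced periods that underlies Fig.\,\ref{fig:spacing} can be stated compactly; a minimal sketch (again illustrative Python, assuming a complete, gap-free list of consecutive modes of a single degree) is:

\begin{verbatim}
import numpy as np

def reduced_period_spacings(periods, ell):
    """Convert consecutive observed periods of degree ell to
    reduced periods Pi = P * sqrt(ell * (ell + 1)) and return
    the differences Delta Pi between neighbouring modes."""
    L = np.sqrt(ell * (ell + 1))
    Pi = np.sort(np.asarray(periods, dtype=float)) * L
    return Pi[1:], np.diff(Pi)
\end{verbatim}

Trapped modes appear as localised minima in the returned \ensuremath{\Delta\mit\Pi}\ sequence, which is why any gaps must be filled before the diagram can be constructed.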
To make a period-spacing diagram with an uninterrupted sequence of consecutive modes, we must check each case and see if we can find a suitable number to use in the sequence. Note that due to the huge number of independent frequencies in the full FT of the whole {\em Kepler}\ dataset, we have maintained a 5-$\sigma$ significance threshold in order to avoid too many spurious frequencies. However, when looking at a specific frequency region suspected of containing a real frequency, it is justified to consider a 4-$\sigma$ signal to be a significant detection. The six modes needed to complete the sequences are \ensuremath{\ell}\,=\,1, $n$\,=\,31, 32, \ensuremath{\ell}\,=\,2, $n$\,=\,19, 32, 33, and an \ensuremath{\ell}\,=\,2\ mode corresponding to the trapped \ensuremath{\ell}\,=\,1\ mode between $n$\,=\,26 and 27. For \ensuremath{\ell}\,=\,1, $n$\,=\,31 there is a feature at the 4-$\sigma$ level that we can use to complete the sequence of modes needed to construct the reduced period diagram. The same is the case for \ensuremath{\ell}\,=\,2, $n$\,=\,19 and 33. The \ensuremath{\ell}\,=\,1, $n$\,=\,32 mode should be at the position where the oddly spaced \ensuremath{\ell}\,=\,2\ multiplet $f_{143-146}$ is found, so we use the highest of these peaks to complete the sequence. A similar case can be made for \ensuremath{\ell}\,=\,2, $n$\,=\,32, which should occur just where the highest peak in the FT is found, at $\sim$202\,\textmu Hz. And an \ensuremath{\ell}\,=\,2\ mode to correspond with the trapped \ensuremath{\ell}\,=\,1\ mode at $\mit\Pi$\,=\,10064\,s would be located at 4109\,s, which is in the region where another strong \ensuremath{\ell}\,=\,1\ mode, $f_{63}$, is seen. After discovering the presence of trapped modes in \object{\target A}, we revisited the table of frequencies to see if we could find evidence of other less obvious trapped modes. The only feature we could find that seems reasonable to interpret as a trapped mode is located between $n$\,=\,19 and 20. The two modes $f_{38-39}$ consist of a relatively high-amplitude stable peak with a noisy companion (shown in the first panel of Fig.~\ref{fig:mainpk2}). It was first interpreted as \ensuremath{\ell}\,=\,1, but there are no missing \ensuremath{\ell}\,=\,1\ modes in this region. As a trapped mode, it could be either \ensuremath{\ell}\,=\,1\ or \ensuremath{\ell}\,=\,2. An \ensuremath{\ell}\,=\,1\ interpretation would require a corresponding \ensuremath{\ell}\,=\,2\ mode at $\sim$1804\,s, but this region is clean. An \ensuremath{\ell}\,=\,2\ interpretation requires the corresponding \ensuremath{\ell}\,=\,1\ mode to have an observed period of $\sim$5412\,s, at which we already find $f_{90}$. This mode was first interpreted as the fourth component of an incomplete \ensuremath{\ell}\,=\,2\ mode, even though the splitting did not match the expected one very well. Identifying $f_{90}$ as the trapped \ensuremath{\ell}\,=\,1\ mode corresponding to a trapped \ensuremath{\ell}\,=\,2\ mode provides a closer match than it did as part of an \ensuremath{\ell}\,=\,2\ multiplet. We therefore adopt this latter interpretation, but note that the identification for this trapped mode is not as clear as for the other two.
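For clarity, the counterpart periods quoted in the last paragraph follow directly from equating reduced periods: a trapped \ensuremath{\ell}\,=\,1\ mode and its trapped \ensuremath{\ell}\,=\,2\ counterpart share the same $\mit\Pi$, so that
\[
P_{\ell=1}\sqrt{2} = P_{\ell=2}\sqrt{6}
\quad\Longrightarrow\quad
P_{\ell=1} = \sqrt{3}\,P_{\ell=2} \;.
\]
Both counterparts quoted for $f_{38-39}$ are consistent with this relation, since $1804\sqrt{3} \simeq 5412/\sqrt{3} \simeq 3125$\,s, and the trapped \ensuremath{\ell}\,=\,1\ mode at $\mit\Pi$\,=\,10064\,s maps to $10064/\sqrt{6} \simeq 4109$\,s for \ensuremath{\ell}\,=\,2, as used above.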
All the observed multiplets have been reduced to a single period for this figure. For the triplets and the three clear \ensuremath{\ell}\,=\,2\ multiplets we have inferred the position of the central component. For other modes, the case is much more ambiguous. In general, unless the period spacing could be interpreted in such a way as to locate the correct centre, we simply used the highest peak. At low $n$ the error this produces is tiny, but at high $n$ the error can be quite large. The curves plotted in the \'echelle diagrams and labelled on the top axis with the respective $m$ reflect this effect. The offset between the \ensuremath{\ell}\,=\,1\ and 2 sequences in Fig.\,\ref{fig:spacing} increases at high $n$, which is most likely caused by such mode ambiguity rather than any physical effects. The expected correspondence between the \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ sequences is striking. The diagram reveals three clear trapping features, of which at least one is clearly present in both the \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ sequences. The first is at low radial order where the \ensuremath{\ell}\,=\,2\ sequence is not present, and the last includes a `missing' \ensuremath{\ell}\,=\,2\ point. The difference in reduced period between consecutive trapped modes can be seen to be $\mit\Pi_H$\,=\,$\sim$2400 and $\sim$2730\,s as indicated by the horizontal arrows in Fig.\,\ref{fig:spacing}. This observational reduced period diagram bears a striking resemblance to similar diagrams produced from theoretical models, e.g.~Fig.\,3 in \citet{charpinet02a}. The properties of this trapping structure depend on the mass of the H-rich envelope and the position of the transition zones inside the star \citep{charpinet00}. Both the transition zone between the hydrogen and helium layers in the envelope and the transition between the helium layer and the convective core, which with time develops an increasing carbon-oxygen content and thus a higher density, produce mode-trapping features that will affect the periods of the trapped modes \citep{charpinet13a}. Due to the complexities of these double trapping zones it may not be straightforward to translate trapping period differences into asteroseismic ages on the EHB.

\section{Conclusions}

We have analysed the complete {\em Kepler}\ short-cadence lightcurve for KIC\,10553698, and collected and analysed spectroscopic observations that reveal it to be a system consisting of a V1093-Her pulsator orbiting a white dwarf. When starting the frequency analysis of the pulsator, it soon became evident that a large number of the observed periodicities fitted neatly on the asymptotic sequences for \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2, which is similar to what we have seen with the other V1093-Her pulsators in the {\em Kepler}\ field. However, when we realised that a few of the main modes were clearly incompatible with the asymptotic relation, we were immediately intrigued. By accepting two high-amplitude modes as trapped \ensuremath{\ell}\,=\,1\ modes, we were finally able to make a convincing case that mode trapping, as predicted by all theoretical models for V1093-Her pulsators, is present and detectable in the {\em Kepler}\ observations. Thanks to almost perfectly complete sequences of consecutive \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ modes, we were able to, for the first time, generate an observed period-spacing diagram that shows convincing evidence for mode trapping.
It is somewhat surprising that such features have not been spotted in other V1093-Her pulsators observed with {\em Kepler}. But it was only the high amplitude of two of the trapped modes, and the fact that all available \ensuremath{\ell}\,=\,1\ modes in the sequence were already assigned, that tipped us off to this feature. For other pulsators, trapped modes may hide in the unassigned low-amplitude modes. It might be worthwhile revisiting the full sample of V1093-Her stars in light of this revelation. Many \ensuremath{\ell}\,=\,1\ and \ensuremath{\ell}\,=\,2\ modes appear as rotationally split multiplets indicating rotational periods that range from 34.5 to 44.2\,days, with the most convincing \ensuremath{\ell}\,=\,1\ modes averaging to $\sim$43\,d and the best \ensuremath{\ell}\,=\,2\ modes $\sim$39\,d. An accurate determination of the rotation rate from the observed multiplet splittings would require knowledge of the $C_{n\ell}$ values from asteroseismic models. Until this becomes available we must be satisfied with the rough estimate $P_{\mathrm{rot}}$\,=\,41$\pm$3\,d. We also found that a series of clear \ensuremath{\ell}\,=\,2\ multiplets all had the middle $m$\,=\,0 component suppressed, implying a pulsation axis observed at close to 55\degr, which is the same value as would be required for the mass function of the binary to be compatible with a canonical sdB mass and a normal 0.6\,\ensuremath{\rm{M}_{\odot}} white dwarf. Since \object{\target B}\ must have been the original primary of the progenitor system, the orbit must have been rather wide in order to allow it to complete its red-giant-branch evolution, followed by an asymptotic-giant-branch stage that brought it into contact with the main-sequence progenitor of the current primary, \object{\target A}. The parameters of the KIC\,10553698\ system and others like it can be used to constrain the possible binary-interaction scenarios that allow enough angular momentum to be lost from the system to result in the observed configuration.

\begin{acknowledgements} The authors gratefully acknowledge the {\em Kepler}\ team and everybody who has contributed to making this mission possible. Funding for the {\em Kepler Mission}\ is provided by the NASA Science Mission Directorate. We also thank Prof.~Uli Heber for kindly providing the model grids used for the LTE atmospheric analysis. The research leading to these results has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007--2013)/ERC grant agreement N$^{\underline{\mathrm o}}$\,227224 ({\sc prosperity}), and from the Research Council of KU~Leuven grant agreement GOA/2008/04. Funding for this research was also provided by the US National Science Foundation grants \#1009436 and \#1312869, and the Polish National Science Centre under project N$^{\underline{\mathrm o}}$\,UMO-2011/03/D/ST9/01914. The spectroscopic observations used in this work were collected at the Nordic Optical Telescope ({\scriptsize NOT}) at the Observatorio del Roque de los Muchachos (ORM) on La Palma, operated jointly by Denmark, Finland, Iceland, Norway, and Sweden; the William Herschel Telescope (WHT) also at the ORM and operated by the Isaac Newton Group; and the Mayall Telescope of Kitt Peak National Observatory, which is operated by the Association of Universities for Research in Astronomy under cooperative agreement with the NSF. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} The Alternating Direction Method of Multipliers (ADMM) has become a very popular approach to solving a broad variety of optimization problems in signal and image processing, prominent examples including Total Variation regularization and sparse representation problems~\cite{goldstein-2009-split},~\cite[Sec. 6]{boyd-2010-distributed}, ~\cite{afonso-2011-augmented}. This method introduces an additional parameter, the \emph{penalty parameter}, on which the rate of convergence is strongly dependent, but for which there are no analytic results to guide selection other than for a very specific set of problems~\cite{ghadimi-2015-optimal, raghunathan-2014-alternating},~\cite[Sec. 5]{raghunathan-2015-admm}. There is, however, a heuristic method for automatically adapting the penalty parameter~\cite{he-2000-alternating} that appears to be becoming quite popular~\cite{hansson-2012-subspace, liu-2013-nuclear, vu-2013-fantope, iordache-2014-collaborative, weller-2014-phase, wohlberg-2014-efficient}. The present paper demonstrates a serious flaw in this heuristic approach, and proposes a modification that at least partially addresses it.

\section{ADMM} \label{sec:admm_dtl} The notation and exposition in this section follow that of the influential tutorial by Boyd {et al.}\xspace~\cite{boyd-2010-distributed}. The \emph{Lagrangian} for the constrained problem \begin{equation} \argmin_{\mb{x}} f(\mb{x}) \; \text{ such that } \; A \mb{x} = \mb{b} \;, \label{eq:linconprimal} \end{equation} is \begin{equation} L(\mb{x}, \mb{y}) = f(\mb{x}) + \mb{y}^T (A \mb{x} - \mb{b}) \;, \label{eq:pcnstlgrng} \end{equation} where $\mb{x}$ and $\mb{y}$ are referred to as the \emph{primal} and \emph{dual} variables respectively. The primal and dual feasibility conditions \begin{align} 0 = \nabla L(\mb{x}^*, \cdot) \;\;\Rightarrow\;\; & A \mb{x}^* - \mb{b} = 0 \label{eq:lgrngnopty}\\ 0 \in \partial L(\cdot, \mb{y}^*) \;\;\Rightarrow\;\; & 0 \in \partial f(\mb{x}^*) + A^T \mb{y}^* \;, \label{eq:lgrngoptx} \end{align} where $\partial$ denotes the subdifferential operator~\cite[Ch. D]{urruty-2004-fundamentals}, provide conditions on the optimal primal and dual variables $\mb{x}^*$ and $\mb{y}^*$. The \emph{method of multipliers} solves this problem via \emph{dual ascent} \begin{align} \mb{x}^{(k+1)} & = \argmin_{\mb{x}} L_{\rho}(\mb{x}, \mb{y}^{(k)}) \\ \mb{y}^{(k+1)} & = \mb{y}^{(k)} + \rho (A \mb{x}^{(k+1)} - \mb{b}) \; , \end{align} where $L_{\rho}$ is the \emph{augmented Lagrangian} \begin{equation} L_{\rho}(\mb{x}, \mb{y}) = f(\mb{x}) + \mb{y}^T (A \mb{x} - \mb{b}) + \frac{\rho}{2} \norm{A \mb{x} - \mb{b}}_2^2 %
\end{equation} with \emph{penalty parameter} $\rho$. ADMM can be viewed as a variant of this method\footnote{There are limitations to this interpretation~\cite{eckstein-2012-augmented}.} applied to the problem \begin{equation} \argmin_{\mb{x},\mb{z}} f(\mb{x}) + g(\mb{z}) \; \text{ such that } \; A \mb{x} + B \mb{z} = \mb{c} \; , \label{eq:admmprob} \end{equation} where $\mb{x} \in \mbb{R}^n$, $\mb{z} \in \mbb{R}^m$, and $\mb{c} \in \mbb{R}^p$, and the Lagrangian and augmented Lagrangian are, respectively, \begin{align} L(\mb{x}, \mb{z}, \mb{y}) = & f(\mb{x}) + g(\mb{z}) + \mb{y}^T (A \mb{x} + B \mb{z} - \mb{c}) \\ L_{\rho}(\mb{x}, \mb{z}, \mb{y}) = & L(\mb{x}, \mb{z}, \mb{y}) + \frac{\rho}{2} \norm{A \mb{x} + B \mb{z} - \mb{c}}_2^2 \; .
\end{align} Instead of jointly solving for $\mb{x}$ and $\mb{z}$, ADMM alternates the $\mb{x}$ and $\mb{z}$ updates (thus the \emph{alternating direction}) \begin{align} \mb{x}^{(k+1)} & = \argmin_{\mb{x}} L_{\rho}(\mb{x}, \mb{z}^{(k)}, \mb{y}^{(k)}) \label{eq:admmx} \\ \mb{z}^{(k+1)} & = \argmin_{\mb{z}} L_{\rho}(\mb{x}^{(k+1)}, \mb{z}, \mb{y}^{(k)}) \label{eq:admmz} \\ \mb{y}^{(k+1)} & = \mb{y}^{(k)} + \rho (A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c}) \; . \label{eq:admmy} \end{align} It is often more convenient to work with the \emph{scaled form} of ADMM, which is obtained by the change of variable to the \emph{scaled dual variable} $\mb{u} = \rho^{-1} \mb{y}$. Defining the residual \begin{equation} \mb{r} = A \mb{x} + B \mb{z} - \mb{c} \end{equation} and replacing $\mb{y}$ with $\mb{u}$ we have \begin{align} L_{\rho}(\mb{x}, \mb{z}, \mb{u}) &= f(\mb{x}) + g(\mb{z}) + \frac{\rho}{2} \norm{\mb{r}+\mb{u}}_2^2 - \frac{\rho}{2} \norm{\mb{u}}_2^2 \; . \end{align} Since the minimisers of $L_{\rho}(\mb{x}, \mb{z}, \mb{u})$ with respect to $\mb{x}$ and $\mb{z}$ do not depend on the final $\frac{\rho}{2} \norm{\mb{u}}_2^2$ term, % the iterations can be written as \smallmath{ \begin{align} \mb{x}^{(k+1)} & = \argmin_{\mb{x}} f(\mb{x}) + \frac{\rho}{2} \norm{A \mb{x} + B \mb{z}^{(k)} - \mb{c} + \mb{u}^{(k)}}_2^2 \label{eq:admmscaledx} \\ \mb{z}^{(k+1)} & = \argmin_{\mb{z}} g(\mb{z}) + \frac{\rho}{2} \norm{A \mb{x}^{(k+1)} + B \mb{z} - \mb{c} + \mb{u}^{(k)}}_2^2 \\ \mb{u}^{(k+1)} & = \mb{u}^{(k)} + A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c} \; . \end{align} } \subsection{ADMM Residuals} \label{sec:resid} Denote optimal primal variables by $\mb{x}^*$ and $\mb{z}^*$, and the optimal dual variable by $\mb{y}^*$. It will also be useful to define $p^* = f(\mb{x}^*) + g(\mb{z}^*)$ and $p^{(k)} = f(\mb{x}^{(k)}) + g(\mb{z}^{(k)})$. The primal feasibility condition \begin{equation} A \mb{x}^* + B \mb{z}^* - \mb{c} = 0 \;, \label{eq:admmlgprim} \end{equation} and dual feasibility conditions \begin{align} 0 \in \partial L(\cdot, \mb{z}^*, \mb{y}^*) \; \Rightarrow \;\; & 0 \in \partial f(\mb{x}^*) + A^T \mb{y}^* \label{eq:admmlgrduafsg} \\ 0 \in \partial L(\mb{x}^*, \cdot, \mb{y}^*) \; \Rightarrow \;\; & 0 \in \partial g(\mb{z}^*) + B^T \mb{y}^* \label{eq:admmlgrduagsg} \end{align} for~\eq{admmprob} hold at the problem solution $(\mb{x}^*,\mb{z}^*,\mb{y}^*)$. These conditions can be used to derive convergence measures for ADMM algorithm iterates $(\mb{x}^{(k)},\mb{z}^{(k)},\mb{y}^{(k)})$. A natural measure of primal feasibility based on~\eq{admmlgprim} is the \emph{primal residual} \begin{equation} \mb{r}^{(k+1)} = A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c} \;. 
\label{eq:prires} \end{equation} Now, since $\mb{z}^{(k+1)}$ minimises $L_{\rho}(\mb{x}^{(k+1)}, \mb{z}, \mb{y}^{(k)})$ (see~\eq{admmz}), we have \begin{align} 0 & \in [\partial L_{\rho}(\mb{x}^{(k+1)}, \cdot, \mb{y}^{(k)})](\mb{z}^{(k+1)}) \nonumber \\ & = \partial g(\mb{z}^{(k+1)}) + B^T \mb{y}^{(k)} \nonumber \\ & \qquad \qquad \;\;\;\;\;\;\, + \rho B^T (A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c}) \nonumber \\ & = \partial g(\mb{z}^{(k+1)}) + B^T \mb{y}^{(k)} + \rho B^T \mb{r}^{(k+1)} \nonumber \\ & = \partial g(\mb{z}^{(k+1)}) + B^T (\mb{y}^{(k)} + \rho \mb{r}^{(k+1)}) \nonumber \\ & = \partial g(\mb{z}^{(k+1)}) + B^T \mb{y}^{(k+1)} \;, \end{align} so that iterates $\mb{z}^{(k+1)}$ and $\mb{y}^{(k+1)}$ always satisfy dual feasibility condition~\eq{admmlgrduagsg}, leaving~\eq{admmlgrduafsg} as the remaining optimality criteria to be satisfied. Following a similar derivation, since $\mb{x}^{(k+1)}$ minimises $L_{\rho}(\mb{x}, \mb{z}^{(k)}, \mb{y}^{(k)})$ (see~\eq{admmx}), we have \begin{align} 0 & \in [\partial L_{\rho}(\cdot, \mb{z}^{(k)}, \mb{y}^{(k)})](\mb{x}^{(k+1)}) \nonumber \\ & = \partial f(\mb{x}^{(k+1)}) + A^T \mb{y}^{(k)} + \rho A^T (A \mb{x}^{(k+1)} + B \mb{z}^{(k)} - \mb{c}) \nonumber \\ & = \partial f(\mb{x}^{(k+1)}) + A^T \mb{y}^{(k)} + \rho A^T (A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \;\;\;\;\; - \mb{c} + B \mb{z}^{(k)} - B \mb{z}^{(k+1)}) \nonumber\\ & = \partial f(\mb{x}^{(k+1)}) + A^T \mb{y}^{(k)} \nonumber \\ & \qquad \qquad \qquad + \rho A^T (\mb{r}^{(k+1)} + B \mb{z}^{(k)} - B \mb{z}^{(k+1)}) \nonumber \\ & = \partial f(\mb{x}^{(k+1)}) + A^T (\mb{y}^{(k)} \nonumber \\ & \qquad \qquad \qquad + \rho \mb{r}^{(k+1)}) + \rho A^T B ( \mb{z}^{(k)} - \mb{z}^{(k+1)})\nonumber \\ & = \partial f(\mb{x}^{(k+1)}) + A^T \mb{y}^{(k+1)} \!+\! \rho A^T B ( \mb{z}^{(k)} \!-\! \mb{z}^{(k+1)}) \;. \label{eq:dualres0} \end{align} Setting $\rho A^T B (\mb{z}^{(k+1)} - \mb{z}^{(k)}) = 0$ in~\eq{dualres0} implies that $\mb{x}^{(k+1)}, \mb{y}^{(k+1)}$ satisfy dual feasibility condition~\eq{admmlgrduafsg}, which suggests defining \begin{equation} \mb{s}^{(k+1)} = \rho A^T B (\mb{z}^{(k+1)} - \mb{z}^{(k)}) \label{eq:dualres} \end{equation} as the \emph{dual residual} based on dual feasibility condition~\eq{admmlgrduafsg}. Since both primal and dual residuals converge to zero as the ADMM algorithm progresses~\cite[Sec. 3.3]{boyd-2010-distributed}, they can be used to define ADMM algorithm convergence measures. It is also worth noting that~\eq{admmx} and~\eq{admmz} suggest that the norm of the primal residual decreases with increasing $\rho$ (and vice versa), and the definition of the dual residual suggests that it increases with increasing $\rho$ (and vice versa). \subsection{Adaptive Penalty Parameter} \label{sec:adaptrho} As discussed in~\sctn{intro}, the correct choice of the penalty parameter plays a vital role in obtaining good convergence. He {et al.}\xspace~\cite{he-2000-alternating} define the distance from convergence as $\| \mb{r}^{(k+1)} \|_2^2 + \| \mb{s}^{(k+1)}\|_2^2$, and argue that adaptively choosing the penalty parameter to balance these two terms is a reasonable heuristic for minimising this distance. 
This heuristic is implemented as the update scheme \begin{equation} \rho^{(k+1)} = \left\{ \ \begin{array}{ll} \tau \rho^{(k)} & \text{ if } \normsz[\big]{\mb{r}^{(k)}}_2 > \mu \normsz[\big]{\mb{s}^{(k)}}_2\\[3pt] \tau^{-1} \rho^{(k)} & \text{ if } \normsz[\big]{\mb{s}^{(k)}}_2 > \mu \normsz[\big]{\mb{r}^{(k)}}_2\\[3pt] \rho^{(k)} & \text{ otherwise } \;, \end{array} \right. \label{eq:rhoupdate} \end{equation} where $\tau$ and $\mu$ are constants, the usual values being $\tau = 2$ and $\mu = 10$~\cite{he-2000-alternating, wang-2001-decomposition},~\cite[Sec 3.4.1]{boyd-2010-distributed}. This scheme has been found to be effective for a variety of problems~\cite{hansson-2012-subspace, liu-2013-nuclear, vu-2013-fantope, iordache-2014-collaborative, weller-2014-phase, wohlberg-2014-efficient}, but it will be demonstrated that it suffers from a potentially serious flaw.

\subsection{Stopping Criteria} \label{sec:stopcrit} The residuals can be used to define stopping criteria for the ADMM iterations; {e.g.}\xspace Boyd {et al.}\xspace~\cite[Sec. 3.3.1]{boyd-2010-distributed} recommend stopping criteria \begin{equation} \normsz[\big]{\mb{r}^{(k)}}_2 \leq \epsilon_{\mathrm{pri}}^{(k)} \;\; \text{ and } \normsz[\big]{\mb{s}^{(k)}}_2 \leq \epsilon_{\mathrm{dua}}^{(k)} \end{equation} where \smallmath{ \begin{align} \epsilon_{\mathrm{pri}}^{(k)} & \!=\! \sqrt{p} \epsilon_{\mathrm{abs}} \!+\! \epsilon_{\mathrm{rel}} \max\left\{\normsz[\big]{A \mb{x}^{(k)}}_2, \normsz[\big]{B \mb{z}^{(k)}}_2, \normsz[\big]{\mb{c}}_2 \right\} \label{eq:epri} \\ \epsilon_{\mathrm{dua}}^{(k)} \!&\! = \sqrt{n} \epsilon_{\mathrm{abs}} + \epsilon_{\mathrm{rel}} \normsz[\big]{A^T \mb{y}^{(k)}}_2 \label{eq:edua} \;, \end{align}} $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$ are absolute and relative tolerances respectively, and $n$ and $p$ are the dimensionalities of $\mb{x}$ and $\mb{c}$ respectively ({i.e.}\xspace $\mb{x} \in \mbb{R}^n$ and $\mb{c} \in \mbb{R}^p$).

\section{ADMM Problem Scaling Properties} \label{sec:admmscale} Let us consider the behaviour of ADMM under scaling of the optimization problem being addressed, denoting~\eq{admmprob} as problem $P$, and defining $\tilde{P}$ as \begin{equation} \argmin_{\mb{x},{\mb{z}}} \alpha f(\gamma \mb{x}) + \alpha g(\gamma \mb{z}) \; \text{ s.t. } \; \beta A \gamma \mb{x} + \beta B \gamma \mb{z} = \beta \mb{c} \;. \label{eq:admmp1} \end{equation} In this problem $\alpha$ represents a scaling of the objective function, $\beta$ represents a scaling of the constraint, and $\gamma$ represents a scaling of the problem variables. These scalings are chosen to parameterise the family of scalings of an ADMM problem under which the solution is invariant, modulo a scaling\footnote{The minimisers of $\tilde{P}$ are invariant to $\alpha$ and $\beta$, and are invariant to $\gamma$ modulo a scaling factor.}. It is important to emphasise that these scalings can represent both explicit scaling of a problem and the implicit scaling with respect to alternative possible choices\footnote{For problems involving physical quantities, for example, scaling by $\alpha$ and $\gamma$ correspond respectively to choices of the units in which the functional value and solution are expressed.
Scaling by $\beta$ corresponds to the choices to be made in constructing the constraint; for example, if $\mb{z}$ is to represent the gradient of $\mb{x}$, then $A$ could be scaled to represent differences between samples with or without normalisation by the physical step size of the grid on which $\mb{x}$ is defined.} inherent in choosing the functional, constraints, and variables. Problem $\tilde{P}$ can be expressed in the standard form as \begin{equation} \argmin_{\mb{x},{\mb{z}}} \tl{f}( \mb{x}) + \tl{g}(\mb{z}) \; \text{ such that } \; \tl{A} \mb{x} + \tl{B} \mb{z} = \tmb{c} \label{eq:admmp1std} \end{equation} with \begin{gather} \tl{f}(\mb{x}) = \alpha f(\gamma \mb{x}) \quad \tl{g}(\mb{z}) = \alpha g(\gamma \mb{z}) \quad \nonumber \\ \tl{A} = \beta \gamma A \quad \tl{B} = \beta \gamma B \quad \tmb{c} = \beta \mb{c} \;. \end{gather} The Lagrangian is \begin{align} \tilde{L}({\mb{x}}, {\mb{z}}, {\mb{y}}) = \alpha f(\gamma \mb{x}) & + \alpha g(\gamma \mb{z}) \nonumber \\ & + \mb{y}^T (\beta \gamma A \mb{x} + \beta \gamma B \mb{z} - \beta \mb{c}) \; , \end{align} and the primal and dual feasibility conditions are \begin{equation} \beta A \gamma \tmb{x}^* + \beta B \gamma \tmb{z}^* - \beta \mb{c} = 0 \;, \end{equation} and \smallmath{ \begin{align} 0 \in \partial \tilde{L}(\cdot, \tilde{\mb{z}}^*, \tilde{\mb{y}}^*) \; \Rightarrow \; & 0\!\in\! \alpha \gamma [\partial f(\cdot)](\gamma \tilde{\mb{x}}^*) + \beta \gamma A^T \tilde{\mb{y}}^* \\ 0 \in \partial \tilde{L}(\tilde{\mb{x}}^*, \cdot, \tilde{\mb{y}}^*) \; \Rightarrow \; & 0\! \in\! \alpha \gamma [\partial g(\cdot)](\gamma \tilde{\mb{z}}^*) + \beta \gamma B^T \tilde{\mb{y}}^* \end{align} } respectively. It is easily verified that if $\mb{x}^*$, $\mb{z}^*$, and $\mb{y}^*$ satisfy the optimality criteria \eq{admmlgprim}, \eqc{admmlgrduafsg}, and \eqc{admmlgrduagsg} for problem $P$, then \begin{gather} \tilde{\mb{x}}^* = \gamma ^{-1} \mb{x}^* \quad \;\; \tilde{\mb{z}}^* = \gamma ^{-1} \mb{z}^* \quad \;\; \tilde{\mb{y}}^* = \frac{\alpha}{\beta} \mb{y}^* \label{eq:optscaling} \end{gather} satisfy the primal and dual feasibility criteria for $\tilde{P}$. The augmented Lagrangian for $\tilde{P}$ is \smallmath{ \begin{align} \tilde{L}_{\tilde{\rho}}(\mb{x}, \mb{z}, \mb{y}) & = \alpha f(\gamma \mb{x}) + \alpha g(\gamma \mb{z}) \nonumber \\ &+ \alpha \left(\frac{\beta}{\alpha} \mb{y}^T \right) ( \gamma A \mb{x} + \gamma B \mb{z} - \mb{c} ) \nonumber \\ &+ \alpha \left(\frac{\beta^2}{\alpha} \tilde{\rho} \right) \frac{1}{2} \norm{ \gamma A \mb{x} + \gamma B \mb{z} - \mb{c}}_2^2 \;, \end{align} } so that setting %
\begin{equation} \tilde{\rho} = \frac{\alpha}{\beta^2} \rho \label{eq:rhoscale} \end{equation} gives \begin{equation} \tilde{L}_{\tilde{\rho}}(\mb{x}, \mb{z}, \mb{y}) = \alpha L_{\rho}\left(\gamma \mb{x}, \gamma \mb{z}, \frac{\beta}{\alpha} \mb{y}\right) \;. \label{eq:tlscale} \end{equation} The iterates $\mb{x}^{(k+1)}$, $\mb{z}^{(k+1)}$, and $\mb{y}^{(k+1)}$ for iteration $k$ of the ADMM algorithm for $P$ are given by \eq{admmx}, \eqc{admmz}, and \eqc{admmy}. We now consider the corresponding iterates for $\tilde{P}$, assuming that \begin{gather} \tilde{\mb{z}}^{(k)} = \gamma^{-1} \mb{z}^{(k)} \quad \quad \tilde{\mb{y}}^{(k)} = \frac{\alpha}{\beta} \mb{y}^{(k)} \;.
\label{eq:initscaled} \end{gather} The $\mb{x}$ update is \begin{align} \tmb{x}^{(k+1)} & = \argmin_{\mb{x}} \tl{L}_{\tl{\rho}}(\mb{x}, \tmb{z}^{(k)}, \tmb{y}^{(k)}) \nonumber \\ &= \argmin_{\mb{x}} \tl{L}_{\tl{\rho}}(\mb{x}, \gamma^{-1} \mb{z}^{(k)}, \frac{\alpha}{\beta} \mb{y}^{(k)}) \nonumber \\ &= \argmin_{\mb{x}} \alpha L_{\rho} (\gamma \mb{x}, \mb{z}^{(k)}, \mb{y}^{(k)}) \;. \end{align} For convex $f$ we have that if $\mb{x}^*$ minimises $f(\mb{x})$ then $\gamma^{-1} \mb{x}^*$ minimises $\tl{f}(\mb{x}) = \alpha f (\gamma \mb{x})$, so \begin{equation} \tmb{x}^{(k+1)} = \gamma^{-1} \mb{x}^{(k+1)} \;, \end{equation} and similarly it can be shown that \begin{equation} \tmb{z}^{(k+1)} = \gamma^{-1} \mb{z}^{(k+1)} \;. \end{equation} For the $\mb{y}$ update we have \begin{align} \tmb{y}^{(k+1)} & = \tmb{y}^{(k)} + \tl{\rho} (\beta \gamma A \tmb{x}^{(k+1)} + \beta \gamma B \tmb{z}^{(k+1)} - \beta \mb{c}) \nonumber \\ &= \frac{\alpha}{\beta} \left( \mb{y}^{(k)} + \rho (A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c}) \right) \nonumber \\ &= \frac{\alpha}{\beta} \mb{y}^{(k+1)} \;. \end{align} Finally, the primal and dual residuals for $\tilde{P}$ have the following scaling relationship with those of $P$: \begin{align} \tilde{\mb{r}}^{(k+1)} & = \tl{A} \tmb{x}^{(k+1)} + \tl{B} \tmb{z}^{(k+1)} - \tmb{c} \nonumber \\ &= \beta A \mb{x}^{(k+1)} + \beta B \mb{z} ^{(k+1)} - \beta \mb{c} \nonumber \\ & = \beta\mb{r}^{(k+1)} \label{eq:priresscl} \\ \tilde{\mb{s}}^{(k+1)} & = \tilde{\rho} \tilde{A}^T \tilde{B} (\tilde{\mb{z}}^{(k+1)} - \tilde{\mb{z}}^{(k)}) \nonumber \\ & = \alpha \gamma \rho A^T B (\mb{z}^{(k+1)} - \mb{z}^{(k)}) \nonumber \\ & = \alpha \gamma \mb{s}^{(k+1)} \;. \label{eq:duaresscl} \end{align} In summary, the parameters $\alpha$, $\beta$, and $\gamma$ in problem $\tilde{P}$ generate families of ADMM problems with the same solutions (modulo a scaling, in the case of $\gamma$), as expressed in~\eq{optscaling}, but the iterates of the corresponding ADMM algorithms are only similarly invariant if the initial iterates (see~\eq{initscaled}) and constant penalty parameter (see~\eq{rhoscale}) are appropriately scaled.

\section{Residual Balancing} The scaling properties described in the previous section have a major impact on the residuals and their use within the residual balancing scheme for penalty parameter selection.

\subsection{Adaptive Penalty Parameter} \label{sec:adaptpenparam} It was demonstrated above that ADMM algorithm iterates can be made invariant to problem scaling by a suitable choice of fixed penalty parameter. It is easily verified that invariance can be maintained with a varying penalty parameter $\rho^{(k)}$ as long as the required relationship is also maintained, i.e. $\tilde{\rho}^{(k)} = \frac{\alpha}{\beta^2} \rho^{(k)}$. If an adaptive update rule such as~\eq{rhoupdate}, that operates by multiplying the penalty parameter by some factor, is to preserve this relationship, it is necessary that (i) $\tilde{\rho}^{(0)} = \frac{\alpha}{\beta^2} \rho^{(0)}$, and (ii) the choice of multiplier and when to apply it must be invariant to problem scaling. But it is clear from~\eq{priresscl} and~\eq{duaresscl} that the primal and dual residuals do not share the same scaling factors, so that the update rule~\eq{rhoupdate} based on these residuals does not, in general, preserve the scaling behaviour of the penalty parameter required to maintain invariance of the algorithm iterates.
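For reference, the update rule~\eq{rhoupdate} amounts to nothing more than the following sketch (Python; the function and argument names are ours):

\begin{verbatim}
def update_rho(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Residual balancing: grow rho when the primal residual
    dominates, shrink it when the dual residual dominates,
    and otherwise leave it unchanged."""
    if r_norm > mu * s_norm:
        return tau * rho
    if s_norm > mu * r_norm:
        return rho / tau
    return rho
\end{verbatim}

Note that if the iterations are implemented in the scaled form, the scaled dual variable $\mb{u} = \rho^{-1} \mb{y}$ must be multiplied by the ratio of the old to the new penalty parameter whenever $\rho$ is updated, so that $\mb{y}$ itself is unchanged.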
It follows that if the adaptive penalty parameter method of~\sctn{adaptrho} performs well for some problem $P$, it should not be expected to do so for problem $\tilde{P}$ as the scaling parameters $\alpha$, $\beta$, and $\gamma$ deviate from unity. If update rule~\eq{rhoupdate} is known to provide good performance for a reference problem $P$, and it becomes necessary to modify the problem formulation in a way that corresponds to switching to a scaled problem $\tl{P}$ (e.g. a change of physical units), then the same performance can be achieved by using a modified update rule \begin{equation} \rho^{(k+1)} = \left\{ \ \begin{array}{ll} \tau \rho^{(k)} & \text{ if } \normsz[\big]{\mb{r}^{(k)}}_2 > \xi \mu \normsz[\big]{\mb{s}^{(k)}}_2\\[3pt] \tau^{-1} \rho^{(k)} & \text{ if } \normsz[\big]{\mb{s}^{(k)}}_2 > \xi^{-1} \mu \normsz[\big]{\mb{r}^{(k)}}_2\\[3pt] \rho^{(k)} & \text{ otherwise } \;, \end{array} \right. \label{eq:rhoupdatexi} \end{equation} with $\xi = \beta^{-1} \alpha \gamma$ chosen to compensate for the scaling of the ratio of residuals with the problem scaling. It is important to emphasise, however, that this issue is not only relevant to the practitioner considering explicitly scaling an existing ADMM problem: problem $\tilde{P}$ merely makes explicit the implicit choices involved in setting up any ADMM problem, and there is no reason to believe that the often-arbitrary choices made in setting up the problem correspond to an optimal or even a good choice of scaling with respect to the convergence of the ADMM iterates subject to update rule~\eq{rhoupdate}, or subject to update rule~\eq{rhoupdatexi} with $\xi = 1$. \subsection{Relative Residuals} \label{sec:relres} A simple approach that avoids the need for explicit compensation for problem scaling when the formulation is modified is to base the adaptive penalty parameter policy on residuals that represent relative instead of absolute error\footnote{It is worth noting that similar normalisation of error/convergence measures is quite commonly applied in other areas of optimization, see e.g. \cite[Sec. 1.2]{mittelmann-2003-independent}, \cite[Sec. 2.1]{wachter-2006-implementation}.}. If the normalisations required for relative error measures are selected appropriately\footnote{It is no coincidence that these normalisations turn out to be the same as those in the definitions of $\epsilon_{\mathrm{pri}}^{(k+1)}$ and $\epsilon_{\mathrm{dua}}^{(k+1)}$ in~\cite[Sec 3.3.1]{boyd-2010-distributed}.}, they will cancel the scaling with $\beta$ and $\alpha \gamma$, making them invariant to problem scaling. 
A reasonable normalisation to make the primal residual $\mb{r}^{(k+1)} = A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c}$ a relative residual is $$\max\left\{\normsz[\big]{ A \mb{x}^{(k+1)}}_2, \normsz[\big]{ B \mb{z}^{(k+1)}}_2, \normsz[\big]{ \mb{c}}_2 \right\} \;, $$ allowing us to define the relative primal residual {\small \begin{equation} \mb{r}_{\mathrm{rel}}^{(k+1)} = \frac{A \mb{x}^{(k+1)} + B \mb{z}^{(k+1)} - \mb{c}}{\max\left\{\normsz[\big]{ A \mb{x}^{(k+1)}}_2, \normsz[\big]{ B \mb{z}^{(k+1)}}_2, \normsz[\big]{ \mb{c}}_2 \right\}} \;, \label{eq:nrmprires} \end{equation} } which is invariant to problem scaling since the normalisation factor has the same scaling as the absolute residual, {\small \begin{multline} \max\left\{\normsz[\big]{\tl{A} \tmb{x}^{(k+1)}}_2, \normsz[\big]{\tl{B} \tmb{z}^{(k+1)}}_2, \normsz[\big]{\tmb{c}}_2 \right\} \\ = \max\left\{\normsz[\big]{\beta \gamma A \gamma^{-1} \mb{x}^{(k+1)}}_2, \normsz[\big]{\beta \gamma B \gamma^{-1} \mb{z}^{(k+1)}}_2, \normsz[\big]{\beta \mb{c}}_2 \right\}\\ = \beta \max\left\{\normsz[\big]{ A \mb{x}^{(k+1)}}_2, \normsz[\big]{ B \mb{z}^{(k+1)}}_2, \normsz[\big]{ \mb{c}}_2 \right\} \;. \label{eq:rnrmfct} \end{multline} } A suitable normalisation for the dual residual $\mb{s}^{(k+1)} = \rho A^T B (\mb{z}^{(k+1)} - \mb{z}^{(k)})$ can be obtained from~\eq{dualres0}. When $f$ is differentiable and the gradient is easily computable, a reasonable choice of the normalisation would be $\max\left\{\normsz[\big]{ \nabla f(\mb{x}^{(k+1)})}_2, \normsz[\big]{A^T \mb{y}^{(k+1)}}_2 \right\}$, but since this is often not the case, we simply use $\normsz[\big]{A^T \mb{y}^{(k+1)}}_2$ as the normalisation factor, giving the relative dual residual \begin{equation} \resizemath{.85\hsize}{ \mb{s}_{\mathrm{rel}}^{(k+1)} = \frac{\rho A^T B (\mb{z}^{(k+1)} \!-\! \mb{z}^{(k)})}{\normsz[\big]{A^T \mb{y}^{(k+1)}}_2 } = \frac{ A^T B (\mb{z}^{(k+1)} \!-\! \mb{z}^{(k)})}{\normsz[\big]{A^T \mb{u}^{(k+1)}}_2 } \;, } \label{eq:nrmduares} \end{equation} which is again invariant to problem scaling since the normalisation factor has the same scaling as the absolute residual, \vspace{-3mm} \begin{equation} \resizemath{.88\hsize}{ \normsz[\big]{\tl{A}^T \tmb{y}^{(k+1)}}_2 = \normsz[\big]{\beta \gamma A^T \frac{\alpha}{\beta} \mb{y}^{(k+1)}}_2 = \alpha \gamma \normsz[\big]{A^T \mb{y}^{(k+1)}}_2 \;. } \label{eq:snrmfct} \end{equation} Using these definitions, $\tmb{r}_{\mathrm{rel}}^{(k+1)} = \mb{r}_{\mathrm{rel}}^{(k+1)}$ and $\tmb{s}_{\mathrm{rel}}^{(k+1)} = \mb{s}_{\mathrm{rel}}^{(k+1)}$; {i.e.}\xspace the residuals are invariant to problem scaling. The corresponding penalty parameter update policy becomes \begin{equation} \rho^{(k+1)} = \left\{ \ \begin{array}{ll} \tau \rho^{(k)} & \text{ if } \normsz[\big]{\mb{r}_{\mathrm{rel}}^{(k)}}_2 > \xi \mu \normsz[\big]{\mb{s}_{\mathrm{rel}}^{(k)}}_2\\[3pt] \tau^{-1} \rho^{(k)} & \text{ if } \normsz[\big]{\mb{s}_{\mathrm{rel}}^{(k)}}_2 > \xi^{-1} \mu \normsz[\big]{\mb{r}_{\mathrm{rel}}^{(k)}}_2\\[3pt] \rho^{(k)} & \text{ otherwise } \;, \end{array} \right. \label{eq:rhoupdaterel} \end{equation} where the parameter $\xi$ is retained for reasons that will be made apparent shortly. 
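In code, the relative residual norms of~\eq{nrmprires} and~\eq{nrmduares} can be computed as in the following sketch (Python/NumPy with dense arrays; the names are ours), working in the scaled form so that $\rho$ cancels from the dual term:

\begin{verbatim}
import numpy as np

def relative_residual_norms(A, B, c, x, z, z_prev, u):
    """Norms of the relative primal and dual residuals,
    with u = y / rho the scaled dual variable."""
    Ax, Bz = A @ x, B @ z
    r_nrm = np.linalg.norm(Ax + Bz - c) / max(
        np.linalg.norm(Ax), np.linalg.norm(Bz), np.linalg.norm(c))
    s_nrm = (np.linalg.norm(A.T @ (B @ (z - z_prev)))
             / np.linalg.norm(A.T @ u))
    return r_nrm, s_nrm
\end{verbatim}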
The convergence proof~\cite{he-2000-alternating} of the standard adaptive scheme ({i.e.}\xspace~\eq{rhoupdate} with the standard definitions of the residuals) depends only on bounds on the sequences $\rho^{(k)}$ and $\eta_k = \sqrt{(\rho^{(k+1)} / \rho^{(k)})^2 - 1}$, neither of which is affected by the change in the definition of the residuals, so the convergence results still hold under the modified definitions of the residuals. \subsection{Adaptive Multiplier Policy} The fixed multiplier $\tau$ is a potential weakness of the penalty update policies~\eq{rhoupdate} and~\eq{rhoupdaterel}. If $\tau$ is small, then a large number of iterations may be required\footnote{In many problems to which ADMM is applied, solving the $\mb{x}$ update~\eq{admmscaledx} involves solving a large linear system, which can be efficiently achieved by pre-computing an LU or Cholesky factorization of the system matrix for use in each iteration. Since the system matrix depends on $\rho$, it is necessary to re-compute the factorization when $\rho$ is updated. (This can be avoided by use of an alternative factorisation~\cite[Sec. 4.2]{liu-2013-nuclear}, but since this method is substantially more computationally expensive in some cases, and since a thorough comparison with this alternative is beyond the scope of the present paper, it will not be considered further here.) Given the computational cost of the factorization, it is reasonable to only apply the $\rho$ update at every 10 (for example) iterations so that the cost of the factorization can be amortised over multiple iterations. This compromise further reduces the adaption rate of the adaptive penalty policy. } to reach an appropriate $\rho$ value if $\rho^{(0)}$ is poorly chosen ({i.e.}\xspace, so that $\normsz[\big]{\mb{r}^{(0)}}_2 \gg \xi \normsz[\big]{\mb{s}^{(0)}}_2$, or $\normsz[\big]{\mb{r}^{(0)}}_2 \ll \xi \normsz[\big]{\mb{s}^{(0)}}_2$). On the other hand, if $\tau$ is large, the corrections to $\rho$ may be too large when $\rho$ is close to the optimal value. A straightforward solution is to adapt $\tau$ at each iteration \begin{equation} \hspace{-2.5mm}\resizemath{.92\hsize}{ \tau^{(k)} = \left\{ \ \begin{array}{ll} \!\!\!\sqrt{\xi^{-1} \normsz[\big]{\mb{r}^{(k)}}_2 / \normsz[\big]{\mb{s}^{(k)}}_2} & \text{ if } 1 \le \sqrt{\xi^{-1} \normsz[\big]{\mb{r}^{(k)}}_2 / \normsz[\big]{\mb{s}^{(k)}}_2} < \tau_{\mathrm{max}} \\[3pt] \!\!\!\sqrt{\xi \normsz[\big]{\mb{s}^{(k)}}_2 / \normsz[\big]{\mb{r}^{(k)}}_2} & \text{ if } \tau_{\mathrm{max}}^{-1} < \sqrt{\xi^{-1} \normsz[\big]{\mb{r}^{(k)}}_2 / \normsz[\big]{\mb{s}^{(k)}}_2} < 1 \\[3pt] \!\!\!\tau_{\mathrm{max}} & \text{ otherwise } \;, \end{array} \right. } \label{eq:adapttau} \end{equation} where $\tau_{\mathrm{max}}$ provides a bound on $\tau$. Since $\tau$ is bounded, the convergence results~\cite{he-2000-alternating} still hold for this extension. \subsection{Stopping Criteria} \label{sec:relstopcrit} The stopping criteria in~\sctn{stopcrit} can be expressed in terms of the relative residuals $\mb{r}_{\mathrm{rel}}$ and $\mb{s}_{\mathrm{rel}}$ as \begin{equation} \normsz[\big]{\mb{r}_{\mathrm{rel}}^{(k)}}_2 \leq \epsilon_{\mathrm{pri}}^{(k)} \;\; \text{ and } \normsz[\big]{\mb{s}_{\mathrm{rel}}^{(k)}}_2 \leq \epsilon_{\mathrm{dua}}^{(k)} \end{equation} where \smallmath{ \begin{align} \epsilon_{\mathrm{pri}}^{(k)} & \!=\! \sqrt{p} \epsilon_{\mathrm{abs}} / \max\left\{\normsz[\big]{A \mb{x}^{(k)}}_2, \normsz[\big]{B \mb{z}^{(k)}}_2, \normsz[\big]{\mb{c}}_2 \right\} \!+\! 
\epsilon_{\mathrm{rel}} \label{eq:eprirel} \\ \epsilon_{\mathrm{dua}}^{(k)} \!&\! = \sqrt{n} \epsilon_{\mathrm{abs}} / \normsz[\big]{A^T \mb{y}^{(k)}}_2 + \epsilon_{\mathrm{rel}} \label{eq:eduarel} \;. \end{align} } These stopping criteria are invariant to problem scaling when $\epsilon_{\mathrm{abs}} = 0$.

\subsection{Residual Ratio} \label{sec:resrat} While the relative residuals proposed in~\sctn{relres} address the absence of scaling invariance in the adaptive penalty parameter strategy based on residual balancing, there is another even more serious deficiency that is not so easily remedied. As discussed in~\sctn{adaptrho}, the target ratio of unity is motivated by representing the distance from convergence as $\| \mb{r}^{(k)} \|_2^2 + \| \mb{s}^{(k)}\|_2^2$, but this greatly simplifies the true picture. The ADMM convergence proof in~\cite{boyd-2010-distributed} (see Sec. 3.3.1 and Appendix A) provides some insight into the relationship between the distance from convergence and the residuals, in the form of the inequality \begin{align} f(\mb{x}^{(k)}) + g(\mb{z}^{(k)}) - p^* \leq \; & - (\mb{y}^{(k)})^T \mb{r}^{(k)} \nonumber \\ & + (\mb{x}^{(k)} - \mb{x}^*)^T \mb{s}^{(k)} \;, \end{align} which implies the looser inequality \begin{align} f(\mb{x}^{(k)}) + g(\mb{z}^{(k)}) - p^* \leq \; & \normsz[\big]{ \mb{y}^{(k)} } \normsz[\big]{ \mb{r}^{(k)} } + \nonumber \\ & \normsz[\big]{\mb{x}^{(k)} - \mb{x}^*} \normsz[\big]{\mb{s}^{(k)}} \label{eq:convineq} \end{align} in terms of the norms of the relevant vectors. Applying the original argument that led to unity as the appropriate target ratio to this inequality implies that the appropriate ratio is, in fact, approximately $\normsz[\big]{ \mb{y}^{(k)} } / \normsz[\big]{\mb{x}^{(k)} - \mb{x}^*}$. This would explain why some authors have found the original residual balancing strategy of~\eq{rhoupdate} to be effective~\cite{hansson-2012-subspace, liu-2013-nuclear, vu-2013-fantope, iordache-2014-collaborative, weller-2014-phase, wohlberg-2014-efficient} and others have not~\cite[Sec. 2.4]{ramdas-2015-fast}: the method succeeds when this ratio happens to be relatively close to unity, and fails when it is not. Unfortunately, since $\mb{x}^*$ is unknown while solving the problem, there is no obvious way to estimate this ratio, and we are left with the rather unsatisfactory solution of accepting $\xi$ in~\eq{rhoupdaterel} as a user-selected parameter of the method. Since this approach essentially replaces one user parameter, $\rho$, with another, $\xi$, it is not clear that the residual balancing strategy has any real value as a parameter selection technique. One might argue that, since the residual balancing method has been found to be satisfactory in a variety of applications, it must often be the case that $\xi = 1$ is not too far from the optimal setting, and that $\xi$ may be a more stable parameterisation than $\rho$, but further study is necessary before any reliable conclusions can be drawn. Since~\eq{rhoupdaterel} retains $\xi$, which can be used to compensate for explicit problem scaling as discussed in~\sctn{adaptpenparam}, it is reasonable to ask whether there is any real benefit to using~\eq{rhoupdaterel} based on the relative residuals; {i.e.}\xspace, since we have an unknown $\xi$ in both cases, what is the advantage of one scaling of this unknown quantity over another?
Two arguments can be made in favour of the use of relative residuals as in~\eq{rhoupdaterel}: \begin{itemize} \item Ignoring the question of determining a good choice of $\xi$, once one has been found, \eq{rhoupdaterel} is invariant to problem scaling, while~\eq{rhoupdate} is not. \item Since~\eq{rhoupdaterel} is invariant to problem scaling, one might expect that the $\xi$ for this update rule is more stable than the $\xi$ for~\eq{rhoupdatexi}, in the sense that it varies across a smaller numerical range for different problems. (This important question is not explored in the experimental results presented here.) \end{itemize} It should also be noted that the unknown scalings of the residuals in~\eq{convineq} imply that neither the absolute nor relative stopping tolerances in~\sctn{relstopcrit} can be viewed as providing an actual bound on the solution optimality, either in an absolute or a relative sense (e.g. a relative stopping criterion $\epsilon_{\mathrm{rel}} = 10^{-3}$ does \emph{not} imply that the final iterate is within $10^{-3}$ relative distance to the optimal solution).

\section{BPDN} \label{sec:bpdn} To illustrate these issues, we will focus on Basis Pursuit DeNoising (BPDN)~\cite{chen-1998-atomic}, \begin{equation} \argmin_{\mb{x}} \frac{1}{2} \norm{D \mb{x} - \bvsigma}_2^2 + \lambda \norm{\mb{x}}_1 \;, \label{eq:bpdn} \end{equation} a standard problem in computing sparse representations corresponding to~\eq{admmprob} with \begin{gather} f(\mb{x}) = \frac{1}{2} \norm{D \mb{x} - \bvsigma}_2^2 \;\;\;\; g(\mb{z}) = \lambda \norm{\mb{z}}_1 \nonumber \\ A = I \;\;\;\; B = -I \;\;\;\; \mb{c} = 0 \;. \end{gather} Solving via ADMM, we have problem $P$ \begin{equation} \argmin_{\mb{x},\mb{z}} \frac{1}{2} \norm{D \mb{x} - \bvsigma}_2^2 + \lambda \norm{\mb{z}}_1 \text{ s.t. } \mb{x} = \mb{z} \end{equation} with Lagrangian \begin{equation} L(\mb{x}, \mb{z}, \mb{y}) = \frac{1}{2} \norm{D \mb{x} - \bvsigma}_2^2 + \lambda \norm{\mb{z}}_1 + \mb{y}^T (\mb{x} - \mb{z}) \;. \end{equation} We also consider Convolutional BPDN (CBPDN), a variant of BPDN constructed by replacing the linear combination of a set of dictionary vectors by the sum of a set of convolutions with dictionary filters~\cite[Sec. II]{wohlberg-2016-efficient} \begin{equation} \argmin_{\{\mb{x}_m\}} \frac{1}{2} \normsz[\Big]{\sum_m \mb{d}_m \ast \mb{x}_m - \bvsigma}_2^2 + \lambda \sum_m \norm{\mb{x}_m}_1 \; , \label{eq:convbpdn} \end{equation} where $\{\mb{d}_m\}$ is a set of $M$ dictionary \emph{filters}, $\ast$ denotes convolution, and $\{\mb{x}_m\}$ is a set of coefficient maps. Algebraically, this variant is a special case of standard BPDN, so that the same scaling properties apply, but since the dictionaries in this form are very highly overcomplete (the overcompleteness factor is equal to the number of filters $M$), one may expect that this variant might exhibit at least somewhat different behaviour in practice. A further difference is that the $\{\mb{x}_m\}$ can be efficiently computed without any factorisation of system matrices~\cite{wohlberg-2014-efficient}, so in this case the penalty update policy is applied at every iteration instead of at every 10 iterations.

\section{Results} \label{sec:rslt} In this section the issues discussed above are illustrated via a number of computational experiments. Many of these experiments compare the effect of different penalty parameter selection methods on the number of iterations required to reach the stopping criteria.
With respect to these experiments, it must be emphasised that: \begin{itemize} \item Since the relationship between the stopping criteria and the actual solution suboptimality is unknown (see~\sctn{resrat}), reaching the stopping criteria faster does \emph{not} imply faster convergence. \item These experiments all use relative stopping thresholds ({i.e.}\xspace $\epsilon_{\mathrm{abs}} = 0$), which could be considered to confer an advantage on the relative residual balancing policy since it balances the residuals in a way that is favourable to satisfying the relative stopping thresholds\footnote{Since the stopping criteria require that both residuals are below the same threshold, they will be satisfied more quickly if they are roughly equal than if one is much larger than the other, all else being equal.}. Note, however, that the original goal of invariance to problem scaling cannot be achieved if $\epsilon_{\mathrm{abs}} \neq 0$. \end{itemize}

\subsection{BPDN with Random Dictionary}

\begin{figure}[htbp] %
\small \hspace{-5mm}\input{otherexprmnt01fnval.pstex_t} \vspace{-3mm} \caption{A comparison of functional value evolution for the same problem with adaptive $\rho$ based on standard and normalised residuals.} \label{fig:exp1fnval} \end{figure}

\begin{figure}[htbp] %
\small \hspace{-5mm}\input{otherexprmnt01priduares.pstex_t} \vspace{-3mm} \caption{A comparison of primal and dual residual evolution for the same problem with adaptive $\rho$ based on standard and normalised residuals. For a meaningful comparison, the residuals are divided by their respective values of $\epsilon_{\mathrm{pri}}$ or $\epsilon_{\mathrm{dua}}$. %
} \label{fig:exp1priduares} \end{figure}

\begin{figure}[htbp] %
\small \hspace{-5mm}\input{otherexprmnt01rho.pstex_t} \vspace{-3mm} \caption{A comparison of selected $\rho$ values for the same problem with adaptive $\rho$ based on standard and normalised residuals.} \label{fig:exp1rho} \end{figure}

The first experiment involves sparse coefficient recovery on a random dictionary without normalisation. A dictionary $D \in \mbb{R}^{512 \times 4096}$ was generated with i.i.d. Gaussian entries of unit standard deviation, a corresponding reference coefficient vector $\mb{x}_0$ was constructed by assigning random values to 64 randomly selected coefficients, the remainder of which were zero, and a test signal was constructed by adding Gaussian white noise of standard deviation 0.5 to the product of $D$ and $\mb{x}_0$. The experiment involves using BPDN with $\lambda = 40$ (selected for good support identification), $\xi = 1$, $\epsilon_{\mathrm{abs}} = 0$, and $\epsilon_{\mathrm{rel}} = 10^{-4}$ to attempt to recover $\mb{x}_0$ from the signal, comparing performance with both standard and normalised residuals. It is clear from \figs{exp1fnval}--\fign{exp1rho} that the adaptive $\rho$ policy gives very substantially better performance with normalised residuals than with the standard definition. The desired stopping tolerance is reached within 160 iterations when using normalised residuals, but has still not been attained when the maximum iteration limit of 1000 is reached in the case of standard residuals. The performance difference is even greater if the random dictionary $D$ is generated with standard deviation greater than unity.
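For concreteness, the following is a minimal sketch (Python/NumPy; an illustration only, not the code used for these experiments, and omitting the factorisation caching discussed earlier) of a scaled-form ADMM solver for the BPDN problem of~\sctn{bpdn} with the residual balancing update of~\eq{rhoupdate}:

\begin{verbatim}
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bpdn_admm(D, sig, lmbda, rho=1.0, n_iter=200, mu=10.0, tau=2.0):
    """ADMM for (1/2)||D x - sig||_2^2 + lmbda ||x||_1.
    Here A = I, B = -I, c = 0, so the primal residual is
    x - z and the dual residual norm is rho * ||z - z_prev||."""
    n = D.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    DtD, Dts = D.T @ D, D.T @ sig
    for _ in range(n_iter):
        # x update: (D^T D + rho I) x = D^T sig + rho (z - u)
        x = np.linalg.solve(DtD + rho * np.eye(n), Dts + rho * (z - u))
        z_prev = z
        z = soft_threshold(x + u, lmbda / rho)
        u = u + x - z
        r_nrm = np.linalg.norm(x - z)
        s_nrm = rho * np.linalg.norm(z - z_prev)
        if r_nrm > mu * s_nrm:
            rho, u = tau * rho, u / tau   # keep u = y / rho consistent
        elif s_nrm > mu * r_nrm:
            rho, u = rho / tau, u * tau
    return x
\end{verbatim}

Replacing the residual norms in the update test with the relative residuals of~\sctn{relres} yields the normalised variant compared in these experiments.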
\subsection{BPDN with Learned Dictionary} \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:bpdn_exp12_rhoplot1_std} \protect\rule{0pt}{1.5em} Standard residuals] {\input{expbpdn12_itrh_std_1en03_128.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:bpdn_exp12_rhoplot1_nrm} \protect\rule{0pt}{1.5em} Normalised residuals] {\input{expbpdn12_itrh_nrm_1en03_128.pstex_t}} \end{tabular} \caption{Variation with $\rho^{(0)}$ of number of iterations required to reach a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for different variants of the adaptive $\rho$ policy, and for standard and normalised residuals, in a BPDN problem with $D \in \mbb{R}^{64 \times 128}$ and $\lambda = 10^{-2}$. The variant labels are ``Fixed'', indicating that $\rho$ is fixed at $\rho^{(0)}$ and is not adapted, of the form $\mu /\tau$, or of the form $\mu$/Auto, which indicates that $\tau$ is adapted as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$ and $\xi = 1$.} \label{fig:bpdn_exp12_rhoplot1} \end{figure*} \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:bpdn_exp12_64_std} \protect\rule{0pt}{1.5em} Standard residuals] {\input{expbpdn12_itlm_std_64.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:bpdn_exp12_64_nrm} \protect\rule{0pt}{1.5em} Normalised residuals] {\input{expbpdn12_itlm_nrm_64.pstex_t}} \end{tabular} \caption{Mean number of iterations (averaged over all values of $\rho^{(0)}$) required to reach a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for different variants of the adaptive $\rho$ policy, and for standard and normalised residuals, in a BPDN problem with $D \in \mbb{R}^{64 \times 64}$ and varying $\lambda$. The variant labels are ``Fixed'', indicating that $\rho$ is fixed at $\rho^{(0)}$ and is not adapted, of the form $\mu /\tau$, or of the form $\mu$/Auto, which indicates that $\tau$ is adapted as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$ and $\xi = 1$. ``Fixed (min)'' denotes the minimum number of iterations (i.e. not the mean) obtained via the best fixed choice of $\rho^{(0)}$ at each value of $\lambda$.} \label{fig:bpdn_exp12_64} \end{figure*} \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:bpdn_exp12_96_std} \protect\rule{0pt}{1.5em} Standard residuals] {\input{expbpdn12_itlm_std_96.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:bpdn_exp12_96_nrm} \protect\rule{0pt}{1.5em} Normalised residuals] {\input{expbpdn12_itlm_nrm_96.pstex_t}} \end{tabular} \caption{Mean number of iterations (averaged over all values of $\rho^{(0)}$) required to reach a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for different variants of the adaptive $\rho$ policy, and for standard and normalised residuals, in a BPDN problem with $D \in \mbb{R}^{64 \times 96}$ and varying $\lambda$. The variant labels are ``Fixed'', indicating that $\rho$ is fixed at $\rho^{(0)}$ and is not adapted, of the form $\mu /\tau$, or of the form $\mu$/Auto, which indicates that $\tau$ is adapted as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$ and $\xi = 1$. ``Fixed (min)'' denotes the minimum number of iterations (i.e. 
not the mean) obtained via the best fixed choice of $\rho^{(0)}$ at each value of $\lambda$.} \label{fig:bpdn_exp12_96} \end{figure*} \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:bpdn_exp12_128_std} \protect\rule{0pt}{1.5em} Standard residuals] {\input{expbpdn12_itlm_std_128.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:bpdn_exp12_128_nrm} \protect\rule{0pt}{1.5em} Normalised residuals] {\input{expbpdn12_itlm_nrm_128.pstex_t}} \end{tabular} \caption{Mean number of iterations (averaged over all values of $\rho^{(0)}$) required to reach a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for different variants of the adaptive $\rho$ policy, and for standard and normalised residuals, in a BPDN problem with $D \in \mbb{R}^{64 \times 128}$ and varying $\lambda$. The variant labels are ``Fixed'', indicating that $\rho$ is fixed at $\rho^{(0)}$ and is not adapted, of the form $\mu /\tau$, or of the form $\mu$/Auto, which indicates that $\tau$ is adapted as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$ and $\xi = 1$. ``Fixed (min)'' denotes the minimum number of iterations (i.e. not the mean) obtained via the best fixed choice of $\rho^{(0)}$ at each value of $\lambda$.} \label{fig:bpdn_exp12_128} \end{figure*} The second set of experiments compares the performance of a fixed $\rho$ and various adaptive $\rho$ parameter choices, using standard and normalised residuals, for a Multiple Measurement Vector (MMV) BPDN problem. Dictionaries $D \in \mbb{R}^{64 \times 64}$, $D \in \mbb{R}^{64 \times 96}$, and $D \in \mbb{R}^{64 \times 128}$ were learned on a large training set of $8 \times 8$ image patches, and the test data consisted of 32558 zero-mean $8 \times 8$ image patches represented as a matrix $S \in \mbb{R}^{64 \times 32558}$. The number of iterations required to attain a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for $D \in \mbb{R}^{64 \times 128}$ and $\lambda = 10^{-2}$ is compared in~\fig{bpdn_exp12_rhoplot1}. The following observations can be made with respect to the ability of the different methods to reduce the dependence of the number of iterations on the initial choice $\rho^{(0)}$: \begin{itemize} \item The best choice of fixed $\rho$ gives similar performance to the best adaptive strategy, but performance falloff is quite rapid as $\rho$ is changed away from the optimum. Given the absence of techniques for identifying the optimum $\rho$ \emph{a priori} for most problems, it is clear that the adaptive strategy can play a valuable role in reducing computation time. \item When using normalised residuals, there is an overall improvement with smaller $\mu$. In particular, it appears that, at least for the BPDN problem, the standard choice of $\mu = 10$ is too coarse, and benefit can be obtained from finer control of the residual ratio. \item When using standard residuals, the converse is true, performance decreasing with smaller $\mu$. This should not be surprising given the previously identified theoretical problems regarding the use of standard residuals in~\eq{rhoupdate}: the errors in the residual ratio that are masked by setting $\mu = 10$ become increasingly apparent as $\mu$ is reduced in an attempt at exerting finer control over the residual ratio. In this case the performance of the adaptive $\tau$ methods based on~\eq{adapttau} is particularly poor because the adaptive $\tau$ allows $\rho$ to be more rapidly adjusted to the incorrect value based on the incorrect residual ratios.
\item The best overall performance is provided by the two automatic $\tau$ methods based on~\eq{adapttau} with normalised residuals. \end{itemize} Comparisons of the different strategies over a wide range of $\lambda$ values and three different dictionary sizes are presented in~\figs{bpdn_exp12_64}--\fign{bpdn_exp12_128}. The mean number of iterations over all $\rho^{(0)}$ values is plotted against $\lambda$, and also compared with the minimum number of iterations obtained for the best fixed choice of $\rho$. The most important observations to be made are: \begin{itemize} \item The standard residuals give similar performance to the normalised residuals for the larger values of $\lambda$ since in this regime the normalisation quantities turn out to be close to unity. \item At smaller values of $\lambda$, the normalised residuals give much better performance. \item Considered over the entire range of $\lambda$ values, the normalised residuals all give better performance than their un-normalised counterparts. \item Of the methods using normalised residuals, the adaptive $\tau$ methods based on~\eq{adapttau} give substantially better performance than the standard methods. \end{itemize} \subsection{Convolutional BPDN Problem} \label{sec:cbpdnrslt} \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:cbpdn_exp04_64_std} \protect\rule{0pt}{1.5em} Standard residuals] {\input{expcbpdn04_itlm_std_64.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:cbpdn_exp04_64_nrm} \protect\rule{0pt}{1.5em} Normalised residuals] {\input{expcbpdn04_itlm_nrm_64.pstex_t}} \end{tabular} \caption{Mean number of iterations (averaged over all values of $\rho^{(0)}$) required to reach a relative stopping tolerance of $\epsilon_{\mathrm{rel}} = 10^{-3}$ for different variants of the adaptive $\rho$ policy, and for standard and normalised residuals, in a CBPDN problem with an $8 \times 8 \times 64$ dictionary and varying $\lambda$. The variant labels are ``Fixed'', indicating that $\rho$ is fixed at $\rho^{(0)}$ and is not adapted, of the form $\mu /\tau$, or of the form $\mu$/Auto, which indicates that $\tau$ is adapted as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$ and $\xi = 1$. ``Fixed (min)'' denotes the minimum number of iterations (i.e. not the mean) obtained via the best fixed choice of $\rho^{(0)}$ at each value of $\lambda$.} \label{fig:cbpdn_exp04_64} \end{figure*} The penalty update strategies were also compared in application to a Convolutional BPDN problem consisting of jointly computing the representations of two $256 \times 256$ pixel images\footnote{As is common practice in convolutional sparse representations, the representation was computed after a highpass filtering pre-processing step, consisting in this case of application of a lowpass filter, equivalent to solving the problem $\argmin_{\mb{x}} \frac{1}{2} \norm{\mb{x} - \bvsigma}_2^2 + \lambda_L \norm{\nabla \mb{x}}_2^2$ with $\lambda_L = 5.0$, and then subtracting the lowpass filtered images from the corresponding original images.} (the well-known ``Lena'' and ``Barbara'' images), with a dictionary consisting of 64 filters of size $8 \times 8$ samples and for a range of $\lambda$ and $\rho^{(0)}$ values. It can be seen from~\fig{cbpdn_exp04_64} that the normalised residuals give good performance for $\lambda \leq 0.1$, but for larger values of $\lambda$ neither standard nor normalised residuals provide performance close to that of the best fixed $\rho$.
\begin{figure}[htbp] \small \hspace{-5mm}\input{cbpdn_exp09_rs_xmpl.pstex_t} \vspace{-2mm} \caption{Evolution of primal and dual residuals for two different choices of $\xi$ in a CBPDN problem with an $8 \times 8 \times 32$ dictionary, $\lambda = 0.3$, and $\rho^{(0)} = 251$. The $\rho$ update policy was as in~\eq{rhoupdaterel}, with normalised residuals, $\mu = 1.2$, and with adaptive $\tau$ as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 100$.} \label{fig:cbpdnexp09rsxmpl} \end{figure} This is an indication that $\xi = 1$ is not a suitable choice in this case, for which $\xi = 5$ gives better performance, as illustrated in~\fig{cbpdnexp09rsxmpl}. \begin{figure}[htbp] % \small \hspace{-1mm}\input{cbpdn_newexp02_mnitlmdxi_64.pstex_t} \vspace{-6mm} \caption{Mean number of iterations, averaged over all values of $\rho^{(0)}$, against $\lambda$ and $\xi$ for a CBPDN problem with an $8 \times 8 \times 64$ dictionary. The $\rho$ update policy was as in~\eq{rhoupdaterel}, with normalised residuals, $\mu = 1.2$, and with adaptive $\tau$ as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 1000$.} \label{fig:cbpdnexp02itlmdxi} \end{figure} \begin{figure}[htbp] % \small \hspace{-1mm}\input{cbpdn_newexp02_sditlmdxi_64.pstex_t} \vspace{-6mm} \caption{Standard deviation of number of iterations with respect to $\rho^{(0)}$ in~\fig{cbpdnexp02itlmdxi}. Note that the variation with respect to $\rho$ is small where the mean number of iterations is small. The standard deviation is zero for small $\lambda$ and large $\xi$ because the number of iterations is clipped to 500 by the maximum iteration limit in this region.} \label{fig:cbpdnexp02sditlmdxi} \end{figure} The effect of varying $\xi$ was investigated by running a large number of computational experiments for the CBPDN problem, with an $8 \times 8 \times 64$ dictionary and for different values of $\lambda$ (6 approximately logarithmically spaced values in the range $1 \times 10^{-3}$ to $0.3$), $\rho$ (51 logarithmically spaced values in the range $10^{-1}\lambda$ to $10^{4}\lambda$), and $\xi$ (21 values in the range 0.3 to 10.0). The mean and standard deviation over $\rho^{(0)}$ of the number of iterations required to reach stopping tolerance $\epsilon_{\mathrm{abs}} = 0, \epsilon_{\mathrm{rel}} = 10^{-3}$ are displayed in~\fig{cbpdnexp02itlmdxi} and~\fign{cbpdnexp02sditlmdxi} respectively. It can be observed that the value of $\xi$ giving the minimum number of iterations varies with $\lambda$, and that considering the mean over $\rho^{(0)}$ of the number of iterations is a reasonable criterion since the variation with $\rho^{(0)}$ is small when $\xi$ is well chosen. \begin{figure*}[htbp] \centering \small \begin{tabular}{cc} \hspace{-10mm}\subfloat[\label{fig:cbpdnexp09xi}\protect\rule{0pt}{1.5em} Function fit to best values of $\xi$ for different $\lambda$. ] {\input{cbpdn_newexp02_lmdxi_64.pstex_t}} & \hspace{-9mm}\subfloat[\label{fig:cbpdnexp09it}\protect\rule{0pt}{1.5em} Mean iterations for different values of $\lambda$. ] {\input{cbpdn_newexp02_mnitlmbd_64.pstex_t}} \end{tabular} \caption{(a) shows the good fit of function $f(\lambda) = 1 + 18.3^{\log_{10}(\lambda) + 1}$ to the values of $\xi$ that minimise the mean (over all values of $\rho^{(0)}$) number of required iterations for different values of $\lambda$, determined by running a large number of simulations for different values of $\xi$, $\rho$, and $\lambda$.
(b) shows the variation with $\lambda$ of the mean (over all values of $\rho^{(0)}$) number of iterations for the best choice of $\xi$ as in (a), for $\xi$ chosen according to the function $f(\lambda)$, and for three fixed choices of $\xi$. All simulations were for a CBPDN problem with an $8 \times 8 \times 64$ dictionary and $\epsilon_{\mathrm{abs}} = 0, \epsilon_{\mathrm{rel}} = 10^{-3}$. The $\rho$ update policy was as in~\eq{rhoupdaterel}, with normalised residuals, $\mu = 1.2$, and with adaptive $\tau$ as in~\eq{adapttau}, with $\tau_{\mathrm{max}} = 1000$.} \label{fig:cbpdnexp09} \end{figure*} Since the best $\xi$ varies with $\lambda$, it is reasonable to ask, in the absence of any theory to guide the choice, whether there is a reliable way of making a good choice of $\xi$. By examining the data for the experiments used to generate~\fig{cbpdnexp02itlmdxi} and~\fign{cbpdnexp02sditlmdxi}, as well as for corresponding experiments with other dictionaries with 32, 96, and 128 filters of size $8 \times 8$, it was determined that the function $f(\lambda) = 1 + a^{\log_{10}(\lambda) + 1}$ with $a = 18.3$ provides a reasonable fit to the best choice of $\xi$ for each $\lambda$, over all of these dictionaries. The fit of this function to the experimental data for the dictionary of 64 filters is shown in~\fig{cbpdnexp09xi}, and a corresponding performance comparison in terms of mean iterations averaged over $\rho^{(0)}$ is displayed in~\fig{cbpdnexp09it}. Note that none of the fixed choices of $\xi$ provide good performance over the entire range of $\lambda$ values, while $\xi$ chosen according to $f(\lambda)$ gives the same performance as the best choices of $\xi$ at each $\lambda$. Additional experiments using different test images (the ``Kiel'' and ``Bridge'' standard images) as well as different dictionary filter sizes ($12 \times 12$) indicate that $f(\lambda)$ provides a good choice of $\xi$ over a wide range of conditions. While the choice of $a$ giving the best fit does vary with test images, filter size, and number of filters\footnote{The importance of selecting $\xi > 1$ for larger $\lambda$ values appears to be related to dictionary overcompleteness, corresponding to the number of filters for the CBPDN problem. It is also the case for the standard BPDN problem that the best choice of $\xi$ is greater than unity for larger $\lambda$ values, but for the much lower overcompleteness ratios usually encountered in this problem variant, the performance effect is far smaller, and the loss in choosing fixed $\xi = 1$ is usually negligible.}, the performance is not highly sensitive to the choice of $a$ (note that the mean iteration surface for large $\lambda$ is flat over a wide range of $\xi$ values in~\fig{cbpdnexp02itlmdxi}), and the choice of $a = 18.3$ used in~\fig{cbpdnexp09xi} was found to give performance at or close to the best choice of $\xi$ in all the cases considered. \section{Conclusion} \label{sec:conc} The scaling properties of the standard definitions of the primal and dual residuals are shown to represent a potentially serious weakness in a popular adaptive penalty strategy~\cite{he-2000-alternating} for ADMM algorithms. The proposed solution is to normalise these residuals so that they become invariant to scalings of the ADMM problem to which the solution is also invariant. The impact of this issue is demonstrated using BPDN sparse coding as an example problem.
These experiments show that the standard adaptive penalty strategy~\cite{he-2000-alternating} performs very poorly in certain cases, while the proposed modification based on normalised residuals is more robust. There is, however, a more serious issue that is not so easily resolved: the unknown scaling relationship between the residuals and the solution distance from optimality implies that the correct residual ratio to target, $\xi$, is unknown, and not necessarily unity. In some cases it is possible to construct a heuristic estimate of this value, but it is yet to be demonstrated that such an approach offers any real benefit over directly estimating a suitable choice of a fixed $\rho$ parameter. In the interests of reproducible research, software implementations of the main algorithms proposed here are made publicly available~\cite{wohlberg-2016-sporco}. \appendices \section{Scaling of the Graph Form Problem} Many signal and image processing inverse problems can be expressed in terms of the \emph{graph form} problem~\cite{parikh-2014-block} \begin{equation} \argmin_{\mb{x},\mb{z}} f(\mb{x}) + g(\mb{z}) \; \text{ such that } \; A \mb{x} = \mb{z} \;, \label{eq:admmgraphprob} \end{equation} which is a special case of~\eq{admmprob} with $B = -I$ and $\mb{c} = 0$. In this case there is a slightly different set of scalings of the problem under which the solution is invariant, for which the most general scaled problem $\tilde{P}$ is \begin{equation} \argmin_{\mb{x},{\mb{z}}} \alpha f(\gamma \mb{x}) + \alpha g(\delta \mb{z}) \; \text{ s.t. } \; A \gamma \mb{x} = \delta \mb{z} \;, \label{eq:admmgrphp1} \end{equation} which can be expressed as a graph form problem in standard form as \begin{equation} \argmin_{\mb{x},{\mb{z}}} \tl{f}( \mb{x}) + \tl{g}(\mb{z}) \; \text{ such that } \; \tl{A} \mb{x} = \mb{z} \label{eq:admmgrphp1std} \end{equation} with \begin{gather} \tl{f}(\mb{x}) = \alpha f(\gamma \mb{x}) \quad \tl{g}(\mb{z}) = \alpha g(\delta \mb{z}) \quad \tl{A} = \delta^{-1} \gamma A \;. \end{gather} The Lagrangian is \begin{align} \tilde{L}({\mb{x}}, {\mb{z}}, {\mb{y}}) = \alpha f(\gamma \mb{x}) & + \alpha g(\delta \mb{z}) + \mb{y}^T (\delta^{-1} \gamma A \mb{x} - \mb{z}) \; , \end{align} and the primal and dual feasibility conditions are \begin{equation} \delta^{-1} \gamma A \tmb{x}^* - \tmb{z}^* = 0 \;, \end{equation} and \smallmath{ \begin{align} 0 \in \partial \tilde{L}(\cdot, \tilde{\mb{z}}^*, \tilde{\mb{y}}^*) \; \Rightarrow \; & 0\!\in\! \alpha \gamma [\partial f(\cdot)](\gamma \tilde{\mb{x}}^*) + \delta^{-1} \gamma A^T \tilde{\mb{y}}^* \\ 0 \in \partial \tilde{L}(\tilde{\mb{x}}^*, \cdot, \tilde{\mb{y}}^*) \; \Rightarrow \; & 0\! \in\! \alpha \delta [\partial g(\cdot)](\delta \tilde{\mb{z}}^*) - \tilde{\mb{y}}^* \end{align} } respectively. It is easily verified that if $\mb{x}^*$, $\mb{z}^*$, and $\mb{y}^*$ satisfy the optimality criteria \eq{admmlgprim}, \eqc{admmlgrduafsg}, and \eqc{admmlgrduagsg} for problem $P$, then \begin{gather} \tilde{\mb{x}}^* = \gamma^{-1} \mb{x}^* \quad \;\; \tilde{\mb{z}}^* = \delta^{-1} \mb{z}^* \quad \;\; \tilde{\mb{y}}^* = \alpha \delta \mb{y}^* \label{eq:optscalinggrph} \end{gather} satisfy the primal and dual feasibility criteria for $\tilde{P}$.
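For instance, primal feasibility follows immediately on substituting~\eq{optscalinggrph} into the scaled constraint (the remaining conditions are verified in the same way):
\begin{equation*}
\delta^{-1} \gamma A \tilde{\mb{x}}^* - \tilde{\mb{z}}^* = \delta^{-1} \gamma A \gamma^{-1} \mb{x}^* - \delta^{-1} \mb{z}^* = \delta^{-1} \left( A \mb{x}^* - \mb{z}^* \right) = 0 \;.
\end{equation*}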
The augmented Lagrangian for $\tilde{P}$ is \smallmath{ \begin{align} \tilde{L}_{\tilde{\rho}}(\mb{x}, \mb{z}, \mb{y}) & = \alpha f(\gamma \mb{x}) + \alpha g(\delta \mb{z}) \nonumber \\ &+ \alpha \left(\alpha^{-1} \delta^{-1} \mb{y}^T \right) ( \gamma A \mb{x} - \delta \mb{z} ) \nonumber \\ &+ \alpha \left(\alpha^{-1} \delta^{-2} \tilde{\rho} \right) \frac{1}{2} \norm{ \gamma A \mb{x} - \delta \mb{z}}_2^2 \;, \end{align} } so that setting \begin{equation} \tilde{\rho} = \alpha \delta^2 \rho \label{eq:rhoscalegrph} \end{equation} gives \begin{equation} \tilde{L}_{\tilde{\rho}}(\mb{x}, \mb{z}, \mb{y}) = \alpha L_{\rho}\left(\gamma \mb{x}, \delta \mb{z}, \alpha^{-1} \delta^{-1} \mb{y}\right) \;. \label{eq:tlscalegrph} \end{equation} \section{BPDN Scaling Properties} The scaling properties of the BPDN problem with respect to the scalar multiplication of the input signal $\bvsigma$ depend on whether the dictionary is considered to have fixed scaling or scale with the signal. The former is the more common situation since the dictionary is usually normalised, but the latter situation does occur in an \emph{endogenous} sparse representation~\cite{dyer-2013-greedy}, in which the signal is also used as the dictionary (with constraints on the sparse representation to avoid the trivial solution), usually without normalisation of the dictionary. \subsection{Fixed Dictionary} First, define problem $\tl{P}$ with signal $\bvsigma$ scaled by $\delta$, \begin{equation} \argmin_{\mb{x},\mb{z}} \frac{1}{2} \norm{D \mb{x} - \delta \bvsigma}_2^2 + \delta \lambda \norm{\mb{z}}_1 \text{ s.t. } \mb{x} = \mb{z} \;, \end{equation} representing the most common case in which the columns of $D$ are normalised and $D$ does not scale with $\bvsigma$. The corresponding Lagrangian is \begin{align} \tl{L}(\mb{x}, \mb{z}, \mb{y}) &= \frac{1}{2} \norm{D \mb{x} - \delta \bvsigma}_2^2 + \delta \lambda \norm{\mb{z}}_1 + \mb{y}^T (\mb{x} - \mb{z}) \nonumber \\ &= \frac{1}{2} \norm{D \delta \delta^{-1} \mb{x} - \delta \bvsigma}_2^2 + \delta \lambda \norm{\delta \delta^{-1} \mb{z}}_1 \nonumber \\ & \hspace{9.5em} + \mb{y}^T (\delta \delta^{-1}\mb{x} - \delta \delta^{-1}\mb{z}) \nonumber \\ &= \delta^2 L(\delta^{-1}\mb{x}, \delta^{-1}\mb{z}, \delta^{-1}\mb{y}) \;. \end{align} Comparing with~\eq{tlscale} it is clear that we need to set \begin{gather} \alpha = \delta^2 \quad \gamma = \delta^{-1} \quad \beta = \delta \label{eq:bpdnadmmscale1} \end{gather} to use the ADMM scaling results of~\sctn{admmscale}. In this case the scaling behaviour is such that changing $\delta$ does \emph{not} alter the ratio of primal and dual residuals. Note that this merely implies that the adaptive penalty parameter policy with standard residuals is not \emph{guaranteed} to fail when the signal is scaled; it does not follow that the problem scaling is such that normalised residuals are not necessary. \subsection{Dictionary Scales with Signal} In the second form of scaling, $D$ is not normalised, and scales linearly with $\bvsigma$. In this case problem $\tl{P}$ with signal $\bvsigma$ and dictionary $D$ scaled by $\delta$ is \begin{equation} \argmin_{\mb{x},\mb{z}} \frac{1}{2} \norm{\delta D \mb{x} - \delta \bvsigma}_2^2 + \delta^2 \lambda \norm{\mb{z}}_1 \text{ s.t. } \mb{x} = \mb{z} \;.
\end{equation} The corresponding Lagrangian is \begin{align} \tl{L}(\mb{x}, \mb{z}, \mb{y}) &= \frac{1}{2} \norm{\delta D \mb{x} - \delta \bvsigma}_2^2 + \delta^2 \lambda \norm{\mb{z}}_1 + \mb{y}^T (\mb{x} - \mb{z}) \nonumber \\ &= \frac{\delta^2}{2} \norm{D \mb{x} - \bvsigma}_2^2 + \delta^2 \lambda \norm{\mb{z}}_1 \nonumber \\ & \hspace{7.3em} + \delta^2 \delta^{-2} \mb{y}^T ( \mb{x} - \mb{z}) \nonumber \\ &= \delta^2 L(\mb{x}, \mb{z}, \delta^{-2}\mb{y}) \;. \end{align} Comparing with~\eq{tlscale} it is clear that we need to set \begin{gather} \alpha = \delta^2 \quad \gamma = 1 \quad \beta = 1 \end{gather} to use the ADMM scaling results of~\sctn{admmscale}. In this case the scaling behaviour is such that changing $\delta$ \emph{does} alter the ratio of primal and dual residuals, and the adaptive penalty parameter policy with standard residuals is guaranteed to perform poorly for all but a restricted range of signal scaling values $\delta$. \section{A Degenerate Case} An unusual degenerate case involving the TV-$\ell_1$ problem~\cite{alliney-1992-digital} illustrates that even the proposed normalised definitions of residuals cannot always be applied without analysis of the specific problem. This problem can be written as \begin{equation} \argmin_{\mb{x}} \| \mb{x} - \mb{s} \|_1 + \lambda \| \sqrt{(G_0 \mb{x})^2 + (G_1 \mb{x})^2}\|_1 \;, \end{equation} which can be expressed in standard ADMM form~\eq{admmprob} (see~\cite[Sec. 2.4.4]{esser-2010-primal}) with \begin{align} \!\!\! f(\mb{x}) = 0 \;\;\;\;\; g(\mb{z}) = \| \mb{z}_s - \mb{s} \|_1 + \lambda \| \sqrt{\mb{z}_0^2 + \mb{z}_1^2}\|_1 \nonumber \\ A = \left( \begin{array}{c} G_0 \\ G_1 \\ I \end{array} \right) \; B = -I \;\;\; \mb{c} = \left( \begin{array}{c} 0 \\ 0 \\ \mb{s} \end{array} \right) \; \mb{z} = \left( \begin{array}{c} \mb{z}_0 \\ \mb{z}_1 \\ \mb{z}_s \end{array} \right) . \end{align} Since $f(\mb{x}) = 0$, dual feasibility condition~\eq{admmlgrduafsg} is simply $A^T \mb{y}^* = 0$ and~\eq{dualres0}, from which the definition~\eq{dualres} is derived, degenerates to \begin{equation} \rho A^T B (\mb{z}^{(k+1)} - \mb{z}^{(k)}) = A^T \mb{y}^{(k+1)} \;. \label{eq:dualresdegen0} \end{equation} Clearly $A^T \mb{y}^{(k+1)}$ is unsuitable either as a normalisation term for the dual residual or as a factor in the stopping tolerance. \bibliographystyle{IEEEtranD}
\section{Introduction} \label{sct:Cyclic Introduction} The spectral action \cite{CC96,CC97} is one of the key instruments in the applications of noncommutative geometry to particle physics. With inner fluctuations \cite{C96} of a noncommutative manifold playing the role of gauge potentials, the spectral action principle yields the corresponding Lagrangians. Indeed, the asymptotic behavior of the spectral action for small momenta leads to experimentally testable field theories, by interpreting the spectral action as a classical action and applying the usual renormalization group techniques. In particular, this provides the simplest way known to geometrically explain the dynamics and interactions of the gauge bosons and the Higgs boson in the Standard Model Lagrangian as an effective field theory \cite{CCM07} (see also the textbooks \cite{CM07,Sui14}). More general noncommutative manifolds (spectral triples) can also be captured by the spectral action principle, leading to models beyond the Standard Model as well. As shown in \cite{CC06}, if one restricts to the scale-invariant part, one may naturally identify a Yang--Mills term and a Chern--Simons term that elegantly appear in the spectral action. From the perspective of quantum field theory, the appearance of these field-theoretic action functionals sparks hope that we might find a way to go beyond the classical framework provided by the spectral action principle. It is thus a natural question whether we can also field-theoretically describe the full spectral action, without resorting to the scale-invariant part. Motivated by this, we study the spectral action when it is expanded in terms of inner fluctuations associated to an arbitrary noncommutative manifold, without resorting to heat-kernel techniques. Indeed, the latter are not always available and an understanding of the full spectral action could provide deeper insight into how gauge theories originate from noncommutative geometry. Let us now give a more precise description of our setup. We let $(\mathcal{A},\H,D)$ be a finitely summable spectral triple. If $f : \mathbb R \to \mathbb C$ is a suitably nice function, we may define the spectral action \cite{CC97}: $$ \operatorname{Tr} (f(D)). $$ An inner fluctuation, as explained in \cite{C96}, is given by a Hermitian universal one-form \begin{align}\label{eq:A uitgeschreven} A=\sum_{j=1}^n a_jdb_j\in\Omega^1(\mathcal{A}), \end{align} for elements $a_j,b_j\in\mathcal{A}$. The terminology `fluctuation' comes from representing $A$ on $\H$ as \begin{align}\label{eq:V uitgeschreven} V:=\pi_D(A)=\sum_{j=1}^n a_j[D,b_j]\in\mathcal{B}(\H)_\textnormal{sa}, \end{align} and fluctuating $D$ to $D+V$ in the spectral action. % The variation of the spectral action under the inner fluctuation is then given by \begin{align}\label{variation of SA} \operatorname{Tr}(f(D+V))-\operatorname{Tr}(f(D)). \end{align} As spectral triples can be understood as noncommutative spin$^\text{c}$ manifolds (see \cite{C08}) encoding the gauge fields as an inner structure, one could hope that perturbations of the spectral action could be understood in terms of noncommutative versions of geometrical, gauge theoretical concepts. Hence we would like to express \eqref{variation of SA} in terms of universal forms constructed from $A$. To express an action functional in terms of universal forms, one is naturally led to cyclic cohomology.
As it turns out, hidden inside the spectral action we will identify an odd $(b,B)$-cocycle $(\tilde\psi_1,\tilde\psi_3,\ldots)$ and an even $(b,B)$-cocycle $(\phi_2,\phi_4,\ldots)$ for which $b\phi_{2k}=B\phi_{2k}=0$, i.e., each Hochschild cochain $\phi_{2k}$ forms its own $(b,B)$-cocycle $(0,\ldots,0,\phi_{2k},0,\ldots)$. On the other hand, the odd $(b,B)$-cocycle $(\tilde\psi_{2k+1})$ is truly infinite (in the sense of \cite{C94}). The key result is that for suitable $f:\mathbb R\to\mathbb C$ we may expand \begin{align}\label{eq:expansion intro} \operatorname{Tr}(f(D+V)-f(D))=\sum_{k=1}^\infty\left(\int_{\psi_{2k-1}}\mathrm{cs}_{2k-1}(A)+\frac{1}{2k}\int_{\phi_{2k}}F^{k}\right), \end{align} in which the series converges absolutely. Here $\psi_{2k-1}$ is a scalar multiple of $\tilde\psi_{2k-1}$, $F_t=tdA+t^2A^2$, so that $F=F_1$ is the curvature of $A$, and $\mathrm{cs}_{2k-1}(A)=\int_0^1 AF_t^{k-1}dt$ is a generalized noncommutative Chern--Simons form. As already mentioned, a similar result was shown earlier to hold for the scale-invariant part $\zeta_D(0)$ of the spectral action. Indeed, Connes and Chamseddine \cite{CC06} expressed the variation of the scale-invariant part in dimension $\leq 4$ as \begin{equation*} \zeta_{D+V}(0) - \zeta_D(0) = - \frac 1 4\int_{\tau_0} (dA+A^2)^2 + \frac 12 \int_\psi \left(A d A + \frac 2 3 A^3\right), \end{equation*} for a certain Hochschild 4-cocycle $\tau_0$ and cyclic 3-cocycle $\psi$. It became clear in \cite{NS21} that an extension of this result to the full spectral action is best done by using multiple operator integrals \cite{ST} instead of residues. It allows for stronger analytical results, and in particular allows one to go beyond dimension $4$. Moreover, for our analysis of the cocycle structure that appears in the full spectral action we take the Taylor series expansion as a starting point, and for working with such expansions multiple operator integrals provide the ideal tools, as shown by the strong results in \cite{ACDS09,CS18,Skr13,Sui11}. In \cite{NS21} we pushed these results further still, by proving estimates and continuity properties for the multiple operator integral when the self-adjoint operator has an $s$-summable resolvent, thereby supplying the discussion here with a strong functional analytic foundation. This article will start with a review of the results of \cite{NS21} without involving multiple operator integration techniques. Through the use of abstract brackets, we will investigate the interesting cyclic structure that exists within the spectral action, with all analytical details taking place under the hood. We work out two interesting possibilities for application of our main result and the techniques used to obtain it. The first application is to index theory. One can show that the $(b,B)$-cocycles $\phi$ and $\psi$ are \textit{entire} in the sense of \cite{C88a}. This makes it meaningful to analyze their pairing with K-theory, which we find to be trivial in Section \ref{sct:vanishing pairing}. The second application is to quantization. In Section \ref{sct:One-Loop}, while evading analytical difficulties, we will take a first step towards the quantization of the spectral action within the framework of spectral triples.
Using the asymptotic expansion proved in Theorem \ref{thm:asymptotic expansion}, and some basic quantum field theoretic techniques, we will propose a one-loop quantum effective spectral action and show that it satisfies a similar expansion formula, featuring in particular a new pair of $(b,B)$-cocycles. Although the main aim of this paper is to give a simple review of the results of \cite{NS21} and \cite{NS21b}, some essential novelty is also provided. In order to connect to the quantization results of \cite{NS21b}, the results of \cite{NS21} are slightly generalized as well as put into context. Moreover, this paper gives a mathematically precise underpinning of the results presented in \cite{NS21b}, which was geared towards a physics audience. We hope that the discussion presented here is clear to mathematicians with or without affinity to physics. \section{Taylor expansion of the spectral action} Consider a finitely summable spectral triple $(\mathcal{A},\H,D)$ (in the sense that for some $s$ the operator $(i-D)^{-s}$ is trace-class). Given the fluctuations of $D$ to $D+V$ as explained in the introduction, we are interested in a Taylor expansion of the spectral action: \begin{align}\label{eq:spectraction brackets} \operatorname{Tr}(f(D+V)-f(D)) &=\sum_{n=1}^\infty\frac{1}{n!}\frac{d^n}{dt^n}\operatorname{Tr}(f(D+tV))\big|_{t=0}\nonumber\\ &=\sum_{n=1}^\infty\frac{1}{n}\br{V,\ldots,V}, \end{align} where $\br{V,\ldots,V}$ is a notation for ($1/(n-1)!$ times) the $n^{\text{th}}$ derivative of the spectral action, defined below, and dependent on $f$ and $D$. Such an expansion exists under varying assumptions on $f$, $D$, and $V$, see for instance \cite{Han06,Sui11,Skr13,NSkr21,NS21}. When we are interested in the inner fluctuations of the form $V = \pi_D(A)$ as in Equation \eqref{eq:V uitgeschreven}, a convenient function class in which $f$ should lie is given as in \cite{NS21} by \begin{align}\label{eq:function class} \mathcal{E}_s^{\gamma}:=\left\{f\in C^\infty \mid \begin{aligned} &\text{there exists $C_f\geq 1$ s.t. } \|\widehat{(fu^m)^{(n)}}\|_1\leq (C_f)^{n+1}n!^\gamma \\ &\text{for all $m=0,\ldots,s$ and $n\in\mathbb N_0$} \end{aligned} \right\}, \end{align} for $\gamma\in (0,1]$ a number, and $s$ the summability of the pertinent spectral triple. Indeed, as shown in \cite{NS21}, if $f \in \mathcal{E}_s^{\gamma}$ we have good control over the expansion appearing on the right-hand side of \eqref{eq:spectraction brackets}. For our present expository purposes, however, it is sufficient to assume that $f'$ is compactly supported and analytic in a region of $\mathbb C$ containing a rectifiable curve $\Gamma$ which surrounds the support of $f$ in $\mathbb R$. In this case we have \begin{align}\label{eq:br cycl} \br{V_1,\ldots,V_n}= \frac{1}{2\pi i}\oint_\Gamma f'(z) \operatorname{Tr}\left(\prod_{j=1}^n V_j(z-D)^{-1}\right) dz. \end{align} A concrete expression can also be obtained in terms of divided differences of $f$. Indeed, for a self-adjoint operator $D$ in $\H$ with compact resolvent, we let $\varphi_1,\varphi_2,\ldots$ be an orthonormal basis of eigenvectors of $D$, with corresponding eigenvalues $\lambda_1,\lambda_2,\ldots$. Recall Cauchy's integral formula for divided differences \cite[Chapter I.1]{Don74}: $$ g[x_0, \ldots, x_n] = \frac{1}{2\pi i} \oint \frac{g(z) }{(z-x_0)\cdots (z-x_n)} dz, $$ with the contour enclosing the points $x_i$.
This then yields \begin{align} \br{V,\ldots,V} &=\sum_{i_1,\ldots,i_n\in\mathbb N}f'[\lambda_{i_1},\ldots,\lambda_{i_n}]V_{i_1i_2}\cdots V_{i_{n-1}i_n}V_{i_{n}i_1},\label{eq:SA divdiff} \end{align} where $V_{kl}:=\p{\varphi_k}{V\varphi_l}$ denote the matrix elements of $V$. This formula appears in \cite[Corollary 3.6]{Han06} and, in higher generality, in \cite[Theorem 18]{Sui11}. The formula \eqref{eq:SA divdiff} gives a very concrete way to calculate derivatives of the spectral action, as well as to calculate the Taylor series of a perturbation of the spectral action. For our algebraic results we only need two simple properties of the bracket $\br{\cdot}$, stated in the following lemma. \begin{lem}\label{cycl bracket} For $V_1,\ldots,V_n\in\mathcal{B}(\H)$ and $a\in\mathcal{A}$ we have \begin{enumerate}[label=\textnormal{(\Roman*)}] \item $\br{V_1,\ldots,V_n}=\br{V_n,V_1,\ldots,V_{n-1}},$\label{cyclicity} \item $ \br{aV_1,V_2,\ldots,V_n}-\br{V_1,\ldots,V_{n-1},V_{n}a}=\br{V_1,\ldots,V_{n},[D,a]}$.\label{commutation} \end{enumerate} \end{lem} \begin{proof} We will omit all analytical details and give a proof for finite-dimensional Hilbert spaces only. The full proof involving multiple operator integrals can be found in \cite{NS21} (as Lemma 14). In finite dimensions we may use formula \eqref{eq:br cycl} for the bracket. Clearly (I) then follows directly from the tracial property. Note that the left-hand side of equality (II) comes down to the commutator of $a$ with the resolvent $(z-D)^{-1}$, for which we have the equality $$ (z-D)^{-1} a - a(z-D)^{-1} = (z-D)^{-1} [D,a] (z-D)^{-1} \;. $$ This readily leads to the right-hand side in (II). \end{proof} \section{Cyclic cocycles in the spectral action} We now generalize a little and consider a collection of functions $\brr{\cdot} : \mathcal{B}(\H)^{\times n}\to\mathbb R$, $n\in\mathbb N$, satisfying \begin{enumerate}[label=\textnormal{(\Roman*)}] \item $\brr{V_1,\ldots,V_n}~=~\brr{V_n,V_1,\ldots,V_{n-1}},$\label{cyclicity general} \item $\brr{aV_1,V_2,\ldots,V_n}-\brr{V_1,\ldots,V_{n-1},V_na}~=~\brr{V_1,\ldots,V_n,[D,a]}$.\label{commutation general} \end{enumerate} In view of Lemma \ref{cycl bracket} above, the brackets $\br{\cdot}$ that appear in the Taylor expansion of the spectral action form a special case of these generalized brackets $\brr{\cdot}$, and of course form the key motivation for introducing them. However, such structures pop up in other places as well, for instance \cite{Liu22,GSW}, cf. \cite[Proposition 3.2 and Remark 3.2]{Hock}. In Section \ref{sct:One-Loop}, we will introduce yet another instance of $\brr{\cdot}$, in order to obtain one-loop corrections. Therefore, in contrast to \cite{NS21}, the following discussion will involve the abstract bracket $\brr{\cdot}$ instead of the explicit $\br{\cdot}$. \subsection{Hochschild and cyclic cocycles} When the above brackets $\brr{\cdot}$ are evaluated at one-forms $a[D,b]$ associated to a spectral triple, the relations (I) and (II) can be translated nicely in terms of the coboundary operators appearing in cyclic cohomology. This is very similar to the structure appearing in the context of index theory, see for instance \cite{GS89,Hig06}. Let us start by recalling the definition of Hochschild cochains and the boundary operators $b$ and $B$ from \cite{C85}.
\begin{defi} If $\mathcal{A}$ is an algebra, and $n\in\mathbb N_0$, we define the space of \textit{Hochschild $n$-cochains}, denoted by $\mathcal{C}^n(\mathcal{A})$, as the space of $(n+1)$-linear functionals $\phi$ on $\mathcal{A}$ with the property that if $a_j =1$ for some $j \geq 1$, then $\phi(a_0,\ldots,a_n) = 0$. \end{defi} For such cochains we may use, as in \cite{C94}, an integral notation on universal differential forms that is defined by linear extension of $$\int_{\phi}a_0da_1\cdots da_n:= \phi(a_0,a_1,\ldots,a_n).$$ \begin{defi} Define operators $b : \mathcal{C}^{n}(\mathcal{A}) \to \mathcal{C}^{n+1}(\mathcal{A})$ and $B: \mathcal{C}^{n+1}(\mathcal{A}) \to \mathcal{C}^{n}(\mathcal{A})$ by \begin{align*} b\phi(a_0, a_1,\dots, a_{n+1}) :=& \sum_{j=0}^n (-1)^j \phi(a_0,\dots, a_j a_{j+1},\dots, a_{n+1})\\ & + (-1)^{n+1} \phi(a_{n+1} a_0, a_1,\dots, a_n) ,\\ B \phi(a_0 ,a_1, \ldots, a_n) :=& \sum_{j=0}^n (-1)^{nj}\phi(1,a_j,a_{j+1},\ldots, a_{j-1}). \end{align*} \end{defi} Note that $B = \mathbf{A} B_0$ in terms of the operator $ \mathbf{A}$ of cyclic anti-symmetrization and the operator defined by $B_0 \phi (a_0, a_1, \ldots, a_n) = \phi(1,a_0, a_1,\ldots, a_n)$. Note that in integral notation we simply have $$ \int_{B_0 \phi} a_0 d a_1 \cdots d a_n = \int_{\phi} da_0 da_1 \cdots da_n. $$ One may check that the pair $(b,B)$ defines a double complex, \textit{i.e.} $b^2 = 0,~ B^2=0,$ and $bB +Bb =0$. Hochschild cohomology now arises as the cohomology of the complex $(\mathcal{C}^n(\mathcal{A}),b)$. In contrast, we will be using \textit{periodic cyclic cohomology}, which is defined as the cohomology of the totalization of the $(b,B)$-complex. That is to say, \begin{align*} \mathcal{C}^\textup{ev}(\mathcal{A}) = \bigoplus_k \mathcal{C}^{2k} (\mathcal{A}) ; \qquad \mathcal{C}^{\textup{odd}}(\mathcal{A}) = \bigoplus_k \mathcal{C}^{2k+1} (\mathcal{A}), \end{align*} form a complex with differential $b+B$ and the cohomology of this complex is called periodic cyclic cohomology. We will also refer to a periodic cyclic cocycle as a cyclic cocycle or a $(b,B)$-cocycle. Explicitly, an odd $(b,B)$-cocycle is thus given by a sequence $$ (\phi_1, \phi_3, \phi_5, \ldots), $$ where $\phi_{2k+1} \in \mathcal{C}^{2k+1}(\mathcal{A})$ and $$ b \phi_{2k+1} + B \phi_{2k+3} = 0 , $$ for all $k \geq 0$, and also $B \phi_1 = 0$. An analogous statement holds for even $(b,B)$-cocycles. \subsection{Cyclic cocycles associated to the brackets} \label{sct:Cyclic cocycles associated to multiple operator integrals} In terms of the generic bracket $\brr{\cdot}$ satisfying \ref{cyclicity general} and \ref{commutation general}, we define the following Hochschild $n$-cochain: \begin{align}\label{eq:def phi_n} \phi_n(a_0,\ldots,a_n):=\brr{a_0[D,a_1],[D,a_2],\ldots,[D,a_{n}]} \qquad (a_0, \ldots, a_n \in \mathcal{A}). \end{align} We easily see that $B_0\phi_n$ is invariant under cyclic permutations, so that $B\phi_n=nB_0\phi_n$ for odd $n$ and $B\phi_n=0$ for even $n$. Also, $\phi_n(a_0,\ldots,a_n)=0$ when $a_j=1$ for some $j\geq1$. We put $\phi_0:=0$. \begin{lem}\label{lem:b} We have $b\phi_n=\phi_{n+1}$ for odd $n$ and we have $b\phi_n=0$ for even $n$. \end{lem} \begin{proof} We only consider the case $n=1$ while referring to \cite[Lemma 17]{NS21} for the proof of the general case. 
We combine the definition of the $b$-operator with Leibniz' rule for $[D,\cdot]$ to obtain: \begin{align*} \int_{b \phi_1} a_0 d a_1 d a_2 & = \brr{a_0 a_1 [D,a_2] } - \brr{a_0 [D,a_1 a_2]} + \brr{a_2 a_0 [D,a_1]}\\ &= - \brr{a_0 [D,a_1 ]a_2} + \brr{a_2 a_0 [D,a_1]} = \brr{a_0 [D,a_1],[D,a_2]}, \end{align*} where we used (II) for the last equality. \end{proof} \begin{lem}\label{lem:c} Let $n$ be even. We have $bB_0\phi_n=2\phi_n-B_0\phi_{n+1}$. \end{lem} \begin{proof} Again we only consider the first case $n=2$ while referring to \cite[Lemma 17]{NS21} for the proof of the general case: \begin{align*} & \int_{b B_0 \phi_2} a_0 d a_1 d a_2 = \int_{B_0 \phi_2} a_0 a_1 d a_2 - \int_{B_0 \phi_2} a_0 d (a_1 a_2) +\int_{B_0 \phi_2} a_2 a_0 d a_1 \\ &\qquad= \brr{[D,a_0a_1],[D,a_2]} - \brr{[D,a_0],[D,a_1a_2]} + \brr{[D,a_2 a_0],[D,a_1]}\\ & \qquad = \cdots = 2 \brr{a_0[D,a_1],[D,a_2]} - \brr{[D,a_0],[D,a_1],[D,a_2]}, \end{align*} combining Leibniz' rule with (I) and (II). \end{proof} Motivated by these results we define \begin{equation} \label{eq:psi} \psi_{2k-1}:=\phi_{2k-1}-\tfrac{1}{2}B_0\phi_{2k}, \end{equation} so that $$B\psi_{2k+1}=2(2k+1)b\psi_{2k-1}.$$ We can rephrase this property in terms of the $(b,B)$-complex as follows. \begin{prop} \label{prop:bB} Let $\phi_n$ and $\psi_{2k-1}$ be as defined above and set $$\tilde{\psi}_{2k-1}:=(-1)^{k-1}\frac{(k-1)!}{(2k-1)!}\psi_{2k-1}\,.$$ \begin{enumerate}[label=\textnormal{(\roman*)}] \item The sequence $(\phi_{2k})$ is a $(b,B)$-cocycle and each $\phi_{2k}$ defines an even Hochschild cocycle: $b \phi_{2k} = 0$. \item The sequence $(\tilde \psi_{2k-1})$ is an odd $(b,B)$-cocycle. \end{enumerate} \end{prop} \subsection{The brackets as noncommutative integrals} We will now describe how brackets $\brr{V,\ldots, V}$ can be written as noncommutative integrals of certain universal differential forms defined in terms of $A = \sum a_j db_j\in \Omega^1(\mathcal{A})$, using only properties (I) and (II). At first order not much exciting happens and we simply have $$ \brr{V} =\sum_j \brr{a_j[D,b_j]}= \sum_j\int_{\phi_1} a_j db_j = \int_{\phi_1} A. $$ More interestingly, at second order we find using property (II) of the bracket that \begin{align*} \brr{V,V } &=\sum_{j,k} \brr{ a_j[D,b_j] , a_k[D,b_k] } \\ &=\sum_{j,k} \brr{ a_j[D,b_j] a_k,[D,b_k] } + \sum_{j,k} \brr{ a_j[D,b_j] ,[D,a_k],[D,b_k] }\\ &= \int_{\phi_2} A^ 2 + \int_{\phi_3} A d A. \end{align*} Continuing like this, while only using property (II) of the bracket, we find \begin{align*} \brr{V,V,V}&=\int_{\phi_3}A^3+\int_{\phi_4}AdAA+\int_{\phi_5}AdAdA,\\ \brr{V,V,V,V}&=\int_{\phi_4}A^4+\int_{\phi_5}(A^3dA+AdAA^2)+\int_{\phi_6}AdAdAA+\int_{\phi_7}AdAdAdA. \end{align*} This implies that, at least when the infinite sum on the left-hand side makes sense: \begin{align*} \sum_n \frac 1 n \brr{V,\ldots, V} =&\int_{\phi_1} A+\frac{1}{2}\int_{\phi_2}A^2+\int_{\phi_3}\Big(\frac{1}{2}AdA+\frac{1}{3}A^3\Big)\\ &+\int_{\phi_4}\Big(\frac{1}{3}AdAA+\frac{1}{4}A^4\Big)+\ldots, \end{align*} where the dots indicate terms of degree 5 and higher. Using $\phi_{2k-1}=\psi_{2k-1}+\frac{1}{2}B_0\phi_{2k}$, this becomes \begin{align*} \sum_n \frac 1 n \brr{V,\ldots, V}=&\int_{\psi_1} A+\frac{1}{2}\int_{\phi_2}(A^2+dA)+\int_{\psi_3}\Big(\frac{1}{2}AdA+\frac{1}{3}A^3\Big)\\ &+\frac{1}{4}\int_{\phi_4}\Big(dAdA+\frac{2}{3}(dAA^2+AdAA+A^2dA)+A^4\Big)+\ldots.
\end{align*} Notice that, if $\phi_4$ were tracial, we would be able to identify the terms $dAA^2$, $AdAA$ and $A^2dA$, and thus obtain the Yang--Mills form $F^2=(dA+A^2)^2$, under the fourth integral. In the general case, however, cyclic permutations under $\int_\phi$ produce correction terms, of which one needs to keep track. Indeed, using \cite[Corollary 24]{NS21} we may re-order the integrands to yield \begin{align*} \sum_n \frac 1 n \brr{V,\ldots, V}=&\int_{\psi_1}A+\int_{\phi_2}\tfrac{1}{2}(dA+A^2)+\int_{\psi_3}(\tfrac{1}{2}dAA+\tfrac{1}{3}A^3)+\tfrac{1}{4}\int_{\phi_4}(dA+A^2)^2\nonumber\\ &+\int_{\psi_5}(\tfrac{1}{3}(dA)^2A+\tfrac{1}{2}dAA^3+\tfrac{1}{5}A^5)+\tfrac{1}{6}\int_{\phi_6}(dA+A^2)^3+\ldots, \end{align*} where the dots indicate terms of degree 7 and higher. Writing $F=dA+A^2$ and $\mathrm{cs}_1(A):=A$, $\mathrm{cs}_3(A):=\tfrac12 dAA+\tfrac13 A^3$, etc., we can already discern our desired result in low orders. \bigskip As a preparation for the general result, we briefly recall from \cite{Qui90} the definition of Chern--Simons forms of arbitrary degree. \begin{defi} \label{defi:cs} The (universal) \textbf{Chern--Simons form} of degree $2k-1$ is given for $A \in \Omega^1(\mathcal{A})$ by \begin{equation}\label{eq:cs} \mathrm{cs}_{2k-1}(A) := \int_0^1 A (F_t)^{k-1} \,dt, \end{equation} where $F_t = t dA + t^ 2 A^2$ is the curvature two-form of the (connection) one-form $A_t = t A$. \end{defi} \begin{exam} For the first three Chern--Simons forms one easily derives the following explicit expressions: \begin{gather*} \mathrm{cs}_1(A) = A; \qquad \mathrm{cs}_3(A) = \frac 12 \left( A dA + \frac 2 3 A^3 \right);\\ \mathrm{cs}_5(A) = \frac 13 \left( A (dA)^2 + \frac 3 4 A dA A^ 2 + \frac 3 4 A^3 dA + \frac 3 5 A^5 \right). \end{gather*} \end{exam} \subsection{Cyclic cocycles in the Taylor expansion of the spectral action} We now apply the above results to the brackets appearing in the Taylor expansion of the spectral action: $$ \operatorname{Tr}(f(D+V)-f(D)) = \sum_{n=1}^\infty \frac 1 n \br{V, \cdots, V}. $$ In order to control the full Taylor expansion of the spectral action we naturally need a growth condition on the derivatives of the function $f$, and this is accomplished by considering the class $\mathcal{E}_s^\gamma$ defined in \eqref{eq:function class}. The following result is \cite[Theorem 27]{NS21}. \begin{thm}\label{thm:main thm} Let $(\mathcal{A},\H,D)$ be an $s$-summable spectral triple, and let $f\in\mathcal{E}_s^{\gamma}$ for $\gamma\in(0,1)$. The spectral action fluctuated by $V=\pi_D(A)\in\Omega_D^1(\mathcal{A})_\textnormal{sa}$ can be written as \begin{align*} \operatorname{Tr}(f(D+V)-f(D)) = \sum_{k=1}^\infty \left( \int_{\psi_{2k-1}} \mathrm{cs}_{2k-1} (A) +\frac 1 {2k} \int_{\phi_{2k}} F^{k} \right), \end{align*} where the series converges absolutely. \end{thm} Under less restrictive conditions on the function $f$ we also have the following asymptotic version of this result \cite[Proposition 28]{NS21}. \begin{thm}\label{thm:asymptotic expansion} Let $(\mathcal{A},\H,D)$ be a spectral triple, and let $\brr{\cdot}$ satisfy \ref{cyclicity general} and \ref{commutation general}, with associated cyclic cocycles $\phi$ and $\tilde\psi$.
For $A\in\Omega^1(\mathcal{A})$ and $V=\pi_D(A)$, we asymptotically have $$\sum_n \frac 1 n \brr{V,\ldots, V}\sim\sum_{k=1}^\infty\bigg(\int_{\psi_{2k-1}}\mathrm{cs}_{2k-1}(A)+\frac{1}{2k}\int_{\phi_{2k}} F^{k}\bigg),$$ by which we mean that, for every $K\in\mathbb N$, there exist forms $\omega_l\in\Omega^l(\mathcal{A})$ for $l=K+1,\ldots,2K+1$ such that \begin{align*} \sum_{n=1}^K\frac{1}{n}\!\brr{V,\ldots,V}&-\sum_{k=1}^K\left(\int_{\psi_{2k-1}}\mathrm{cs}_{2k-1}(A)+\frac{1}{2k}\int_{\phi_{2k}}F^k\right) =\sum_{l=K+1}^{2K+1}\int_{\phi_l}\omega_l. \end{align*} \end{thm} In particular, by taking $\brr{\cdot}=\br{\cdot}$, we obtain the following corollary. \begin{cor} For $f\in C^\infty$ and $V=\pi_D(A)\in\Omega^1_D(\mathcal{A})_\textnormal{sa}$ such that the Taylor expansion of the spectral action converges, we asymptotically have \begin{align*} \operatorname{Tr}(f(D+V)-f(D))&=\sum_{n=1}^\infty \frac{1}{n!}\frac{d^n}{dt^n}\operatorname{Tr}(f(D+tV))\Big|_{t=0}\\ &\sim\sum_{k=1}^\infty\bigg(\int_{\psi_{2k-1}}\mathrm{cs}_{2k-1}(A)+\frac{1}{2k}\int_{\phi_{2k}} F^{k}\bigg). \end{align*} \end{cor} \subsection{Gauge invariance and the pairing with K-theory}\label{sct:vanishing pairing} Since the spectral action is a spectral invariant, it is in particular invariant under conjugation of $D$ by a unitary $U\in \mathcal{A}$. More generally, in the presence of an inner fluctuation we find that the spectral action is invariant under the transformation $$ D+V \mapsto U (D+V) U^* = D + V^U; \qquad V^U = U[D,U^*] + U V U^*. $$ This transformation also holds at the level of the universal forms, with a gauge transformation of the form $A \mapsto A^U = U d U^* + U A U^*$. Let us analyze the behavior of the Chern--Simons and Yang--Mills terms appearing in Theorem \ref{thm:main thm} under this gauge transformation, and derive an interesting consequence for the pairing of the odd $(b,B)$-cocycle $\tilde \psi$ with the odd K-theory group of $\mathcal{A}$. As an easy consequence of the fact that $\phi_{2k}$ is a Hochschild cocycle, we have \begin{lem} The Yang--Mills terms $\int_{\phi_{2k}} F^k$ with $F = dA +A^2$ are invariant under the gauge transformation $A \mapsto A^U$ for every $k \geq 1$. \end{lem} We are thus led to the conclusion that the sum of Chern--Simons forms is gauge invariant as well. Indeed, arguing as in \cite{CC06}, since both $\operatorname{Tr} (f(D+V))$ and the Yang--Mills terms are invariant under $V \mapsto V^U$, we find that, under the assumptions stated in Theorem \ref{thm:main thm}: $$ \sum_{k=0}^\infty \int_{\psi_{2k+1}} \mathrm{cs}_{2k+1}(A^U) = \sum_{k=0}^\infty \int_{\psi_{2k+1}} \mathrm{cs}_{2k+1} (A). $$ Each individual Chern--Simons form behaves non-trivially under a gauge transformation. Nevertheless, it turns out that we can conclude, just as in \cite{CC06}, that the pairing of the whole $(b,B)$-cocycle with K-theory is trivial. Since the $(b,B)$-cocycle $\tilde \psi$ is given as an infinite sequence, we should first carefully study the analytical behavior of $\tilde \psi$. In fact, we should show that it is an \textit{entire cyclic cocycle} in the sense of \cite{C88a} (see also \cite[Section IV.7.$\alpha$]{C94}). It turns out \cite[Lemma 36]{NS21} that our assumptions on the growth of the derivatives of $f$ ensure that the brackets define entire cyclic cocycles. \begin{lem} Fix $f\in\mathcal{E}_s^{\gamma}$ for $\gamma<1$ and equip $\mathcal{A}$ with the norm $\| a \|_1 = \| a \| + \| [D,a]\|$.
Then, for any bounded subset $\Sigma\subset\mathcal{A}$ there exists $C_\Sigma$ such that $$\left |\tilde{\psi}_{2k+1}(a_0,\ldots, a_{2k+1}) \right|\leq \frac{C_\Sigma}{k!},$$ for all $a_j\in\Sigma$. Hence, $\phi$ and $\tilde\psi$ are entire cyclic cocycles. \end{lem} We thus have the following interesting consequence of Theorem \ref{thm:main thm}. \begin{thm} Let $f\in\mathcal{E}_s^{\gamma}$ for $\gamma<1$. Then the pairing of the odd entire cyclic cocycle $\tilde \psi$ with $K_1(\mathcal{A})$ is trivial, \textit{i.e.} $$\langle U,\tilde\psi\rangle=(2\pi i)^{-1/2}\sum_{k=0}^\infty (-1)^{k}k!\tilde{\psi}_{2k+1}(U^*,U,\ldots,U^*,U)=0 $$ for all unitary $U\in \mathcal{A}$. \end{thm} \section{One-loop corrections to the spectral action} \label{sct:One-Loop} We now formulate a quantum version of the spectral action. To do this, we must first interpret the spectral action, expanded in terms of generalized Chern--Simons and Yang--Mills actions by Theorem \ref{thm:main thm}, as a classical action, which leads us naturally to a noncommutative geometric notion of a vertex. Enhanced with a spectral gauge propagator derived from the formalism of random matrices (and in particular, random finite noncommutative geometries), this gives us a concept of one-loop counterterms and a proposal for a one-loop \textit{quantum effective spectral action}, without leaving the spectral framework. We will show here that, at least in a finite-dimensional setting, these counterterms can again be written as Chern--Simons and Yang--Mills forms integrated over (quantum corrected) cyclic cocycles. We therefore discern a renormalization flow in the space of cyclic cocycles. \subsection{Conventions} We let $\varphi_1,\varphi_2,\ldots$ be an orthonormal basis of eigenvectors of $D$, with corresponding eigenvalues $\lambda_1,\lambda_2,\ldots$. For any $N\in\mathbb N$, we define \begin{align*} H_N:=(M_N)_\textnormal{sa},\quad M_N:=\textnormal{span}\left\{\ket{\varphi_i}\bra{\varphi_j}:~i,j\in\{1,\ldots,N\}\right\}, \end{align*} and endow $H_N$ with the Lebesgue measure on the coordinates $Q\mapsto {\mathrm{Re}}( Q_{ij})$ ($i\leq j$) and $Q\mapsto{\mathrm{Im}}(Q_{ij})$ ($i<j$). Here and in the following, $Q_{ij}:=\p{\varphi_i}{Q\varphi_j}$ are the matrix elements of $Q$. For simplicity, we will assume that the perturbations $V_1,\ldots,V_n$ are in $\cup_K H_K$. For us, a \textit{Feynman diagram} is a finite multigraph with a number of marked vertices of degree 1 called external vertices, all other vertices being called internal vertices or, by abuse of terminology, vertices. An edge, sometimes called a propagator, is called external if it connects to an external vertex, and internal otherwise. The external vertices are simply places for the external edges to attach to, and are often left out of the discussion. An $n$-point diagram is a Feynman diagram with $n$ external edges. A Feynman diagram is called one-particle-irreducible if any multigraph obtained by removing one of the internal edges is connected. \subsection{Diagrammatic expansion of the spectral action} \label{sect:sa} Viewing the spectral action as a classical action, and following the background field method, the vertices of degree $n$ in the corresponding quantum theory should correspond to $n^{\text{th}}$-order functional derivatives of the spectral action. However, in the paradigm of noncommutative geometry, a base manifold is absent, and functional derivatives do not exist in the local sense. Therefore, a more abstract notion of a vertex is needed.
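Although an abstract formalism is needed in general, in the finite-dimensional setting of this section the brackets themselves can be computed directly. As a purely illustrative aside (with an arbitrary test function and dimension of our own choosing), the following sketch evaluates the second-order bracket via the divided-difference formula \eqref{eq:SA divdiff} and checks it against a finite-difference derivative of the spectral action, in accordance with \eqref{eq:spectraction brackets}.
\begin{verbatim}
import numpy as np

# Check <V,V> = sum_{ij} f'[lam_i,lam_j] V_ij V_ji against the second
# derivative of t -> Tr f(D + t V) at t = 0. Since the bracket is
# 1/(n-1)! times the n-th derivative, for n = 2 they coincide.

rng = np.random.default_rng(1)
N = 6
lam = np.sort(rng.normal(size=N))       # eigenvalues of D
D = np.diag(lam)                        # D in its own eigenbasis
V = rng.normal(size=(N, N)); V = (V + V.T) / 2

f   = lambda x: np.exp(-x**2)           # an arbitrary smooth test function
fp  = lambda x: -2 * x * np.exp(-x**2)  # f'
fpp = lambda x: (4 * x**2 - 2) * np.exp(-x**2)

def fp_dd(a, b):
    # first divided difference of f', with the confluent case f''(a)
    return fpp(a) if np.isclose(a, b) else (fp(a) - fp(b)) / (a - b)

bracket2 = sum(fp_dd(lam[i], lam[j]) * V[i, j] * V[j, i]
               for i in range(N) for j in range(N))

trf = lambda M: np.sum(f(np.linalg.eigvalsh(M)))
h = 1e-4                                # central difference step in t
d2 = (trf(D + h * V) - 2 * trf(D) + trf(D - h * V)) / h**2

assert np.isclose(bracket2, d2, rtol=1e-4)
\end{verbatim}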
The brackets $\br{\cdot}$ from \eqref{eq:SA divdiff} that power the expansion of the spectral action in Theorems \ref{thm:main thm} and \ref{thm:asymptotic expansion} are by construction cyclic and multilinear extensions of the derivatives of the spectral action, and as such provide an appropriate notion of \textit{noncommutative vertices}. We define a noncommutative vertex with $V_1,\ldots,V_n\in\cup_K H_K$ on the external edges by \begin{align} \raisebox{-43pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (0,2) to (2,2); \draw[edge] (1,0) to (2,2); \draw[edge] (1,4) to (2,2); \draw[edge] (3,4) to (2,2); \draw[edge] (4,2) to (2,2); \ncvertex{2,2} \draw[line width=2pt, line cap=round, dash pattern=on 0pt off 3\pgflinewidth] (2,0) arc (-90:0:1.5cm); \node at (-0.5,2) {\huge $V_1$}; \node at (0.8,4.4) {\huge $V_2$}; \node at (3.2,4.4) {\huge $V_3$}; \node at (4.5,2) {\huge $V_4$}; \node at (0.7,-0.4) {\huge $V_n$}; \end{tikzpicture}}} \quad :=\quad\br{V_1,\ldots,V_n}. \label{eq:bracket} \end{align} In contrast to a normal vertex of a Feynman diagram, a noncommutative vertex is decorated with a cyclic order on the edges incident to it. By convention, the edges are attached clockwise with respect to this cyclic order. As such, with perturbations $V_1,\ldots,V_n$ decorating the external edges, the diagram \eqref{eq:bracket} reflects the cyclicity of the bracket: $\br{V_1,\ldots,V_n}=\br{V_n,V_1,\ldots,V_{n-1}}$, the first property of Lemma \ref{cycl bracket}. In order to diagrammatically represent the second property of Lemma \ref{cycl bracket} as well, we introduce the following notation. Wherever a gauge edge meets a noncommutative vertex, we can insert a dashed line decorated with an element $a\in\mathcal{A}$ before or after the gauge edge, with the following meaning: \begin{align*} \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[edge] (2.65,0) to (2.65+0.5,2); \draw[streepjes] (2.65,0) to (2.3,2); \node at (2.2,2.5) {\huge $a$}; \node at (3.3,2.5) {\huge $V$}; \end{tikzpicture}}} \quad \raisebox{10pt}{ := } \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[edge] (2.65,0) to (2.65+0.5,2); \node at (3.3,2.5) {\huge $aV$}; \end{tikzpicture}}} \quad \raisebox{10pt}{ , } \qquad\qquad \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[edge] (0,0) to (-0.5,2); \draw[streepjes] (0,0) to (0.35,2); \node at (0.3,2.5) {\huge $a$}; \node at (-0.65,2.5) {\huge $V$}; \end{tikzpicture}}} \quad \raisebox{10pt}{ := } \quad \raisebox{-10pt}{ \scalebox{0.45}{\begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[edge] (0,0) to (-0.5,2); \node at (-0.65,2.5) {\huge $Va$};\end{tikzpicture}}}.
\end{align*} With this notation, the equation \begin{align}\label{eq:classical Ward} \br{a V_1,\ldots,V_n}-\br{V_1,\ldots,V_n a} =\br{V_1,\ldots,V_n,[D,a]}, \end{align} is represented as \begin{align} \label{eq:ward} \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[streepjes] (2.65,0) to (2.3,2); \node at (2.2,2.5) {\huge $a$}; \end{tikzpicture}}} \quad\,\,\, - \,\,\quad \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[streepjes] (0,0) to (0.35,2); \node at (0.3,2.5) {\huge $a$}; \end{tikzpicture}}} \quad = \quad \raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (110:70:4cm); \draw (0,0.025) arc (110:70:4cm); \draw (0,0.05) arc (110:70:4cm); \draw (0,0.075) arc (110:70:4cm); \draw[edge] (1.325,0.3) to (1.325,2); \node at (1.325,2.5) {\huge $[D,a]$}; \end{tikzpicture}}} \quad , \end{align} and is as such referred to as the \textit{Ward identity}. To illustrate, let us give the relevant lower order computations. The cyclic cocycles are expressed in terms of diagrams as \begin{align} \label{eq:bracket-cochain} \int_{\phi_n} a^0 da^1 \cdots da^n &= \raisebox{-41pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (0,2) to (2,2); \draw[edge] (1,0) to (2,2); \draw[edge] (1,4) to (2,2); \draw[edge] (3,4) to (2,2); \draw[edge] (4,2) to (2,2); \ncvertex{2,2} \draw[line width=2pt, line cap=round, dash pattern=on 0pt off 3\pgflinewidth] (2.5,-0.5) arc (-85:-15:2cm); \node at (-1.4,2) {\huge $a^0[D,a^1]$}; \node at (0.5,4.4) {\huge $[D,a^2]$}; \node at (3.5,4.4) {\huge $[D,a^3]$}; \node at (5.1,2) {\huge $[D,a^4]$}; \node at (0.7,-0.4) {\huge $[D,a^n]$}; \end{tikzpicture}}}. \end{align} \noindent For one external edge we find, writing $A=\sum_j a_jdb_j$ and suppressing summation over $j$, \begin{align} \br{V} = \br{a_j [D,b_j]} &= \,\, \raisebox{-4pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (3,0); \ncvertex{3,0} \node at (0.5,0.6) {\huge $a_j[D,b_j]$}; \end{tikzpicture}}} \quad = \int_{\phi_1} A. \end{align} For two external edges, we apply the Ward identity \eqref{eq:ward} and derive \begin{align*} \br{V,V}&= \quad \raisebox{-5pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (3,0); \draw[streepjes] (3.4,0.02) to (5,2); \draw[edge] (3,0) to (5,0); \ncvertex{3,0} \node at (-0.5,0) {\huge $a_j[D,b_j]$}; \node at (6,0) {\huge $[D,b_{j'}]$}; \node at (5.6,2.1) {\huge $a_{j'}$}; \end{tikzpicture}}} \\[1mm] &= \quad \raisebox{-5pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (3,0); \draw[streepjes] (2.55,0.08) to (1,2); \draw[edge] (3,0) to (5,0); \ncvertex{3,0} \node at (-0.5,0) {\huge $a_j[D,b_j]$}; \node at (6,0) {\huge $[D,b_{j'}]$}; \node at (0.4,2.1) {\huge $a_{j'}$}; \end{tikzpicture}}} \quad + \quad \raisebox{-5pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (3,0); \draw[edge] (3,0) to (3,1.6); \draw[edge] (3,0) to (5,0); \ncvertex{3,0} \node at (-0.5,0) {\huge $a_j[D,b_j]$}; \node at (6,0) {\huge $[D,b_{j'}]$}; \node at (3,2.1) {\huge $[D,a_{j'}]$}; \end{tikzpicture}}} \\[4mm] &= \int_{\phi_2} A^2 + \int_{\phi_3} A d A \end{align*} \subsubsection{The propagator} An important part of the quantization process introduced here is to find a mathematical formulation for the propagator. 
In other words, we need to introduce more general diagrams than the one-vertex diagram in \eqref{eq:bracket}, and assign each an amplitude. As usual in quantum field theory, the amplitudes depend on a cutoff $N$ and are possibly divergent as $N\to\infty$. What we will call a \textit{noncommutative Feynman diagram} (or, for brevity, a diagram) is a Feynman diagram in which every internal vertex $v$ is decorated with a cyclic order on the edges incident to $v$. These decorated vertices are what we call the noncommutative vertices, and are denoted as in \eqref{eq:bracket}. The edges of a diagram are always drawn as wavy lines. They are sometimes called gauge edges to distinguish them from any dashed lines in the diagram, which do not represent physical particles, but are simply notation. The \textit{loop order} is defined to be $L:=1-V+E$, where $V$ is the number of (noncommutative) vertices and $E$ is the number of internal edges. We also say that the noncommutative Feynman diagram is $L$-loop, e.g., the noncommutative Feynman diagram in \eqref{eq:bracket} is zero-loop. When the respective multigraph is planar, $L$ corresponds to the number of internal faces. Following physics terminology, these faces are referred to as \textit{loops}. As usual for Feynman diagrams, the external edges are marked, say by the numbers $1,\ldots,n$. Note that, by our definition, a noncommutative Feynman diagram is almost the same as a ribbon graph, the sole difference being that ribbons are sensitive to twisting, whereas our edges are not. Each nontrivial noncommutative Feynman diagram will be assigned an \textit{amplitude}, as follows. Here \textit{nontrivial} means that every connected component contains at least one vertex with nonzero degree. \begin{defi}\label{def:Propagator} Let $N\in\mathbb N$ and let $f\in C^\infty$ satisfy $f'[\lambda_i,\lambda_j]>0$ for $i,j\leq N$. Given a nontrivial $n$-point noncommutative Feynman diagram $G$ with external vertices marked by $1,\ldots,n$, its \textbf{amplitude} at level $N\in\mathbb N$ on the gauge fields $V_1,\ldots,V_n\in\cup_K H_K$ is denoted $\Gamma_N^G(V_1,\ldots,V_n)$, and is defined recursively as follows. When $G$ has precisely one vertex and the markings $1,\ldots,n$ respect its cyclic order, we set $\Gamma_N^G(V_1,\ldots,V_n):=\br{V_1,\ldots,V_n}$. Suppose the amplitudes of diagrams $G_1$ and $G_2$ with external edges $1,\ldots,n$ and $n+1,\ldots,m$ are defined. Then to the disjoint union $G$ of the diagrams we assign the amplitude \begin{align*} \Gamma_N^{G}(V_1,\ldots,V_m):=\Gamma_N^{G_1}(V_1,\ldots,V_n)\Gamma_N^{G_2}(V_{n+1},\ldots,V_m). \end{align*} Suppose the amplitude of a diagram $G$ is defined. Then, for any two distinct numbers $i,j\in\{1,\ldots,n\}$, let $G'$ be the diagram obtained from $G$ by connecting the two external edges $i$ and $j$ by a gauge edge (a propagator). We then define the amplitude of $G'$ as \begin{align*} \Gamma_N^{G'}(V_1,\ldots,\widehat{V_i},\ldots,\widehat{V_j},\ldots,V_{n}):=-\frac{\int_{H_N}\Gamma_N^G(V_1,\ldots,\overset{i}{Q},\ldots,\overset{j}{Q},\ldots,V_{n})e^{-\tfrac12\br{Q,Q}}dQ}{\int_{H_N}e^{-\tfrac12\br{Q,Q}}dQ}. \end{align*} \end{defi} Well-definedness is a straightforward consequence of Fubini's theorem. Note that, in general, $\Gamma^G_N$ is not cyclic in its arguments, as was the case in \eqref{eq:bracket}. \begin{figure}[h!] 
\hspace{30pt} \scalebox{0.45}{ \hspace{24pt} \begin{subfigure}[t]{0.4\textwidth} \image{0.92}{graph_links.png} \put(-104,116){\Large $V_n$} \put(-104,11){\Large $V_1$} \put(-10,63){\Large $Q$} \put(-72.4,62.5){\Large $G_1$} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \image{0.92}{graph_rechts.png} \put(-37,114){\Large $V_{n+1}$} \put(-37,12){\Large $V_m$} \put(-132,63){\Large $Q$} \put(-74,62.5){\Large $G_2$} \end{subfigure} \begin{subfigure}[t]{\textwidth} \image{0.664}{propagator.png} \put(-214,118){\Large $V_n$} \put(-215,11){\Large $V_1$} \put(-178,65){\Large $G_1$} \put(-37,117){\Large $V_{n+1}$} \put(-37,14){\Large $V_m$} \put(-75,65){\Large $G_2$} \end{subfigure} } \caption{Constructing the propagator.} \end{figure} The assumption that $f'[\lambda_i,\lambda_j]>0$ for $i,j\leq N$ can be accomplished by allowing $f$ to be unbounded, and replacing the spectral action $$\operatorname{Tr}(f(D))$$ with the regularized version $$\operatorname{Tr}(f_N(D))$$ where $f_N:=f\Phi_N$ for a sequence of bump functions $\Phi_N$ ($N\in\mathbb N$) that are 1 on $\{\lambda_k:~k\leq N\}$. As quantization takes place on the finite level (for a finite $N$), it is natural to also regularize the classical action before we quantize. Because we can now easily require $$f_N'[\lambda_k,\lambda_l]=f'[\lambda_k,\lambda_l]>0,$$ for all $k,l\leq N$, Definition \ref{def:Propagator} makes sense and can be studied by Gaussian integration as in \cite[Section 2]{BIZ80}. \subsection{Loop corrections to the spectral action} To obtain the propagator, we have chosen the approach of random noncommutative geometries (as done in \cite{AK19,KP20}, see \cite{BG16,GS19a} for computer simulations) in the sense that the integrated space in Definition \ref{def:Propagator} is the whole of $H_N$. Other approaches are conceivable by replacing $H_N$ by a subspace of gauge fields particular to the gauge theory under consideration (like $\Omega^1_D(\mathcal{A})_{\textnormal{sa}}$ for a finite spectral triple $(\mathcal{A},\H,D)$) but this should also take into account gauge fixing, and will quickly become very involved. We expect to require sophisticated machinery to perform such an integration, similar to the machinery in \cite{EF}. In our case, the propagator becomes quite simple, and can be explicitly expressed by the following result. \begin{lem}\label{lem:propagator} Let $f\in C^\infty$ satisfy $f'[\lambda_k,\lambda_l]>0$ for $k,l\leq N$. For $k,l,m,n\in\{1,\ldots,N\}$, we have $$ \frac{ \int_{H_N} Q_{kl} Q_{mn} e^{-\tfrac12\br{Q,Q}} dQ} {\int_{H_N} e^{-\tfrac12\br{Q,Q} } dQ} = \delta_{kn}\delta_{lm} G_{kl}, $$ in terms of $G_{kl} := \frac{1}{f'[\lambda_k, \lambda_l]}$. \end{lem} \begin{proof} By \eqref{eq:SA divdiff} we have the finite sum \begin{align*} \br{Q,Q}=\sum_{k,l} f'[\lambda_k,\lambda_l]\left(({\mathrm{Re}} (Q_{kl}))^2+({\mathrm{Im}}(Q_{kl}))^2\right), \end{align*} for all $Q\in H_N$. Moreover, we have \begin{align*} &\int_{H_N} Q_{kl} Q_{mn} e^{-\frac 12 \br{Q,Q}} dQ \\ &\quad= \int_{H_N} ({\mathrm{Re}}(Q_{kl}){\mathrm{Re}}(Q_{mn})-{\mathrm{Im}}(Q_{kl}){\mathrm{Im}}(Q_{mn}))e^{-\frac 12 \br{Q,Q}} dQ\\ &\qquad+i\int_{H_N}({\mathrm{Re}}(Q_{kl}){\mathrm{Im}}(Q_{mn}) + {\mathrm{Im}}(Q_{kl}){\mathrm{Re}}(Q_{mn}))e^{-\frac 12 \br{Q,Q}} dQ. \end{align*} The second integral on the right-hand side vanishes because its integrand is an odd function in at least one of the coordinates of $H_N$. The same holds for the first integral whenever $\{k,l\}\neq\{m,n\}$. 
Otherwise, we use that ${\mathrm{Re}}(Q_{lk})={\mathrm{Re}}(Q_{kl})$ and ${\mathrm{Im}}(Q_{lk})=-{\mathrm{Im}}(Q_{kl})$ and see that the two terms of the first integral cancel when $k=m$ and $l=n$. When $k=n\neq l=m$, we instead find that these terms give the same result when integrated. By using symmetry of the divided difference (i.e., $f'[x,y]=f'[y,x]$) and integrating out all trivial coordinates, we obtain \begin{align*} \frac{\int_{H_N} Q_{kl} Q_{mn} e^{-\frac 12 \br{Q,Q}} dQ}{\int_{H_N} e^{-\frac 12 \br{Q,Q}} dQ} =& \delta_{kn}\delta_{lm}\frac{2\int_\mathbb R ({\mathrm{Re}}(Q_{kl}))^2 e^{-f'[\lambda_k,\lambda_l]({\mathrm{Re}}(Q_{kl}))^2}d{\mathrm{Re}}(Q_{kl})}{\int_\mathbb R e^{-f'[\lambda_k,\lambda_l]({\mathrm{Re}}(Q_{kl}))^2}d{\mathrm{Re}}(Q_{kl})}, \end{align*} a Gaussian integral that gives the $G_{kl}$ required by the lemma. When $k=l=n=m$, the result follows similarly. \end{proof} The above lemma allows us to leave out all integrals from the subsequent computations. In place of those integrals, we use the following notation. \begin{defi}\label{def:propagator} We define, with slight abuse of notation, \begin{align*} \wick{\c Q_{kl} \hspace{16pt}\c Q_{mn} }:=\delta_{kn}\delta_{lm} G_{kl}, \end{align*} and refer to $G_{kl}$ as the \textit{propagator}. \end{defi} As an example and to fix terminology, we will now compute the amplitudes of the three most basic one-loop diagrams with two external edges. These are given in Figure \ref{fig:1loop-2pt}. Using Lemma \ref{lem:propagator} and Definition \ref{def:propagator}, we find the amplitude for the first diagram to be \begin{align} \label{eq:2ptA} \hspace{-3pt}\raisebox{-10pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.7,0) to (1.8,0); \draw[edge] (1.8,0) to (2.9,0); \draw[edge] (3.1,0) to (4,0); \draw[edge] (1.8,0) arc (0:180:-0.6cm); \draw[edge] (3,0) arc (180:0:0.6cm); \draw[edge] (4,0) to (5.3,0); \ncvertex{2,0} \ncvertex{4,0} \node at (0.3,0) {\huge $V_1$}; \node at (5.7,0) {\huge $V_2$}; \end{tikzpicture}}} &= \sum_{\begin{smallmatrix} i,j,k,l, \\ m,n\leq N\end{smallmatrix}} f'[\lambda_i,\lambda_j,\lambda_k](V_1)_{ij} \wick{ \c1 Q_{jk} \c2 Q_{ki} f'[\lambda_l,\lambda_m,\lambda_n](V_2)_{lm} \c1 Q_{mn} \c2 Q_{nl} } \nonumber \nonumber \\ & = \sum_{i,k\leq N} f'[\lambda_i, \lambda_i,\lambda_k]f'[\lambda_i, \lambda_k,\lambda_k](V_1)_{ii} (V_2)_{kk} (G_{ik})^2 . \end{align} As $V_1$ and $V_2$ are assumed of finite rank, the above expression converges as $N\to\infty$. To see this explicitly, let $K$ be such that $V_1,V_2\in H_K$, and let $G$ be the diagram on the left-hand side of \eqref{eq:2ptA}. We then obtain \begin{align}\label{eq:irrelevant diagram} \lim_{N\to\infty}\Gamma^G_N(V_1,V_2)=\sum_{i,k\leq K}f'[\lambda_i,\lambda_i,\lambda_k]f'[\lambda_i,\lambda_k,\lambda_k](V_1)_{ii}(V_2)_{kk}(G_{ik})^2, \end{align} a finite number. In general we can say that if all summed indices of an amplitude occur in a matrix element of any of the perturbations (e.g., $(V_1)_{ii}$ and $(V_2)_{kk}$) then the amplitude remains finite even when the size $N$ of the random matrices $Q$ is sent to $\infty$. In physics terminology, the first diagram in Figure \ref{fig:1loop-2pt} is \textit{irrelevant}, and can be disregarded for renormalization purposes. 
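Although the text is purely analytic, Lemma \ref{lem:propagator} is easy to validate by Monte Carlo integration. The following sketch (in Python; the choice $f=\exp$ and the eigenvalues are illustrative assumptions, not taken from the text) samples matrices from $H_N$ with weight $e^{-\frac12\br{Q,Q}}$ and compares the empirical moment with the propagator $G_{kl}=1/f'[\lambda_k,\lambda_l]$.
\begin{verbatim}
import numpy as np

# Monte Carlo check of the propagator lemma: for Q in H_N with density
# proportional to exp(-<Q,Q>/2), where
#   <Q,Q> = sum_{k,l} f'[lam_k,lam_l] ((Re Q_kl)^2 + (Im Q_kl)^2),
# one should find E[Q_kl Q_mn] = delta_kn delta_lm / f'[lam_k,lam_l].
# Illustrative choices (not from the text): f = exp, lam = (0.3, 0.7, 1.1),
# so that f'[a,b] = (e^a - e^b)/(a - b) > 0 automatically.
rng = np.random.default_rng(0)
lam = np.array([0.3, 0.7, 1.1])
N = len(lam)

def fprime_dd(a, b):
    # divided difference f'[a,b] of f'(x) = e^x
    return np.exp(a) if a == b else (np.exp(a) - np.exp(b)) / (a - b)

W = np.array([[fprime_dd(a, b) for b in lam] for a in lam])

def sample_Q():
    # one draw of a Hermitian matrix with the Gaussian weight above
    Q = np.zeros((N, N), dtype=complex)
    for k in range(N):
        Q[k, k] = rng.normal(scale=1.0 / np.sqrt(W[k, k]))  # diagonal is real
        for l in range(k + 1, N):
            # (k,l) and (l,k) both contribute to <Q,Q>, hence the factor 2
            s = 1.0 / np.sqrt(2.0 * W[k, l])
            Q[k, l] = rng.normal(scale=s) + 1j * rng.normal(scale=s)
            Q[l, k] = np.conj(Q[k, l])
    return Q

vals = []
for _ in range(100_000):
    Q = sample_Q()
    vals.append(Q[0, 1] * Q[1, 0])
print("empirical:", np.mean(vals).real)  # imaginary part is ~ 0
print("lemma:    ", 1.0 / W[0, 1])       # G_01 = 1/f'[lam_0, lam_1]
\end{verbatim}
The same experiment with $Q_{01}Q_{01}$ in place of $Q_{01}Q_{10}$ returns a value near zero, in agreement with the Kronecker deltas in the lemma.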
We then turn to the second diagram in Figure \ref{fig:1loop-2pt}, and compute \begin{align} \raisebox{-8pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5,0); \ncvertex{2,0} \ncvertex{4,0} \node at (0.5,0) {\huge $V_1$}; \node at (5.5,0) {\huge $V_2$}; \end{tikzpicture}}} &= \sum_{\begin{smallmatrix} i,j,k,l, \\ m,n\leq N\end{smallmatrix}} f'[\lambda_i,\lambda_j,\lambda_k] (V_1)_{ij} \wick{ \c1 Q_{jk} \c2 Q_{ki} f'[\lambda_l,\lambda_m,\lambda_n](V_2)_{lm} \c2 Q_{mn} \c1 Q_{nl} } \nonumber \\ &= \sum_{i,j,k\leq N} (f'[\lambda_i, \lambda_j,\lambda_k])^2 (V_1)_{ij} (V_2)_{ji} G_{ik}G_{kj} . \label{eq:vertexcontr} \end{align} This diagram is planar, and the indices $i,j,k$ correspond to regions in the plane, assuming the external edges are regarded to stretch out to infinity. The index $k$ corresponds to the region within the loop, and is called a \textit{running loop index}. As the index $k$ is not restricted by $V_1$ and $V_2$ as in \eqref{eq:2ptA}, we find that in general the amplitude \eqref{eq:vertexcontr} diverges as $N \to \infty$. In physical terms, this is a \textit{relevant} diagram. The amplitude of the final diagram becomes \begin{align} \raisebox{-15pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw[edge] (0,-1) to (2,0); \draw[edge] (4,-1) to (2,0); \draw[edge] (1.9,0) arc (-90:270:0.8cm); \ncvertex{2,0} \node at (-.4,-1) {\huge $V_1$}; \node at (4.4,-1) {\huge $V_2$}; \end{tikzpicture}}} \quad&= -\sum_{ i,j,k,l\leq N} f'[\lambda_i,\lambda_j,\lambda_k,\lambda_l](V_1)_{ij} \wick{ \c Q_{jk} \c Q_{kl} } (V_2)_{li} \nonumber \\ &= -\sum_{i,j,k\leq N} f'[\lambda_i, \lambda_j,\lambda_j,\lambda_k](V_1)_{ij} (V_2)_{ji} G_{jk} . \label{eq:vertexcontr2} \end{align} Again, this amplitude contains a running loop index and is therefore potentially divergent in the limit $N \to \infty$. \begin{figure} \hspace{24pt} \begin{tabular}{p{.3\linewidth}p{.3\linewidth}p{.3\linewidth}} \scalebox{.6}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (1.8,0); \draw[edge] (1.8,0) to (2.9,0); \draw[edge] (3.1,0) to (4.2,0); \draw[edge] (1.8,0) arc (0:180:-0.6cm); \draw[edge] (3,0) arc (180:0:0.6cm); \draw[edge] (4.2,0) to (5.5,0); \ncvertex{2,0} \ncvertex{4,0} \end{tikzpicture}} &\scalebox{.6}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5.5,0); \ncvertex{2,0} \ncvertex{4,0} \end{tikzpicture}} & \scalebox{.6}{ \begin{tikzpicture}[thick] \draw[edge] (0,-1) to (2,0); \draw[edge] (4,-1) to (2,0); \draw[edge] (1.9,0) arc (-90:270:0.8cm); \ncvertex{2,0} \end{tikzpicture}} \end{tabular} \caption{Two-point diagrams with one loop. The first one is irrelevant, the second and third are relevant.} \label{fig:1loop-2pt} \end{figure} \subsubsection{One-loop counterterms to the spectral action} Because we are interested in the behavior of the one-loop quantum effective spectral action as $N\to\infty$, we wish to consider only one-loop noncommutative Feynman diagrams whose amplitudes involve a running loop index. For example, the final two diagrams in Figure \ref{fig:1loop-2pt}, but not the first. As dictated by the background field method, in order to obtain a quantum effective action we should further restrict to one-particle-irreducible diagrams whose vertices have degree $\geq 3$. 
Let us fix a one-loop one-particle-irreducible diagram $G$ in which all vertices have degree $\geq3$, and investigate whether the amplitude of $G$ contains a running loop index. Fix a noncommutative vertex $v$ in $G$. The vertex $v$ will have precisely two incident edges that belong to the loop of the diagram, and at least one external edge. Each index associated with $v$ is associated specifically with two incident edges of $v$. If one of these edges is external, the index will not run, because it will be fixed by the gauge field attached. A running index can only occur if the two incident loop edges of $v$ succeed one another, and the index is placed in between them. The latter of these two loop edges will attach to another noncommutative vertex, $w$, and the possibly running index will also be associated with the succeeding edge in $w$, which also has to be a loop edge if the index is to run. This process may continue throughout the loop until we end up at the original vertex $v$. By this argument, the amplitude of $G$ will contain a running loop index if and only if $G$ can be drawn in the plane with all noncommutative vertices oriented clockwise and all external edges extending outside the loop. The wonderful conclusion is that the external edges of the relevant diagrams obtain a natural cyclic order. This presents us with a natural one-loop quantization of the bracket $\br{\cdot}$, and thus with a natural proposal for the one-loop quantization of the spectral action. \begin{defi}\label{def:quantum effective SA} Let $N\in\mathbb N$ and let $f\in C^\infty$ satisfy $f'[\lambda_i,\lambda_j]>0$ for $i,j\leq N$. We define \begin{align*} \bbr{V_1,\ldots,V_n}:=\sum_G \Gamma_N^G(V_1,\ldots,V_n), \end{align*} where the sum is over all planar one-loop one-particle-irreducible $n$-point noncommutative Feynman diagrams $G$ with clockwise vertices of degree $\geq3$ and external edges outside the loop and marked cyclically. The \textbf{one-loop quantum effective spectral action} is defined to be the formal series $$ \sum_{n=1}^\infty \frac{1}{n} \bbr{V,\ldots,V}. $$ \end{defi} Directly from the definition of $\bbr{\cdot}$, we see that \begin{align*} \bbr{V_2,\ldots,V_n,V_1}=\bbr{V_1,\ldots,V_n}. \end{align*} In other words, the property \ref{cyclicity general} holds for the bracket $\brr{\cdot}=\bbr{\cdot}$. In the next subsection we will show that \ref{commutation general} holds as well. 
\subsubsection{Ward identity for the gauge propagator} In addition to the Ward identity \eqref{eq:ward} for the noncommutative vertex, we claim that we also have the following Ward identity for the gauge edge: \begin{align} \label{eq:ward-gauge} \raisebox{-20pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (-20:20:4cm); \draw (-0.025,0) arc (-20:20:4cm); \draw (-0.05,0) arc (-20:20:4cm); \draw (-0.075,0) arc (-20:20:4cm); \draw[edge] (0.15,1.4) to (2.95,1.4); \draw (3.1,0) arc (20:-20:-4cm); \draw (3.125,0) arc (20:-20:-4cm); \draw (3.15,0) arc (20:-20:-4cm); \draw (3.175,0) arc (20:-20:-4cm); \draw[streepjes] (2.9,1.4) to (2.3,2.9); \node at (2.1,3.2) {\huge $a$}; \end{tikzpicture}}} \quad-\quad \raisebox{-20pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (-20:20:4cm); \draw (-0.025,0) arc (-20:20:4cm); \draw (-0.05,0) arc (-20:20:4cm); \draw (-0.075,0) arc (-20:20:4cm); \draw[edge] (0.15,1.4) to (2.95,1.4); \draw (3.1,0) arc (20:-20:-4cm); \draw (3.125,0) arc (20:-20:-4cm); \draw (3.15,0) arc (20:-20:-4cm); \draw (3.175,0) arc (20:-20:-4cm); \draw[streepjes] (0.2,1.4) to (0.75,2.9); \node at (0.9,3.2) {\huge $a$}; \end{tikzpicture}}} \quad=\quad \raisebox{-20pt}{\scalebox{0.45}{ \begin{tikzpicture}[thick] \draw (0,0) arc (-20:20:4cm); \draw (-0.025,0) arc (-20:20:4cm); \draw (-0.05,0) arc (-20:20:4cm); \draw (-0.075,0) arc (-20:20:4cm); \draw[edge] (0.15,1.4) to (1.5,1.4); \draw[edge] (1.5,1.4) to (2.95,1.4); \draw (3.1,0) arc (20:-20:-4cm); \draw (3.125,0) arc (20:-20:-4cm); \draw (3.15,0) arc (20:-20:-4cm); \draw (3.175,0) arc (20:-20:-4cm); \draw[edge] (1.5,1.4) to (1.5,2.8); \ncvertex{1.5,1.4} \node at (1.5,3.2) {\huge $[D,a]$}; \end{tikzpicture}}} \end{align} Indeed, the left-hand side yields terms \begin{align*} \sum_{m\leq N}\big(\wick{ \c Q_{ik} \c Q_{lm} a_{mn} }- \wick{ a_{im} \c Q_{mk} \c Q_{ln}}\big) &= \sum_{m\leq N}\big(G_{ik} \delta_{im} \delta_{kl} a_{mn} - G_{ln} \delta_{mn} \delta_{kl} a_{im}\big) \\ &= ( G_{ik}- G_{nk} )\delta_{kl} a_{in}, \end{align*} for arbitrary values of $i$, $k$, $l$, and $n$ determined by the rest of the diagram. The right-hand side, by the defining property of the divided difference, and because every internal edge adds a minus sign, yields the terms \begin{align*} &-\sum_{p,q,r\leq N}\wick{ \c Q_{ik} f'[\lambda_p, \lambda_q, \lambda_r] \c Q_{pq}} [D,a]_{qr}\wick{ \c Q_{rp} \c Q_{ln}} \\ &\qquad= - \sum_{p,q,r\leq N}f'[\lambda_p, \lambda_q, \lambda_r](\lambda_q -\lambda_r)a_{qr}G_{ik} \delta_{iq} \delta_{kp} G_{rp} \delta_{rn} \delta_{pl} \\ &\qquad= \left( f'[\lambda_k, \lambda_n] - f'[\lambda_i, \lambda_k] \right)G_{ik} G_{nk} \delta_{kl} a_{in}. \end{align*} Because $G_{kl}=1/f'[\lambda_k,\lambda_l]$ (see Lemma \ref{lem:propagator}) the two expressions coincide for every value of $i$, $k$, $l$, and $n$, thereby allowing us to apply the rule \eqref{eq:ward-gauge} whenever it comes up as part of a diagram. 
For example, by combining \eqref{eq:ward-gauge} with \eqref{eq:ward}, we have \begin{align*}\label{eq:example quantum Ward} &\raisebox{-20pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5.5,0); \draw[streepjes] (1.6,0) to (1,-1.5); \ncvertex{2,0} \ncvertex{4,0} \node at (0,0) {\huge $V_1$}; \node at (6,0) {\huge $V_2$}; \node at (0.9,-1.7) {\huge $a$}; \end{tikzpicture}}} \quad - \raisebox{-20pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5.5,0); \draw[streepjes] (4.4,0) to (5,-1.5); \ncvertex{2,0} \ncvertex{4,0} \node at (0,0) {\huge $V_1$}; \node at (6,0) {\huge $V_2$}; \node at (5.1,-1.7) {\huge $a$}; \end{tikzpicture}}}\\ &\raisebox{-20pt}{ \quad\vspace{20pt}\raisebox{-15pt}{ = } \raisebox{-35pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5.5,0); \draw[edge] (2,0) to (1.2,-1.5); \ncvertex{2,0} \ncvertex{4,0} \node at (0,0) {\huge $V_1$}; \node at (6,0) {\huge $V_2$}; \node at (1,-2) {\huge $[D,a]$}; \end{tikzpicture}}} \,\,\raisebox{-15pt}{ + }\,\, \raisebox{-45pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (-.2,2.7) to (1,2.7); \draw[edge] (2,-0.1) to (2,1); \draw[edge] (2,1) to (1,2.7); \draw[edge] (1,2.7) to (3,2.7); \draw[edge] (3,2.7) to (2,1); \draw[edge] (3,2.7) to (4.2,2.7); \ncvertex{1,2.7} \ncvertex{2,1} \ncvertex{3,2.7} \node at (-0.7,2.7) {\huge $V_1$}; \node at (4.7,2.7) {\huge $V_2$}; \node at (2,-0.5) {\huge $[D,a]$}; \end{tikzpicture}}} \,\,\raisebox{-15pt}{ + }\,\, \raisebox{-35pt}{\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0) to (2.1,0); \draw[edge] (2.1,0) to[out=90,in=90] (3.9,0); \draw[edge] (2.1,0) to[out=-90,in=-90] (3.9,0); \draw[edge] (3.9,0) to (5.5,0); \draw[edge] (4,0) to (4.8,-1.5); \ncvertex{2,0} \ncvertex{4,0} \node at (0,0) {\huge $V_1$}; \node at (6,0) {\huge $V_2$}; \node at (5,-2) {\huge $[D,a].$}; \end{tikzpicture}}} \!\! }\nonumber \end{align*} The Ward identity for the gauge propagator, in combination with the Ward identity for the fermion propagator \eqref{eq:ward} allows us to derive the so-called {\it quantum Ward identity}: $$ \bbr{aV_1,\ldots,V_n} - \bbr{V_1,\ldots,V_na} = \bbr{{[D,a],V_1,\ldots, V_n}}. $$ We derived this identity diagrammatically in \cite{NS21b} for low orders; below we give a general derivation. The quantum Ward identity, in combination with the obvious cyclicity, shows that $\bbr{\cdot}$ is a special case of the generic bracket $\brr{\cdot}$ satisfying property \ref{cyclicity general} and \ref{commutation general} on page \pageref{cyclicity general}, and hence allows us to apply Proposition \ref{prop:bB} and Theorem \ref{thm:asymptotic expansion}. We thus obtain our final result: an expansion of the one-loop quantum effective action in terms of cyclic cocycles. 
\begin{figure} \hspace{.05\linewidth} \begin{tabular}{p{.181\linewidth}p{.21\linewidth}p{.19\linewidth}p{.20\linewidth}} \scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0,-1) to (2,0); \draw[edge] (4,-1) to (2,0); \draw[edge] (2,-1.6) to (2,0); \draw[edge] (1.9,0) arc (-90:270:0.8cm); \ncvertex{2,0} \end{tikzpicture}} &\scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (1,0) to (2,2); \draw[edge] (0.5,2) to (2,2); \draw[edge] (1,4) to (2,2); \draw[edge] (2,2) to[out=90,in=90] (4,2); \draw[edge] (2,2) to[out=-90,in=-90] (4,2); \draw[edge] (4,2) to (5,4); \draw[edge] (4,2) to (5.5,2); \draw[edge] (4,2) to (5,0); \ncvertex{2,2} \ncvertex{4,2} \end{tikzpicture}} & \scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0.5,0.9) to (2,1); \draw[edge] (0.5,0) to (2,1); \draw[edge] (2,1) to (4,1); \draw[edge] (2,1) to (3,2.6); \draw[edge] (3,2.6) to (4,1); \draw[edge] (3,2.6) to (2.5,4); \draw[edge] (3,2.6) to (3.5,4); \draw[edge] (4,1) to (5.5,0.9); \draw[edge] (4,1) to (5.5,0); \ncvertex{2,1} \ncvertex{4,1} \ncvertex{3,2.6} \end{tikzpicture}} &\quad \scalebox{.45}{ \begin{tikzpicture}[thick] \draw[edge] (0,1) to (1,2); \draw[edge] (0,2) to (1,2); \draw[edge] (0,3) to (1,2); \draw[edge] (1,2) to (2,1); \draw[edge] (2,1) to (3,2); \draw[edge] (1,2) to (2,3); \draw[edge] (2,3) to (3,2); \draw[edge] (2,3) to (1,4); \draw[edge] (2,3) to (2,4); \draw[edge] (2,3) to (3,4); \draw[edge] (3,2) to (4,3); \draw[edge] (3,2) to (4,2); \draw[edge] (3,2) to (4,1); \draw[edge] (2,1) to (1,0); \draw[edge] (2,1) to (2,0); \draw[edge] (2,1) to (3,0); \ncvertex{1,2} \ncvertex{2,1} \ncvertex{3,2} \ncvertex{2,3} \end{tikzpicture}} \end{tabular} \caption{Relevant one-loop $n$-point functions with increasing number of vertices.} \label{table:skel-1l} \end{figure} \begin{thm} There exist $(b,B)$-cocycles $\phi^N$ and $\tilde\psi^N$ (namely, those defined by taking $\brr{\cdot}=\bbr{\cdot}$ in \eqref{eq:def phi_n} and \eqref{eq:psi}) for which the one-loop quantum effective spectral action can be expanded as $$ \sum_{n=1}^\infty \frac{1}{n} \bbr{V,\ldots,V} \sim \sum_{k=1}^\infty \left( \int_{\psi_{2k-1}^N} \!\!\!\!\! \mathrm{cs}_{2k-1} (A) +\frac 1 {2k} \int_{\phi_{2k}^N} \!\!\!\!\! F^{k} \right). $$ As before, $\tilde\psi_{2k-1}^N=(-1)^{k-1}\tfrac{(k-1)!}{(2k-1)!}\psi_{2k-1}^N$. \end{thm} \begin{proof} Applying Definition \ref{def:quantum effective SA}, and combining two sums, we obtain \begin{align*} \bbr{aV_1,\ldots,V_n}-\bbr{V_1,\ldots,V_na}=\sum_G \left(\Gamma_N^G(aV_1,\ldots,V_n)-\Gamma_N^G(V_1,\ldots,V_na)\right), \end{align*} where the sum is over all \textit{relevant} diagrams $G$, by which we mean the planar one-loop one-particle-irreducible $n$-point noncommutative Feynman diagrams $G$ with clockwise vertices of degree $\geq3$ and external edges outside the loop and marked cyclically. Let $G$ be a relevant diagram marked $1,\ldots,n$. We let $I(G)$ denote the set of diagrams one can obtain from $G$ by inserting a single gauge edge at any of the places one visits when walking along the outside of the diagram from the external edge $n$ to the external edge $1$. To be precise, if the edges $n$ and $1$ attach to the same noncommutative vertex $v$, we set $$I(G):=\{G'\},$$ where $G'$ is the diagram obtained from $G$ by inserting an external edge marked $n+1$ at $v$ between the edges marked $n$ and $1$. If the edges $n$ and $1$ attach to different vertices $v$ and $w$, respectively, then the edge $e$ succeeding the edge marked $n$ on $v$ necessarily attaches to $w$, preceding the edge marked $1$. 
In this case, we set $$I(G):=\{G_n,G_e,G_1\},$$ where $G_n$ is obtained from $G$ by inserting an external edge marked $n+1$ at $v$ between $n$ and $e$, $G_e$ is obtained from $G$ by inserting a noncommutative vertex $v_0$ along $e$ and inserting an external edge marked $n+1$ along the outside of $v_0$, and $G_1$ is obtained from $G$ by inserting an external edge marked $n+1$ at $w$ between $e$ and $1$. By construction of $I(G)$, we find \begin{align*} &\llangle a V_1, \ldots, V_n \rrangle_N^{1L} - \llangle V_1, \ldots, V_n a \rrangle_N^{1L} = \sum_{G}\sum_{G'\in I(G)} \Gamma_N^{G'}(V_1, \ldots ,V_n,[D,a]). \end{align*} The sum over $G$ and $G'$ yields all relevant $n+1$-point diagrams, and, moreover, any relevant $n+1$-point diagram with labels $V_1, \ldots ,V_n,[D,a]$ is obtained in a unique manner from an insertion of an external edge in an $n$-point diagram, as described above. We are therefore left precisely with \begin{align*} \llangle a V_1, \ldots, V_n \rrangle_N^{1L} - \llangle V_1, \ldots, V_n a \rrangle_N^{1L} = \bbr{V_1,\ldots,V_n,[D,a]}. \end{align*} In combination with cyclicity, $\llangle V_1, \ldots, V_n \rrangle_N^{1L} = \llangle V_n , V_1, \ldots, V_{n-1} \rrangle_N^{1L}$, this identity allows us to apply Proposition \ref{prop:bB} and Theorem \ref{thm:asymptotic expansion}. We thus arrive at the conclusion of the theorem. \end{proof} We conclude that the passage to the one-loop renormalized spectral action can be realized by a transformation in the space of cyclic cocycles, sending $\phi \mapsto \phi+ \phi^N$ and $\psi \mapsto \psi+ \psi^N$. One could say the theory is therefore one-loop renormalizable in a generalized sense, allowing for infinitely many counterterms, as in \mbox{\cite{GW96}}. Most notably, we have stayed within the spectral paradigm of noncommutative geometry.
\section{Introduction} \label{intro} Interest in working with large engineering systems has been growing in recent years, but long simulation times remain one of the main limiting factors. Although the computational power of modern computers has increased rapidly, growing model complexity, more precise descriptions of model properties and more detailed representations of the system geometry still result in considerable execution times and memory usage. Model reduction \citep{khorsand2012improved, rahrovani2014modal}, efficient simulation \citep{yaghoubi2016efficient, avitabile2009efficient, liu2012efficient} and parallel simulation methods \citep{yaghoubiparallel, tak2013high} are different strategies to address this issue. Even so, uncertainty propagation in these systems cannot be carried out by classical approaches such as crude Monte-Carlo (MC) simulation. More advanced methods such as stochastic model reduction \citep{amsallem2011online} or surrogate modeling \citep{frangos2010surrogate} are required to replace the computationally expensive model with an approximation that can reproduce the essential features faster. Of interest here are surrogate models. They can be created intrusively or non-intrusively. In intrusive approaches, the equations of the system are modified such that one explicit function relates the stochastic properties of the system responses to the random inputs. The perturbation method \citep{schueller2009uncertain} is a classical tool used for this purpose, but it is only accurate when the random inputs have small coefficients of variation (COV). An alternative method is intrusive polynomial chaos expansion \citep{ghanem2003stochastic}. It was first introduced for Gaussian input random variables \citep{wiener1938homogeneous} and then extended to other types of random variables, leading to generalized polynomial chaos \citep{xiu2002wiener,soize2004physical}. In non-intrusive approaches, existing deterministic codes are evaluated at sample points selected over the parameter space. This selection depends on the method employed to build the surrogate model, namely regression \citep{blatman2010adaptive, berveiller2006stochastic} or projection methods \citep{gilli2013uncertainty, knio2001stochastic}. Kriging \citep{fricker2011probabilistic, jones1998efficient}, non-intrusive PCE \citep{Blatman2011a} and combinations thereof \citep{kersaudy2015new, SchoebiIJUQ2015} are examples of non-intrusive approaches. The major drawback of PCE methods, both intrusive and non-intrusive, is the large number of unknown coefficients in problems with large parameter spaces, which is referred to as the curse of dimensionality \citep{Sudret2007}. Sparse \citep{blatman2008sparse} and adaptive sparse \citep{blatman2011adaptive} polynomial chaos expansions have been developed to dramatically reduce the computational cost in this scenario. To propagate and quantify the uncertainty in a Quantity of Interest (QoI) of a system, its response should be monitored over the whole parameter space. This response can be calculated in the time, frequency or modal domain. For dynamic systems, the frequency response is important because it provides information over a frequency range with a clear physical interpretation. 
This is the main reason for the recent focus on frequency response functions (FRFs) for uncertainty quantification of dynamic systems and their surrogates \citep{fricker2011probabilistic, goller2011interpolation, kundu2014hybrid,adhikari2011doubly,Chatterjee2015}. Several attempts have been made to find a surrogate model for the FRF by using modal properties or random eigenvalue problems. \citet{pichler2009mode} proposed a mode-based meta-model for the frequency response functions of stochastic structural systems. \citet{yuhermite} used Hermite polynomials to solve the random eigenvalue problem and then employed the modal assurance criterion (MAC) to detect the phenomenon of modal intermixing. \citet{manan2010prediction} used non-intrusive polynomial expansions to find the modal properties of a system and predict the bounds of stochastic FRFs. They implemented the method on models with one or two parameters and COV $\leq$ 2\%. Only a few recent papers have addressed the direct implementation of PCE on the frequency responses of systems. \citet{kundu2015dynamic} proposed to obtain the frequency response of a stochastic system by projecting the response on a reduced subspace of eigenvectors of a set of complex, frequency-adaptive, rational stochastic weighting functions. \citet{pagnacco2013polynomial} investigated the use of polynomial chaos expansions for modeling multimodal dynamic systems using the intrusive approach by studying a single-degree-of-freedom (DOF) system. They showed that the direct use of polynomial chaos results in some spurious peaks and proposed to use multi-element PCE to model the stochastic frequency response; to the best of the authors' knowledge, however, results on more complex systems have not been published yet. \citet{jacquelinpolynomial2015} studied a 2-DOF system to investigate the possibility of direct implementation of PCE for the moments of the FRFs, and they also reported the problem of spurious peaks. They showed that the PCE converges slowly near the resonances. They accelerated the convergence of the first two statistical moments by using Aitken's method and its generalizations \citep{Jacquelin2015144}. In general, there are two main difficulties in building a PCE surrogate directly for the FRFs: $(i)$ their non-smooth behavior over the frequency axis due to abrupt changes of the amplitude close to the resonance frequencies. At such frequencies, the amplitudes are driven by damping \citep{craig2006fundamentals}. In \citet{adhikari2016damping}, Adhikari and Pascal investigated the effect of damping on the dynamic response of stochastic systems and explained why building surrogate models in the regions close to the resonance frequencies is very challenging. $(ii)$ the shift of the eigenfrequencies due to uncertainties in the parameters. This results in very high-order PCEs even for FRFs obtained from cases with 1 or 2 DOFs. The main contribution of this work is to propose a method that can solve both problems. The proposed approach consists of two steps. First, the FRFs are transformed via a stochastic frequency transformation such that their associated eigenfrequencies are aligned on the transformed frequency axis, called \textit{scaled frequency}. Then, PCE is performed on the \textit{scaled frequency} axis. The advantage of this procedure is that, after the transformation, the behavior of the FRFs at each \textit{scaled frequency} is smooth enough to be surrogated with low-order PCEs. 
However, since a PCE is built for each \textit{scaled frequency}, this approach results in a very large number of random outputs. To solve this issue, an efficient version of principal component analysis is employed. Moreover, the curse of dimensionality is addressed here by means of adaptive sparse PCEs. The outline of the paper is as follows. In Section \ref{frf}, the equations required for deriving the FRFs of a system are presented. In Section \ref{method}, the mathematical framework for approximating a model by polynomial chaos expansions is presented. The main challenges in building PCEs for FRFs are elaborated and the proposed solutions are presented. In Section \ref{example}, the method is applied to two case studies: a simple case and a case with a relatively large number of input parameters. \section{Frequency response function (FRF)} \label{frf} Consider the spatially-discretized governing second-order equation of motion of a structure as \begin{equation} \label{eq1} \ve{M}\ddot{\ve{q}} +\ve{V}\dot{\ve{q}}+\ve{K}\ve{q}=\ve{f}(t) \end{equation} where, for an $n$-DOF system with $n_u$ system inputs and $n_y$ system outputs, $\ve{q}(t)\in \mathbb{R}^n $ is the displacement vector and $\ve{f}(t)$ is the external load vector, which is given by a Boolean transformation of the stimuli vector, $\ve{f}(t)=\ve{P}_u\ve{u}(t)$, with $\ve{u}(t)\in \mathbb{R}^{n_u}$. The real positive-definite symmetric matrices $\ve{M},\ve{V},\ve{K} \in \mathbb{R}^{n\times n}$ are the mass, damping and stiffness matrices, respectively. The state-space realization of the equation of motion in Eq. (\ref{eq1}) can be written as \begin{equation} \label{eq2} \dot{\ve{x}}(t)=\ve{Ax}(t)+\ve{Bu}(t), \hspace{1cm} \ve{y}(t)=\ve{Cx}(t)+\ve{Du}(t) \end{equation} where $\ve{A}\in \mathbb{C}^{2n\times 2n}$, $\ve{B}\in \mathbb{C}^{2n\times n_u}$, $\ve{C}\in \mathbb{C}^{n_y\times 2n}$, and $\ve{D}\in \mathbb{C}^{n_y\times n_u}$. $ \ve{x}^T(t)=[\ve{q}(t)^T, \dot{\ve{q}}^T(t)]\in \mathbb{R}^{2n} $ is the state vector, and $ \ve{y}(t)\in \mathbb{R}^{n_y}$ is the system output. Given the state ordering $\ve{x}^T(t)=[\ve{q}^T(t), \dot{\ve{q}}^T(t)]$, $\ve{A}$ and $\ve{B}$ are related to the mass, damping and stiffness matrices as follows \begin{equation} \label{eq3} \ve{A}= \left[\begin{array}{cc} \ve{0} & \ve{I}\\ -\ve{M}^{-1}\ve{K} & -\ve{M}^{-1}\ve{V} \end{array}\right], \ve{B}= \left[\begin{array}{c} \ve{0} \\ \ve{M}^{-1}\ve{P}_u \end{array}\right]. \end{equation} The output matrix $\ve{C}$, whose elements are application-dependent, linearly maps the states to the output $\ve{y}$, and $\ve{D}$ is the associated direct throughput matrix. The frequency response of the model (\ref{eq2}) can be written as \begin{equation} \label{eq4} \ve{\mathcal{H}}(j\omega)=\ve{C}(j\omega \ve{I}-\ve{A})^{-1}\ve{B}+\ve{D}, \end{equation} where $\ve{\mathcal{H}}=[\mathcal{H}_1, \mathcal{H}_2,\cdots,\mathcal{H}_{n_u \times n_y}]^\text{T} \in \mathbb{C}^{(n_y \times n_u)\times 1}, \forall \omega$ and $j=\sqrt{-1}$. $(\bullet)^\text{T}$ stands for the transpose of the matrix. It should be mentioned that the eigenvalues of $\ve{A}$ are the poles of the system. They are complex, and their imaginary parts approximate the frequencies, in rad/s, at which the maximum amplitudes occur. \section{Methodology} \label{method} This section first briefly reviews polynomial chaos expansion for real-valued responses. Then, the method of stochastic frequency transformation is explained, together with its application to the complex-valued FRF responses. 
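As a concrete illustration of the state-space realization (\ref{eq2})--(\ref{eq3}) and of Eq. (\ref{eq4}), the following sketch computes the FRF of a small 2-DOF chain. Python with NumPy is assumed here and in the later sketches, since the paper specifies no implementation; the mass, stiffness and damping values are illustrative placeholders, not the case-study parameters.
\begin{verbatim}
import numpy as np

# Illustrative 2-DOF chain: unit masses, two springs, light
# stiffness-proportional damping. Placeholder values only.
M = np.diag([1.0, 1.0])
K = np.array([[2.5e4, -1.5e4], [-1.5e4, 1.5e4]])
V = 1.0e-4 * K

n = M.shape[0]
Minv = np.linalg.inv(M)
Pu = np.array([[1.0], [0.0]])                 # force enters at the first DOF
# state-space matrices, with state x = [q; qdot]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ V]])
B = np.vstack([np.zeros((n, 1)), Minv @ Pu])
C = np.hstack([np.eye(n), np.zeros((n, n))])  # observe both displacements
D = np.zeros((n, 1))

def frf(w):
    # H(jw) = C (jw I - A)^{-1} B + D
    return C @ np.linalg.solve(1j * w * np.eye(2 * n) - A, B) + D

omega = np.linspace(1.0, 300.0, 2000)   # rad/s grid, the later Omega_d
H = np.stack([frf(w) for w in omega])   # shape (n_omega, n_y, n_u)
print(np.abs(H).max(axis=0))            # peak amplitude per channel
\end{verbatim}
The two resonances of this toy system sit near the square roots of the eigenvalues of $\ve{M}^{-1}\ve{K}$, in line with the remark above about the poles of $\ve{A}$.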
\subsection{Polynomial chaos expansions} \label{PCE} Let $\cm$ be a computational model with \emph{M}-dimensional random inputs $\ve{X}$=$\{X_1, X_2, ...,X_M\}^\text{T}$ and a scalar output $Y$. Further, let us denote the joint probability density function (PDF) of the random inputs by $f_{\ve{X}}(\ve{x}) $, defined in the probability space ($\Omega$,$\mathscr{F}$, $\mathbb{P}$). Assume that the system response $Y=\cm(\Ve{X})$ is a second-order random variable, \ie $\Esp{Y^2}<+\infty$, and therefore belongs to the Hilbert space $\mathscr{H}=\mathscr{L}_{f_{\Ve{X}}}^2(\mathbb{R}^M, \mathbb{R}) $ of $f_{\Ve{X}} $-square integrable functions of $\Ve{X}$ with respect to the inner product: \begin{equation} \Esp{\psi(\Ve{X})\phi(\Ve{X})}=\int_{\cd_{\Ve{X}}} \psi(\Ve{x})\phi(\Ve{x})f_{\Ve{X}}(\Ve{x}) \di \Ve{x} \label{eq:PCE:Hilbert} \end{equation} where $\cd_{\Ve{X}}$ is the support of $\ve{X}$. Further assume that the input variables are independent, \ie $f_{\ve{X}}(\ve{x})=\prod_{i=1}^{\emph{M}}\emph{f}_{X_i}(x_i)$. Then the generalized polynomial chaos representation of $Y$ reads \citep{xiu2002wiener}: \begin{equation} Y=\sum_{\boldsymbol{\alpha\in \mathbb{N}^{\emph{M}}}}\tilde{u}_{\ua} \psi_{\ve{\alpha}}(\ve{X}) \label{eq:PCE:infinite} \end{equation} in which the $\tilde{u}_{\ua}$ are unknown deterministic coefficients and $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, ..., \alpha_M)$ is a multi-index which indicates the polynomial degree of $\psi_{\ve{\alpha}}(\ve{X})$ in each of the \emph{M} input variables. The $\psi_{\ve{\alpha}}$ are multivariate orthonormal polynomials with respect to the joint PDF $f_{\Ve{X}}(\Ve{x})$, \ie: \begin{equation} \Esp{\psi_{\ve{\alpha}}(\Ve{X})\psi_{\ve{\beta}}(\Ve{X})}=\int_{\cd_{\Ve{X}}} \psi_{\ve{\alpha}}(\Ve{x})\psi_{\ve{\beta}}(\Ve{x}) f_{\Ve{X}}(\Ve{x}) \di \Ve{x}=\delta_{\ve{\alpha}\ve{\beta}} \label{eq:PCE:orthonormal} \end{equation} where $\delta_{\ve{\alpha}\ve{\beta}}$ is the Kronecker delta. Since the input variables are assumed to be independent, these multivariate polynomials can be constructed by tensorization of univariate orthonormal polynomials with respect to the marginal PDFs, \ie $ {\psi}_{\ve{\alpha}}(\ve{X})=\prod_{i=1}^M \psi_{\alpha_i}^{(i)}({X}_i) $. For instance, if the inputs are standard normal or uniform variables, the corresponding univariate polynomials are Hermite or Legendre polynomials, respectively. In practice, the infinite series in Eq. (\ref{eq:PCE:infinite}) has to be truncated. Given a maximum polynomial degree $p$, the standard truncation scheme includes all polynomials corresponding to the set $\ca^{M,p} = \{ \ua \in \Nn^M\; \colon \; |\ua| \le p \}, $ where $|\ua|= \sum_{i = 1}^M \alpha_i$ is the total degree of the polynomial $\psi_{\ve{\alpha}}$. The cardinality $P=\binom{M+p}{p}$ of the set $\ca^{M,p}$ increases rapidly with the number of parameters \emph{M} and the polynomial degree $p$. However, it can be controlled with suitable truncation strategies, such as $q$-norm hyperbolic truncation \citep{blatman2010adaptive}, which drastically reduce the number of unknowns when $M$ is large. The vector of coefficients $\tilde{u}_{\ua}$ can be estimated non-intrusively by projection \citep{Ghiocel98,Ghiocel2002} or least-squares regression methods \citep{blatman2010adaptive, berveiller2006stochastic}. 
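To make the size of the truncated basis concrete, the following sketch enumerates the standard set $\ca^{M,p}$ and its $q$-norm hyperbolic subset $\{\ua \in \Nn^M : \|\ua\|_q \le p\}$ (Python assumed; the helper \texttt{multi\_indices} is ours, not part of any library), illustrating how the hyperbolic rule mitigates the curse of dimensionality.
\begin{verbatim}
from math import comb

def multi_indices(M, p, q=1.0):
    """Multi-indices alpha in N^M with ||alpha||_q <= p.
    q = 1.0 recovers the standard total-degree set A^{M,p};
    q < 1 gives the hyperbolic (q-norm) truncation."""
    out, cap = [], p ** q + 1e-12
    def rec(prefix, acc):
        if len(prefix) == M:
            out.append(prefix)
            return
        ai = 0
        while acc + ai ** q <= cap:     # prune: terms are nonnegative
            rec(prefix + (ai,), acc + ai ** q)
            ai += 1
    rec((), 0.0)
    return out

M, p = 10, 5
print(len(multi_indices(M, p)), comb(M + p, p))  # 3003 3003 = P
print(len(multi_indices(M, p, q=0.5)))           # 96: far fewer terms
\end{verbatim}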
The least-squares approach is based on minimizing the truncation error $\epsilon$ as follows: \begin{equation} \label{eqn:PCE:coeff:LS:1} {Y}=\cm(\Ve{X} )=\sum_{\ua \in \ca^{M,p}} {\tilde{u}_{\ua}} \, {{\psi}_{\ve{\alpha}}({\ve{X}})} + {{\epsilon}} \equiv \ve{\tilde{U}}^\text{T} {\ve{\Psi}}(\Ve{X}) + {\ve{\epsilon}} \end{equation} This can be formulated as \begin{equation} \label{eqn:PCE:coeff:LS:2} \ve{\hat{\tilde{U}}} = \arg \min \Esp{ \left(\ve{\tilde{U}}^\text{T} \ve{\Psi}(\Ve{X}) - \cm(\Ve{X}) \right)^2}. \end{equation} Let $ \ve{\cx} = \{\ve{x}^{(1)}, \ve{x}^{(2)}, ..., \ve{x}^{(N_{ED})}\} $ and $\ve{\cy}=\{{y}^{(1)}=\cm(\ve{x}^{(1)}), {y}^{(2)}=\cm(\ve{x}^{(2)}), ..., {y}^{(N_{ED})}=\cm(\ve{x}^{(N_{ED})})\}$ be an experimental design with $N_{ED}$ space-filling samples of $\Ve{X}$ and the corresponding system responses, respectively. Then, the minimization problem (\ref{eqn:PCE:coeff:LS:2}) admits the closed-form solution \begin{equation} \label{eqn:PCE:coeff:LS:3} \ve{\hat{\tilde{U}}} = (\ve{\Psi}^\text{T}\ve{\Psi})^{-1} \ve{\Psi}^\text{T} \ve{\cy}, \end{equation} in which $\ve{\Psi}$ is the matrix containing the evaluations of the Hilbertian basis, that is, $\ve{\Psi}_{ij}={\psi}_{\ua_j}(\ve{x}^{(i)}), i=1,2,...,N_{ED}, j=1,2,...,P$. The accuracy of the PCE can be improved by reducing the effect of over-fitting in the least-squares regression. This can be done by using the sparse adaptive regression algorithms proposed in \citet{hastie2007forward, efron2004least}. In particular, the Least Angle Regression (LAR) algorithm has been demonstrated to be effective in the context of PCE by \citet{Blatman2011a}. \subsection{Vector-valued response} \label{PCE:PCA} In the case of a vector-valued response, \ie $\ve{Y} \in \mathbb{R}^N, N>1$, the presented approach may be applied componentwise. This can make the algorithm computationally cumbersome for models with a large number of random outputs. To decrease the computational cost, one can extract the main statistical features of the vector-valued random response by principal component analysis (PCA). The concept has been adapted to the context of PCE by \citet{Blatman2013}. To perform sample-based PCA, let us expand $\ve{\cy}$ around its mean $\bar{\ve{\cy}}$ in terms of the eigenvectors of its covariance matrix, as follows: \begin{equation} \label{eqn:PCE:PCA:1} \ve{\cy}=\bar{\ve{\cy}}+\sum_{i=1}^{N}\ve{u}_i\ve{v}^\text{T}_i \end{equation} where the $\ve{v}_i$'s are the eigenvectors of the covariance matrix: \begin{equation} \label{eqn:PCE:PCA:2} COV(\ve{\cy}) = \Esp{(\ve{\cy}-\bar{\ve{\cy}})^\text{T} (\ve{\cy}-\bar{\ve{\cy}})} =[\ve{v}_1, ..., \ve{v}_{N}] \left[\begin{array}{ccc} l_1 & \dots & 0 \\ & \ddots & \\ 0 & \dots & l_{N} \\ \end{array} \right] \left[\begin{array}{c} \ve{v}_1^\text{T}\\ \vdots \\ \ve{v}_{N}^\text{T} \end{array}\right] \end{equation} and the $\ve{u}_i$'s are vectors such that \begin{equation} \label{eqn:PCE:PCA:3} \ve{u}_i=(\ve{\cy}-\bar{\ve{\cy}})\ve{v}_i. \end{equation} One can approximate $\ve{\cy}$ by the $\hat{N}$-term truncation: \begin{equation} \label{eqn:PCE:PCA:4} \ve{\cy}\approx\bar{\ve{\cy}}+\sum_{i=1}^{\hat{N}}\ve{u}_i\ve{v}^\text{T}_i, \quad \hat{N} \ll {N}. \end{equation} Since $\bar{\ve{\cy}}$ and the $\ve{v}^\text{T}_i$ are the mean and the eigenvectors of the system responses, respectively, they are independent of the realization. Therefore, PC expansion can be applied directly to the $\hat{N} \ll N$ auxiliary variables $\ve{u}_i$. 
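A minimal numerical sketch of the two ingredients above, the least-squares solution (\ref{eqn:PCE:coeff:LS:3}) and the PCA compression (\ref{eqn:PCE:PCA:4}), might look as follows; the toy vector-valued model, the sample sizes and the number of retained components are illustrative assumptions.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
M, p, N_ED, N_out = 3, 3, 200, 500

# Toy vector-valued model on X ~ U([-1,1]^M); placeholder, not the case study.
def model(x):
    t = np.linspace(0.0, 1.0, N_out)
    return np.sin(2 * np.pi * t * (1.2 + 0.2 * x[0])) * (1.0 + 0.3 * x[1] * x[2])

X = rng.uniform(-1.0, 1.0, size=(N_ED, M))
Y = np.array([model(x) for x in X])                  # (N_ED, N_out)

# Orthonormal (w.r.t. uniform inputs) Legendre basis:
# psi_alpha(x) = prod_i sqrt(2 alpha_i + 1) P_{alpha_i}(x_i), |alpha| <= p.
alphas = [a for a in np.ndindex(*(p + 1,) * M) if sum(a) <= p]
def psi_row(x):
    row = []
    for a in alphas:
        v = 1.0
        for xi, ai in zip(x, a):
            coeffs = np.zeros(ai + 1); coeffs[ai] = 1.0
            v *= np.sqrt(2 * ai + 1) * legendre.legval(xi, coeffs)
        row.append(v)
    return row

Psi = np.array([psi_row(x) for x in X])              # (N_ED, P)

# PCA of the centered snapshots; keep Nhat principal components.
Ybar = Y.mean(axis=0)
Vt = np.linalg.svd(Y - Ybar, full_matrices=False)[2]
Nhat = 5
U_pc = (Y - Ybar) @ Vt[:Nhat].T                      # auxiliary variables u_i

# One least-squares PCE per principal component: (Psi^T Psi)^{-1} Psi^T u.
coeff = np.linalg.lstsq(Psi, U_pc, rcond=None)[0]

# Predict at new points and map back to the full vector output.
Xval = rng.uniform(-1.0, 1.0, size=(50, M))
Yhat = Ybar + (np.array([psi_row(x) for x in Xval]) @ coeff) @ Vt[:Nhat]
Yval = np.array([model(x) for x in Xval])
print("relative error:", np.linalg.norm(Yhat - Yval) / np.linalg.norm(Yval))
\end{verbatim}
Here the principal directions are obtained from an SVD of the centered snapshot matrix, which is numerically equivalent to diagonalizing the covariance in Eq. (\ref{eqn:PCE:PCA:2}).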
Since the PCA is an invertible transform, the original output can be retrieved directly from Eq. (\ref{eqn:PCE:PCA:4}) for every new prediction of $\ve{u}$. \subsubsection{Vector-valued data with extremely large output size} \label{method:PCA} Assume $\ve{\cy} \in \mathbb{R}^{{N}_{ED} \times N}$ has an extremely large $N$, with $N \gg N_{ED}$. Then $ COV(\ve{\cy}) \in \mathbb{R}^{N \times N}$ is exceptionally large, and solving the eigenvalue problem numerically may be infeasible. To address this issue, the following well-known theorem and the associated corollary are presented. \begin{theorem} (Singular value decomposition) \label{thm:PCE:PCA} Let $A \in \mathbb{R}^{n \times N}$ with $n<N$ and $\mathrm{rank}(A)=n$. Then there exist two orthogonal matrices $U$ and $V$ and two diagonal matrices $S$ and $\Sigma$ such that $A=USV^\text{T}=U \left[\begin{array}{ll} \Sigma & 0\\ 0 & 0 \end{array}\right] V^\text{T}$ in which $ U \in \mathbb{R}^{n \times n}$, $ S \in \mathbb{R}^{n \times N}$, $ \Sigma \in \mathbb{R}^{n \times n}$ and $ V \in \mathbb{R}^{N \times N}$. Furthermore, this decomposition can be written as the eigenvalue decompositions $AA^\text{T} U=U\Sigma$ and $A^\text{T} AV=VS$. \end{theorem} \begin{corollary} \label{cor:PCE:PCA} The nonzero eigenvalues of $A^\text{T} A$ and $AA^\text{T}$ are equal. Furthermore, $U$ and $V$ are related to each other by \begin{equation} \label{eq:cor:PCE:PCA} U = AVS^{-1} \end{equation} \end{corollary} The proof of the theorem can be found in any matrix analysis book, \eg \citet{laub2005matrix}, and the corollary follows directly from the theorem. Therefore, instead of the eigenvalue calculation of $(\ve{\cy}-\bar{\ve{\cy}})^\text{T} (\ve{\cy}-\bar{\ve{\cy}}) \in \mathbb{R}^{N \times N}$, which may be an extremely large matrix, one can consider $(\ve{\cy}-\bar{\ve{\cy}}) (\ve{\cy}-\bar{\ve{\cy}})^\text{T} \in \mathbb{R}^{N_{ED} \times N_{ED}}$, which is much smaller. The associated eigenvectors can be transformed into the ones in Eq. (\ref{eqn:PCE:PCA:4}) through Eq. (\ref{eq:cor:PCE:PCA}). \subsection{Stochastic frequency transformation} \label{transformation} In this section, the method of stochastic frequency transformation is developed to address the challenge of the frequency shift at the eigenfrequencies due to uncertainty in the parameters. The idea is to apply a transformation to the system responses that maximizes their similarity before building the PCE, as first proposed by \citet{mai2015polynomial}. Here, the technique is extended and adapted to the frequency domain to obtain PCEs of the FRFs. To this end, the following algorithm is proposed. First, an experimental design $\ve{\mathcal{X}}$ and the corresponding model responses $\ve{\cy}$ are evaluated. Each system response will be called a trajectory in the remainder of this paper. Let the frequency range of interest be discretized into $n_{\omega}$ equidistant frequencies $\Omega_d=\left[\omega_1, \omega_2,..., \omega_{n_{\omega}}\right]$. Then, the required system responses are the matrices $\ve{\mathcal{H}}(\Omega_d) \in \mathbb{C}^{n_u n_y \times n_\omega} $ and $\ve{\mathcal{F}}\in \mathbb{R}^{n_u n_y \times n_{sf}}$. The matrix $\ve{\mathcal{H}} (\Omega_d)$ is obtained by evaluating Eq. (\ref{eq4}) at the frequencies $\Omega_d$. 
The matrix $\ve{\mathcal{F}}$ consists of all the resonance and antiresonance frequencies of the system's input-output relations for one system realization, as follows: \begin{equation} \label{eq16} \ve{\mathcal{F}}= \left[\begin{array}{ccccccccc} \omega_1 & \omega_{p_1} & \omega^1_{m_1} & \omega_{p_2} & \ldots & \omega_{p_{n_p-1}} & \omega^1_{m_{n_p-1}} & \omega_{p_{n_p}} & \omega_{n_\omega} \\ \omega_1 & \omega_{p_1} & \omega^2_{m_1} & \omega_{p_2} & \ldots & \omega_{p_{n_p-1}} & \omega^2_{m_{n_p-1}} & \omega_{p_{n_p}} & \omega_{n_\omega} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\ \omega_1 & \omega_{p_1} & \omega^{n_u \times n_y}_{m_1} & \omega_{p_2} & \ldots & \omega_{p_{n_p-1}} & \omega^{n_u \times n_y}_{m_{n_p-1}} & \omega_{p_{n_p}} & \omega_{n_\omega} \end{array}\right]= \left[\begin{array}{c} {\mathcal{F}}_1\\ {\mathcal{F}}_2\\ \vdots\\ {\mathcal{F}}_{n_u \times n_y}\\ \end{array}\right] \end{equation} in which $n_p$ is the number of eigenvalues of the system. Furthermore, $\{\omega_{p_i}, i=1,2,...,n_p\}$ are the resonant frequencies and $\{\omega^l_{m_i}, i=1,2,...,n_p-1, l=1,2,...,n_u \times n_y\}$ are the frequencies between each two consecutive resonant frequencies at which the minimum amplitude occurs. Throughout the paper, these important frequencies, shown by red asterisks in Figure \ref{fig:FRF:2DOF} for a typical frequency response, will be referred to as \emph{selected frequencies}. Their number $n_{sf}$ is assumed to be constant across different realizations of the system inputs. $\{\mathcal{F}_i, i=1,2,\cdots,n_u\times n_y\}$ includes all the \textit{selected frequencies} for the $i^{th}$ input-output relation. For the next step of the algorithm, let $ \ve{x}^{(ref)} $ be selected randomly among the sample points in the ED, and let its associated trajectory serve as the reference, \ie: $$\ve{\mathcal{H}}^{ref}=\ve{\mathcal{H}}(\ve{x}^{(ref)},\Omega_d), \quad \ve{\mathcal{F}}^{ref}=\ve{\mathcal{F}}(\ve{x}^{(ref)};\omega).$$ Then, the other trajectories are transformed along the frequency axis so as to bring their peaks and valleys as close as possible to the corresponding locations in the reference trajectory, \ie: \begin{equation} \label{eq:sTrans:PLT} \mathcal{T}^k_i=\mathcal{T}^{(k)}_i(\omega, \nu^{(k)}_i)=\{\nu^{(k)}_i = f(\omega)|\mathcal{F}_i(\ve{x}^{(k)};\nu^{(k)})=\mathcal{F}^{ref}_i\} \end{equation} where $i=1,2,\cdots,n_u\times n_y$, $k=1,2,\cdots,N_{ED}$ and $\nu$ is the transformed frequency axis, called the \textit{scaled frequency}. The transform $\mathcal{T}^k_i$ consists of a continuous piecewise-linear transformation of the intervals between the identified \textit{selected frequencies} that aligns them with the corresponding ones of the reference trajectory, as follows: \begin{equation} \label{eq:trans:scaling} \mathcal{T}^k_i : \nu^{(k)}_{i,l}=a^{(k)}\omega_l+b^{(k)} \quad \mathcal{F}^{(k)}_i(j)\leq \omega_l \leq \mathcal{F}^{(k)}_i(j+1) \end{equation} where $$ a^{(k)}=\frac{\mathcal{F}^{ref}_i(j)-\mathcal{F}^{ref}_i(j+1)}{\mathcal{F}^{(k)}_i(j)-\mathcal{F}^{(k)}_i(j+1)}, $$ $$ b^{(k)}=\frac{\mathcal{F}^{ref}_i(j+1)\mathcal{F}^{(k)}_i(j)-\mathcal{F}^{ref}_i(j)\mathcal{F}^{(k)}_i(j+1)}{\mathcal{F}^{(k)}_i(j)-\mathcal{F}^{(k)}_i(j+1)},$$ for $j=1,2,\cdots,n_{sf}-1$ and $l=1,2,\cdots,n_\omega$. These coefficients follow from requiring that the end points of each interval are mapped onto their reference counterparts, $a^{(k)}\mathcal{F}^{(k)}_i(j)+b^{(k)}=\mathcal{F}^{ref}_i(j)$ and $a^{(k)}\mathcal{F}^{(k)}_i(j+1)+b^{(k)}=\mathcal{F}^{ref}_i(j+1)$. 
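In practice, the transform (\ref{eq:trans:scaling}) is nothing but piecewise-linear interpolation through the node pairs $(\mathcal{F}^{(k)}_i(j),\mathcal{F}^{ref}_i(j))$. A possible sketch is given below (Python assumed; the peak detection is a plain local-extremum search standing in for whatever identification scheme is preferred, it assumes at least one resonance in the band, and the reference grid is taken equal to $\Omega_d$ for simplicity):
\begin{verbatim}
import numpy as np

def selected_frequencies(omega, H):
    """End points, resonances (local maxima of |H|) and the minima
    between consecutive resonances: one row of the matrix F."""
    amp = np.abs(H)
    peaks = [i for i in range(1, len(amp) - 1)
             if amp[i - 1] < amp[i] >= amp[i + 1]]
    sel = [omega[0]]
    for a, b in zip(peaks, peaks[1:]):
        valley = a + np.argmin(amp[a:b + 1])   # minimum between two peaks
        sel += [omega[a], omega[valley]]
    sel += [omega[peaks[-1]], omega[-1]]
    return np.array(sel)

def align_to_reference(omega, H, F_k, F_ref):
    """Continuous piecewise-linear map sending F_k to F_ref, followed by
    resampling on the common grid (here taken equal to omega)."""
    nu = np.interp(omega, F_k, F_ref)          # scaled frequencies nu^(k)
    return np.interp(omega, nu, H.real) + 1j * np.interp(omega, nu, H.imag)
\end{verbatim}
Applying \texttt{align\_to\_reference} to every trajectory, with the reference frequencies extracted once via \texttt{selected\_frequencies}, mirrors steps 7--9 of Algorithm \ref{alg:method:scale}.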
This transformation results in the FRFs which are similar to the reference one in the \textit{scaled frequency} domain: \begin{equation} \label{eq:sTrans:sFRF} \tilde{\mathcal{H}}_i(\ve{x}^{(k)},\cn^{(k)}_i)=\mathcal{H}_i(\ve{x}^{(k)},\Omega_d) \circ \mathcal{T}^k_i \end{equation} where the set $\cn^{(k)}_i=\{\nu_{i,1}^{(k)}, \nu_{i,2}^{(k)}, \cdots, \nu_{i,n_\omega}^{(k)}\}$ consists of the discretized \textit{scaled frequencies} which are non-equidistantly spread over the frequency range of interest. Figures \ref{fig:FRF:2DOF:freq} and \ref{fig:FRF:2DOF:Gfreq} illustrate the FRFs of a 2-DOF system versus frequency and \textit{scaled frequency}, respectively. An example of such a transform used for transforming the FRFs of a 2-DOF is presented in Figure \ref{fig:FRF:2DOF:CPL}. One should notice that since $\cn^{(k)}_i$ contains the non-equidistant \textit{scaled frequencies}, a final interpolation is required to obtain a common discretized \textit{scaled frequency} $\cn^{ref}=\{\nu_1^{ref}, \nu_2^{ref}, \cdots, \nu_{n_\omega}^{ref}\}$ between the reference and all other trajectories. To reduce interpolation error in the system response, small frequency steps should be selected. The proposed approach for preprocessing the FRFs is summarized in Algorithm \ref{alg:method:scale}. \clearpage \begin{algorithm} \begin{algorithmic}[1] \STATE {\bf Input}: ${\ve{\cx}}=\{{\ve{x}^{(1)}, \ve{x}^{(2)},...,\ve{x}^{(N_{ED})}}$\} \STATE $\ve{\mathcal{H}}^{ref}$=$\ve{\mathcal{H}}(\ve{x}^{(r)},\Omega_d)$, $\ve{\mathcal{F}}^{ref}=\ve{\mathcal{F}}(\ve{x}^{(r)};\omega) $, for a random $r \in [1,...,N_{ED}]$ \FOR{$k=1$ \TO $N_{ED}$} \STATE $\ve{\mathcal{F}}(\ve{x}^{(k)};\omega)$=$[\mathcal{F}_1(\ve{x}^{(k)};\omega) \mathcal{F}_2(\ve{x}^{(k)};\omega), \cdots, \mathcal{F}_{n_u\times n_y}(\ve{x}^{(k)};\omega)]^\text{T}$ using Eq. (\ref{eq16}) \STATE $\ve{\mathcal{H}}(\ve{x}^{(k)};\Omega_d)$=$[\mathcal{H}_1(\ve{x}^{(k)};\Omega_d), \mathcal{H}_2(\ve{x}^{(k)};\Omega_d), \cdots, \mathcal{H}_{n_u\times n_y}(\ve{x}^{(k)};\Omega_d) ]^\text{T}$ using Eq. (\ref{eq4}) \FOR{$i=1$ \TO $n_u \times n_y$} \STATE Evaluate $\mathcal{T}^k_i$ using Eq. 
(\ref{eq:trans:scaling}) \STATE $\tilde{\mathcal{H}}_i(\ve{x}^{(k)},\cn^{(k)}_i)$=$\mathcal{H}_i(\ve{x}^{(k)},\Omega_d) \circ \mathcal{T}^k_i$ \STATE $\tilde{\mathcal{H}}_i(\ve{x}^{(k)},\cn^{ref})$=interpolate$(\tilde{\mathcal{H}}_i(\ve{x}^{(k)},\cn^{(k)}_i), \cn^{(k)}_i, \cn^{ref})$ \ENDFOR \STATE $\tilde{\ve{\mathcal{H}}}(\ve{x}^{(k)};\cn^{ref})$=$[\tilde{\mathcal{H}}_1(\ve{x}^{(k)};\cn^{ref}), \tilde{\mathcal{H}}_2(\ve{x}^{(k)};\cn^{ref}), \cdots, \tilde{\mathcal{H}}_{n_u\times n_y}(\ve{x}^{(k)};\cn^{ref}) ]^\text{T}$ \ENDFOR \STATE {\bf F}=\{vec$(\ve{\mathcal{F}}(\ve{x}^{(1)};\omega))$, vec$(\ve{\mathcal{F}}(\ve{x}^{(2)};\omega)),...,\text{vec}(\ve{\mathcal{F}}(\ve{x}^{(N_{ED})};\omega))\}^\text{T}$ \STATE $\tilde{\bf H}$($\cn^{ref}$)= \{vec$(\tilde{\ve{\mathcal{H}}}(\ve{x}^{(1)},\cn^{ref})), \text{vec}( \tilde{\ve{\mathcal{H}}}(\ve{x}^{(2)},\cn^{ref})),...,\text{vec}(\tilde{\ve{\mathcal{H}}}(\ve{x}^{(N_{ED})},\cn^{ref}))\}^\text{T} $ \STATE {\bf Output}: {\bf F}, {$\ve{\mathcal{G}}^\mathfrak{R}$}=real($\tilde{\bf H}(\cn^{ref})$), {$\ve{\mathcal{G}}^\mathfrak{I}$}=imag($\tilde{\bf H}(\cn^{ref})$) \end{algorithmic} \caption{Data preprocessing: continuous piecewise-linear transformation} \label{alg:method:scale} \end{algorithm} \begin{figure}[H] \centering \begin{subfigure}[b]{1\columnwidth} \centering \label{fig:FRF:2DOF:indirect} \includegraphics[width=.75\linewidth]{FRFvsfreq2DOF_direct_F} \caption{FRFs calculated at first system output} \end{subfigure} \\ \begin{subfigure}[b]{1\columnwidth} \centering \label{fig:FRF:2DOF:direct} \includegraphics[width=.75\linewidth]{FRFvsfreq2DOF_indirect_F} \caption{FRF calculated at second system output} \end{subfigure} \caption{FRFs of the 2-DOF system presented in Figure \ref{fig:2DOF:simple}. The \textit{selected frequencies} $\ve{\mathcal{F}}$ and the associated notations are illustrated with asterisks (\textasteriskcentered).} \label{fig:FRF:2DOF} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=.85\linewidth]{FRFvsfreq2DOF_direct} \caption{FRFs before frequency transformation.} \label{fig:FRF:2DOF:freq} \end{subfigure} \\ \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=.85\linewidth]{FRFvsGfreq2DOF_direct} \caption{FRFs after frequency transformation.} \label{fig:FRF:2DOF:Gfreq} \end{subfigure} \caption{Several realizations of the FRFs of the 2-DOF system at the first system output.} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\columnwidth]{ferqvsGfreq2DOF_direct} \caption{Continuous piecewise-linear function used to transform the FRFs in Figure \ref{fig:FRF:2DOF:freq} into Figure \ref{fig:FRF:2DOF:Gfreq}} \label{fig:FRF:2DOF:CPL} \end{figure} \subsection{Polynomial chaos representation} \label{PCE:transformation} The non-smooth behavior of the FRFs makes their direct surrogation by polynomials a problematic task. To solve this issue, one PCE is built at each \textit{scaled frequency}. Consequently, two sets of PCEs are required to surrogate the FRFs. The first set predicts the \textit{selected frequencies}, collected in the matrix $\ve{\mathcal{F}}$ (\ref{eq16}), which are required for performing the stochastic transformation as explained in Section \ref{transformation}. This matrix includes the eigenfrequencies of the system; therefore, by obtaining this set of PCEs, the problem of random eigenvalue calculation is solved as a byproduct.
This problem has been addressed in some recent works, \eg \citet{PichleraMetamodelNaturalfrequency2012}. Since the number of random outputs for this set is not very large, a PCE can be applied to each of the \textit{selected frequencies} separately, \ie for $i=1,2,\cdots,n_u\times n_y$ and $j=1,2,\cdots,n_{sf}$ \begin{equation} \label{eqn:PCE:freq} \hat{{\mathcal{F}}}_i(j)=\sum_{\ua \in \ca^{M,p}} f_{\ua}^i(j){{\psi}_{\ve{\alpha}}({\ve{x}})}. \end{equation} The second set of PCEs is for the system response at each individual \textit{scaled frequency}. To this end, let $\tilde{\bf H}(\cn^{ref}) \in \mathbb{C}^{N_{ED} \times (n_\omega \times n_u \times n_y)}$, defined in Algorithm \ref{alg:method:scale}, be the matrix of trajectories at the \textit{scaled frequencies}. Since the FRFs are complex-valued, whereas the PCEs are defined for real-valued functions only\footnote{Limited literature is available on the use of PCE for complex-valued functions, see \eg \citet{soize2004physical}.}, separate PCEs need to be built for the real and imaginary parts of the FRFs. Therefore, the matrix $\ve{\mathcal{G}} = \{\ve{\mathcal{G}}^\mathfrak{R}, \ve{\mathcal{G}}^\mathfrak{I}\} = \{\text{real}(\tilde{\bf H}(\cn^{ref})), \text{imag}(\tilde{\bf H}(\cn^{ref}))\} \in \mathbb{R}^{N_{ED} \times (2 \times n_\omega \times n_u \times n_y)}$ is the response matrix for which the PCEs should be built. The number of random outputs for this set, $N = 2 \times n_\omega \times n_u \times n_y$, can be extremely large. As discussed in Section \ref{PCE:PCA}, the PCEs are therefore applied directly to the principal components of $\ve{\mathcal{G}}$, yielding: \begin{equation} \label{eqn:PCE:rFRF} \hat{\ve{\mathcal{G}}}^\mathfrak{R}= \bar{\ve{\mathcal{G}}}^\mathfrak{R}+ \sum_{j=1}^{N'} \sum_{\ua \in \ca^{M,p}} (u_{\ua}^\mathfrak{R}{{\psi}_{\ve{\alpha}}({\ve{x}})})_j \ve{v}^{\mathfrak{R}^\text{T}}_j, \end{equation} \begin{equation} \label{eqn:PCE:iFRF} \hat{\ve{\mathcal{G}}}^\mathfrak{I}= \bar{\ve{\mathcal{G}}}^\mathfrak{I}+ \sum_{j=1}^{N'} \sum_{\ua \in \ca^{M,p}} (u_{\ua}^\mathfrak{I}{{\psi}_{\ve{\alpha}}({\ve{x}})})_j \ve{v}^{\mathfrak{I}^\text{T}}_j, \end{equation} where $u_{\ua}^\mathfrak{R}$ and $u_{\ua}^\mathfrak{I}$ are the vectors of coefficients of the PCEs built for the real and imaginary parts of the FRFs, respectively. \subsection{Surrogate response prediction} \label{Method:predict} To predict the surrogate model response at a new sample point $\ve{x}^{(0)}$, several steps need to be taken to transform the PCE predictors in Eqs. (\ref{eqn:PCE:rFRF}) and (\ref{eqn:PCE:iFRF}) from the \textit{scaled frequency} axis $\nu$ back to the original frequency axis $\omega$. The matrices $\hat{\ve{\mathcal{G}}}^\mathfrak{R}$ and $\hat{\ve{\mathcal{G}}}^\mathfrak{I}$ are obtained by evaluating the second set of PCEs in Eqs. (\ref{eqn:PCE:rFRF}) and (\ref{eqn:PCE:iFRF}), respectively. Then, the FRFs at the \textit{scaled frequencies} are obtained at the new sample point by the inverse vectorization of ${\hat{\tilde{\bf{H}}}}(\cn^{ref})=\hat{\ve{\mathcal{G}}}^\mathfrak{R}+j\hat{\ve{\mathcal{G}}}^\mathfrak{I}$, where $j=\sqrt{-1}$. To obtain the FRF on the original frequency axis $\omega$, the following inverse transformation is used, \begin{equation} \label{eq:siTrans:sFRF} \hat{\mathcal{H}}_i(\ve{x}^{(0)},\Omega^{(0)}_i)=\hat{\tilde{\mathcal{H}}}_i(\ve{x}^{(0)},\cn^{ref}) \circ (\mathcal{T}^0_i)^{-1}, \quad i=1,2,\cdots,n_u\times n_y \end{equation} where $\mathcal{T}^0_i$ is obtained by evaluating Eq.
(\ref{eq:trans:scaling}) at $\hat{\ve{\mathcal{F}}}^{(0)}=\hat{\ve{\mathcal{F}}}(\ve{x}^{(0)}; \omega)$, the matrix of \textit{selected frequencies} at the new sample point $\ve{x}^{(0)}$ evaluated by Eq. (\ref{eqn:PCE:freq}). Besides, $\Omega_i^{(0)}$ is a set of discretized frequencies which are non-equidistantly spread over the frequency range of interest. In order to provide the frequency response at the desired frequencies $\Omega_d$, a final interpolation is therefore required. The algorithm for predicting the system response at a new sample point is summarized in Algorithm \ref{alg:method:predict}. \begin{algorithm} \begin{algorithmic}[1] \STATE {\bf Input}: $\ve{x}^{(0)} \neq \ve{x}^{(l)}, \quad l=1,2,...,N_{ED}$ and $\ve{\mathcal{H}}^{ref}, \ve{\mathcal{F}}^{ref}, \bar{\ve{\mathcal{G}}}$ and $\cn^{ref}$ \STATE $\hat{\ve{\mathcal{G}}}^\mathfrak{R}$= $\hat{\ve{\mathcal{G}}}^\mathfrak{R}(\ve{x}^{(0)}, \cn^{ref})$ using Eq. (\ref{eqn:PCE:rFRF}). \STATE $\hat{\ve{\mathcal{G}}}^\mathfrak{I}$= $\hat{\ve{\mathcal{G}}}^\mathfrak{I}(\ve{x}^{(0)}, \cn^{ref})$ using Eq. (\ref{eqn:PCE:iFRF}). \STATE $\hat{\tilde{\bf{H}}}$($\ve{x}^{(0)}, \cn^{ref}$) = $\hat{\ve{\mathcal{G}}}^\mathfrak{R}$+$j \hat{\ve{\mathcal{G}}}^\mathfrak{I}$ \STATE Construct $\hat{\tilde{\ve{\mathcal{H}}}}$($\ve{x}^{(0)}, \cn^{ref}$) from $\hat{\tilde{\bf{H}}}$($\ve{x}^{(0)}, \cn^{ref}$) by the inverse vectorization operation \FOR{$i=1$ \TO $n_u \times n_y$} \STATE Evaluate $\hat{\mathcal{F}}_i^{(0)}$=$\hat{\mathcal{F}}_i(\ve{x}^{(0)}; \omega)$ using Eq. (\ref{eqn:PCE:freq}) \STATE Evaluate $\mathcal{T}^0_i$ using Eq. (\ref{eq:trans:scaling}) \STATE $\hat{\mathcal{H}}_i(\ve{x}^{(0)},\Omega_i^{(0)})$=$\hat{\tilde{\mathcal{H}}}_i(\ve{x}^{(0)},\cn^{ref}) \circ (\mathcal{T}^0_i)^{-1}$ \STATE $\hat{\mathcal{H}}_i(\ve{x}^{(0)},\Omega_d)$=interpolate$(\hat{\mathcal{H}}_i(\ve{x}^{(0)},\Omega_i^{(0)}), \Omega_i^{(0)},\Omega_d)$ \ENDFOR \STATE {\bf Output}: $\hat{\ve{\mathcal{H}}}(\Omega_d)$=$\{\hat{\mathcal{H}}_1(\ve{x}^{(0)},\Omega_d), \hat{\mathcal{H}}_2(\ve{x}^{(0)},\Omega_d), \cdots,\hat{\mathcal{H}}_{n_u\times n_y}(\ve{x}^{(0)},\Omega_d)\}$ \end{algorithmic} \caption{Predicting system responses} \label{alg:method:predict} \end{algorithm} \section{Examples} \label{example} \subsection{Introduction} \label{exam:intro} In this section, the proposed method is applied to two case studies. The first one is a simple 2-DOF system that illustrates how the method works. The second one is a 6-DOF system with a relatively large (16-dimensional) parameter space. For the sake of readability, only the results for one output (the $1^{st}$ output for the 2-DOF and the $6^{th}$ output for the 6-DOF system) are shown for each case, while the results for the other outputs are reported for completeness in the Appendices. To assess the accuracy of the surrogate models quantitatively, the following measure based on the root mean square (rms) error of the vectors is defined: \begin{equation} \label{eq:Err:rms} Error(\bullet) = \frac{\text{rms}\left((\bullet)^{ex}-(\bullet)^{approx}\right)}{\text{rms}\left((\bullet)^{ex}\right)}\times 100, \end{equation} in which $(\bullet)$ is the vector of interest, and $(\bullet)^{ex}$ and $(\bullet)^{approx}$ represent the results obtained by running the true and surrogate models, respectively. This error measures the relative difference between vectorial data, such as one FRF or the mean and standard deviation of several FRFs.
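As a minimal illustration (the helper name is ours), the error measure of Eq. (\ref{eq:Err:rms}) can be coded as:
\begin{verbatim}
import numpy as np

def rel_rms_error(v_exact, v_approx):
    # Relative rms error in percent, Eq. (eq:Err:rms);
    # np.abs makes it applicable to complex-valued FRF vectors.
    rms = lambda v: np.sqrt(np.mean(np.abs(v) ** 2))
    return 100.0 * rms(v_exact - v_approx) / rms(v_exact)
\end{verbatim}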
For the mean and standard deviation of the data, the reference results are obtained by evaluating the true model at 10,000 Monte-Carlo samples, and the approximations are calculated by the PCE surrogate at the same 10,000 points. \subsection{Simple 2-DOF system \citep{jacquelinpolynomial2015}} As the first example, the simple 2-DOF system shown in Figure \ref{fig:2DOF:simple} is selected to highlight the steps of the proposed method. In this system, the stiffness is assumed to be uncertain, \begin{equation} \label{eqn:2DOF:stiffness} k=\bar{k}(1+\delta_k \xi) \end{equation} where $\xi$ is a standard normal random variable. The other properties of the system are listed in Table \ref{tab:2DOF}. The system has one input force $f$ at mass 1 and two physical outputs $q_1$ and $q_2$, and thus two FRFs. The FRFs of the system are obtained in the range of 10 to 35 Hz with a frequency step of 0.01 Hz, as shown in Figure \ref{fig:FRF:2DOF}. The \textit{selected frequencies} are also shown in the figure with red asterisks. In the parameter space, 40 points are sampled using Latin Hypercube Sampling (LHS) to form an experimental design (ED) ${\ve{\cx}}$, and the model is evaluated at these points to find the system responses of interest, namely the FRFs $\ve{\mathcal{H}}$ and the \textit{selected frequencies} $\ve{\mathcal{F}}$. \begin{figure} [H] \centering \begin{adjustbox}{max width=0.5\columnwidth} \begin{tikzpicture} \tikzstyle{spring}=[thick,decorate,decoration={zigzag,pre length=0.3cm,post length=0.3cm,segment length=6}] \tikzstyle{damper}=[thick,decoration={markings, mark connection node=dmp, mark=at position 0.5 with { \node (dmp) [thick,inner sep=0pt,transform shape,rotate=-90,minimum width=10pt,minimum height=3pt,draw=none] {}; \draw [thick] ($(dmp.north east)+(2pt,0)$) -- (dmp.south east) -- (dmp.south west) -- ($(dmp.north west)+(2pt,0)$); \draw [thick] ($(dmp.north)+(0,-4pt)$) -- ($(dmp.north)+(0,4pt)$); } }, decorate] \tikzstyle{ground}=[fill,pattern=north east lines,draw=none,minimum width=0.75cm,minimum height=0.3cm] \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M1) [minimum width=1.5cm, minimum height=1.5cm] {$m$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M2) at (3,0) [minimum width=1.5cm, minimum height=1.5cm] {$m$}; \node (ground1) [ground,anchor=north,yshift=-0.25cm,minimum width=4.8cm] at (M1.south) {}; \draw (ground1.north east) -- (ground1.north west); \draw [thick, fill={cyan}] (M1.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M1.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (ground2) [ground,anchor=north,yshift=-0.25cm,minimum width=3cm] at (M2.south) {}; \draw (ground2.north east) -- (ground2.north west); \draw [thick, fill={cyan}] (M2.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M2.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (wall1) [ground, rotate=-90, minimum width=2cm,yshift=-2.5cm] {}; \draw (wall1.north east) -- (wall1.north west); \draw [spring] ($(wall1.90) + (0,0.5)$) -- ($(M1.west) + (0,0.5)$) node [midway,above] {$k$}; \draw [damper] ($(wall1.90) - (0,0.5)$) -- ($(M1.west) - (0,0.5)$) node [midway,above=3] {$c$}; \draw[spring] ($(M1.east) + (0,0.5)$) -- ($(M2.west) + (0,0.5)$) node [midway,above] {$k$}; \draw[damper] ($(M1.east) - (0,0.5)$) -- ($(M2.west) - (0,0.5)$) node [midway,above=3] {$c$}; \draw[thick, dashed] ($(M1.north west)$) -- ($(M1.north west) + (0,1.1)$); \draw[thick, dashed] ($(M2.north west)$) -- ($(M2.north west) + (0,0.6)$); \draw[ultra thick, -latex] ($(M2.north west) + (0,0.5)$) -- ($(M2.north west) + 
(1,0.5)$) node [midway, below] {$q_2$}; \draw[ultra thick, -latex] ($(M1.north west) + (0,0.5)$) -- ($(M1.north west) + (1,0.5)$) node [midway, below] {$q_1$}; \draw[ultra thick, -latex] ($(M1.north west) + (0,1)$) -- ($(M1.north west) + (1.5,1)$) node [midway, below] {$f$}; \end{tikzpicture} \end{adjustbox} \caption{2-DOF system} \label{fig:2DOF:simple} \end{figure} \begin{table} [H] \centering \caption{2-DOF system's characteristics} \begin{tabular}{l|cccc} \hline Characteristics & $m$ (kg) & $\bar{k}$ (Nm$^{-1}$) & $c$ (Nsm$^{-1}$) & $\delta_k$ \\ \hline Value & 1 & 15000 & 1 & 5\% \\ \hline \end{tabular} \label{tab:2DOF} \end{table} Figure \ref{fig:FRF:2DOF:freq} shows the FRFs of the system evaluated at ${\ve{\cx}}$. To find the transformed FRFs, \ie the FRFs at the \textit{scaled frequencies} $\nu$, one trajectory was selected randomly as the reference and the others were scaled such that their peaks and valleys were located at the same \textit{scaled frequencies} as those of the reference trajectory. The transformed FRFs are shown in Figure \ref{fig:FRF:2DOF:Gfreq} and the corresponding continuous piecewise-linear transformations in Figure \ref{fig:FRF:2DOF:CPL}. The next step is to find a suitable basis and the associated coefficients for the polynomial chaos expansion. In this case, since the random variable is Gaussian, the basis of the polynomial chaos consists of Hermite polynomials. The LARS algorithm \citep{blatman2011adaptive} is employed here to calculate a sparse PCE with adaptive degree. \begin{figure}[H] \centering \includegraphics[width=0.75\columnwidth]{eigenvalue_2DOF} \caption{Spectrum of the eigenvalues of the covariance matrix for the 2-DOF system, evaluated for both the real part $\ve{\mathcal{G}}^\mathfrak{R}$ and the imaginary part $\ve{\mathcal{G}}^\mathfrak{I}$} \label{fig:eigval:2DOF} \end{figure} \newpage The first set of expansions consists of 10 PCEs to surrogate the \textit{selected frequencies} $\ve{\mathcal{F}}$, shown in Figure \ref{fig:FRF:2DOF} by red asterisks. As the second set of expansions, PCEs are built for the dominant components of $\ve{\mathcal{G}}$ as explained in Section \ref{method:PCA}. To do so, the $\hat{N}$ largest principal components are selected such that the sum of their associated eigenvalues amounts to $99\%$ of the sum of all the eigenvalues, \ie $\sum_{i=1}^{\hat{N}}\lambda_i = 0.99 \sum_{i=1}^{{N}}\lambda_i $, in which the $\lambda_i$'s are the eigenvalues of the covariance matrix of either $\ve{\mathcal{G}}^\mathfrak{R}$ or $\ve{\mathcal{G}}^\mathfrak{I}$. By this truncation, the number of random outputs is reduced from 2501 $\times$ 2 $\times$ 2 to 6 components, namely 3 components for the real part and 3 components for the imaginary part. The spectra of the eigenvalues of the covariance matrices of $\ve{\mathcal{G}}^\mathfrak{R}$ and $\ve{\mathcal{G}}^\mathfrak{I}$ are displayed in Figure \ref{fig:eigval:2DOF}. All the PCEs used to build this surrogate model, including those for the \textit{selected frequencies} and those for the dominant components, have orders between 3 and 8. The efficiency of the proposed approach is assessed by comparing the prediction accuracy of the surrogate model on a large reference validation set (10,000 samples calculated with the full model). The PCE estimates of the mean and standard deviation are compared to their Monte-Carlo estimators on experimental designs of increasing size. The resulting convergence curves are given in Figure \ref{fig:FRF:2DOF:conv}.
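Before discussing these results, we note that the eigenvalue-based truncation used above can be stated compactly; the following sketch (our illustration based on an SVD of the centered response matrix, not the exact code used in the analyses) returns the component scores for which the PCEs are subsequently built:
\begin{verbatim}
import numpy as np

def truncate_pca(G, frac=0.99):
    # G: (N_ED x N) response matrix (real or imaginary part);
    # keep the N_hat leading principal components whose
    # eigenvalues sum to `frac` of the total variance.
    mu = G.mean(axis=0)
    U, s, Vt = np.linalg.svd(G - mu, full_matrices=False)
    lam = s**2 / (G.shape[0] - 1)          # covariance eigenvalues
    n_hat = int(np.searchsorted(np.cumsum(lam), frac * lam.sum())) + 1
    scores = U[:, :n_hat] * s[:n_hat]      # one PCE per column
    return mu, scores, Vt[:n_hat]          # reconstruct: mu + scores @ Vt[:n_hat]
\end{verbatim}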
The convergence curves in Figure \ref{fig:FRF:2DOF:conv} indicate that the PCE estimates converge faster to the reference results for both the mean and the standard deviation. They are approximately two and one orders of magnitude more accurate, respectively, than the MC estimators. In addition, one can conclude that 40 points are enough for the ED in this example, since for larger sizes the accuracy does not improve significantly. It is worth mentioning that the coefficient of variation (COV) of the parameters and the level of damping are among the factors that affect the required size of the ED. Therefore, larger COV and lower levels of damping are not obstacles for the proposed method, provided a sufficiently large ED is used. \begin{figure}[H] \centering \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=.7\columnwidth]{conv_mean_2DOF} \caption{Convergence plot of the mean value of the FRFs} \label{fig:2DOF:PCE:convmean} \end{subfigure} \\ \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=.7\columnwidth]{conv_std_2DOF} \caption{Convergence plot of the standard deviation of the FRFs} \label{fig:2DOF:PCE:convstd} \end{subfigure} \caption{Convergence plot of the statistics of the FRFs obtained by the PCE ($\ast$) and the true model ($\times$) with increasing ED size. The reference results were obtained by 10,000 Monte-Carlo simulations of the true model.} \label{fig:FRF:2DOF:conv} \end{figure} The 10,000 model evaluations used to produce the convergence curves in Figure \ref{fig:FRF:2DOF:conv} are also used to provide a detailed validation of the performance of the PCE surrogates on various quantities of interest. The results are presented in the following figures. In Figure \ref{fig:2DOF:predict:eigval}, the \textit{selected frequencies} obtained by the PCE are shown versus the true ones. The results show that the PCE model accurately predicts the \textit{selected frequencies}. Since the amplitudes at the resonant frequencies are the most sensitive parts of the FRFs and contain most of the crucial information about the system, they are of particular interest. Figure \ref{fig:2DOF:Amp:freqs} shows the histograms of these amplitudes obtained by the true and the surrogate models at all 10,000 validation points. Moreover, in Figure \ref{fig:FRF:2DOF:envelope}, the complete FRFs at the validation points are depicted and compared. Their associated means are also shown by black lines. The results indicate accurate prediction of the amplitudes at the resonant frequencies as well as of the whole FRFs. A quantitative accuracy analysis for the whole FRF was performed using Eq. (\ref{eq:Err:rms}); the resulting histogram, presented in Figure \ref{fig:2DOF:error:frf}, confirms the high accuracy of the surrogate model. \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=1\columnwidth]{nfreq_1_2DOF} \label{fig:2DOF:eig:1} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=1\columnwidth]{nfreq_2_2DOF} \label{fig:2DOF:eig:2} \end{subfigure} \\ \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=1\columnwidth]{mfreq_1_2DOF} \label{fig:2DOF:min:1} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=1\columnwidth]{mfreq_2_2DOF} \label{fig:2DOF:min:2} \end{subfigure} \caption{Selected frequencies predicted by the surrogate model versus those obtained by the true model.
Upper row: eigenfrequencies; lower row: frequencies where the minimum amplitude occurs; see matrix (\ref{eq16}) for the notation.} \label{fig:2DOF:predict:eigval} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{Amp_hist_freq1_2DOF} \caption{First resonant frequency} \label{fig:2DOF:Amp:freq1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{Amp_hist_freq2_2DOF} \caption{Second resonant frequency} \label{fig:2DOF:Amp:freq2} \end{subfigure} \caption{Histogram of the amplitude of the FRF at the resonant frequencies, obtained by evaluating the true and surrogate models on the 10,000 validation points.} \label{fig:2DOF:Amp:freqs} \end{figure} Another accuracy test is given by the comparison between the first two moments of the FRFs. The mean and standard deviation of the trajectories obtained by the true model and the surrogate model are compared in Figure \ref{fig:FRF:2DOF:out1} for the $1^{st}$ output and in \ref{app:2DOF:stat:out2} for the $2^{nd}$ output. They reveal the accuracy of the proposed surrogate model in predicting the first two moments of the FRFs. \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{realizations_out1_2DOF.png} \caption{First system output, true model} \label{fig:2DOF:envelope:FRF:MC1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{realizations_out1_PCE_2DOF.png} \caption{First system output, surrogate model} \label{fig:2DOF:envelope:FRF:PCE1} \end{subfigure} \caption{All the FRFs obtained by evaluating the true and the surrogate model at 10,000 MC samples.} \label{fig:FRF:2DOF:envelope} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{error_hyst_out1_2DOF} \caption{First system output} \label{fig:2DOF:error:frf:out1} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{error_hyst_out2_2DOF} \caption{Second system output} \label{fig:2DOF:error:frf:out2} \end{subfigure} \caption{Error of the FRF predicted by the PCE surrogate model, evaluated by Eq. (\ref{eq:Err:rms}).} \label{fig:2DOF:error:frf} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_out1_2DOF} \caption{Magnitude of the mean} \label{fig:2DOF:FRF:mean1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_out1_2DOF_phase} \caption{Phase of the mean} \label{fig:2DOF:FRF:mean1:phase} \end{subfigure}% \\ \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_out1_2DOF_abs} \caption{Magnitude of the standard deviation} \label{fig:2DOF:FRF:std1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_out1_2DOF_phase} \caption{Phase of the standard deviation} \label{fig:2DOF:FRF:std1:phase} \end{subfigure}% \caption{Mean and standard deviation of the FRFs evaluated over 10,000 sample points, by the true model (red) and by the surrogate model (black).} \label{fig:FRF:2DOF:out1} \end{figure} To demonstrate the performance of the proposed method in estimating the statistics of the FRFs, the results obtained here are compared to their counterparts in two of the most recent works available in the literature.
The first study \citep{Jacquelin2015144} directly uses high-order PCEs for estimating the first two moments of the FRF, whereas the second method \citep{jacquelinpolynomial2015} proposes to use Aitken's transformation in conjunction with PCEs. Both methods use PCEs of order 50 and tend to produce spurious peaks around the resonance region. The use of Aitken's transformation slightly improves convergence. Their results for the mean and standard deviation are shown in Figures \ref{fig:2DOF:FRF:mean:compinone} and \ref{fig:2DOF:FRF:std:compinone}, respectively. For comparison, the results from our approach in Figures \ref{fig:2DOF:FRF:mean1} and \ref{fig:2DOF:FRF:std1} are reproduced in Figure \ref{fig:FRF:2DOF:compinone} with a scaling similar to the other panels. They indicate that the stochastic frequency transformation approach proposed here significantly improves the estimation accuracy of the PCE surrogate, as no spurious peaks are visible in this case. As far as individual comparisons between the true and the predicted FRFs are concerned, the worst predicted FRF, \ie the one with the maximum error, is presented in Figure \ref{fig:FRF:2DOF:predict:worst}. It shows that even in the worst case, the presented approach predicts the FRFs with excellent accuracy. \begin{figure}[H] \centering \begin{subfigure}[b]{0.47\columnwidth} \centering \includegraphics[width=1\columnwidth]{combined_mean_edit} \caption{Mean of the FRFs} \label{fig:2DOF:FRF:mean:compinone} \end{subfigure} \begin{subfigure}[b]{0.47\columnwidth} \centering \includegraphics[width=1\columnwidth]{combined_std_edit} \caption{Standard deviation of the FRFs} \label{fig:2DOF:FRF:std:compinone} \end{subfigure} \caption{Comparison between methods to estimate the statistics of the FRFs at the first mass by PCE. Green line: direct use of PCE of order 50, as shown in \citet{Jacquelin2015144, jacquelinpolynomial2015}; blue line: PCE of order 50 with Aitken's transformation, as proposed in \citet{jacquelinpolynomial2015}; black line: the proposed stochastic transformation method; red line: reference result.} \label{fig:FRF:2DOF:compinone} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{worst_out1_2DOF} \caption{First system output (magnitude)} \label{fig:2DOF:predict:worstFRF:out1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{worst_out1_2DOF_phase} \caption{First system output (phase)} \label{fig:2DOF:predict:worstFRF:out1:phase} \end{subfigure} \caption{Worst case FRF prediction among 10,000 sample points; true (in red) and predicted (in black) FRF of the system.} \label{fig:FRF:2DOF:predict:worst} \end{figure} \subsection{6-DOF system: large parameter space} The second example is chosen to illustrate the application of the proposed method to a problem with a relatively large parameter space. The system, shown in Figure \ref{fig:6DOF}, consists of 10 springs and 6 masses whose values are modeled by random variables with lognormal distributions. Their mean values are listed in Table \ref{tab:6DOF:variables}. The uncertainty on the springs (resp. masses) has a COV = 10\% (resp. COV = 5\%). The damping matrix is $\ve{V}=0.1\widehat{\ve{M}}$, where $\widehat{\ve{M}}$ is the matrix of the mean values of the system masses. Table \ref{tab:6DOF:damping} provides the corresponding mean modal damping ratios evaluated over 10,000 samples. The system has one input force at mass 6 and six outputs, one for each mass.
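For reference, a single deterministic FRF evaluation of such a mass-spring-damper system, \ie one entry of $\ve{\mathcal{H}}$ for a given realization of the masses and stiffnesses, can be sketched as follows. This is an illustration of ours and assumes the standard receptance form $\mathcal{H}(\omega)=\left[(\ve{K}+i\omega \ve{V}-\omega^2 \ve{M})^{-1}\right]_{\text{out},\text{in}}$ for viscous damping, which may differ in detail from the paper's Eq. (\ref{eq4}):
\begin{verbatim}
import numpy as np

def frf(M, V, K, omegas, in_dof, out_dof):
    # Receptance FRF between one input and one output DOF:
    # H(w) = e_out^T (K + i*w*V - w^2*M)^{-1} e_in
    n = M.shape[0]
    e_in = np.zeros(n)
    e_in[in_dof] = 1.0
    H = np.empty(len(omegas), dtype=complex)
    for a, w in enumerate(omegas):
        H[a] = np.linalg.solve(K + 1j*w*V - w*w*M, e_in)[out_dof]
    return H
\end{verbatim}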
The FRFs of the system are evaluated over the frequency range from 1 to 25 rad/s with a step of 0.01$\pi$ rad/s. In this example, the ED consists of 400 points sampled from the parameter space using LHS. The marginal distributions of the input vector $\ve{X}$ are lognormal. Therefore, the chosen PCE basis consists of Hermite polynomials in the reduced variable ${\ve{Z}}=\ln({\ve{X}})$. Eq. (\ref{eq:PCE:infinite}) can thus be written as $$\ve{Y}=\cm(\ve{X} )=\sum_{\ua \in \ca^{M,p}} {{{\tilde{u}}_{\ua}}} \, {{\psi}_{\ve{\alpha}}(\ln({\ve{X}}))}.$$ The LARS algorithm has been employed to build sparse PCEs with adaptive degree for both the \textit{selected frequencies} and the principal components of the scaled FRFs. For the second set of PCEs, PCA has been performed and the dominant components are selected such that $\sum_{i=1}^{\hat{N}}\lambda_i = 0.999 \sum_{i=1}^{{N}}\lambda_i$. This truncation reduces the number of random outputs from 761 $\times$ 6 $\times$ 2 to 102 components. Since the dimension of the input parameter space is large, a hyperbolic truncation with a $q$-norm of 0.7 was used before the LARS algorithm, in order to reduce the number of unknown coefficients of the PCEs and avoid the curse of dimensionality. Besides, only polynomials up to rank 2 were selected here (\ie polynomials that depend on at most 2 of the 16 parameters). It should be mentioned that the adaptive-degree scheme eventually selected maximum degrees below 10 for all the PCEs used in the surrogate model. \begin{figure} [H] \centering \begin{adjustbox}{max width=0.8\columnwidth} \begin{tikzpicture} \tikzstyle{spring}=[thick,decorate,decoration={zigzag,pre length=0.3cm,post length=0.3cm,segment length=6}] \tikzstyle{damper}=[thick,decoration={markings, mark connection node=dmp, mark=at position 0.5 with { \node (dmp) [thick,inner sep=0pt,transform shape,rotate=-90,minimum width=10pt,minimum height=3pt,draw=none] {}; \draw [thick] ($(dmp.north east)+(2pt,0)$) -- (dmp.south east) -- (dmp.south west) -- ($(dmp.north west)+(2pt,0)$); \draw [thick] ($(dmp.north)+(0,-4pt)$) -- ($(dmp.north)+(0,4pt)$); } }, decorate] \tikzstyle{ground}=[fill,pattern=north east lines,draw=none,minimum width=0.75cm,minimum height=0.3cm] \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M1) [minimum width=1cm, minimum height=4.5cm] {$m_1$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M2) at (2.5,1.5) [minimum width=1cm, minimum height=1.5cm] {$m_2$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M3) at (2.5,-1.5) [minimum width=1cm, minimum height=1.5cm] {$m_3$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M6) at (5,0) [minimum width=1cm, minimum height=4.5cm] {$m_6$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M5) at (7.5,1.5) [minimum width=1cm, minimum height=1.5cm] {$m_5$}; \node[draw,outer sep=0pt,thick, fill=white!60!yellow] (M4) at (7.5,-1.5) [minimum width=1cm, minimum height=1.5cm] {$m_4$}; \node (ground1) [ground,anchor=north,yshift=-0.25cm,minimum width=4.8cm] at (M1.south) {}; \draw (ground1.north east) -- (ground1.north west); \draw [thick, fill={cyan}] (M1.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M1.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (ground2) [ground,anchor=north,yshift=-0.25cm,minimum width=4.8cm] at (M3.south) {}; \draw (ground2.north east) -- (ground2.north west); \draw [thick, fill={cyan}] (M3.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M3.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (ground3) [ground,anchor=north,yshift=-0.25cm,minimum width=4.8cm] at 
(M6.south) {}; \draw (ground3.north east) -- (ground3.north west); \draw [thick, fill={cyan}] (M6.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M6.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (ground4) [ground,anchor=north,yshift=-0.25cm,minimum width=4.8cm] at (M4.south) {}; \draw (ground4.north east) -- (ground4.north west); \draw [thick, fill={cyan}] (M4.south west) ++ (0.2cm,-0.125cm) circle (0.125cm) (M4.south east) ++ (-0.2cm,-0.125cm) circle (0.125cm); \node (wall1) [ground, rotate=-90, minimum width=2cm,yshift=-2.5cm] {}; \draw (wall1.north east) -- (wall1.north west); \node (wall2) [ground, xshift=7.5cm, yshift=1.5cm, rotate=90, minimum width=2cm,yshift=-2.5cm] {}; \draw (wall2.north east) -- (wall2.north west); \node (wall3) [ground, xshift=7.5cm, yshift=-1.5cm, rotate=90, minimum width=2cm,yshift=-2.5cm] {}; \draw (wall3.north east) -- (wall3.north west); \draw [spring] ($(wall1.90)+ (0,0.35)$) -- ($(M1.west)+ (0,0.35)$) node [midway,above] {$k_1$}; \draw [damper] ($(wall1.90) - (0,0.35)$) -- ($(M1.west) - (0,0.35)$) node [midway,above=3] {$c_1$}; \draw[spring] ($(M1.east) + (0,1.85)$) -- ($(M2.west)+(0,0.35)$) node [midway,above] {$k_2$}; \draw[damper] ($(M1.east) + (0,1.15)$) -- ($(M2.west)-(0,0.35)$) node [midway,above=3] {$c_2$}; \draw [spring] ($(M1.east) + (0,-1.15)$) -- ($(M3.west)+(0,0.35)$) node [midway,above] {$k_3$}; \draw [damper] ($(M1.east) + (0,-1.85)$) -- ($(M3.west)-(0,0.35)$) node [midway,above=3] {$c_3$}; \draw [spring] ($(M1.east)+(0,0.35)$) -- ($(M6.west)+(0,0.35)$) node [midway,above] {$k_4$}; \draw [damper] ($(M1.east) - (0,0.35)$) -- ($(M6.west)-(0,0.35)$) node [midway,above=3] {$c_4$}; \draw [spring] ($(M2.east)+ (0,0.35)$) -- ($(M6.west) + (0,1.85)$) node [midway,above] {$k_5$}; \draw [damper] ($(M2.east)-(0,0.35)$) -- ($(M6.west) + (0,1.15)$) node [midway,above=3] {$c_5$}; \draw [spring] ($(M3.east)+(0,0.35)$) -- ($(M6.west) - (0,1.15)$) node [midway,above] {$k_6$}; \draw [damper] ($(M3.east)-(0,0.35)$) -- ($(M6.west) - (0,1.85)$) node [midway,above=3] {$c_6$}; \draw [spring] ($(M6.east) + (0,-1.15)$) -- ($(M4.west)+(0,0.35)$) node [midway,above] {$k_7$}; \draw [damper] ($(M6.east) + (0,-1.85)$) -- ($(M4.west)-(0,0.35)$) node [midway,above=3] {$c_7$}; \draw [spring] ($(M6.east) + (0,1.85)$) -- ($(M5.west)+(0,0.35)$) node [midway,above] {$k_8$}; \draw [damper] ($(M6.east) + (0,1.15)$) -- ($(M5.west)-(0,0.35)$) node [midway,above=3] {$c_8$}; \draw [spring] ($(M5.east)+(0,0.35)$) -- ($(wall2.90)+(0,0.35)$) node [midway,above] {$k_9$}; \draw [damper] ($(M5.east)-(0,0.35)$) -- ($(wall2.90)-(0,0.35)$) node [midway,above=3] {$c_9$}; \draw [spring] ($(M4.east)+(0,0.35)$) -- ($(wall3.90)+(0,0.35)$) node [midway,above] {$k_{10}$}; \draw [damper] ($(M4.east)-(0,0.35)$) -- ($(wall3.90)-(0,0.35)$) node [midway,above=3] {$c_{10}$}; \draw[ultra thick, -latex] ($(M6.east)$) -- ($(M6.east)+(1.5,0) $) node [midway, below] {$f$}; \end{tikzpicture} \end{adjustbox} \caption{The 6-DOF system} \label{fig:6DOF} \end{figure} \begin{table} [H] \centering \caption{The 6-DOF system's variables} \begin{tabular}{lccc} \hline \multicolumn{2}{r}{Variables} & mean & Coeff. 
of variation (\%) \\ \hline \multirow{6}{*}{Masses (kg)} & $m_1$ & 50 & 5 \\ & $m_2$ & 35 & 5 \\ & $m_3$ & 12 & 5 \\ & $m_4 $& 33 & 5 \\ & $m_5$ & 100 & 5 \\ & $m_6$ & 45 & 5 \\ \hline \multirow{10}{*}{Stiffnesses (N/m)} & $k_1$ & 3000 & 10 \\ & $k_2$ & 1725 & 10 \\ & $k_3$ & 1200 & 10 \\ & $k_4$ & 2200 & 10 \\ & $k_5$ & 1320 & 10 \\ & $k_6$ & 1330 & 10 \\ & $k_7$ & 1500 & 10 \\ & $k_8$ & 2625 & 10 \\ & $k_9$ & 1800 & 10 \\ & $k_{10}$ & 850 & 10 \\ \hline \end{tabular} \label{tab:6DOF:variables} \end{table} \begin{table} [H] \centering \caption{Mean modal damping ratios of the 6-DOF system evaluated over 10,000 samples} \begin{tabular}{l|cccccc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Damping} } & \multicolumn{6}{c}{Mean modal damping ratios (\%)} \\ \cline{2-7} \multicolumn{1}{c|}{} & $1^{st} $ & $2^{nd}$ & $3^{rd}$ & $4^{th}$ & $5^{th}$ & $6^{th}$\\ \hline $\ve{V}=0.1\widehat{\ve{M}}$ & $1.30$ & $0.72$ & $0.52$ & $0.44$ & $0.33$ & $0.30$ \\ \hline \end{tabular} \label{tab:6DOF:damping} \end{table} The efficiency of the proposed method is assessed by comparing the PCE estimates of the first two moments of the surrogate model with the plain Monte-Carlo estimators on experimental designs of increasing size. The reference validation set is obtained by 10,000 points sampled from the parameter space by LHS, at which the full model is evaluated. The results are shown in Figure \ref{fig:6DOF:conv:p1D} for the mean and standard deviation at the $6^{th}$ output. The results for the other outputs are presented in \ref{app:6DOF:conv}. They indicate that both the mean and the standard deviation evaluated by the surrogate model converge faster than those of the Monte-Carlo simulations. Besides, it can be inferred that 400 points are enough for the ED. \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_conv_out6_6DOF_p1D} \caption{Mean} \label{fig:6DOF:FRF:mean:conv:out6:p1D} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_conv_out6_6DOF_p1D} \caption{Standard deviation} \label{fig:6DOF:FRF:std:conv:out6:p1D} \end{subfigure} \caption{Convergence plot of the first two moments of the FRFs of the 6-DOF system calculated at the $6^{th}$ mass by the PCE (black $\ast$) and the true model (red $\times$) with increasing experimental design size. The reference results were obtained by 10,000 Monte-Carlo simulations of the true model.} \label{fig:6DOF:conv:p1D} \end{figure} In order to assess the accuracy of the surrogate model in estimating various quantities of interest, the same 10,000 points used as the reference validation set to study the convergence are used here. Figure \ref{fig:6DOF:eigval:PCEvsMC:p1D} illustrates two of the predicted \textit{selected frequencies} versus the true ones, namely the best and the worst predicted eigenfrequency, so that the accuracy of the surrogate model in this step can be inferred. While the overall accuracy is very good for all frequencies, it tends to degrade somewhat at higher frequencies. Besides, at all the validation points the FRFs are calculated by both the true model and the surrogate model. The variation of the amplitudes at the first and fifth resonant frequencies is shown as histograms in Figure \ref{fig:6DOF:Amp:freqs}. Plots of the individual FRFs are reported in Figure \ref{fig:FRF:6DOF:envelope} for the $6^{th}$ output.
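We recall in passing that the PCE estimates of these two moments require no additional sampling: assuming an orthonormal polynomial basis, the mean is the coefficient of the constant term and the variance is the sum of the squares of the remaining coefficients. A minimal sketch (our notation, for a single scalar output):
\begin{verbatim}
import numpy as np

def pce_moments(u):
    # u: coefficient vector of a PCE in an orthonormal basis,
    #    with u[0] the coefficient of the constant polynomial.
    u = np.asarray(u)
    mean = u[0]
    std = np.sqrt(np.sum(u[1:] ** 2))
    return mean, std
\end{verbatim}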
In order to assess the error quantitatively, each response of the surrogate model has been compared with the corresponding one of the true model in the root-mean-square sense. This error is evaluated using Eq. (\ref{eq:Err:rms}) and the corresponding results are presented in Figure \ref{fig:6DOF:error:frf:p1D}. They indicate the high accuracy of the proposed surrogate model in predicting the FRFs. For an individual comparison between the true FRFs and those predicted by the surrogate model, two cases are considered: one with an average and one with the maximum overall error. Their $6^{th}$ outputs are shown in Figure \ref{fig:FRF:6DOF:predict:individual:p1D}. The other outputs are presented in \ref{app:6DOF:individual:FRF}. \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{nfreq_1_6DOF_p1D} \caption{The best predicted eigenfrequency} \label{fig:6DOF:eigenvalue:1st:p1D} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{nfreq_5_6DOF_p1D.pdf} \caption{The worst predicted eigenfrequency} \label{fig:6DOF:eigenvalue:5st:p1D} \end{subfigure} \caption{The eigenfrequencies predicted by the surrogate model versus those obtained by the true model, evaluated at 10,000 MC samples.} \label{fig:6DOF:eigval:PCEvsMC:p1D} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{Amp_hist_freq1_6DOF} \caption{First resonant frequency} \label{fig:6DOF:Amp:freq1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{Amp_hist_freq5_6DOF} \caption{Fifth resonant frequency} \label{fig:6DOF:Amp:freq6} \end{subfigure} \caption{Histogram of the amplitude of the FRF at the first and fifth resonant frequencies, obtained by evaluating the true and surrogate models on the 10,000 MC samples.} \label{fig:6DOF:Amp:freqs} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{realizations_out6_6DOF.png} \caption{$6^{th}$ system output, true model} \label{fig:6DOF:envelope:FRF:out6} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{realizations_out6_PCE_6DOF.png} \caption{$6^{th}$ system output, surrogate model} \label{fig:6DOF:envelope:FRF:out6:PCE} \end{subfigure} \caption{FRFs at the $6^{th}$ mass obtained by evaluating the true and the surrogate model at 10,000 MC samples.} \label{fig:FRF:6DOF:envelope} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{error_hyst_out1_6DOF_p1D} \caption{First output} \label{fig:6DOF:FRF:error:out1:p1D} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=1\columnwidth]{error_hyst_out6_6DOF_p1D} \caption{Sixth output} \label{fig:6DOF:FRF:error:out6:p1D} \end{subfigure} \caption{Error of the FRFs predicted by the surrogate model, evaluated at 10,000 MC samples by Eq.
(\ref{eq:Err:rms}).} \label{fig:6DOF:error:frf:p1D} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{typical_out6_6DOF_p1D} \caption{Typical FRF prediction} \label{fig:6DOF:FRF:typ:out1} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{worst_out6_6DOF_p1D} \caption{Worst FRF prediction} \label{fig:6DOF:FRF:typ:out6} \end{subfigure} \caption{Two samples of the FRFs predicted by the surrogate model at the $6^{th}$ output, evaluated by the true model (red line) and the surrogate model (black line).} \label{fig:FRF:6DOF:predict:individual:p1D} \end{figure} The mean and standard deviation of the FRFs were compared with the reference ones. The results for the $6^{th}$ output are plotted in Figure \ref{fig:6DOF:frf:stat:out6}. The other outputs are plotted in \ref{app:6DOF:moments:FRF}. They indicate that although both the mean and the standard deviation match their references well, the standard deviation presents a minor mismatch at the peaks. \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_out6_6DOF_p1D} \caption{Magnitude of the mean} \label{fig:6DOF:FRF:mean:out6:abs} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_out6_6DOF_p1D_phase} \caption{Phase of the mean} \label{fig:6DOF:FRF:mean:out6:phase} \end{subfigure} \\ \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_out6_6DOF_p1D_abs} \caption{Magnitude of the standard deviation} \label{fig:6DOF:FRF:std:out6:abs} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_out6_6DOF_p1D_phase} \caption{Phase of the standard deviation} \label{fig:6DOF:FRF:std:out6:phase} \end{subfigure} \caption{Mean and standard deviation of the FRF of the 6-DOF system at the $6^{th}$ output, evaluated at 10,000 MC sample points by the true model (red line) and the surrogate model (black line).} \label{fig:6DOF:frf:stat:out6} \end{figure} In order to study the effect of the damping level on the accuracy of the proposed method, the study was repeated on the 6-DOF system with a much lower damping level. As shown in Figure \ref{fig:6DOF:conv:p01D}, if the damping is decreased by one order of magnitude ($\ve{V}=0.01\widehat{\ve{M}}$), the convergence of the mean response is still improved, whereas the standard deviation shows a more erratic behavior and reaches a plateau of approximately 40\% RMS error. To analyze this phenomenon, a surrogate model has been built with a larger experimental design comprising 1000 points. A typical FRF predicted by this surrogate model is shown in Figure \ref{fig:6DOF:FRF:out6:typ:p01D}. Significant mismatch is observed especially around the peaks, which leads to an overall less accurate estimation of the standard deviation (see Figure \ref{fig:6DOF:FRF:std:out6:abs:p01D}). Since this error did not change when enriching the ED, it is concluded that its main source must lie elsewhere in the processing chain. An in-depth analysis revealed that with such low damping levels, peak estimation is inaccurate even when using the full model, if the frequency step is not chosen fine enough. This is one of the main sources of error. Similarly, the interpolation step between pre- and post-processing the frequency axis of the PCE also plays an important role.
Both errors can in fact be reduced by refining the frequency step. Indeed, when in a further investigation the frequency step was reduced to 0.005$\pi$ rad/s, the error plateau was reduced to 20\%. \begin{figure}[H] \centering \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{mean_conv_out6_6DOF_p01D} \caption{Mean} \label{fig:6DOF:FRF:mean:conv:out6:p01D} \end{subfigure} \begin{subfigure}[b]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_conv_out6_6DOF_p01D} \caption{Standard deviation} \label{fig:6DOF:FRF:std:conv:out6:p01D} \end{subfigure} \caption{Convergence plot of the first two moments of the FRFs of the 6-DOF system with $\ve{V}=0.01\widehat{\ve{M}}$, calculated at the $6^{th}$ mass by the PCE (black $\ast$) and the true model (red $\times$) with increasing experimental design size. The reference results were obtained by 10,000 Monte-Carlo simulations of the true model.} \label{fig:6DOF:conv:p01D} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{typical_out6_6DOF_p01D} \caption{Typical FRF prediction} \label{fig:6DOF:FRF:out6:typ:p01D} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{std_out6_6DOF_p01D_abs} \caption{Magnitude of the standard deviation} \label{fig:6DOF:FRF:std:out6:abs:p01D} \end{subfigure} \caption{Analysis of the FRF of the 6-DOF system with $\ve{V}=0.01\widehat{\ve{M}}$ at the $6^{th}$ output, evaluated at 10,000 MC sample points by the true model (red line) and the surrogate model (black line). The surrogate model has been built with 1000 ED points.} \label{fig:6DOF:frf:out6:p01D} \end{figure} \section{Conclusions} A novel method to build a surrogate model directly for the FRFs of stochastic linear dynamic systems, based on sparse PCE, has been proposed. Two major challenges were addressed in this paper: the shifts of the \textit{selected frequencies} of the FRFs, \ie their peaks and valleys, due to the uncertainty in the parameters of the system, and the non-smooth behavior of the FRFs. Both can lead to very high-order PCEs, even for the FRFs obtained from cases with 1 or 2 DOFs. We thus propose a stochastic frequency transformation as a preprocessing step before building the PCEs. This transformation scales the FRFs along the frequency axis so that their \textit{selected frequencies} become aligned. Although this preprocessing step results in one extra set of PCEs, it does not require any additional full model evaluations. After the transformation, the FRFs are very similar and low-order PCEs can be built at each frequency. This, however, leads to an extremely large number of random outputs. An efficient implementation of principal component analysis has been used to alleviate this issue. Moreover, the curse of dimensionality of PCEs in cases with a large parameter space was addressed by employing the LARS algorithm to build sparse PCEs with adaptive degree. Successfully applied to two case studies, the proposed method shows its capability of accurately 1) predicting individual FRFs, 2) estimating the mean and standard deviation of the FRFs, and 3) estimating the eigenfrequencies of the system and their statistics. In cases of very low damping, significant errors can be observed around the peaks. Interpolation errors, both for the full model and for the surrogate model, were identified as the main cause.
In fact, when the frequency step in the full model was too coarse, both the estimation of the resonant frequencies and that of their amplitudes were significantly inaccurate. This in turn resulted in an inaccurate experimental design and, consequently, in inaccurate surrogate models. Refining the frequency step has been shown to be effective in reducing the interpolation error for very low-damping applications.
\section{Introduction} The two dimensional $O(3)$ nonlinear sigma model is an asymptotically free field theory \cite{a_f}. It deserves special interest in view of asymptotically free four dimensional nonabelian gauge theories describing the strong interaction between quarks. A fundamental feature of these theories is the existence of a nonvanishing mass-gap $m_0$. Recently the mass-gap of the sigma model has been calculated analytically on the basis of the thermodynamic Bethe Ansatz \cite{HaNi1,HaMaNi1}. Choosing the simplest regulator, namely a two dimensional hypercubic lattice with lattice spacing $a$, and the standard nearest neighbor action functional with nearest neighbor coupling $\beta$, we consider \begin{equation} S= - \beta \sum_{<i,j>} \phi_i^{\alpha} \phi_j^{\alpha}, \end{equation} where the fields $\phi^{\alpha}_i$ are $3$-component unit vectors and the sum $<i,j>$ runs over the nearest neighbors on the lattice. The mass-gap correlation length $\xi_0=m_0^{-1}$ has the value \begin{equation} \xi_0^{theor}=a \ {e^{{1-\pi/2}}\over{8 \ 2^{5/2}}} \ {e^{2 \pi \beta}\over{2\pi\beta}} \ (1-{.091\over\beta}+O(1/\beta^2)), \end{equation} corresponding to the exact mass gap $m_0 = 80.0864~\Lambda_{latt}$ \cite{HaMaNi1}, $\Lambda_{latt}$ denoting the 3-loop lattice cut-off parameter \cite{three_loop}. Past numerical studies of the path integral \cite{HaNi2,Wolff} consistently overestimated this number. These studies considered infinite volume mass-gap correlation length values $\xi_0$ up to about a hundred lattice spacings $a$ with the help of cluster algorithms. It was also attempted to enlarge the accessible correlation length region by utilizing real space renormalization group methods \cite{HaNi2}. It was found that asymptotic perturbative scaling only sets in at mass-gap correlation length values which can hardly be fitted into the memory of today's computers. In addition it may be appropriate to mention the still standing criticism of Patrascioiu, Seiler and others on the continuum limit of the sigma model \cite{Seiler1}. These authors argue in favor of the existence of a phase with vanishing mass-gap in the sigma model, which is however not supported by the numerical data \cite{o3_sim}. It is therefore a challenge to confront the known analytical result on the mass-gap with precision numerical simulations of the path integral. From such a study we might also gain better insight on how to accurately determine the mass-gap of lattice QCD. Finite size scaling theory \cite{Fisher1} has been very successful throughout the years in predicting the behavior of the infinite volume correlation length of spin systems at criticality from the properties of finite systems, which in this paper are taken to be square boxes with linear extent $L$ and periodic boundary conditions. In a situation where the control parameter $x_0=\xi_0/L$ is large, and even at the critical point, where the correlation length is infinite, the singular part of the path integral's free energy on finite volume systems is a function of the control parameter $x_0$ alone. The Fisher scaling analysis leads to reliable determinations of the correlation length divergence critical exponent $\nu$ in statistical mechanics models. In this paper we attempt a generalization of this idea to the $D=2$ nonlinear sigma model. Its asymptotic freedom fixed point is located at infinite values of the bare nearest neighbor coupling.
The quantity under consideration is the free energy difference of twisted relative to untwisted spin configurations forming a Bloch wall, the spin stiffness, whose finite size scaling behavior is dominated by logarithmic terms in the control parameter $x_s=\xi_s/L$, in accord with the perturbative treatment of the path integral in the one-loop approximation. The quantity $\xi_s$ hereby denotes the stiffness correlation length. We will argue that a precise extraction of the $\Delta\beta(\beta)$-shift corresponding to the stiffness correlation length is feasible up to $\beta$-values as large as $3.1$, upon reasonable assumptions on the coefficients of the finite size scaling law. The integration of the shift in conjunction with a start value for the mass-gap correlation length $\xi_0$ allows the determination of the mass-gap $m_0$. The paper is organized as follows: Section 2 is devoted to the theoretical background, introduces the constraint effective potential and the considered free energy difference, and discusses the finite size scaling hypothesis. Section 3 contains details of the Multicanonical Ensemble simulation. In section 4 we present our numerical analysis and results. Section 5 concludes the paper. \section{Theoretical Considerations} In statistical field theories which share the universality class of continuous ferromagnets, it is well known on the basis of the $\epsilon$-expansion and the renormalization group \cite{rudnick_jasnow} that the critical behavior of the system can be studied in the symmetry broken phase of the theory by considering path integrals of configurations which interpolate between regions of different order parameter orientation. The helicity modulus $Y$ carries the information on the nonanalytical behavior in the symmetry broken phase of the theory. It is common to define the helicity modulus by the response of the system with respect to a twist angle $\Theta$, which is applied microscopically to the fields on the boundary of the lattice in one of the lattice directions. Denoting by ${F=-\ln Z}$ the free energy, the free energy difference \begin{equation} \Delta F_Y(\Theta)= F(\Theta)-F(\Theta=0) \label{free_energy_difference} \end{equation} then defines the helicity modulus \cite{helicity} \begin{equation} Y = \lim_{(\Omega,L) \to \infty } {{2 \ L} \over { \Theta^2 \ \Omega}} \Delta F_Y(\Theta), \label{helicity_modulos} \end{equation} which is finite in the thermodynamic limit and allows for a geometry independent, i.e. $\Theta$-independent, characterization of the Bloch wall. Hereby $\Omega$ denotes the cross section orthogonal to the direction of the twist. E.g., on a hypercubic lattice with linear extent $L$ one has $\Omega \propto L^{D-1}$ and one expects $\Delta F_Y(\Theta) \propto L^{D-2}$. The critical behavior of the theory in terms of the helicity modulus is expressed by Josephson's scaling law \cite{josephsons_scaling_law}: $Y = A_{Y} t^{\mu_J}$ with $\mu_J=(D-2)\nu$. It is clear that the presented scaling laws refer to the ferromagnetic case. In fact, when the dimension $D=2$ is approached from above for field theories with continuous symmetry, like the $O(N)$ nonlinear sigma model with $N \ge 3$ \footnote{We refrain from a discussion of the $O(2)$ symmetric XY model.}, the ferromagnetic character of the theory is lost and the helicity modulus $Y$ according to eq.(\ref{helicity_modulos}) vanishes for any value of the bare coupling $\beta$.
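To make the classical content of these definitions concrete, consider spins rotated uniformly by an angle $\Theta/L$ per link in one direction of an $L \times L$ periodic lattice in $D=2$. With the nearest neighbor action above, each of the $L^2$ twisted links contributes $-\beta\cos(\Theta/L)$ instead of $-\beta$, so that the classical action excess approaches $\beta\Theta^2/2$ for large $L$. A small numerical illustration (ours, not part of the simulation code):
\begin{verbatim}
import numpy as np

def classical_twist_action_difference(beta, L, theta):
    # S(theta) - S(0) for a uniform O(3) spin twist on an L x L
    # periodic lattice: each of the L**2 links in the twist
    # direction contributes -beta*cos(theta/L) instead of -beta.
    return beta * L**2 * (1.0 - np.cos(theta / L))

beta, theta = 3.0, 2.0 * np.pi
for L in (8, 32, 128):
    print(L, classical_twist_action_difference(beta, L, theta))
# approaches beta*theta**2/2, about 59.2 for beta = 3
\end{verbatim}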
In dimension $D=2$ the infinite volume system exhibits domains of different order parameter orientations, which are free to fluctuate. This is the content of the Mermin-Wagner theorem. At the heart of the problem is a study of the correspondingly defined free energy difference $\Delta F_s(\Theta)$ on finite lattices in the $D=2$ $O(3)$ nonlinear sigma model at large values of the nearest neighbor coupling, i.e., in a situation in which the mass-gap correlation length $\xi_0$ exceeds the linear extent $L$ of the symmetric hypercubic lattice. In this situation one expects $\Delta F_s(\Theta)$ to be controlled by $x_s=\xi_s/L$, where $\xi_s$ denotes the stiffness correlation length. Based on the renormalization group, the $\epsilon=D-2$ expansion and the 1-loop approximation, the calculation has been performed for the case of a continuum square box with fixed twisted boundary conditions \cite{Chakravarty,Zinn1}. We quote the result \begin{equation} \Delta F_s(\Theta)= {\Theta^2 \over {4\pi}} [ {\ln(\xi_s/L)} +{\ln\ln(\xi_s/L)} ] + R(\Theta) + {\cal O}(1). \label{zinn_formula} \end{equation} A few remarks are in order: 1) According to the expected Fisher scaling, $\Delta F_s(\Theta)$ splits into a singular part, which is a function of $x_s={{\xi_s}/{L}}$ alone, and a regular contribution $R(\Theta)$. The stiffness correlation length $\xi_s$ follows the perturbative renormalization group and thus in the large $\beta$-limit has the form \begin{equation} \xi_s(\beta) \propto {1\over\beta}e^{2 \pi \beta}. \label{asymptotic_scaling} \end{equation} However, $\xi_s$ is not identical to the mass-gap correlation length $\xi_0$, nor is the control parameter $x_s$ identical to $x_0$. The stiffness correlation length $\xi_s$ describes the crossover of the system from its small volume behavior at large values of $x_s$ into a state of various domains of order parameter orientation on large volumes. It is expected to be larger than the mass-gap correlation length $\xi_0$. Recently Billoire performed a high statistics numerical calculation of the stiffness correlation length \cite{Alain} at $\beta$-values far in the perturbative region, $\beta =5, 10$ and $20$. Expressing the measurements in units of the exact mass-gap correlation length, we quote $\xi_s = 9.39(1), 9.46(1)$ and $9.47(1) \xi_0^{theor}$, respectively. In this paper we assume that $\xi_s$ takes its perturbative value already at a $\beta$-value of $3$. In addition we expect a region of $\beta$-values where $\xi_s$ as well as $\xi_0$ are governed by the same $\beta$-shift $\Delta\beta(\beta)$. Let us denote with $\Delta \beta(\beta)$ the change of the coupling $\beta$ to $\hat{\beta}=\beta-\Delta\beta(\beta)$ corresponding to the decrease of the stiffness correlation length by a factor of $2$: $\xi_s(\hat{\beta})={1\over2}\xi_s (\beta)$. 2) The free energy difference $\Delta F_s(\Theta)$ in the limit $\Theta \to 0$ defines the $\Theta$-independent spin stiffness $\rho={2 \over \Theta^2} \Delta F_s(\Theta)$. The large $\beta$ perturbative limit of $\rho_s=-{\partial\over{\partial \ln L}} \rho$ defines the spin stiffness constant, whose value is ${1\over{2\pi}}$. Precision numerical simulations of the spin stiffness \cite{Alain,Caffarel} confirm this theoretical expectation. In the context of the present paper we assume that the spin stiffness coefficient proportional to $\ln L$ takes its perturbative value in the whole considered $\beta$ region.
3) Upon insertion of eq.(\ref{asymptotic_scaling}) into eq.(\ref{zinn_formula}) we observe that the leading term in $\Delta F_s$ is $\beta \Theta^2 / 2$. This is the classical action difference of a twisted configuration at twist angle $\Theta$ relative to a configuration with twist angle $\Theta=0$. The dominant effects of quantum fluctuations at fixed $\beta$ are linear in $\ln L$ and are given by a term $-\Theta^2 \ln L/(4\pi)$. They drive the spin stiffness to smaller values at larger system sizes, as expected. 4) The regular contribution $R(\Theta)$ in eq.(\ref{zinn_formula}) can be calculated in the small-$\Theta$ expansion. For the boundary conditions considered in \cite{Zinn1} one finds \cite{Alain} $R(\Theta)=-2.501\,{{\Theta^4}\over{8\pi^4}}+{\cal O}(\Theta^6)$, i.e., a negative contribution at finite $\Theta$-values. We note that these contributions are nonuniversal in character and depend on the details of the implementation of the twist. Numerical simulations of free energies are computationally difficult. In the standard approach one integrates the expectation value of the action difference of systems with nonzero twist $\Theta$ and twist $\Theta=0$ along the nearest neighbor coupling parameter direction $\beta$. One can avoid the integration of the action by differentiating the path integral with respect to $\Theta$ at a value of $\Theta=0$ \cite{Alain,KKMon}. In this paper we consider the constraint effective potential (CEP) \cite{CEP_1} of the mean field of the theory. The consideration of the CEP is motivated by the analogous problem arising in the interfacial case in $Z(2)$ symmetric theories, for which numerical simulations have recently been conducted with the help of Multicanonical Ensemble simulations \cite{Berg_Neuhaus}. One considers the mean field $ M= {1\over L^D} \sum_i S_i $ of the Ising fields. Its probability distribution $P(M)$ contains at the same time the information concerning bulk as well as interfacial properties of the system. Relevant for interfacial effects are states at $M=0$ on finite boxes with periodic boundary conditions. In this region of the phase space, configurations contain two domains of opposite order parameter orientation and two interfaces are formed. Widom's scaling law can be analyzed \cite{OurIsing}. Lee and Kosterlitz have performed a finite size scaling analysis of the CEP at criticality \cite{LeeKosterlitz}, leading to a determination of $\nu$. In the case of the theory with continuous $O(3)$ symmetry we introduce the $3$-component field \begin{equation} \Phi^{\alpha}= {1 \over L^2} \sum_x \phi_x^{\alpha} \end{equation} and its absolute value, denoted as the mean field $\bar{\Phi}$ in the following, \begin{equation} {\bar{\Phi}} =\sqrt{ \sum_\alpha \Phi^{\alpha} \Phi^{\alpha} }, \end{equation} which has the probability distribution \begin{equation} P({\bar{\Phi}}) \propto {{\bar{\Phi}}^2} e^{-U(\bar{\Phi})} \label{P_can} \end{equation} in the canonical ensemble of the theory. The function $U(\bar{\Phi})$ is the CEP of the theory. It can be obtained by rewriting the partition function \begin{equation} Z=\int D\phi e^{\beta \sum_{x,\nu} \phi_x^{\alpha}\phi_{x+\nu}^{\alpha}} \label{the_path_integral} \end{equation} into \begin{equation} Z=\int_0^\infty d{\bar{\Phi}} {\bar\Phi}^2 e^{-U(\bar\Phi)}, \end{equation} which can be achieved by introducing suitable $\delta$-functions and integrating over the remaining degrees of freedom.
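Explicitly (our rewriting of this standard step), inserting the identity $1=\int_0^\infty d{\bar\Phi}~\delta({\bar\Phi}-|{1 \over L^2}\sum_x \phi_x|)$ into eq.(\ref{the_path_integral}) identifies \[ {\bar\Phi}^2 e^{-U(\bar\Phi)} \propto \int D\phi~ \delta\Big({\bar\Phi}-\Big|{1 \over L^2}\sum_x \phi_x\Big|\Big)~ e^{\beta \sum_{x,\nu} \phi_x^{\alpha}\phi_{x+\nu}^{\alpha}}, \] i.e., up to the normalization discussed below, $U(\bar\Phi)$ is the constrained free energy of configurations with fixed mean field modulus.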
We note that $U(\bar\Phi)$ is defined up to a constant, which can easily be absorbed into a multiplicative normalization of $Z$. In addition a factor ${\bar\Phi}^{N-1}$ at $N=3$ appears in the path integral definition of $U$. This factor is proportional to the surface of an $N$-sphere of radius $\bar\Phi$ and accounts for the degeneracy of states with respect to $O(N)$ rotations. Without taking this phase space factor properly into account, the CEP is a singular function at $\bar\Phi=0$. The CEP attracted attention in the past in the context of the Higgs-models represented by $O(N)$ symmetric theories in $D=3$ and $D=4$. There it was studied with analytic methods \cite{Leutwyler_Goeckeler} as well as with numerical simulations \cite{Neuhaus_et_al}. In these models the CEP has a minimum at a finite value $\bar\Phi_{{min}}$, which corresponds to the Higgs field expectation value. We may note that both the analytic as well as the numerical considerations in these models were concerned with the shape of the CEP in the vicinity of its minimum. In the context of the present paper we are however concerned with the CEP at the value $\bar\Phi=0$ of the mean field. We argue that states at $\bar\Phi=0$ correspond, on hypercubic boxes with periodic boundary conditions, to Bloch walls carrying formally a twist angle of $\Theta=2\pi$. This can be easily seen on the classical level by noting that the field configuration \begin{equation} \phi^{1} = \cos\Big({{2\pi n_1}\over{L}}\Big) ~~~ \phi^{2} = \sin\Big({{2\pi n_1}\over{L}}\Big) ~~~ \phi^{3} = 0 \label{the_classical_state} \end{equation} minimizes the action functional for the given periodic boundary conditions at $\bar\Phi=0$. This configuration exhibits, on the scale of one lattice spacing $a$, spin twists of value ${2\pi \over L}$ in the 1-direction of the lattice, which upon integration in the 1-direction add up to a total twist angle of $\Theta=2\pi$. We expect this configuration and the added quantum fluctuations to dominate the $\bar\Phi=0$ state. Thus in the framework of the CEP it is strongly suggested that the singular part of $\Delta F_s(\Theta=2\pi)$ exhibits the same finite size scaling as the constraint effective potential difference, or potential barrier, \begin{equation} \Delta U=U(\bar\Phi=0)-U(\bar\Phi_{min})=\Delta F_s(\Theta=2\pi)+\hat{R}, \label{hypothesis} \end{equation} up to regular terms $\hat R$, provided the CEP exhibits a maximum and minimum corresponding to the twisted and untwisted states. It is this finite size scaling hypothesis which we will examine in the subsequent analysis of this paper. We will find that the numerical evaluation of the CEP confirms the hypothesis. \section{The Multicanonical Ensemble Simulation} By the use of standard Monte Carlo algorithms it would be very difficult to sample values of $\bar\Phi$ close to zero if the value of the nearest neighbor coupling $\beta$ is larger than about $1.6$ on lattices of reasonable linear size $L$. To overcome this difficulty, we modify the importance sampling by introducing a Multicanonical-weight factor into the Monte Carlo sampling process \cite{Berg_Neuhaus}. We remark that the idea of Multicanonical Ensemble simulations consists in modifying the Boltzmann-weight for the purpose of improving the actual Monte Carlo sampling process. The modification is however done in such a way that the effect of the Multicanonical-weight can be removed in a well controlled way. Thus in the end the CEP can be determined in the canonical ensemble of the theory.
In the case of the CEP the Multicanonical-weight factor is chosen to be a function of the mean field $\bar\Phi$ evaluated on each single configuration and will be denoted $W_{mc}(\bar\Phi)$. It is in principle defined on the real interval $[0,1]$, and it is sensible to split the Multicanonical-weight into a part which takes care of the degeneracy with respect to $O(3)$ rotations and a yet unknown contribution $\hat W_{mc}(\bar\Phi)$. Thus for the purpose of the Monte Carlo simulation we consider the Multicanonical-weight \begin{equation} W_{mc}({\bar\Phi})=-2~\ln{\bar\Phi}+{\hat W}_{mc}({\bar\Phi}) \label{the_weights_1} \end{equation} and \begin{equation} e^{-S+W_{mc}(\bar\Phi)} \label{the_weights_2} \end{equation} to be the Boltzmann-factor which generates the Markov process. In order to obtain a representation of the Multicanonical-weight $\hat{W}_{mc}$ which is easily calculable in a computer, we choose $\hat W_{mc}$ to be a polygon, which on a finite set of $i=1,\dots,m$ intervals $I_i:\bar\Phi_{i}\le\bar\Phi<\bar\Phi_{i+1}$ is characterized on each interval by two parameters $g_i$ and $h_i$: \begin{equation} \hat{W}_{mc} (\bar\Phi)= g_i+h_i \bar\Phi. \end{equation} One may view the parameters $h_i$ as magnetic sources, which are applied on piecewise intervals of the operator $\bar\Phi$ to the theory. In one of our earlier publications we therefore named the resulting ensemble the Multimagnetical Ensemble \cite{OurIsing}. We found it sufficient in the actual numerical simulations to first determine a value $\bar\Phi_{max}$ in a standard simulation. Here $\bar\Phi_{max}$ is chosen in such a way that all the relevant structure of the CEP, namely the location of its minimum and the states at $\bar\Phi=0$, is included in the considered $\bar\Phi$-interval. It means that we will only evaluate the shape of the CEP for values $0\le\bar\Phi\le\bar\Phi_{max}$. We then choose, for reasons of simplicity, an equidistant partition of the considered $\bar\Phi$-interval into $m=25$ or $m=40$ different intervals $I_i$, which in the actual simulations turned out to be a sufficiently fine polygon approximation to the Multicanonical-weight $\hat{W}_{mc}$. Multicanonical Ensemble simulations are defined by the requirement that the considered operator exhibits an almost constant, ideally a constant, probability distribution in the simulation. If in the canonical ensemble the density of states function with mean field $\bar\Phi$ is denoted $n(\bar\Phi)$, then the ideal Multicanonical-weight factor is \begin{equation} W_{mc}({\bar\Phi})={-\ln~n(\bar\Phi)}. \end{equation} Though trivial, we note that a priori the Multicanonical-weight is not known. Less trivially, we remark that it is possible to obtain an estimate of the weight factor from the Monte Carlo simulation. In the vicinity of a given value of the operator $\bar\Phi$ one may sample the phase space in an attempt to estimate the density of states. In practice we perform for each of the considered intervals $I_i$ a separate simulation with $\bar\Phi$-values constrained to the given interval. In the simulation each parameter $h_i$ is then determined under the requirement that the probability distribution of the mean field $\bar\Phi$ approximates the Multicanonical distribution at its best. The full set of all parameters $h_i$ serves the purpose of the construction of the complete Multicanonical-weight according to eq.(\ref{the_weights_1}) and eq.(\ref{the_weights_2}).
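For concreteness, the polygon weight of eq.(\ref{the_weights_1}) can be sketched as follows (a minimal Python sketch, illustrative rather than the production code; it includes a choice of the offsets $g_i$ that renders the polygon continuous, cf. the discussion below):
\begin{verbatim}
# Illustrative sketch (not the production code): evaluate the polygon
# Multicanonical weight W_mc(Phi) = -2 ln(Phi) + g_i + h_i*Phi on an
# equidistant partition of [0, Phi_max] into m intervals.
import numpy as np

def continuous_offsets(h, phi_max, g1=0.0):
    # fix the g_i such that the polygon is continuous; g_1 stays a free
    # overall constant (the normalization freedom of the path integral)
    m = len(h)
    edges = np.linspace(0.0, phi_max, m + 1)[1:-1]   # interior edges
    g = np.empty(m)
    g[0] = g1
    for i in range(m - 1):
        g[i + 1] = g[i] + (h[i] - h[i + 1]) * edges[i]
    return g

def w_mc(phi, h, g, phi_max, eps=1e-12):
    i = min(int(phi / phi_max * len(h)), len(h) - 1)  # interval index
    return -2.0 * np.log(max(phi, eps)) + g[i] + h[i] * phi
\end{verbatim}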
Clearly, for given $h_i$ the parameters $g_i$ can be chosen in such a way that the resulting polygon is a continuous function of $\bar\Phi$. We note that one parameter value, e.g. $g_1$, is left undetermined by this procedure. This again corresponds to the overall normalization freedom of the path integral. In our actual simulations we have found that this simple and robust procedure results in acceptable Multicanonical-weights and distribution functions. Once the Multicanonical-weight factor has been determined, we implement one sweep of the Monte Carlo sampling by a combination of a Swendsen-Wang reflection cluster update \cite{SweWa,Wolff_c}, followed by a subsequent accept-reject decision, which then depends on $W_{mc}$ alone. It means that we define the cluster degrees of freedom from the kinetic term of the action, and that we replace the usual equal probability rule of the cluster update by a $4$-hit Metropolis accept-reject decision in all the cluster degrees of freedom, depending on the magnetic properties of the system. At this value for the number of hits we obtained average acceptance rates for moves of the system of about one half for almost all the considered $\beta$ and $L$ values. In order to monitor the performance of this update procedure, we divided the considered $\bar\Phi$-interval into $8$ (arbitrarily chosen) equally spaced bins. We measure the average flip-autocorrelation time $\tau_{f}$ for a transition of the system from the first to the eighth bin, or vice versa. While due to the partitioning some arbitrariness in the definition of $\tau_f$ is induced, we nevertheless expect that it will exhibit the basic aspects of the Monte Carlo time dynamics. Fig. 1) contains data for the quantity $\tau_{f}$ obtained on a $L=36$ lattice for $\beta$-values in between $1.6$ and $3.0$ (circles), and for a fixed $\beta$-value $\beta=2.4$ on lattice sizes ranging in between $L=20$ and $L=68$ (triangles). One notices a typical time scale of several thousand sweeps, which in general is characteristic for our simulation. In detail we observe that at a given $\beta$-value the flip-autocorrelation time stays almost constant, or even decreases, with increasing lattice sizes. We attribute this favorable property to the nonlocal nature of the algorithm. We also observe on the given lattice a rapid increase of the flip-autocorrelation time with increasing $\beta$. Such an increase is expected, as the simulational complexity of the theory will rise in the proximity of the asymptotic freedom fixed point. In the Multicanonical simulation we measure the Multicanonical probability distribution $P_{mc}(\bar\Phi)$ of the mean field $\bar\Phi$. It is related to the probability distribution in the canonical ensemble $P(\bar\Phi)$ by a simple reweighting step \begin{equation} P(\bar\Phi) \propto P_{mc}(\bar\Phi)e^{-{W}_{mc}(\bar\Phi)} \end{equation} and thus with the help of eq.({\ref{P_can}}) the CEP can be determined in the Multicanonical simulations. Throughout our simulations we have fixed the minimum value of the CEP to zero. In addition we perform a bias corrected jackknife error analysis for all the measured quantities. The typical statistics accumulated for each data point in our simulation was in between $1$ and $5$ megasweeps, depending on the considered $\beta$-value.
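As an illustration of the reweighting step above, a minimal sketch (Python; ours, with illustrative binning) that recovers the CEP from a Multicanonical histogram via eq.(\ref{P_can}) might read:
\begin{verbatim}
# Illustrative sketch: reconstruct the CEP U(Phi) from mean-field values
# sampled in the Multicanonical run, reweighting with exp(-W_mc) and
# dividing out the phase space factor Phi^2.
import numpy as np

def cep_from_muca(phi_samples, w_mc, bins=100):
    # phi_samples: measured mean-field values; w_mc: callable W_mc(Phi)
    hist, edges = np.histogram(phi_samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = np.array([w_mc(p) for p in centers])
    with np.errstate(divide="ignore"):
        log_p = np.log(hist) - w             # ln P(Phi) up to a constant
    u = -(log_p - 2.0 * np.log(centers))     # P(Phi) ~ Phi^2 exp(-U)
    good = np.isfinite(u)
    return centers[good], u[good] - u[good].min()  # fix U_min = 0
\end{verbatim}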
\section{Numerical Analysis and Results} The mass-gap correlation length $\xi_0$ of the $D=2$ $O(3)$ nonlinear sigma model has been calculated in previous studies at not too large values of $\beta$, and we quote here the results of Wolff \cite{Wolff}: at the $\beta$-values $\beta=1.6,1.7,1.8$ and $1.9$, $\xi_0/a$ takes the values $19.07(06),34.57(07),64.78(15)$ and $121.2(6)$, respectively. In this paper we study small systems of sizes ranging from $L=8$ up to $L=82$. For $\beta$-values larger than $\beta=1.6$ the mass-gap correlation length $\xi_0$ is comparable with the considered lattice sizes. Accordingly it is expected that the stiffness correlation length $\xi_s$ exceeds the linear size of the systems, and the onset of Fisher scaling may be expected. In a first set of simulations we have studied the $L$-dependence of the CEP at twelve values of the nearest neighbor coupling, namely at the $\beta$-values $\beta=1.6,1.7,1.8,1.9,2.0,2.1,2.2,2.3,2.4$ and at the $\beta$-values $\beta=2.6,2.8$ and $\beta=3.0$. In a second set of simulations the $\beta$-dependence of the CEP was determined on a $L=36$ lattice at $\beta$-values ranging from $\beta=1.55$ up to $\beta=3.1$. In total we have accumulated $468$ pairs of couplings $\beta$ and lattice sizes $L$ within our simulation. It is important to check our basic assumption that states at mean field $\bar\Phi=0$ carry a twist angle of $\Theta=2\pi$. For this purpose we use a special graphical representation of the field configuration, which is exhibited in Fig. 2a) and Fig. 2b). On the given configuration we apply in a first step a global rotation which sets the arbitrarily chosen $3$-component of the field $\Phi^\alpha$ to zero. We then map the $1$ and $2$ components of the fields along paths in the lattice onto the complex plane. In this mapping we connect fields which are neighbors within the path by a line. Fig. 2a) contains all such mappings which are obtained by considering all $L$ paths along the main axis of the lattice in the 1-direction. Fig. 2b) contains the analogous graphs for paths along the main axis in the 2-direction. The considered lattice size is $L=36$ and the nearest neighbor coupling is $\beta=8$, far in the perturbative region. The value of $\bar\Phi$ for the considered configuration was $\bar\Phi=0.003904$, which is very close to zero. One clearly observes the winding angle of $2\pi$ in the latter case, i.e., for all paths on the main axis of the lattice in the 2-direction, which is clear evidence in favor of the above assumption. It is noteworthy to remark that fluctuations around the classical state eq.(\ref{the_classical_state}) are sizable even for such a large $\beta$-value. They increase as $\beta$ is lowered and decrease as $\beta$ is enlarged. In addition we also mention that states at $\bar\Phi=0$ apparently break the cubic group lattice symmetry, i.e., in the configuration considered here the twist appears in the 2-direction of the lattice. We have checked that statistically independent configurations at $\bar\Phi=0$ assume the possible different twist directions with equal probability. From a theoretical point of view it must be expected that this additional degree of freedom, as compared to a theory with a fixed direction of the twist, contributes to the unknown regular contributions of the finite size scaling relation eq.(\ref{hypothesis}).
In Fig. 3a) and Fig. 3b) we display the CEP, as obtained by our simulations on lattice sizes $L=24,36$ and $L=70$ and at values of $\beta=1.6$ and $\beta=2.4$. At $\beta=1.6$ we observe a crossover from a situation where, for the smaller $L$-values, the CEP for states at $\bar\Phi=0$ exhibits a barrier $\Delta U > 0$, to a situation at the largest $L$-value where the barrier vanishes. Thus the decrease of the spin stiffness with increasing $L$ is observed. This is attributed to the quantum fluctuations around the classical state with twist angle $\Theta=2\pi$. We may remark that in simulations of ferromagnets, namely the $D=3$ $O(3)$ sigma model in its symmetry broken phase, we have witnessed an increase of the potential barrier with increasing $L$. At $\beta=2.4$ the mass-gap correlation length and $\xi_s$ exceed the linear lattice size by a large factor. Correspondingly a crossover cannot be observed and $\Delta U \approx 20$ is finite on the considered range of lattice sizes. Again its decrease with increasing $L$ is witnessed. The large value of $\Delta U$ corresponds to an exponentially large suppression of states with mean field $\bar\Phi=0$, even if the phase space factor of eq.(\ref{P_can}) is divided out from the probability distributions. In fact, a simulation of the $D=2$ $O(3)$ nonlinear sigma model at $\beta=2.4$ on small volumes resembles, as far as order parameter orientation is concerned, at first sight the behavior of a ferromagnetic system. Only on lattices with linear extent comparable to and larger than $\xi_s$ will the system approach its true vacuum state. From the shape of the CEP we have carefully determined the quantity $\Delta U=U(\bar\Phi=0)-U(\bar\Phi_{min})$. For this purpose we employ fits to the shape of the CEP in the vicinity of the minimum at $\bar\Phi_{min}$ and in the vicinity of $\bar\Phi=0$. In the vicinity of $\bar\Phi=0$, and for the determination of $U(\bar\Phi=0)$, we describe the CEP by a parabola. Such a functional form can be expected on the basis of a classical argument, \begin{equation} U= U(\bar\Phi=0)+\alpha_1{\bar\Phi}^2, \end{equation} and is consistent with the data. We use it in a fit range of $\bar\Phi$-values close to $\bar\Phi=0$, for which the corresponding $\chi^2_{d.o.f}$-values of the fits are smaller than unity. Inspecting the shape of the CEP in the vicinity of its minimum we note an apparent asymmetric behavior, and therefore we use the form \begin{equation} U= U(\bar\Phi_{min}) +\beta_1 (\bar\Phi-\bar\Phi_{min})^2 +\beta_2 (\bar\Phi-\bar\Phi_{min})^3 \end{equation} for its analytic description and the determination of $U(\bar\Phi_{min})$. Analogous remarks as in the previous case on the fit-intervals and the $\chi^2_{d.o.f.}$-values apply. We expect that discretization effects of the lattice theory, as compared to the continuum, lead to correction terms to the spin stiffness. The leading contributions are of the order $1/a^2$. On the lattice and in the classical approximation we calculate the action difference $\Delta S_{0,latt}$ of a field configuration with twist $\Theta=2 \pi$ relative to untwisted fields. It has the expansion \begin{equation} \Delta S_{0,latt}= \Delta S_{0,c}[\rho(L)+{\cal O}({1\over a^4})], \end{equation} with $\rho(L)$ given by \begin{equation} \rho(L)=1-C_g{\pi^2\over{3(La)^2}}. \end{equation} Classically the constant $C_g$ is equal to unity, and $\Delta S_{0,c}=2\pi^2\beta$ denotes the action difference of the Bloch wall in the continuum.
We will assume here that the possible form of the $1/a^2$-corrections is already determined on the classical level. Correspondingly we assume in the fully fluctuating theory that additional contributions can be accounted for by a nonunity value of the parameter $C_g$. The potential barrier in the continuum $\Delta U_c$ is then related to $\Delta U$ evaluated on the lattice via the relation \begin{equation} \Delta U_c= \rho(L)^{-1} \Delta U, \label{ctogrid} \end{equation} which then allows the comparison of the lattice data with the continuum scaling form. We have analyzed the finite size scaling of almost all of our $\Delta U$ data by means of one $\chi^2$-fit with the form \begin{equation} \Delta U=[1-C_g{\pi^2\over{3(La)^2}}][\pi \ln(\xi_s/L) + A \ln\ln(\xi_s/L) + \tilde{R}]. \label{final_fit_formula} \end{equation} Aiming at a precise determination of the mass-gap correlation length $\xi_0$, we implement our knowledge about the perturbative limits of the spin stiffness $\rho$ and the spin stiffness correlation length $\xi_s$. We constrain certain parameters and adopt the following scheme: 1) In accord with theoretical expectations the prefactor of the single logarithmic scaling term $\propto \ln(\xi_s/L)$ of the continuum free energy is fixed to its exact value $\pi$. This is expected on the basis of our previous discussion of the spin twist and the spin stiffness. We leave the prefactor $A$ of the double logarithmic scaling term a free parameter. It may be tolerated that some of the omitted ${\cal O}(1)$ higher order terms of the loop expansion in eq.(\ref{final_fit_formula}) are represented effectively by a value of $A$ which differs from $\pi$. 2) The quantity $\partial_{\beta}\ln\xi_s(\beta)$ is expanded in inverse powers of the nearest neighbor coupling parameter $\beta$. Thus \begin{equation} \partial_{\beta}\ln\xi_s(\beta)=2\pi-{1\over\beta}+{0.091\over\beta^2} + \sum_{k=3}^{6}\gamma_k {1\over\beta^k} \label{fit_ln_xis} \end{equation} represents an analytic form which incorporates the known 3-loop behavior of the mass-gap correlation length. It is valid if, in the asymptotic scaling region, $\xi_0$ and $\xi_s$ follow the same $\beta$ function. Without loss of generality we have included 4 free parameters $\gamma_k$ for $k=3,...,6$. These additional degrees of freedom allow for deviations from perturbative scaling in the small $\beta$-value region, where the dip in the $\Delta \beta (\beta)$ function is expected. 3) Given a start value for the stiffness correlation length at a value of the nearest neighbor coupling $\beta^*$ in the perturbative region of the theory, $\xi_s$ can be integrated to smaller values of $\beta$ via \begin{equation} \ln\xi_s(\beta)=\int_{\beta^{*}}^{\beta} d\beta'~ \partial_{\beta'} \ln \xi_s(\beta') + \ln \xi_s(\beta^{*}), \end{equation} if within the fit the quantity $\partial_{\beta}\ln\xi_s(\beta)$ is represented by its expansion eq.(\ref{fit_ln_xis}); a numerical sketch of this integration is given after the fitting scheme below. We exploit the recent measurements of the spin stiffness correlation length in the asymptotic scaling region \cite{Alain} and fix the integration constant $\ln\xi_s(\beta^*)$ by choosing $\beta^*=3$ and $\xi_s(\beta^*)=9.47\xi_0^{theor}(\beta^*)$. Note that at $\beta^*=3$, $\xi_s$ has a value of $934051a$. Any significant deviation of $\xi_s$ from its measured value in the asymptotic scaling region would appear unreasonable to us at such large values of the correlation length.
4) We also express the functional form of the quantity $C_g(\beta)$ through an expansion in inverse powers of $\beta$, \begin{equation} C_g(\beta)=1 + \sum_{k=1}^{4}\delta_k {1\over\beta^k}, \label{fit_c_g} \end{equation} upon introduction of 4 free parameters $\delta_k$ with $k=1,...,4$, parameterizing the departure of the $1/a^2$ corrections from the classical result. At infinite value of $\beta$ the classical result is recovered. 5) The onset of finite size scaling is expected for large values of the control parameter $x_s \gg 1$. Into the actual fit we include data from the $\beta$-interval $1.65 \le \beta \le 3.1$, leaving us with $428$ independent data points. For these data and for the considered lattice sizes our final fit only includes data whose control parameters obey the inequality $x_s>4.7$. Such large values of the control parameter $x_s$ hopefully put us in a region of the theory where we can reliably trust our finite size scaling ansatz eq.(\ref{final_fit_formula}). The resulting $10$ parameter $\chi^2$-fit for the parameters $A,\tilde{R},\gamma_k$ and $\delta_k$ with $k=1,...,4$ has been executed with the double precision version of the CERN library MINUIT program. The error analysis for the parameter values and for all other derived quantities of this paper has been performed by a repeated execution of the fit on several data sets. In each data set any datum in the fit is distributed Gaussian around its mean with a variance corresponding to its error. The fit has an excellent $\chi^2_{d.o.f}$-value of $\chi^2_{d.o.f}=0.75$ and is displayed in Fig. 4) and Fig. 5) in comparison with the measured $\Delta U$ values. The fit, as denoted by the lines in the figures, captures the $\beta$-dependence of the data in Fig. 4) as well as the $L$-dependence displayed in Fig. 5). As expected by the theoretical reasoning, the fit supports a negative regular contribution to the scaling law eq.(\ref{final_fit_formula}), $\tilde{R}=-3.45(20)$. It can be compared with the mentioned result of the small-$\Theta$ expansion of ${\cal O}(\Theta^4)$ on boxes with fixed boundary conditions at a formal value of the twist angle $2\pi$, which is $-5.002$. However there is no theoretical argument that both numbers should be identical; rather they should differ by contributions induced by the different boundary conditions. We also find a positive coefficient $A=1.79(7)$ attached to the double logarithmic scaling terms. This value differs from the expected value $\pi$, which we attribute to the subleading nature of the double logarithmic terms. In Fig. 6) the fit result for the constant $C_g$ is given. Finite grid effects are largest for $\beta$-values around $\beta=1.9$ and there exceed the classical value of the constant $C_g$ by a factor of about $3$. With increasing $\beta$ a very slow approach to the classical result is witnessed. In Fig. 7) the logarithm of the stiffness correlation length as well as its $\beta$-derivative are displayed. Note again that we have fixed $\xi_s$ at $\beta^{*}=3$ to the measured perturbative result, the filled circle in the figure. The calculated $\beta$-derivative of $\ln\xi_s$, as displayed by the solid curve in the inset of Fig. 7), exhibits a very clear ``peak-like'' deviation from 3-loop asymptotic perturbative scaling, the dashed curve in the inset of Fig. 7). The peak corresponds to the dip in the $\Delta \beta(\beta)$-shift as noticed in earlier studies of the sigma model. It is located at a $\beta$-value of about $\beta \approx 1.75$.
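As referenced in item 3) above, a minimal numerical sketch (Python; the $\gamma_k$ below are placeholders, the actual values being fit results) of the integration of $\partial_\beta \ln\xi_s$ and of the resulting $\Delta\beta(\beta)$-shift could read:
\begin{verbatim}
# Illustrative sketch: integrate d(ln xi_s)/d(beta) of the expansion in
# the fitting scheme downward from the anchor beta* = 3, where
# xi_s = 934051 a, and solve for the shift that halves xi_s.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

gamma = [0.0, 0.0, 0.0, 0.0]      # placeholder gamma_3 ... gamma_6

def dlnxi(beta):
    v = 2 * np.pi - 1 / beta + 0.091 / beta**2
    return v + sum(g / beta**(k + 3) for k, g in enumerate(gamma))

def ln_xi_s(beta, beta_star=3.0, ln_xi_star=np.log(934051.0)):
    return ln_xi_star + quad(dlnxi, beta_star, beta)[0]

def delta_beta(beta):
    # beta-shift with xi_s(beta - delta) = xi_s(beta) / 2
    target = ln_xi_s(beta) - np.log(2.0)
    return beta - brentq(lambda b: ln_xi_s(b) - target, beta - 1.0, beta)

print(delta_beta(3.0))   # close to ln(2)/(2 pi) deep in the scaling region
\end{verbatim}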
Notably our data demonstrate that deviations from 3-loop asymptotic scaling are present up to $\beta$-values of about $\beta=2.5$, while for all larger values of $\beta$ asymptotic perturbative scaling is realized within the numerical precision. This finding is a genuine outcome of the present analysis and also applies to the $\Delta \beta(\beta)$-shift corresponding to the decrease of the stiffness correlation length by a factor $2$. The $\Delta \beta(\beta)$-shift is displayed in Fig. 8) for $\beta$-values larger than $\beta=1.8$, the solid curve in the figure. It is compared with the 3-loop asymptotic perturbative scaling result, the dashed curve in the figure, and with results from Hasenfratz et al. \cite{HaNi2}. Table 1) collects a few entries for the shift at selected values of $\beta$. We thus observe a somewhat slower approach to asymptotic scaling than was previously anticipated in the numerical simulations. A similar observation has already been reported earlier \cite{my_muca_o3}. The calculation of the mass-gap of the theory in units of the 3-loop lattice cut-off can be attempted under the nontrivial assumption that the mass-gap as well as the stiffness correlation length share a common $\beta$-value region, in which their flow towards the asymptotically free fixed point with varying couplings is governed by a universal $\Delta \beta(\beta)$-shift. In this scaling region the ratio $\xi_0/\xi_s$ stays constant. It is common belief that this region stretches beyond the region where asymptotic scaling is observed. Providing a start value of the mass-gap correlation length at a small value of $\beta$ outside the perturbative scaling region, but within the so-called nonasymptotic scaling region, it may be feasible to integrate the mass-gap of the theory with the help of $\partial_\beta \ln\xi_s(\beta)$ up to a $\beta$-value $\beta^*$ in the asymptotic scaling region. The existence of such a universal behavior beyond the asymptotic regime is however by no means guaranteed. Referring to the results of the preceding paragraph we are confident now that the choice $\beta^*=3$ moves us into the asymptotic scaling region. For the start value of the mass-gap correlation length we choose one of Wolff's numbers, \begin{equation} \xi_0(\beta=1.8)=64.78(15)a. \end{equation} This result has been obtained on a quite sizable $512 \times 512$-lattice with the help of the cluster algorithm and represents one of the most reliable mass-gap correlation length measurements at large values of the correlation length. It is consistent with another work \cite{HaNi2}. In Fig. 9) we display the integration of the mass-gap in units of $\Lambda_{latt}$ from its start value at $\beta=1.8$ up to the largest considered $\beta$-values, the solid curve in the figure. We obtain a determination of the mass-gap of the theory in units of the three-loop lattice cut-off parameter \begin{equation} {m_0}=79.62 \pm 1.92~\Lambda_{latt}. \end{equation} The quoted error is of statistical nature. This result agrees with the analytical result $m_0=80.0864 \Lambda_{latt}$ derived from the thermodynamic Bethe Ansatz. \section{Conclusion} The calculation of the mass-gap in the asymptotically free $D=2$ nonlinear sigma model from the numerical evaluation of the path integral poses a nontrivial problem. In the standard formulation of the theory asymptotic scaling is only exhibited for values of the nearest neighbor coupling $\beta>2.5$. Direct correlation function measurements of the mass-gap are ruled out in this regime.
A theoretical device is needed in order to connect the small-$\beta$ region of the theory with the perturbative regime. In this paper we argue that the consideration of twisted spin configurations, in conjunction with a finite size scaling analysis, is able to bridge the gap between the small and large $\beta$-regions of the theory. It is possible to determine the coupling parameter flow of the spin stiffness correlation length in a $\beta$-interval which connects both regions. Focussing on the theory's spin stiffness allows us in addition to adjust certain parameters of the finite size scaling law to their perturbative values. These are obtained either in the numerical simulation or can be given by theoretical arguments. The measured asymptotic value of the spin stiffness correlation length serves as an integration constant for the integration of $\xi_s$ towards lower $\beta$-values, while the spin stiffness constant $\rho_s$ adjusts the amplitude of the single logarithmic scaling terms of the considered scaling law. This facilitates the precise extraction of the $\Delta \beta (\beta)$-shift corresponding to the nearest neighbor coupling dependence of the stiffness correlation length. Our analysis strongly supports the existence of a scaling window starting at $\beta$-values of about $1.8$ and extending into the perturbative scaling regime. Within this region the mass-gap and stiffness correlation length are presumed to flow in coupling parameter space under the rule of a universal $\Delta \beta (\beta)$-shift. The integration of the shift in conjunction with a start value for the mass-gap confirms the known theoretical mass-gap result. During the course of our study we have not found any indication in favor of the existence of a phase with vanishing mass-gap in the sigma model. It would be interesting to apply the presented method to twisted gauge field configurations and calculate the mass-gap of lattice QCD. {\bf Acknowledgments:} The author would like to thank A. Billoire for helpful comments and M. G\"ockeler for many discussions on the constraint effective potential. \hfill\break \vfill\eject
\section{NGC 3079} NGC 3079 is one of the nearby ($\sim16$ Mpc) edge-on spiral galaxies that have been studied extensively at different wavelengths. It is classified as a Seyfert 2 \citep{Ford1986} and shows evidence of starburst activity \citep[e.g.,][]{Veilleux1994}. The galaxy is known for its several unique features, including an ionized gas outflow in the form of a ``super-bubble'' \citep{Cecil2001}. The outflows seem to originate from the nucleus and reach velocities up to $\sim 1500$ km s$^{-1}$ \citep{Cecil2001}. However, the exact origin of this ``super-bubble'' feature is still ambiguous due to the difficulty of distinguishing the role of the starburst from that of the AGN. Chandra X-ray observations also show a clear correspondence of X-ray filaments with H$\alpha$ filaments extending $\sim 65$ pc from the nucleus \citep{Cecil2002}. A nuclear jet outflow has also been observed in the radio continuum as two kiloparsec-scale radio lobes extending on both sides of the major axis of the galaxy \citep{Duric1988, Baan1995}. These radio lobes are believed to be associated with the nuclear activity of the galaxy.\\ It is interesting to note that NGC 3079 is found in a small group with two known small companion galaxies: NGC 3073 and MCG 9-17-9. The optical appearance of both companions does not show any disturbance; in the radio, however, NGC 3073 seems to be affected by the nuclear activity of NGC 3079 \citep{Irwin1987}. Due to the proximity of NGC 3079, all these interesting features can be studied in great detail. \section{The Complex HI Structure of NGC 3079 } Neutral hydrogen was detected in both emission and absorption. The HI emission from NGC 3079 appears to be more complex than thought in previous studies, with a much more extended and highly asymmetric distribution (see Fig. \ref{f:totHI}). The HI disk also shows an asymmetric warp in the outer regions. A large stream of gas encircling the entire galaxy was discovered, forming an HI bridge between NGC 3079 and MCG 9-17-9, which indicates interaction between the two galaxies. A new companion galaxy was also discovered taking part in the streaming gas around the galaxy. \begin{figure}[h] \centering \includegraphics[width=7cm,height=7cm,angle=-90]{shafinf1.eps} \caption{\footnotesize Total HI distribution of NGC 3079 with HI column density contours of 0.01, 0.1, 0.3, 0.5, 1, 3, 5, 10, 30 and 50 $\times 10^{20}$ cm$^{-2}$. The HI distribution of the companion galaxies, including four new companions, is also shown here.}\label{f:totHI} \end{figure} \subsection{ Are MCG 9-17-9 and NGC 3073 being Ram Pressure Stripped? } MCG 9-17-9: This galaxy appears to be interacting with NGC 3079. The HI bridge between the two galaxies is evidence for this (see Fig. \ref{f:interaction}), and it appears to be a gas tail that has been stripped off MCG 9-17-9 during the interaction. The X-ray halo of NGC 3079 is strong enough to be responsible for stripping gas off MCG 9-17-9.\\ NGC 3073: Interestingly enough, an extended HI tail with significant curvature was seen in NGC 3073 that is much longer than seen in previous observations (see Fig. \ref{f:totHI}). As suggested by \cite{Irwin1987}, the HI tail of this galaxy can be explained by ram pressure due to the super-wind from NGC 3079. While the wind ram pressure seems to be responsible for the ``cometary'' appearance of NGC 3073, we have seen a drop in the surface brightness of NGC 3073 along the tail that cannot be explained by a steady wind or constant density.
Perhaps temporal changes in the wind properties are responsible for this drop. \begin{figure}[htp] \centering \includegraphics[width=8cm,height=6cm]{shafinf2.eps} \caption{\footnotesize Position-velocity slice taken along PA=$115^{\circ}$ showing the HI bridge between MCG 9-17-9 and NGC 3079 to the right. The ``third tail'' feature is also shown to the left. Contours are in steps of 2$\sigma$.}\label{f:interaction} \end{figure} \subsection{A Third Tail?} Another interesting (although tentative) result obtained from our data is the presence of a third galaxy that seems to be taking part in the interaction with NGC 3079. HI gas has been detected at the position of the optical galaxy SDSS J100311.18+553557.6 (no previous redshift was available) at a velocity of 1050 km s$^{-1}$. This HI gas exhibits a tail-like structure forming a bridge toward NGC 3079 (see Fig. \ref{f:interaction}). \section{A fast outflow of neutral hydrogen in NGC 3079? } Our data show strong and broad (over 600 km s$^{-1}$) HI absorption against the nuclear region of NGC 3079. Since the absorption is unresolved at the highest resolution of our observation, one can only tell that it originates from gas located in front of the inner 1 kpc of the radio continuum. The HI absorption spectra show a multiple-component structure with a mixture of broad and narrow components. The deep part of the absorption covers the velocity range of the observed HI emission associated with the rotating disk (see Fig. \ref{f:absence}). This component has been studied in detail by \cite{Baan1995} and \cite{Pedlar1996}. In addition to this already known component, we report the tentative detection of a highly blueshifted, faint component of the absorption that reaches velocities up to 400 km s$^{-1}$ with respect to the systemic velocity. This absorption is much broader than the rotation velocities observed in the galaxy and therefore it is likely associated with fast non-circular motions. Blueshifted absorption has to be associated with an outflow. Fast outflows of neutral gas have already been detected in other radio galaxies \citep[e.g.,][]{Morganti2003}. In other radio sources where such fast cold gas has been detected, the most likely origin for the outflow is considered to be the interaction of the jet with a dense interstellar medium that surrounds the radio source. In the case of NGC 3079, however, it is more complicated to identify the origin of this fast outflow. Both a starburst component and the AGN could be responsible. \begin{figure}[ht] \centering \begin{minipage}[b]{0.3\linewidth} \subfigure[]{ \includegraphics[width=7cm,height=4.5cm]{shafin3f1.eps} \label{f:cont} } \end{minipage} \subfigure[]{ \includegraphics[width=7cm,height=4.5cm]{shafin3f2.eps} \label{f:abspv} } \subfigure[]{ \includegraphics[width=4.5cm,height=4cm]{shafin3f3.eps} \label{f:absspec} } \subfigure[]{ \includegraphics[width=3.5cm,height=3.5cm]{shafin3f4.eps} \label{f:abszoom} } \caption{\footnotesize(a) Continuum map, obtained after applying the ``peeling'' technique, overlaid on a DSS2 image. The lowest contour level is 0.2 mJy beam$^{-1}$. An extensive continuum halo and several extensions are visible. (b) Position-velocity slice taken along the major axis. Gray contours are in steps of $-2\sigma$. (c) The HI absorption spectrum detected against the central region of NGC 3079. (d) A zoom-in of the HI absorption optical depth.}\label{f:abs} \end{figure} \acknowledgements
N. S. acknowledges financial support from the South African Square Kilometer Array (SA SKA) project and the National Research Foundation (NRF).
\section{Introduction} What do a Wi-Fi router, a Bluetooth speaker, and a leaky microwave oven have in common? They all broadcast wireless signals in the same frequency range. Modern wireless technologies ensure reliable communication by employing techniques such as subcarrier frequency hopping \cite{Torrieri2018} or cyclic prefixing \cite{tse_book}. Yet, powerful interference, such as a leaky microwave oven operating at a nearby frequency, may disrupt communication between devices, e.g., by rendering a substantial portion of the subcarrier frequencies unusable. Under such adversarial conditions, although the received signal strength indicator (RSSI) may alert the receiver about channel anomalies \cite{wu_rssi}, the retrieved data that is overpowered by interference is practically lost. The purpose of this work is to add resilience against such jamming events. We consider a channel model in which a powerful jammer impacts the transmitted data randomly, at the bit level. In this case, the individual non-jammed bits are subject to the additive white Gaussian noise (AWGN) of the channel, and the jammed bits are impacted by a more extreme, additive noise. We aim to develop an error correction algorithm that provides data recovery capabilities under these adversarial conditions. We propose doing so in conjunction with Guessing Random Additive Noise Decoding (GRAND), a recently proposed error correction decoding algorithm that can work with any codebook \cite{duffy2019grand}. Besides the original hard-information GRAND algorithm, soft-information-based variants are also available \cite{solomon2020sgrand, duffy2022srgrand, duffy2021orbgrand}. Among them, the Ordered Reliability Bits GRAND (ORBGRAND) \cite{duffy2022orbgrand} lends itself to practical hardware implementations while maintaining near maximum-likelihood (ML) decoding performance \cite{abbas2022orbgrand}. In this work, we propose a jamming-resilient algorithm based on the GRAND algorithm and its variants. Our method seeks to perform error correction on the non-jammed bits of the received frame, and erasure correction on the jammed bits. It does so by: \begin{enumerate} \item Identifying jammed bits through RSSI observation; \item Performing error correction on the non-jammed bits. As jammed bits occur randomly in any part of the received codeword, a challenge is to error-correct a partial code that changes on each communication. This challenge requires a universal decoding approach, for which we use hard- and soft-information variants of the GRAND algorithm; \item Having error-corrected the non-jammed bits, determining the values of the jammed bits through Gaussian elimination. \end{enumerate} We achieve this capability by empowering the syndrome check function of any GRAND-based algorithm with the ability to restore the erased bits in a received frame. This upgraded syndrome check method is called \textit{Erasure Decoding by Gaussian Elimination}, or EDGE in short. In general, most existing error-and-erasure decoding (EED) methods are based on specific decoding schemes. Some EED schemes are designed to retrieve corrupted \emph{frames} rather than \emph{bits} in the presence of erasures. This includes schemes such as random linear network coding (RLNC) \cite{koetter2003algebraic,katti2008xors}, product/staircase codes \cite{lukas2021} and fountain codes \cite{fountain2022}. On the other hand, our proposed EDGE method operates at the bit level, and can be used with \emph{any} linear code.
We introduce two variants of the EDGE subroutine: one with hard-information (GRAND-EDGE) and the other with soft-information (ORBGRAND-EDGE) decoding. Simulation results demonstrate that the EDGE subroutine improves both the block-error rate (BLER) performance and the computational complexity by up to five orders of magnitude under adversarial channel conditions. We also compare ORBGRAND-EDGE with the Ordered Statistics Decoding (OSD) algorithm \cite{osd1995}, which is also based on Gaussian elimination. We show that the proposed ORBGRAND-EDGE algorithm improves the BLER by up to three orders of magnitude while achieving lower complexity compared to OSD. The rest of the paper is organized as follows. The background on GRAND is detailed in Section~\ref{sec:bg}. In Section~\ref{sec:3}, the EDGE subroutine and its applications with the universal hard- and soft-decoding variants, GRAND-EDGE and ORBGRAND-EDGE, are presented. The benefits of the proposed universal error-and-erasure decoding are demonstrated via simulation results in Section~\ref{sec:res}, followed by concluding remarks in Section~\ref{sec:conc}. \section{The GRAND Algorithm}\label{sec:bg} Guessing Random Additive Noise Decoding (GRAND) \cite{duffy2019grand} is a recently introduced universal algorithm capable of decoding any code, using codebook membership checks. For a received (hard-information) frame vector $\mathbf{r}$, the membership (syndrome) check is performed as \begin{equation}\label{eq:pc} \mathbf{H}\cdot\mathbf{r^{\top}} \text{,} \end{equation} where $\mathbf{H}$ is the codebook-specific parity-check matrix of a linear code. Unlike traditional decoding algorithms, GRAND focuses on the noise component of the received frame. If (\ref{eq:pc}) is not equal to the all-zero vector ($\mathbf{0}$), then the received sequence $\mathbf{r}$ is not a member of the codebook due to noise-corrupted bits. The GRAND algorithm tests putative error sequences, represented by $\mathbf{e}$, in maximum-likelihood order. It subtracts each of them from $\mathbf{r}$ until it finds one that satisfies \begin{equation}\label{eq:pc2} \mathbf{H}\cdot(\mathbf{r} \oplus \mathbf{e})^{\top} = \mathbf{0} \text{,} \end{equation} where $\oplus$ is the modulo-2 sum operator. \begin{figure}[t] \includegraphics[width=\columnwidth]{./figures/grand_algo_v3.pdf} \caption{A component-level description of the GRAND algorithm family, with a detailed view of the syndrome generation process.} \label{fig:grand} \end{figure} Different orderings of the guessed noise sequences lead to different variants of GRAND. A high-level description of the GRAND algorithm family is depicted in Fig.~\ref{fig:grand}. Among the variants, Ordered Reliability Bits GRAND (ORBGRAND)~\cite{duffy2021orbgrand} is a soft-information decoding algorithm that orders the putative noise sequences based on the \textit{logistic weight} ($LW$) of their sorted LLR magnitudes, and is amenable to hardware implementation~\cite{abbas2022orbgrand}. \section{The GRAND-EDGE Algorithm}\label{sec:3} In this section, the channel model is characterized first. Then the EDGE subroutine is explained in detail, followed by a closer look at the Gaussian elimination process. Finally, the proposed GRAND-EDGE algorithm is described. \subsection{Channel Model}\label{sec:3:chn} \begin{figure}[t] \includegraphics[width=1.02\columnwidth]{./figures/chn_v3.pdf} \caption{The channel model.} \label{fig:chn} \end{figure} In this work, we consider an AWGN channel model that is randomly disrupted by a jammer, as depicted in Fig.~\ref{fig:chn}.
The additive channel noise instance, represented by $n$, is added to the modulated signal $x$ carried over a specific frequency. A powerful jammer instance $j$, activated with probability $\epsilon$, may be added to the transmitted signal. We assume that the jamming probability $\epsilon$ applies at the bit level rather than the frame level. Therefore, the received signal $y$ can be expressed as \begin{equation}\label{eqn:chn} y=\left\{ \begin{array}{@{}ll@{}} x + n + j & \text{with probability } \epsilon; \\ x + n & \text{otherwise.} \end{array}\right. \end{equation} The jammer is also modeled as AWGN, but with a variance that is far greater than that of the channel AWGN. Therefore, if the received signal magnitude is suspiciously stronger than expected, then the signal is assumed to be jammed and its value is invalidated. Frame-level jamming or erasures, such as lost frames due to undecodable preambles, can also be converted to bit-level erasures by simple interleaving techniques, such as in \cite{riaz_icc2022}. \subsection{The EDGE Subroutine}\label{sec:3:edge} \begin{figure}[t] \includegraphics[width=1.05\columnwidth]{./figures/separation_v3.pdf} \caption{Isolation of erasures ($\mathbf{r_e}$) from the received (hard-decision) codeword and corresponding parity-check columns ($\mathbf{H_e}$) from the H-matrix, followed by the calculation of the erasure syndrome ($\mathbf{s_e}$).} \label{fig:separate} \end{figure} Let $\mathbf{p} = \{0, 1, \cdots, N-1\}$ represent the total set of 0-indexed indices of the received vector of length $N$, and let $\mathbf{q}$ represent the set of imputed (erased) indices in the received vector, such that $\mathbf{q}$ has $e$ elements, $\mathbf{q} \subseteq \mathbf{p}$, and $q_i < q_{i+1}, \forall{i}$. As shown in Fig.~\ref{fig:separate}, given $\mathbf{q}$, let us split the received vector $\mathbf{r}$ and the parity-check matrix $\mathbf{H}$ as \begin{equation} \label{eq:split} \begin{split} \mathbf{r} & = \mathbf{r_e} \cup \mathbf{r_c} \text{,} \\ \mathbf{H} & = \mathbf{H_e} \cup \mathbf{H_c} \text{,} \end{split} \end{equation} where $\mathbf{r_e}$ and $\mathbf{r_c}$ represent the erased and non-erased subsets of $\mathbf{r}$ with sizes $e$ and $N-e$, respectively. Similarly, $\mathbf{H_e}$ is a $(N-k)\times e$ matrix that contains the columns of $\mathbf{H}$ which correspond to the erased bits, and $\mathbf{H_c}$ contains the remaining columns. Note that the original sets in (\ref{eq:split}) can be reconstructed from the separated subsets using $\mathbf{q}$. The parity-check equation described in (\ref{eq:pc}) can be expanded as \begin{equation}\label{eq:pcnew} \mathbf{H_e}\cdot\mathbf{r_e^{\top}} \oplus \mathbf{H_c}\cdot\mathbf{r_c^{\top}} = \mathbf{0} \text{.} \end{equation} Here, all the components are known at the receiver side except $\mathbf{r_e}$. Let us define the \textit{erasure syndrome}, $\mathbf{s_e}$, as \begin{equation}\label{eq:se} \mathbf{s_e^{\top}} = \mathbf{H_c}\cdot\mathbf{r_c^{\top}} \text{,} \end{equation} and transfer it to the right-hand side of (\ref{eq:pcnew}) in the binary domain, to obtain \begin{equation}\label{eq:pc-era} \mathbf{H_e}\cdot\mathbf{r_e^{\top}} = \mathbf{s_e^{\top}}, \end{equation} as visualized in Fig.~\ref{fig:separate}. The EDGE subroutine performs Gaussian elimination on the \textit{linear set of equations} in (\ref{eq:pc-era}) to find $\mathbf{r_e}$. To find a unique solution for $\mathbf{r_e}$, the number of equations must not be smaller than the number of variables.
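The splitting of eq.(\ref{eq:split}) and the erasure syndrome of eq.(\ref{eq:se}) can be illustrated by the following minimal sketch (ours; array layout and names are illustrative, not from a reference implementation):
\begin{verbatim}
# Illustrative sketch: split r and H by the erased index set q, and form
# the erasure syndrome s_e, with all arithmetic over GF(2).
import numpy as np

def split_and_syndrome(r, H, q):
    # r: (N,) uint8 hard decisions; H: (N-k, N) parity-check; q: indices
    mask = np.zeros(len(r), dtype=bool)
    mask[list(q)] = True
    r_e, r_c = r[mask], r[~mask]
    H_e, H_c = H[:, mask], H[:, ~mask]
    s_e = (H_c @ r_c) % 2                     # s_e^T = H_c . r_c^T
    return r_e, r_c, H_e, H_c, s_e
\end{verbatim}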
In terms of dimensions, the number of rows of $\mathbf{H_e}$ must be equal to or greater than the number of its columns; therefore, we must first satisfy $e \leq N-k$. \begin{algorithm}[t] \caption{EDGE Subroutine Initialization}\label{alg:edge:init} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Inputs}{Inputs}\SetKwInOut{Outputs}{Outputs} \Inputs{$\mathbf{r}$, $\mathbf{H}$, $\mathbf{q}$, $N-k$, $e$} \Outputs{$\mathbf{r_c}$, $\mathbf{r_e}$, $\mathbf{H_c}$, $\mathbf{E}$} $\mathbf{r_e} \leftarrow \mathbf{r}[\mathbf{q}]$, ~ $\mathbf{r_c} \leftarrow \mathbf{r} \setminus \mathbf{r_e}$\\ $\mathbf{H_e} \leftarrow \mathbf{H}[\mathbf{q}]$, $\mathbf{H_c} \leftarrow \mathbf{H} \setminus \mathbf{H_e}$\\ \If{$N-k < e$}{ no unique solution, terminate decoding } $\mathbf{E} \leftarrow \text{GaussianElimination}(\mathbf{H_e}) $ \\ \end{algorithm} \begin{algorithm}[t] \caption{The EDGE Subroutine}\label{alg:edge} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Inputs}{Inputs}\SetKwInOut{Outputs}{Outputs} \Inputs{$\mathbf{r_c}$, $\mathbf{r_e}$, $\mathbf{H_c}$, $\mathbf{E}$, $\mathbf{q}$, $N-k$, $e$} \Outputs{$\mathbf{r}$, $\textit{success}$} $\mathbf{s_e} = \mathbf{E} \cdot \mathbf{H_c} \cdot \mathbf{r_c} $ \\ \If{$\mathbf{s_e}[e:N-k-1] \neq \mathbf{0}$}{ \textit{success} = 0, terminate subroutine\\ } $\mathbf{r_e} = \mathbf{s_e}[0:e-1]$ \\ $\mathbf{r} \leftarrow \mathbf{r_c} \cup \mathbf{r_e}$ using $\mathbf{q}$ \\ \textit{success} = 1\\ \end{algorithm} \begin{figure*}[t] \includegraphics[width=2.1\columnwidth]{./figures/rref_v3.pdf} \caption{Gaussian elimination example using two elementary row operations to transform $\mathbf{H_e}$ into reduced row echelon form (RREF), to find $\mathbf{r_e}$ from $\mathbf{s_e^*}$. 1s and 0s in the matrix and the vector are indicated by black and white, respectively. Leading 1s at each column of $\mathbf{H_e}$ are represented by red.} \label{fig:rref} \end{figure*} The initialization procedure for the EDGE subroutine is described in Algorithm~\ref{alg:edge:init}, where $\mathbf{r_c}$, $\mathbf{r_e}$, $\mathbf{H_c}$ are prepared (lines 1-2) and it is determined whether the number of erasures can be recovered by the code (lines 3-5). This is followed by the Gaussian elimination process using $\mathbf{H_e}$, which stores the required operations in an \textit{elimination matrix}, $\mathbf{E}$ (line 6). These stored Gaussian elimination operations are later used to reduce the erasure restoration complexity, as explained in Section~\ref{sec:3:ge}. The EDGE subroutine is described in Algorithm~\ref{alg:edge}. First, the erasure syndrome $\mathbf{s_e}$ is calculated using $\mathbf{H_c}$, $\mathbf{r_c}$, and the Gaussian elimination matrix $\mathbf{E}$ (line 1). If the resulting $\mathbf{s_e}$ is \textit{error-free} (lines 2-4), then the erased sequence $\mathbf{r_e}$ is substituted from $\mathbf{s_e}$ (line 5) and the complete received sequence is restored (line 6). Otherwise, the subroutine is terminated with no success (line 3). \subsection{The Gaussian Elimination Process}\label{sec:3:ge} The Gaussian elimination process reduces the linear set of equations into a form from which the variables can be directly obtained. For our purpose, we review the binary matrix case of Gaussian elimination \cite{strang_book}.
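As a concrete companion to this review, a minimal GF(2) sketch (ours; illustrative rather than a reference implementation) of the GaussianElimination step of Algorithm~\ref{alg:edge:init} could read:
\begin{verbatim}
# Illustrative sketch: GF(2) Gaussian elimination of H_e into RREF,
# accumulating every row swap/add in E so that s_e* = E s_e can later
# be computed cheaply for each codebook query.
import numpy as np

def gf2_eliminate(H_e):
    A = (H_e.copy() % 2).astype(np.uint8)
    n_rows, n_cols = A.shape               # n_rows = N-k >= n_cols = e
    E = np.eye(n_rows, dtype=np.uint8)
    for col in range(n_cols):
        pivots = np.nonzero(A[col:, col])[0]
        if len(pivots) == 0:
            return None, None              # no unique solution for r_e
        p = col + pivots[0]
        if p != col:                       # elementary row swap
            A[[col, p]] = A[[p, col]]
            E[[col, p]] = E[[p, col]]
        for row in range(n_rows):          # clear remaining 1s in column
            if row != col and A[row, col]:
                A[row] ^= A[col]
                E[row] ^= E[col]
    return A, E                            # A is in the form [I | 0]^T
\end{verbatim}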
To achieve such a form, the $\mathbf{H_e}$ matrix should be modified into a reduced row-echelon form (RREF), with the particular structure $[\mathbf{I}|\mathbf{0}]^\top$, where $\mathbf{I}$ and $\mathbf{0}$ represent identity and all-zero matrices, respectively. Note that there is a unique RREF for any matrix. If the RREF of a matrix yields an all-zero column, there is no unique solution for $\mathbf{r_e}$. An example of the Gaussian elimination process is depicted in Fig.~\ref{fig:rref}. Starting from the leftmost column, a leading $1$ is first identified for each column. If the row index of the leading $1$ does not match its column index, an elementary swap operation takes place to place the leading $1$ cell on the diagonal of the matrix. This row with the leading $1$ is then subtracted from the other rows that also hold a $1$ in the subject column, to ensure there is a single $1$ remaining. The process is sequentially continued for all the columns in $\mathbf{H_e}$. The equivalent swap and add operations are also applied to the transposed $\mathbf{s_e^{\top}}$, to obtain $\mathbf{s_e^{*\top}}$ at the end. Changes to $\mathbf{H_e}$ and $\mathbf{s_e^{\top}}$ are denoted with an asterisk ($^{*}$). In practice, Gaussian elimination is costly, with a complexity of $O(n^3)$. To reduce its impact, it can be performed only once at the initialization (refer to Algorithm~\ref{alg:edge:init}) and all the operations towards obtaining the RREF $\mathbf{H_e^*}$ can be stored in an \textit{elimination matrix}, $\mathbf{E}$. This way, the final $\mathbf{s_e^{*\top}}$ can be obtained by \begin{equation} \mathbf{s_e^{*\top}} = \mathbf{E} \cdot \mathbf{s_e^{\top}} \text{,} \end{equation} which has the same effective complexity as the syndrome check operation of GRAND, see~(\ref{eq:pc}). \begin{figure}[t] \includegraphics[width=1.05\columnwidth]{./figures/examples_v6.pdf} \caption{Two examples are shown for erased-bit restoration using the EDGE algorithm, with $5$ erased bits and $N-k=8$. In both cases the same indices are erased from the same codebook, yielding the same $\mathbf{H_e}$. The result of Case 1 is consistent and is therefore valid. In Case 2, the result is inconsistent due to incurred errors and therefore is invalid.} \label{fig:example} \end{figure} \subsection{The GRAND-EDGE Algorithm Family}\label{sec:3:grandedge} Refer to Fig.~\ref{fig:example}, where two arbitrary Gaussian elimination examples are illustrated using the same $\mathbf{H_e}$, with $e=5$ and $N-k=8$. In the first case, $\mathbf{s_e}$ does not incur any channel errors. Therefore, the transformed $\mathbf{s_e^*}$ is in the form of $[\mathbf{r_e} | \mathbf{0}]$; in other words, the last $N-k-e$ equations with $0$ coefficients naturally yield $0$. In the second case, $\mathbf{s_e}$ is infused with channel errors that yield an $\mathbf{s_e^*}$ with nonzero entries in its last $N-k-e$ indices. For the EDGE subroutine, this indicates that there are errors in the channel. When errors are involved, erased indices cannot be accurately restored, and an additional error-correction decoder is required. The process of the proposed GRAND-EDGE algorithm is described in Algorithm~\ref{alg:grand-edge}. Note that the EDGE subroutine (line 6) replaces the syndrome check function of GRAND. If the EDGE subroutine fails, then the GRAND algorithm generates the next putative error pattern and combines it with the received part of the codeword (lines 6-7, followed by line 4).
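Putting the pieces together, a compact sketch (ours; reusing the \texttt{split\_and\_syndrome} and \texttt{gf2\_eliminate} helpers sketched above, and querying error patterns in increasing Hamming-weight order as in hard-information GRAND):
\begin{verbatim}
# Illustrative sketch of the GRAND-EDGE loop: test error patterns on the
# non-erased bits; for each pattern, check consistency of s_e* = E s_e
# and, on success, read off the erased bits r_e. With q empty, this
# reduces to plain GRAND, as noted in the text.
import numpy as np
from itertools import combinations

def grand_edge(r, H, q, max_weight=3):
    # r: (N,) uint8 hard decisions; H: (N-k, N) parity-check; q: erasures
    r_e, r_c, H_e, H_c, _ = split_and_syndrome(r, H, q)
    A, E = gf2_eliminate(H_e)
    if E is None:
        return None                          # e > N-k or no unique r_e
    e_len, n_c = H_e.shape[1], len(r_c)
    for w in range(max_weight + 1):          # abandonment threshold
        for idx in combinations(range(n_c), w):
            e_pat = np.zeros(n_c, dtype=np.uint8)
            e_pat[list(idx)] = 1
            s = (E @ ((H_c @ (r_c ^ e_pat)) % 2)) % 2
            if not s[e_len:].any():          # consistent: no residual errors
                out = np.zeros(len(r), dtype=np.uint8)
                out[sorted(q)] = s[:e_len]   # restored erased bits
                mask = np.ones(len(r), dtype=bool)
                mask[list(q)] = False
                out[mask] = r_c ^ e_pat      # corrected non-erased bits
                return out
    return None                              # decoding abandoned
\end{verbatim}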
As discussed in Section~\ref{sec:bg}, the agenda of the error pattern generation depends on the GRAND variant. For simplicity, inputs and parameters for soft-information GRAND variants are not shown in the algorithm; instead, we refer to \cite{solomon2020sgrand, duffy2022orbgrand} for details. When there are no erasures in the received codeword, then $\mathbf{r_e} = \mathbf{H_e} = \varnothing$, with $\mathbf{r_c} = \mathbf{r}$ and $\mathbf{H_c} = \mathbf{H}$. Hence, the expression in (\ref{eq:se}) becomes the same as in (\ref{eq:pc}), and GRAND-EDGE reverts to the GRAND algorithm. Therefore, the proposed GRAND-EDGE does not present a computational burden to the original algorithm in the absence of erasures in the channel. \begin{algorithm}[t] \caption{The GRAND-EDGE Algorithm}\label{alg:grand-edge} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Inputs}{Inputs}\SetKwInOut{Outputs}{Outputs} \Inputs{$\mathbf{r}$,$\mathbf{H}$,$\mathbf{G}^{-1}$,$\mathbf{q}$} \Outputs{$\mathbf{\hat{u}}$} $\mathbf{e} \leftarrow \mathbf{0}$ \\ $\textit{iter} = 0$, ~ $\textit{success} = 0$ \\ $\{\mathbf{r_c}$, $\mathbf{r_e}$, $\mathbf{H_c}$, $\mathbf{E}\} \leftarrow$ $\text{EDGE\_Init}(\mathbf{r}, \mathbf{H}, \mathbf{q}, N-k, e)$ \\ \While{$\textit{success} = 0 \land \textit{iter} \neq \textit{maxIters}$}{ $\mathbf{r_c^*} \leftarrow \mathbf{r_c} \oplus \mathbf{e}$ \\ $\{\textit{success}, \mathbf{r^*}\} \leftarrow $EDGE($\mathbf{r_c^*}$, $\mathbf{r_e}$, $\mathbf{H_c}$, $\mathbf{E}$, $\mathbf{q}$, $N-k$, $e$) \\ $\mathbf{e} \leftarrow $NextErrorPattern$(\textit{iter})$ \\ $\textit{iter} = \textit{iter} + 1$ \\ } $\mathbf{\hat{u}} \leftarrow \mathbf{r^*} \cdot \mathbf{G}^{-1}$ \\ \end{algorithm} \section{Performance Assessment}\label{sec:res} Simulations are carried out over the channel model described in Fig.~\ref{fig:chn}, where the channel and jammer AWGN are generated through independent Gaussian processes. The SNR of the overpowering jammer is set to $-100$ dB. The probability of jamming, $\epsilon$, is generated through an independent Bernoulli process for each transmitted symbol. Prior to decoding, a threshold is required to decide whether the received signal is jammed. For that purpose, we follow the empirical rule: any signal that is observed within $3$ standard deviations of the modulated signal space is considered non-jammed, and jammed otherwise. Note that different approaches for determining jamming in the channel can also be used with the proposed algorithm, but are beyond the scope of this paper. For all evaluations, a random linear code (RLC) with code length $N=128$ and $k=105$ information bits is used, generated on-the-fly by the simulator. The abandonment threshold \cite{duffy2019grand} is set to a Hamming weight of $3$ for both GRAND and GRAND-EDGE. The logistic weight threshold \cite{duffy2021orbgrand} is set to $104$ for ORBGRAND and ORBGRAND-EDGE. \subsection{GRAND-EDGE Performance}\label{sec:res:grand} \begin{figure*}[t] \centering \input{figures/merged_grand_v2.tikz} \vspace{-1.25mm} \caption{\label{fig:res} Top: BLER performance comparison of the proposed GRAND-EDGE and ORBGRAND-EDGE algorithms against their GRAND-only counterparts, using RLC[128,105]. The comparisons are carried out with various channel AWGN SNRs and bit-jamming probabilities, as indicated in the x-axis labels and the legends. 
Bottom: Average number of iterations for each evaluated algorithm, with matching legends.} \end{figure*} Fig.~\ref{fig:res}(a) presents the error correction performance of the proposed GRAND-EDGE algorithm against the GRAND algorithm. The x-axis represents the channel SNR, and $\epsilon$ is fixed for each simulation and is represented in the legend. We observe that GRAND-EDGE outperforms GRAND by up to five orders of magnitude, because it has the capability of restoring erased bits, whereas GRAND attempts to guess their values iteratively. Fig.~\ref{fig:res}(e) shows the average number of iterations (codebook queries) for each BLER curve. With improving AWGN SNR conditions, the GRAND algorithm gets stuck guessing the jammed bits in an attempt to find the correct codeword. This yields a high average number of iterations, even at high SNRs. On the other hand, the efficient handling of the erased bits by the EDGE routine helps the GRAND-EDGE algorithm reduce the average number of iterations compared to GRAND, by up to five orders of magnitude. Fig.~\ref{fig:res}(b) and Fig.~\ref{fig:res}(f) present the GRAND-EDGE performance against GRAND over a simulated set of bit-jamming probability values $\epsilon$. The channel SNR values are fixed and are represented in the legend. Similar to Fig.~\ref{fig:res}(a), the GRAND-EDGE algorithm achieves superior BLER performance as the channel conditions improve. On the other hand, the average number of iterations of GRAND-EDGE increases as $\epsilon$ decreases. This is because GRAND-EDGE abandons decoding if the number of erasures is beyond its capability (if $e > N-k$). Nonetheless, the average number of iterations of GRAND-EDGE is always smaller than or equal to that of GRAND, sometimes by as much as five orders of magnitude. Finally, we note that both the BLER performance and the average number of iterations of GRAND-EDGE meet the performance of GRAND at $\epsilon=0$. This is because the GRAND-EDGE algorithm reverts to the GRAND algorithm when there is no erasure in the channel, as mentioned previously in Section~\ref{sec:3:grandedge}. \subsection{ORBGRAND-EDGE Performance}\label{sec:res:orbgrand} Fig.~\ref{fig:res}(c) and Fig.~\ref{fig:res}(d) present the BLER performance evaluation of the proposed ORBGRAND-EDGE algorithm against the ORBGRAND algorithm, \textit{i.e.}, when soft information is involved in decoding the received codeword. Compared to GRAND, the ORBGRAND algorithm can find more errors by taking advantage of the bit-reliability information (LLRs). As a result, ORBGRAND demonstrates better BLER performance than GRAND. On the other hand, the improvement in BLER with ORBGRAND-EDGE is not as dramatic as the gains in hard-information GRAND-EDGE. This is because ORBGRAND can prioritize jammed indices for bit-flipping as long as their values are beyond the predetermined threshold. Nonetheless, we observe a BLER gain of up to one order of magnitude when ORBGRAND is enhanced with the EDGE subroutine. \begin{figure}[t] \centering \input{figures/results_osd_p_bler.tikz} \caption{\label{fig:res_osd_fer} ORBGRAND-EDGE performance compared to the OSD algorithm, using RLC[128,105]. The bit-level jamming probabilities are set to $\epsilon = \{0.02, 0.05, 0.1\}$ and the x-axis represents the channel AWGN SNR.} \end{figure} Compared to the GRAND-EDGE performance in Fig.~\ref{fig:res}(a) with $\epsilon = 0.02$, the ORBGRAND-EDGE performance in Fig.~\ref{fig:res}(c) experiences an error floor. 
As a result, at high SNR regimes, GRAND-EDGE outperforms ORBGRAND-EDGE. This is due to the differences in the bit-flipping agenda of these two variants. With a logistic weight of $104$, only the least reliable $104$ indices and a subset of their combinations are considered for bit-flipping in ORBGRAND \cite{duffy2022orbgrand}. Therefore, when a jammed index is overlooked by the jamming detection, ORBGRAND-EDGE does not erase the bit, which is then likely deemed reliable enough that it is never selected for bit-flipping. On the other hand, even when a jammed index is missed, the GRAND-EDGE algorithm evaluates all non-erased indices for bit-flipping. As a result, especially when $\epsilon$ is small, GRAND-EDGE can outperform ORBGRAND-EDGE. Although more sophisticated pre-decoding jamming detection schemes can be implemented \cite{ercan_jamming}, they are beyond the scope of this work. Fig.~\ref{fig:res}(g) and Fig.~\ref{fig:res}(h) present the computational complexity comparison for ORBGRAND-EDGE and ORBGRAND, with respect to channel SNR and bit-jamming probability, respectively. Compared to GRAND, ORBGRAND has lower computational complexity in general, due to the efficient identification of bit-flipping indices. On the other hand, when ORBGRAND is augmented with the EDGE subroutine, the average number of iterations reduces by up to five orders of magnitude. Similar to the observations made for GRAND, ORBGRAND-EDGE reverts to the ORBGRAND algorithm when $\epsilon=0$. Unlike GRAND-EDGE, the computational complexity of ORBGRAND-EDGE continues to decrease as $\epsilon$ decreases further. This is because the size of the linear set of equations shrinks with decreasing $\epsilon$, and with the help of soft information, ORBGRAND-EDGE finds the erroneous component of the received codeword faster than GRAND-EDGE. Fig.~\ref{fig:res_osd_fer} compares the BLER performance of the proposed ORBGRAND-EDGE algorithm against the Ordered Statistics Decoding (OSD) algorithm \cite{osd1995}, using RLC[128,105]. Similar to EDGE, OSD performs Gaussian elimination to find the most likely transmitted codeword. However, the Gaussian elimination in OSD is always performed over $k$ columns, whereas in EDGE it is only performed over up to $N-k$ columns. Moreover, OSD requires multiple permutations, which add to its implementation complexity. Nonetheless, it can be observed from Fig.~\ref{fig:res_osd_fer} that ORBGRAND-EDGE outperforms OSD by up to three orders of magnitude in BLER. \section{Conclusion}\label{sec:conc} In this work, we introduced an adversarial model, whereby a jammer randomly overpowers bits of a transmitted signal, effectively causing erasures. To address this adversarial channel condition, we generalized the syndrome check component of the universal GRAND algorithm family to support erasure decoding. The proposed GRAND-EDGE algorithm and its variants address bit-level errors and erasures simultaneously. The erasure decoding component (i.e., the EDGE subroutine) presents no additional computational complexity when there is no detected erasure in the channel, reverting to the syndrome check function of the GRAND algorithm in that case. The proposed EDGE algorithm can be used with both hard and soft variants of GRAND, which we demonstrated through the implementation of GRAND-EDGE and ORBGRAND-EDGE. 
Compared to their original counterparts, the EDGE-enhanced GRAND algorithms achieve up to five orders of magnitude improvement both in terms of error-correction performance and computational complexity under the considered adversarial model. \section*{Acknowledgements} This work was partially supported by Defense Advanced Research Projects Agency Contract number HR00112120008 and by National Science Foundation ECCS Award numbers 2128517 and 2128555. The content of the information does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred. This publication has emanated from research supported in part by a grant from Science Foundation Ireland under grant number 18/CRT/6049. The opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Science Foundation Ireland. \bibliographystyle{IEEEtran}
\section{Introduction} The magnetohydrodynamics (MHD) equations \begin{equation} \begin{aligned} u_t+u\cdot \nabla u+\nabla p&=B\cdot \nabla B+\nu \Delta u,\\ B_t+u\cdot \nabla B&=B\cdot \nabla u+\mu \Delta B,\\ \nabla \cdot u=\nabla \cdot B&=0, \end{aligned} \end{equation}\\ describe the velocity field $u(x,t)$, the magnetic field $B(x,t)$, and the total pressure $p(x,t)$ of electrically conducting fluids for $x\in \Omega$ and $t>0$ in a domain $\Omega\subset \mathbb{R}^{3}$, where $\nu>0$ and $\mu>0$ denote viscosity and resistivity. In this paper, we study the problem (1.1) both for a bounded and simply connected domain $\Omega$ with a $C^{1,1}$-boundary and for $\Omega=\mathbb{R}^{3}$. We consider the non-slip and perfect conductivity conditions when $\Omega$ is bounded, \begin{align} u=0,\quad (\nabla \times B)\times n=0,\quad B\cdot n=0, \end{align}\\ for $x\in \partial\Omega$ and $t>0$, where $n$ denotes the unit outward normal vector field on $\partial\Omega$. At the zero-resistivity limit $\mu=0$, the frozen field equation $(1.1)_2$ describes the topology-preserving transport of magnetic field lines (integral curves of $B$). For the frozen field equation $(1.1)_2$, we apply the normal trace condition \begin{align*} B\cdot n=0. \end{align*}\\ With $\mu=0$, the cases $\nu=0$ and $\nu>0$ are known as ideal MHD and nonresistive MHD, respectively. For ideal MHD, we impose the normal trace condition $u\cdot n=0$. Ideal MHD (and nonresistive MHD) admits steady states $u=0$ and $B=U$ that satisfy the steady Euler system \begin{equation*} \begin{aligned} \nabla \Pi=(\nabla \times U)\times U,\quad \nabla \cdot U&=0\quad \textrm{in}\ \Omega, \\ U\cdot n&=0\quad \textrm{on}\ \partial\Omega, \end{aligned} \end{equation*}\\ with the Bernoulli function $\Pi$. Those $U$ that appear in turbulence with vanishing Lorentz force $(\nabla \times U)\times U$ are known as \textit{force-free fields}. The equations for such $U$ are expressed using the proportionality factor $f$ as \begin{equation} \begin{aligned} \nabla \times U=fU,\quad \nabla \cdot U&=0\quad \textrm{in}\ \Omega, \\ U\cdot n&=0\quad \textrm{on}\ \partial\Omega. \end{aligned} \end{equation} The simplest solutions to these equations are \textit{linear} force-free fields, which are eigenfunctions of the rotation operator with eigenvalues $f\equiv \textrm{const.}$ In general, the condition $\nabla \cdot U=0$ implies the first-order equation \begin{align} U\cdot \nabla f=0, \end{align}\\ and the equations (1.3) form a \textit{nonlinear} system for $U$ and $f\not\equiv \textrm{const.}$ Magnetic field lines of $U$ are confined to level sets of $f$, according to the equation (1.4). The primary focus of this paper is the \textit{stability} of both linear and nonlinear force-free fields (1.3) in ideal MHD. \subsection{Taylor states} In turbulent flows, force-free fields appear as coherent structures. Their stability rests on Taylor's relaxation theory \cite{T74}, \cite{T86}, which is built on total energy and magnetic helicity \begin{align*} {\mathcal{E}}=\frac{1}{2}\int_{\Omega}\left(|u|^{2}+|B|^{2} \right)\textrm{d} x,\quad {\mathcal{H}}=\int_{\Omega}A\cdot B \textrm{d} x, \end{align*}\\ where $A=\textrm{curl}^{-1}B$ is the unique vector potential such that $\nabla \times A=B$, $\nabla \cdot A=0$ in $\Omega$, $A\times n=0$ on $\partial\Omega$, and $\int_{\Gamma_i}A\cdot n\textrm{d} H=0$ for the connected components $\Gamma_i$ of $\partial\Omega$, $1\leq i\leq I$, as shown in \cite[Theorem 3.17]{ABDG}. 
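As a brief illustration, independent of the results below, we note a classical explicit example and a standard identity. In $\Omega=\mathbb{R}^{3}$ (hence without the boundary condition in (1.3)), the field \begin{align*} U(x,y,z)={}^{t}(\sin fz,\ \cos fz,\ 0),\qquad \nabla \times U=fU,\quad \nabla \cdot U=0, \end{align*}\\ is a linear force-free field with constant factor $f$; its Lorentz force $(\nabla \times U)\times U$ vanishes identically, so $(u,B)=(0,U)$ is a steady state with $\Pi\equiv \textrm{const}$. Moreover, for an eigenfield $\nabla \times U=fU$, $f\neq 0$, in a bounded and simply connected $\Omega$, the potential $A=\textrm{curl}^{-1}U$ differs from $U/f$ by a gradient $\nabla \chi$, and $\int_{\Omega}\nabla \chi\cdot U\textrm{d} x=0$ by $\nabla \cdot U=0$ in $\Omega$ and $U\cdot n=0$ on $\partial\Omega$; hence \begin{align*} {\mathcal{H}}=\int_{\Omega}A\cdot U \textrm{d} x=\frac{1}{f}\int_{\Omega}|U|^{2} \textrm{d} x, \end{align*}\\ i.e., the magnetic energy and the helicity of an eigenfield are proportional. This elementary computation underlies Woltjer's principle discussed below.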
Woltjer \cite{W58} used magnetic helicity to find \textit{linear} force-free fields by minimizing the total energy. Moreau \cite{Moreau} and Moffatt \cite{Moffatt} recognized helicity as a topological quantity, measuring knots and links of field lines, that is conserved by the frozen field equation. More precisely, for the solenoidal space $L^{2}_{\sigma}(\Omega)=\{B\in L^{2}(\Omega)\ |\ \textrm{div}\ B=0\ \textrm{in}\ \Omega,\ B\cdot n=0\ \textrm{on}\ \partial\Omega \}$, minimizers of \begin{align} {\mathcal{I}}_h=\inf\left\{ \frac{1}{2}\int_{\Omega}|B|^{2}\textrm{d} x\ \middle|\ B\in L^{2}_{\sigma}(\Omega),\ \int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x=h \right\}, \end{align}\\ are linear force-free fields (Taylor states) \cite[p.1246]{Laurence91}. The rotation operator is self-adjoint on $L^{2}_{\sigma}(\Omega)$ for a simply connected domain, and linear force-free fields form a countable family with eigenvalues $\cdots\leq f^{-}_{2}\leq f^{-}_1<0<f^{+}_{1}\leq f^{+}_{2}\leq \cdots$ \cite[Theorem 1]{YG90}. Taylor states are the finitely many eigenfunctions associated with the least positive eigenvalue or the largest negative eigenvalue (the principal eigenvalues), as demonstrated in Lemma 2.4. Taylor \cite{T74}, \cite{T86} hypothesized that among all sub-helicities \cite{Moffatt}, only magnetic helicity is approximately conserved at low resistivity in turbulence, and used Woltjer's principle as a theoretical foundation for turbulence relaxation toward linear force-free fields as $t\to\infty$. Taylor's theory successfully predicted the relaxed state in reversed field pinches and other devices \cite{OS93}. Taylor's conjecture is treated mathematically as a question about magnetic helicity conservation in the ideal limit $(\nu,\mu)\to (0,0)$ \cite[p.444]{CKS97}. A gentle review can be found in Buckmaster and Vicol \cite{BV21}. Faraco and Lindberg \cite{FL20} provide a proof of Taylor's conjecture in terms of \textit{weak ideal limits} of Leray--Hopf solutions. A weak ideal (resp. nonresistive) limit is a weak-star limit of Leray--Hopf solutions to viscous and resistive MHD (1.1)--(1.2) in $L^{\infty}_t L^{2}_x$ as $\nu,\mu\to0$ (resp. as $\mu\to0$ with fixed $\nu>0$). These are very weak notions associated with measure-valued solutions, cf. \cite{DM87m}, \cite{BDS11}, and they \textit{conserve} magnetic helicity \cite{FL20} despite the scaling gap to the $L^{3}_{t}L^{3}_{x}$ threshold for magnetic helicity conservation of weak solutions to ideal MHD \cite{Aluie}, \cite{KL07}, \cite{FL18}. See also \cite{FL22} on multiply connected domains. Conservation of low-regularity quantities at ideal limits has also been investigated for other hydrodynamical equations; see \cite{CLNS} for the 2D Euler equations and \cite{CIN} for the SQG equations. The conservation of total energy and magnetic helicity for weak solutions of ideal MHD was first investigated in \cite{CKS97}, cf. \cite{CET}, \cite{Constantin08} for the Euler equations. In ideal MHD, two regularity thresholds appear for total energy and magnetic helicity conservation, resulting in Onsager-type conjectures \cite{BV21}. Based on the convex integration scheme of De Lellis and Sz\'ekelyhidi, Jr. \cite{DS09}, Faraco et al. \cite{FLS} constructed bounded weak solutions to ideal MHD with compact support in space and time that dissipate total energy with identically vanishing magnetic helicity. See also \cite{BLL15} for the first construction of $2\frac{1}{2}$D weak solutions to ideal MHD. 
By contrast, using the intermittent convex integration scheme of Buckmaster and Vicol \cite{BV19}, Beekie et al. \cite{BBV} constructed weak solutions of ideal MHD in $L^{\infty}_{t}L^{2}_{x}$ that \textit{do not} conserve magnetic helicity. See also \cite{Dai} for nonunique weak solutions to the Hall MHD system. Li et al. \cite{YZZ} constructed weak solutions to (hyper) viscous and resistive MHD that dissipate magnetic helicity, together with their strong convergence to ideal limits. Based on convex integration via staircase laminates \cite{Faraco03}, \cite{AFS}, Faraco et al. \cite{FLS21} demonstrated the sharpness of the threshold $L^{3}_{t}L^{3}_{x}$ by constructing weak solutions to the Faraday-Maxwell system in $L^{\infty}_{t}L^{3,\infty}_{x}$ dissipating magnetic helicity. Taylor's relaxation theory \cite{T74}, \cite{T86} contains two components: Woltjer's principle and Taylor's conjecture. We begin by noting that the proof of Taylor's conjecture for weak ideal limits \cite{FL20}, \cite{FL22} implies Taylor state stability \cite{W58}, \cite{Laurence91}. \begin{thm}[Taylor state stability] Let $S_{h}$ be a set of eigenfunctions of the rotation operator on $L^{2}_{\sigma}(\Omega)$ associated with the least positive (resp. largest negative) eigenvalue with magnetic helicity $h>0$ (resp. $h<0$). Let $S_0=\emptyset$. The set $S_{h}=\{U_j\}_{j=1}^{N}$ is stable in weak ideal limits of Leray--Hopf solutions to (1.1)--(1.2) in the sense that for arbitrary $\varepsilon>0$, there exists $\delta>0$ such that for $u_0,B_0\in L^{2}_{\sigma}(\Omega)$ satisfying \begin{align*} ||u_0||_{L^{2}}+\inf_{1\leq j\leq N}||B_0-U_j||_{L^{2}}+\left|\int_{\Omega}\textrm{curl}^{-1}B_0\cdot B_0\textrm{d} x-h\right|\leq \delta, \end{align*}\\ there exists a weak ideal limit $(u,B)$ of Leray--Hopf solutions to (1.1)--(1.2) for $(u_0,B_0)$ such that \begin{align*} ||u||_{L^{2}}+\inf_{1\leq j\leq N}||B-U_j||_{L^{2}}\leq \varepsilon\quad \textrm{for a.e.}\ t\geq 0. \end{align*} \end{thm} \begin{rem} For weak nonresistive limits of Leray--Hopf solutions, the same stability result as in Theorem 1.1 holds. The stability result also holds for unique strong solutions to ideal MHD and nonresistive MHD up to maximal existence time; see \cite{CMZ} for local well-posedness of ideal MHD. \end{rem} \subsection{Chandrasekhar's nonlinear force-free fields} Recent computer simulations \cite{YRH}, \cite{PWHG}, and \cite{PCRH} demonstrated that turbulent flows relax toward \textit{nonlinear} force-free fields rather than linear force-free fields (Taylor states). These states consist of two compactly supported flux tubes with opposite signs embedded in a uniform field. The presence of such coherent structures is attributed to the redistribution of the integrand $A\cdot B$, and sub-helicities are still regarded as important quantities \cite{Yeates}, \cite{FL22}. They could be used as constraints in an extension of Woltjer's principle to nonlinear force-free fields \cite{T74}, \cite{T86}, \cite{Laurence91}, \cite[Section 4]{Yeates}, although such a variational principle is unknown. The existence of nonlinear force-free fields (1.3) is linked to their stability. It has long been debated whether nonlinear force-free fields exist besides symmetric solutions. Enciso and Peralta-Salas \cite{EP16} demonstrated that force-free fields do not exist if $f\in C^{2+\alpha}$, $0<\alpha<1$, admits a level set diffeomorphic to a \textit{sphere}. 
This rigidity result proved non-existence for a wide range of $f$, such as radial functions or functions with extrema, and contrasts with rigidity results \cite{Na14}, \cite{CC15} based on the decay of the magnetic field at infinity, such as $U=o(|x|^{-1})$ as $|x|\to\infty$. The nonlinear system $(1.3)$ is an overdetermined problem in general \cite{EP16}, \cite{CK20}, cf. \cite{EP12}, \cite{EP15}, and existence results are available only under symmetry, e.g., \cite{Chandra}, \cite{Tu89}, \cite{A8}. In the axisymmetric setting, both the system (1.3) and the steady Euler flow can be reduced to the Grad--Shafranov equation \cite{Grad}, \cite{Shafranov}; see \cite{Gav}, \cite{CLV}, and \cite{DEPS21} for the existence of compactly supported axisymmetric steady Euler flows. Constantin et al. \cite[p.529]{CDG21b} posed Grad's conjecture \cite[p.144]{Grad67}, which states that nonsymmetric steady Euler flows do not exist. With a small force \cite{CDG21} or piecewise constant Bernoulli functions \cite{BL96}, \cite{ELP21}, the existence of nonsymmetric steady Euler flows is known. See also \cite{CDG22} for more information on the flexibility and rigidity of MHD equilibria. The explicit solution of Chandrasekhar \cite{Chandra}, a particular case of the Hicks--Moffatt solution \cite{Hicks85}, \cite{Moffatt}, which is an axisymmetric solution with a swirl in $\Omega=\mathbb{R}^{3}$ and a uniform field at infinity, is an important example of nonlinear force-free fields. In terms of the cylindrical coordinates $(r,\theta,z)$ and the Clebsch representation, Chandrasekhar's force-free field is expressed as \begin{equation} \begin{aligned} &U_C=\nabla \times (\Phi_C\nabla \theta)+G_C\nabla \theta, \quad f_{C}=\lambda^{1/2}1_{(0,\infty)}(\Phi_C), \\ &\Phi_{C}(z,r)= \begin{cases} & \displaystyle\frac{3}{2}Wr^{2}\frac{c_{3/2}^{1/2} J_{3/2}(\lambda^{1/2} \rho)}{J_{5/2}(c_{3/2}) (\lambda^{1/2}\rho)^{3/2}},\qquad \rho=\sqrt{z^{2}+r^{2}}<R, \\ & \displaystyle-\frac{1}{2}Wr^{2}\left(1-\frac{R^{3}}{\rho^{3}}\right),\hspace{62pt} \rho=\sqrt{z^{2}+r^{2}}\geq R, \end{cases}\\ &G_C(z,r)=\lambda^{1/2}\Phi_{C,+}, \qquad R=c_{3/2}\lambda^{-1/2}. \end{aligned} \end{equation}\\ Here, $1_{(0,\infty)}(s)$ denotes the indicator function of $(0,\infty)$ and $s_{+}=s1_{(0,\infty)}(s)$. The parameter $\lambda>0$ measures the strength of the current field $\nabla \times U_C\in L^{\infty}(\mathbb{R}^{3})$, which is supported in a ball of radius $R$. The parameter $W>0$ denotes the uniform field at infinity, i.e., $U\to -We_z$ as $|x|\to\infty$ for $e_z={}^{t}(0,0,1)$. The function $J_{m}$ is the $m$-th order Bessel function of the first kind, and $c_{3/2}=4.4934\cdots$ is the first positive zero of $J_{3/2}$. The factor $f_C$ is a discontinuous function, and the level set $f^{-1}_{C}(\lambda^{1/2})$ is a \textit{ball}, cf. \cite{EP16}. The explicit solution (1.6) has two distinguishing characteristics that do not appear in linear force-free fields. The first is the compactly supported current field $\nabla \times U_C$ observed in the computer simulations \cite{PWHG}, \cite{YRH}, \cite{PCRH}. The second is the integrable structure of its magnetic field lines: they are confined to the nested tori $\{x\in \mathbb{R}^{3}\ |\ \Phi_C(z,r)=k\}$ for $k>0$ and form torus knots and links. The purpose of this paper is to show that the nonlinear force-free field (1.6) is stable in weak ideal limits of axisymmetric Leray--Hopf solutions to (1.1) with a uniform field condition at infinity. 
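As a numerical sanity check on the explicit solution (1.6) (our own illustration, assuming Python with numpy and scipy; the parameter values are arbitrary), the following sketch evaluates $\Phi_C$ and verifies that its value and one-sided slopes match across the sphere $\rho=R$, consistent with $U_C\in W^{1,\infty}(\mathbb{R}^{3})$; the matching relies on $J_{3/2}(c_{3/2})=0$.
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

lam, W = 1.0, 1.0
c32 = brentq(lambda x: jv(1.5, x), 3.0, 5.0)   # first positive zero of J_{3/2}
R = c32 / np.sqrt(lam)

def Phi_C(z, r):
    rho = np.hypot(z, r)
    x = np.sqrt(lam) * rho
    inside = 1.5 * W * r**2 * np.sqrt(c32) * jv(1.5, x) \
             / (jv(2.5, c32) * x**1.5)
    outside = -0.5 * W * r**2 * (1.0 - (R / rho)**3)
    return np.where(rho < R, inside, outside)

# value and one-sided slopes across rho = R, along the z-direction
r0 = 0.7 * R
z0 = np.sqrt(R**2 - r0**2)
eps = 1e-6
print(Phi_C(z0, r0))                                # ~ 0 on the sphere
print((Phi_C(z0, r0) - Phi_C(z0 - eps, r0)) / eps)  # slope from inside
print((Phi_C(z0 + eps, r0) - Phi_C(z0, r0)) / eps)  # slope from outside
\end{verbatim}
Both one-sided slopes agree with the value $-\frac{3Wr_0^{2}}{2R}\cdot\frac{z_0}{R}$ computed from the outer branch, so the two branches of $\Phi_C$ match to first order on $\rho=R$.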
Under axisymmetry, the system (1.1) has an additional Casimir invariant called \textit{generalized magnetic helicity} \cite{Moffatt97}, cf. \cite{KPY}. We give a derivation of Taylor's theory \cite{T74}, \cite{T86} under axisymmetry: the Woltjer-type principle for nonlinear force-free fields and the Taylor-type conjecture for generalized magnetic helicity conservation. Their counterparts for the stability of Taylor states are established in \cite{W58}, \cite{Laurence91}, \cite{FL20}, and \cite{FL22}. \subsection{The main result} We consider the system (1.1) under the uniform field condition \begin{align} (u,B)\to (u_{\infty}, B_{\infty})\quad \textrm{as}\ |x|\to\infty, \end{align}\\ for given constants $B_{\infty}=-We_z$, $W>0$, and $u_\infty \in \mathbb{R}^{3}$ parallel to $e_z$. The explicit solution (1.6) provides a specific traveling wave solution, \begin{align*} u=u_{\infty},\quad B(x,t)=U(x-u_{\infty}t). \end{align*}\\ The profile $U$ is a force-free field satisfying \begin{equation} \begin{aligned} \nabla \times U=fU,\quad \nabla \cdot U&=0\qquad \textrm{in}\ \mathbb{R}^{3}, \\ U&\to B_{\infty}\quad \textrm{as}\ |x|\to\infty. \end{aligned} \end{equation}\\ In terms of an equivalent system for the new variables $(v,b)=(u-u_{\infty},B-B_{\infty})$, we state a stability result for the explicit solution (1.6) in the system (1.1) under the condition (1.7): \begin{equation} \begin{aligned} v_t+(v+u_{\infty})\cdot \nabla v+\nabla p&=(b+B_{\infty})\cdot \nabla b+\nu \Delta v,\\ b_t+(v+u_{\infty})\cdot \nabla b&=(b+B_{\infty})\cdot \nabla v+\mu \Delta b, \\ \nabla \cdot v=\nabla \cdot b&=0. \end{aligned} \end{equation}\\ We apply the Clebsch representation for axisymmetric solenoidal vector fields \begin{align*} b=\nabla \times (\phi \nabla \theta)+G \nabla \theta, \end{align*}\\ with unique Clebsch potentials $\phi(z,r)$ and $G(z,r)$; see Section 3. The potential of $B_{\infty}=-We_z$ is $-\phi_{\infty}$ with $\phi_{\infty}=Wr^{2}/2+\gamma$ for an arbitrary constant $\gamma$. Under the condition (1.7), the total energy and magnetic helicity of the system (1.1) are expressed as \begin{align*} {\mathcal{E}}=\frac{1}{2}\int_{\mathbb{R}^{3}}\left(|v|^{2}+|b|^{2}\right) \textrm{d} x, \quad {\mathcal{H}}= 2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})\frac{G}{r^{2}} \textrm{d} x. \end{align*}\\ Even for square-integrable $B\in L^{2}_{\sigma}(\mathbb{R}^{3})$, magnetic helicity is ill-defined in $\Omega=\mathbb{R}^{3}$ \cite[Appendix A]{FLS}. We instead apply the generalized magnetic helicity \begin{align} H=2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}} \textrm{d} x. \end{align}\\ If $G$ is supported in $\{\phi>\phi_{\infty}\}$, generalized magnetic helicity agrees with magnetic helicity. Indeed, the generalized magnetic helicity of the explicit solution (1.6) equals its magnetic helicity \begin{align} h_C=2\lambda^{1/2}\int_{\mathbb{R}^{3}}\Phi_{C,+}^{2}\frac{1}{r^{2}} \textrm{d} x >0. \end{align}\\ The constant $h_C$ is quadratic in $W/\lambda$ \cite[p.128]{Moffatt}, as shown in Proposition 8.4. We show that generalized magnetic helicity is well-defined for axisymmetric $b=B-B_{\infty}\in L^{2}_{\sigma}(\mathbb{R}^{3})$ and constants $W>0$ and $\gamma\geq 0$. (The generalized magnetic helicity (1.10) depends on the gauge constant $\gamma\geq 0$.) For small axisymmetric disturbances $(v_0,b_0)$, we show that there exists a weak ideal limit $(v,b)$ such that $(u,B)=(v+u_{\infty},b+B_{\infty})$ is close to the traveling wave solution $(u_{\infty},U_C(x-u_{\infty}t))$ up to translation in $z$ for all time. 
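For the reader's convenience, we record the cylindrical components of the Clebsch representation above (a standard computation using $\nabla \theta=e_{\theta}/r$, stated here only for orientation): \begin{align*} b=\nabla \times (\phi \nabla \theta)+G \nabla \theta=-\frac{\partial_{z}\phi}{r}\,e_{r}+\frac{G}{r}\,e_{\theta}+\frac{\partial_{r}\phi}{r}\,e_{z}, \end{align*}\\ so that level sets of $\phi$ are flux surfaces of the poloidal field and $G=rb_{\theta}$ carries the swirl; in particular, $\phi=-\phi_{\infty}$ and $G=0$ recover the uniform field $B_{\infty}=-We_{z}$.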
This demonstrates the stability of MHD equilibria that differ from Taylor states.\\ \begin{thm}[Stability of Chandrasekhar's nonlinear force-free fields] Let $\lambda, W>0$. Let $B_{\infty}=-We_z$ and $\phi_{\infty}=Wr^{2}/2$. Let $u_{\infty}\in \mathbb{R}^{3}$ be a constant vector parallel to $e_z$. The force-free field $U_C$ in (1.6) with magnetic helicity $h_C$ is orbitally stable in weak ideal limits of axisymmetric Leray--Hopf solutions to (1.1) under the condition (1.7) in the sense that for arbitrary $\varepsilon >0$ there exists $\delta >0$ such that for axisymmetric $v_0, b_0\in L^{2}_{\sigma}(\mathbb{R}^{3})$ satisfying \begin{align*} ||v_0||_{L^{2}(\mathbb{R}^{3})} +\inf_{z\in \mathbb{R}}||b_0+B_{\infty}-U_{C}(\cdot +ze_z) ||_{L^{2}(\mathbb{R}^{3})} +\left|2\int_{\mathbb{R}^{3}} \left(\phi_0-\phi_{\infty} \right)_{+}\frac{G_0}{r^{2}} \textrm{d} x -h_C\right| \leq \delta, \end{align*}\\ for the Clebsch potentials $\phi_0$, $G_0$ of $b_0$, there exists a weak ideal limit $(v,b)$ of axisymmetric Leray--Hopf solutions to (1.9) for $(v_0, b_0)$ such that \begin{align*} ||v||_{L^{2}(\mathbb{R}^{3})} +\inf_{z\in \mathbb{R}}||b+B_{\infty}-U_{C}(\cdot +ze_{z}) ||_{L^{2}(\mathbb{R}^{3})} \leq \varepsilon, \quad \textrm{for a.e.}\ t\geq 0. \end{align*} \end{thm} \vspace{5pt} \begin{rem} Theorem 1.3 is the special case $\gamma=0$ of a general stability theorem for a particular class of nonlinear force-free fields with discontinuous factors $f\in L^{\infty}(\mathbb{R}^{3})$ (Theorem 8.2). The stability result also holds for weak nonresistive limits of axisymmetric Leray--Hopf solutions for all time, as well as for unique strong axisymmetric solutions to ideal MHD and nonresistive MHD up to the maximal existence time. \end{rem} \subsection{Stability: Euler and ideal MHD} The force-free field (1.6) also gives an explicit solution to the Euler equations (vanishing at infinity), \begin{align*} u_E=U_C(x+B_{\infty}t)-B_{\infty}, \end{align*}\\ which describes an axisymmetric vortex ring with swirl whose vorticity is supported in a ball traveling with the constant velocity $-B_{\infty}$. The solution $u_E$ belongs to $W^{1,\infty}(\mathbb{R}^{3})$, a space of lower regularity than $C^{1,\alpha}(\mathbb{R}^{3})$, $\alpha>0$, in which the Euler equations are locally well-posed \cite{Lichtenstein}, \cite{Gunther}. Elgindi \cite{Elgindi} demonstrated that for some axisymmetric data without swirl $u_0\in C^{1,\alpha}(\mathbb{R}^{3})$ and some $\alpha>0$, the Euler equations exhibit self-similar blow-up. See \cite{VV20}, \cite{LVV21} for the instability of blow-up solutions. By contrast, as noted in Remark 1.4, the explicit solution (1.6) is \textit{stable} in ideal MHD for axisymmetric disturbances \textit{with} swirl, even if blow-up occurs. Moffatt \cite{Moffatt85}, \cite{Moffatt85b} discovered this MHD stability mechanism for shear flows $G(r)\nabla \theta$. Let us compare stability results for the Euler and the ideal MHD equations. The concept of force-free field stability appeared in the work of Lundquist \cite{Lundquist}, who investigated the stability of the linear force-free field $U=J_{1}(fr)e_{\theta}+J_{0}(fr)e_{z}$ for $e_{\theta}=r\nabla \theta$ under axisymmetric disturbances. Trehan \cite{Trehan} investigated the stability of more general explicit solutions. Woltjer studied the stability of linear force-free fields by linearization \cite{Woltjera} and variational arguments \cite{Woltjerb} after the seminal work on the variational characterization of linear force-free fields \cite{W58}, cf.
\cite{CW58}, and concluded in \cite{Woltjerc} the stability of the lowest energy linear force-free fields. The works \cite{Woltjerb}, \cite{Woltjerc}, \cite{Woltjerd}, and \cite{Chandrasekhar1958} investigated analogous variational characterizations of other MHD equilibria. Voslamber and Callebaut \cite{VC} discovered the instability of Lundquist's linear force-free field at large values of $fr$. See also \cite[IX]{Chandra61} for the Rayleigh-type stability criterion. Bernstein et al. \cite{Bernstein1958} established a nonlinear stability theorem for MHD equilibria with a sign condition for second derivatives of the total energy. Arnold \cite{Arnold1965} established a well-known nonlinear stability theorem for steady flows of the Euler equations with a sign condition for second derivatives of the kinetic energy, i.e., local maxima or minima among isovortical velocity fields, cf. \cite{Bernstein1958}. Arnold's stability theorem is useful for 2D steady flows \cite{MP94}, \cite{AK98}, \cite{GS21}, cf. \cite{WP85}, \cite{SV09}. By contrast, he stated \cite[p.1007]{Arnold1965}, \cite[p.349]{Arnold66} that he could not find any examples of 3D steady flows that met his criterion. Rouchon \cite[Theorem 1.1]{Rouchon} demonstrated that all 3D steady flows are saddle points of the kinetic energy and concluded that Arnold's criterion is never satisfied; see also \cite[9.3.3]{Moffatt21}. Later in the 1980s, Moffatt \cite[p.374]{Moffatt85}, \cite[p.369]{Moffatt85b} observed that the analogy between the Euler and ideal MHD equations at the level of existence of steady states does not extend to their stability; see \cite[9.3]{Moffatt21}. He computed the sign of second derivatives of the magnetic energy in the vicinity of the shear flow $G(r)\nabla \theta$, which is a steady axisymmetric magnetic/velocity field with swirl. The sign condition implies that $G$ is stable in ideal MHD if $|G/r^{2}|$ is decreasing in $r>0$, e.g., $G=r^{-\alpha}$, $\alpha>1$. By contrast, such $G$ is unstable in the Euler equations by Rayleigh's criterion \cite{Rayleigh1917}, \cite[Chapter 3]{DR82}. Szeri and Holmes \cite[Theorem 4.2]{SH88} gave a sufficient condition for the Lyapunov stability of axisymmetric Euler flows with swirl, cf. \cite{Arnold1965}, \cite{Rouchon}. Instability of steady Euler flows has been studied in terms of the spectrum of the linearized Euler operator in \cite{FV91}, \cite{LH91}, \cite{FV92}, \cite{LH93}, \cite{V96}, and \cite{FS05}. Nonlinear instability results can be found in \cite{Lin04}, \cite{BGS02}, \cite{VF03}, and the existence of unstable manifolds in \cite{LZ13}, \cite{LZ14}, and \cite{LZ22}. Lifschitz and Hameiri \cite{LH93} studied the spectrum of the linearized Euler operator in the vicinity of axisymmetric vortex rings with swirls. It is believed that axisymmetric vortex rings with swirls are \textit{unstable} in the Euler equations. Gallay and Smets \cite{GalleySmets18} investigated the spectral stability of shear flows $G(r)\nabla \theta$ (columnar vortices) in the nonsymmetric setting; see also \cite{GalleySmets19}. Friedlander and Vishik \cite{VF98} investigated the spectrum of the linearized operator for MHD equilibria. The works \cite{Bernstein1958}, \cite{Moffatt85}, \cite{Moffatt85b}, \cite{Holm85}, \cite{FV90}, \cite{Moffatt95}, \cite{Moffatt96}, \cite{Moffatt97}, and \cite{Moffatt99} all developed nonlinear stability criteria. We are especially interested in the work \cite[Criterion 5.2]{Moffatt97} on nonlinear stability criteria in the axisymmetric setting.
The criterion \cite{Moffatt97} includes axisymmetric MHD equilibria $B=U$ and $u=0$, although examples satisfying this criterion are unknown, cf. \cite{Arnold1965}, \cite{Rouchon}. The energy--Casimir method is a stability principle that appears in the study of other plasma/fluid systems such as the Vlasov--Poisson equations \cite{Guo99}, \cite{GR01} or the Euler--Poisson equations \cite{Rein03}, \cite{LS09}; see \cite{Holm85} and \cite{BPM19} for reviews. The energy--Casimir method uses additional conserved quantities (Casimir invariants) available under symmetry and differs from stability principles in general settings \cite{Bernstein1958}, \cite{Arnold1965}. The stability results \cite{Rein03}, \cite{LS09} are called \textit{conditional} because the existence of global-in-time solutions is unknown. Instability results can be found in \cite{jang08}, \cite{jang14}. The Euler and ideal MHD equations are noncanonical Hamiltonian PDEs, and little is known about the orbital stability of traveling wave solutions; see \cite{GSS} and \cite{LZ22} for the Grillakis--Shatah--Strauss theory for Hamiltonian PDEs. Ideas about the stability of axisymmetric vortex rings without swirl can be traced back to the works of Kelvin \cite{kelvin1880} and Benjamin \cite{Ben76}. For early work on the stability of Hill's spherical vortex, see Wan \cite{Wan86}. The stability principle \cite[p.20]{Ben76} is the maximization of kinetic energy through a rearrangement with an impulse constraint. Burton et al. \cite{BNL13} investigated the orbital stability of traveling wave solutions of the 2D Euler equations based on the maximization of kinetic energy using rearrangements with unprescribed vorticity functions, cf. \cite{Ben76}. See also Burton \cite{B21}. The author and Choi \cite{AC22} investigated stability in a relatively restricted class of traveling waves using the minimization of (penalized) enstrophy with prescribed vorticity functions, which also applies to the stability of axisymmetric vortex rings without swirl thanks to additional Casimir invariants. It should be noted that the stability results \cite{BNL13}, \cite{B21}, and \cite{AC22} are demonstrated for weak solutions without assuming boundedness of the vorticity, cf. \cite{Vishik18}, \cite{Vishik18b}, \cite{ABCDGK}. A particular vortex ring without swirl is Hill's spherical vortex, a patch-type solution; see \cite{Choi22} for its stability in the Yudovich class \cite{UI}. Asymptotic stability, i.e., convergence to equilibrium as $t\to\infty$, is a stronger stability concept than orbital (Lyapunov) stability. Bedrossian and Masmoudi \cite{BM15} established the nonlinear asymptotic stability of 2D Couette flow in the Euler equations for disturbances in Gevrey classes. This stability mechanism is known as inviscid damping, which is related to Landau damping in plasma physics \cite{MV11}. See also \cite{IJ20}, \cite{IJ22}, \cite{MZ20}, and \cite{IJ22b} for nonlinear asymptotic stability results. In \cite{DM18}, it is demonstrated that instability occurs for disturbances in Gevrey classes of low regularity. See \cite{BPM19} for more information on the asymptotic stability of the Euler equations. We also mention the recent works \cite{RZ17}, \cite{ZZZ22}, \cite{LMZZ22} on linear asymptotic stability of sheared velocity and magnetic fields in ideal MHD. Theorems 1.1 and 1.3 are the first (orbital) stability results for MHD equilibria in ideal MHD in terms of the \textit{existence} of weak ideal limits of Leray--Hopf solutions.
They are based on the minimization of total energy following Taylor's relaxation theory. We describe the ideas of the proofs in Subsection 1.6. Let us finally mention the Cauchy problem of the Navier--Stokes equations. Chen et al. \cite{CSTY1}, \cite{CSTY2} and Koch et al. \cite{KNSS} demonstrated that axisymmetric solutions with swirls do not exhibit self-similar blow-up, cf. \cite{Elgindi}. Feng and {\v{S}}ver\'{a}k \cite{FengSverak} showed the existence of global-in-time solutions for large vortex filaments in the axisymmetric class without swirl. Gallay and {\v{S}}ver\'{a}k \cite{GallaySverak}, \cite{GallaySverak2} showed the uniqueness of the constructed large solutions. See also Bedrossian et al. \cite{BGH} and Bedrossian and Golding \cite{BG} for nonsymmetric vortex filament solutions. Jia, {\v{S}}ver\'{a}k, and Guillod \cite{JS15}, \cite{JS}, \cite{GuS} developed a program for the nonuniqueness of Leray--Hopf solutions based on the spectral instability of self-similar solutions. In the groundbreaking work \cite{BV19}, Buckmaster and Vicol constructed nonunique finite energy weak solutions to the Navier--Stokes equations by intermittent convex integration. See also \cite{BCV22} and \cite{Luo19}. Cheskidov and Luo \cite{CL22} showed the existence of nonunique weak solutions in supercritical classes that are sharp with respect to the Ladyzhenskaya--Prodi--Serrin uniqueness criteria. Albritton et al. \cite{ABC} demonstrated the nonuniqueness of Leray--Hopf solutions with a force in the axisymmetric class without swirl, based on the instability of self-similar solutions, cf. \cite{Vishik18}, \cite{Vishik18b}, \cite{ABCDGK}. \subsection{Remarks on magnetic relaxation} The total energy of solutions to nonresistive MHD, i.e., (1.1)--(1.2) with $\mu=0$ and $\nu>0$, decreases in time, with $\nabla u\in L^{2}(0,\infty; L^{2})$. Arnold's inequality, on the other hand, bounds the magnetic energy from below for initial data with nonzero magnetic helicity; see (2.4). By letting $t\to \infty$, one may construct steady Euler flows for a given $B_0$. This method of constructing steady Euler flows is called \textit{magnetic relaxation}; it was posed by Arnold \cite{Arnold74}, \cite[Chapter III]{AK98} and Moffatt \cite{Moffatt85}, \cite[Section 8]{Moffatt21}. The equations of the velocity field do not have to be the Navier--Stokes equations, and other models are considered in \cite{Moffatt85}, \cite{Moffatt90}, \cite{Vallis}, \cite{Ni02}, \cite{Brenier14}, \cite{Pasqualotto}, and \cite{BFV21}. By Remark 1.2, weak nonresistive limits stay near Taylor states for a.e. $t\geq 0$; this does not, however, imply convergence to a Taylor state as $t\to\infty$. Magnetic relaxation can, in fact, produce a broader range of steady flows than that of force-free fields \cite{Kom21}. Recall that some $B_0$ can form nontrivial links with magnetic helicity $h=0$ \cite[p.367]{Moffatt85}, \cite[8.1.1]{Moffatt21}, \cite[p.2]{Kom21}. Komendarczyk \cite{Kom21} investigated magnetic energy minimization for given $B_0\in L^{2}_{\sigma}(\Omega)$ among the weak $L^{2}$-closure of pushforwards $B=\varphi_*B_0$ with nonincreasing energy, for volume-preserving diffeomorphisms $\varphi$ of $\Omega$ whose restriction to $\partial\Omega$ is the identity, i.e., \begin{align*} \tilde{{\mathcal{I}}}(B_0)=\inf\left\{\frac{1}{2}\int_{\Omega}|B|^{2}\textrm{d} x\ \middle|\ B\in \overline{M}^{w}(\Omega, B_0) \right\}, \end{align*}\\ for $M(\Omega, B_0)=\{B\in L^{2}_{\sigma}(\Omega)|\ B=\varphi_*B_0,\ \varphi\in \textrm{Diff} (\Omega, \textrm{d} x),\ ||B||_{L^{2}}\leq ||B_0||_{L^{2}} \}$.
The work \cite{Kom21} demonstrated that for some $B_0\in L^{2}_{\sigma}(\Omega)$ with helicity $h=0$, the set of minimizers $\tilde{{\mathcal{S}}}(B_0)$ of $\tilde{{\mathcal{I}}}(B_0)$ is nonempty, i.e., $\tilde{{\mathcal{S}}}(B_0)\neq \emptyset$. This means that for such $B_0$, the minimum energy states of magnetic relaxation are nontrivial, whereas the Taylor states are trivial, i.e., $S_0=\emptyset$. Beekie et al. \cite{BFV21} obtained detailed information on the large-time behavior when the velocity field satisfies \begin{align*} \nabla p=B\cdot \nabla B-\nu(-\Delta)^{\kappa} u. \end{align*}\\ The frozen field equation $(1.1)_2$ with the above velocity field (the MRE equations) is shown to be globally well-posed for $u_0,B_0\in H^{s}(\mathbb{T}^{n})$, $s,\kappa>n/2+1$, satisfying $\nabla \cdot B_0=0$, and the strong convergence $\lim_{t\to\infty}||\nabla u||_{L^{\infty}}=0$ holds \cite[Theorems 3.1, 4.1]{BFV21}. In the 2D setting, the asymptotic stability of the steady states $B=e_1$ and $u=0$ is obtained, together with examples of 3D exact solutions exhibiting an exponential growth of the current field \cite[Theorems 5.1, 6.4]{BFV21}. See also \cite{Elgindi17} and \cite{CCL19} for the asymptotic stability of the IPM equations and \cite{Elgindi20} for an exponential growth of the vorticity field of the 3D Euler equations. \subsection{Ideas and main ingredients} The following are the main ideas for proving Theorem 1.3:\\ \noindent (a) Total energy minimization under the constraint of conserved generalized magnetic helicity, \\ \noindent (b) Existence of weak ideal limits of axisymmetric Leray--Hopf solutions with nonincreasing total energy and conserved generalized magnetic helicity.\\ For the stability of traveling wave solutions in the 2D Euler equations, we can consider (a) enstrophy minimization under the kinetic energy constraint \cite{AC22}, cf. \cite{BNL13}, \cite{B21}, and (b) the existence of global weak solutions with conserved enstrophy and kinetic energy. The conservation of enstrophy is due to the renormalized property of the vorticity equations \cite{LNM06}, cf. \cite{CS15}. See \cite{CLNS} for more information on energy conservation. Due to additional Casimir invariants, axisymmetric vortex rings with no swirl admit a similar minimization \cite{FT81} and a renormalized property \cite{NS22}. We take advantage of the analogy between 2D hydrodynamic and 3D MHD turbulence \cite{Hasegawa85}, \cite[7.3]{Biskamp93}. The simpler case is the stability of Taylor states (Theorem 1.1). Part (a) corresponds to Woltjer's principle \cite{W58}, \cite{Laurence91}, and part (b) to Taylor's conjecture \cite{FL20}, \cite{FL22}. From part (a), we exploit (i) \textit{compactness of minimizing sequences}. Namely, we use the fact that any sequence $\{B_n\}\subset L^{2}_{\sigma}(\Omega)$ satisfying $\frac{1}{2}\int_{\Omega}|B_n|^{2}\textrm{d} x\to {\mathcal{I}}_h$, $\int_{\Omega}\textrm{curl}^{-1}B_n\cdot B_n\textrm{d} x\to h$ is relatively compact in $L^{2}(\Omega)$. A key point of (b) is (ii) \textit{compactness of Leray--Hopf solutions}. For Leray--Hopf solutions $(u_j,B_j)$ to (1.1)--(1.2) satisfying the equality \begin{align} \int_{\Omega}\textrm{curl}^{-1}B_j\cdot B_j\textrm{d} x+2\mu_j\int_{0}^{t}\int_{\Omega}\nabla \times B_j\cdot B_j\textrm{d} x\textrm{d} s=\int_{\Omega}\textrm{curl}^{-1}B_{0}\cdot B_0\textrm{d} x, \end{align}\\ the vector potentials $\{\textrm{curl}^{-1}B_j\}$ are relatively compact in $L^{2}_{\textrm{loc}}(0,T; L^{2}(\Omega))$ by the Aubin--Lions lemma as $(\nu_j,\mu_j)\to (0,0)$.
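To see why the dissipative term in (1.12) is harmless in the ideal limit, here is a minimal sketch of the standard estimate (spelled out for convenience, with inessential constants absorbed into $\lesssim$): by the Cauchy--Schwarz inequality and the energy inequality of Leray--Hopf solutions (Definition 2.9),
\begin{align*}
2\mu_j\left|\int_{0}^{t}\int_{\Omega}\nabla \times B_j\cdot B_j\textrm{d} x\textrm{d} s\right|
\lesssim \mu_j^{1/2}\left(\mu_j\int_{0}^{t}||\nabla B_j||_{L^{2}}^{2}\textrm{d} s\right)^{1/2}\left(\int_{0}^{t}||B_j||_{L^{2}}^{2}\textrm{d} s\right)^{1/2}
\lesssim \mu_j^{1/2}t^{1/2}\left(||u_{0,j}||_{L^{2}}^{2}+||B_{0,j}||_{L^{2}}^{2}\right)\to 0,
\end{align*}\\
as $\mu_j\to 0$, since the initial data remain bounded in $L^{2}(\Omega)$.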
Letting $j\to\infty$, the magnetic helicity of weak ideal limits is conserved; the second (dissipative) term in (1.12) is eliminated by the energy inequality, as in the sketch above. In weak ideal limits, the two points (i) and (ii) imply the stability of the set of minimizers (Taylor states) ${\mathcal{S}}_h$ of $\mathcal{I}_h$. Theorem 1.1 is derived by characterizing Taylor states as a finite number of eigenfunctions of the rotation operator associated with the principal eigenvalues, using Rayleigh's formulas (Lemma 2.4). To prove the stability of the nonlinear force-free field (1.6) (Theorem 1.3), in part (a), we consider a new minimization principle: \begin{equation} \begin{aligned} &I_{h,W,\gamma}=\\ &\inf\left\{\frac{1}{2}\int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x\ \middle|\ b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3}), 2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}} \textrm{d} x=h\ \right\}, \end{aligned} \end{equation}\\ for given constants $h\in \mathbb{R}$, $W>0$, and $\gamma\geq 0$, where $L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ denotes the space of all axisymmetric solenoidal vector fields in $L^{2}_{\sigma}(\mathbb{R}^{3})$. Minimizers of (1.13) produce axisymmetric nonlinear force-free fields (1.8) for $B_{\infty}=-We_z$ with discontinuous factors $f\in L^{\infty}(\mathbb{R}^{3})$ (Lemma 4.3). This appears to be the first variational principle that provides nonlinear force-free fields using conserved quantities of ideal MHD. Previous constructions use unconserved Euler quantities \cite{Tu89} or a minimax method for flux functions \cite{A8}. In part (b), a key quantity is the \textit{generalized magnetic mean-square potential}, \begin{align} \int_{\mathbb{R}^{3} }(\phi-\phi_{\infty} )_{+}^{2}\textrm{d} x. \end{align}\\ Faraco and Lindberg \cite[Theorem 5.4]{FL20} demonstrated the conservation of magnetic mean-square potential at weak ideal limits of Leray--Hopf solutions for 2D bounded and multiply connected domains. For fixed initial data, we show the conservation of generalized magnetic mean-square potential for weak ideal limits of axisymmetric Leray--Hopf solutions. The main difficulty for (i) compactness of minimizing sequences and (ii) compactness of Leray--Hopf solutions lies in the \textit{unboundedness} of the domain $\Omega=\mathbb{R}^{3}$. As sketched below, we prove (i) and (ii) by different ideas.\\ The approach to proving (i), the compactness of minimizing sequences for $I_{h}=I_{h, W,\gamma}$ in (1.13), is the strict subadditivity of the minimum \begin{align} I_{h_1+h_2}<I_{h_1}+I_{h_2},\quad h_1,h_2>0. \end{align}\\ We show that any axisymmetric minimizing sequences $\{b_n\}\subset L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ with potentials $\phi_n$ and $G_n$ satisfying \begin{align*} \frac{1}{2}\int_{\mathbb{R}^{3}}|b_n|^{2}\textrm{d} x\to I_h,\quad 2\int_{\mathbb{R}^{3}}(\phi_n-\phi_{\infty})_{+}\frac{G_n}{r^{2}}\textrm{d} x\to h, \end{align*}\\ are relatively compact in $L^{2}(\mathbb{R}^{3})$ up to translation in $z$, by using the strict subadditivity (1.15) and the concentration--compactness principle \cite{Lions84a}, \cite{Lions84b}. We derive the strict subadditivity (1.15) from the existence of minimizers symmetric in $z$, namely, $\phi(z,r)=\phi(-z,r)$ and $G(z,r)=G(-z,r)$. We also demonstrate that the minimum $I_h$ is symmetric and lower semi-continuous in $h$ and increasing in $|h|$.
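One reason the concentration--compactness principle is needed, and why compactness can hold only up to translation in $z$, is the following elementary observation (recorded here for clarity): since $\phi_{\infty}=Wr^{2}/2+\gamma$ depends only on $r$, the translated field $b(\cdot+z_0e_z)$, whose Clebsch potentials are $\phi(z+z_0,r)$ and $G(z+z_0,r)$, satisfies
\begin{align*}
\frac{1}{2}\int_{\mathbb{R}^{3}}|b(\cdot+z_0e_z)|^{2}\textrm{d} x=\frac{1}{2}\int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x,\qquad
2\int_{\mathbb{R}^{3}}(\phi(z+z_0,r)-\phi_{\infty})_{+}\frac{G(z+z_0,r)}{r^{2}}\textrm{d} x=2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}}\textrm{d} x,
\end{align*}\\
for every $z_0\in \mathbb{R}$. Both the energy and the helicity constraint in (1.13) are therefore invariant under translations in $z$, so no minimizing sequence can be compact in a sense stronger than modulo such translations.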
Minimizers of $I_h$ satisfy $G=\mu(\phi -\phi_{\infty})_{+}$ for $\phi_{\infty}=Wr^{2}/2+\gamma$, $W>0$, $\gamma\geq 0$, with a Lagrange multiplier $\mu\in \mathbb{R}$, and $\phi$ solves the Grad--Shafranov equation \begin{align} -L\phi=\mu^{2}(\phi-\phi_{\infty} )_{+}\quad \textrm{in}\ \mathbb{R}^{2}_{+},\quad \phi=0\quad \textrm{on}\ \partial\mathbb{R}^{2}_{+}, \end{align}\\ where $\mathbb{R}^{2}_{+}=\{{}^{t}(z,r)\ |\ z\in \mathbb{R},\ r>0\}$ and $L=\partial_z^{2}+\partial_r^{2}-r^{-1}\partial_r$. The minimizers yield axisymmetric nonlinear force-free fields $U=b+B_{\infty}$ with $f=\mu 1_{(0,\infty)}(\phi-\phi_{\infty})$. The desired compactness (ii) of Leray--Hopf solutions $(v_j,b_j)$ to (1.9) as $(\nu_j,\mu_j)\to (0,0)$ is the \textit{global} convergence of the flux, \begin{align} (\phi_j-\phi_{\infty})_{+}\to (\phi-\phi_{\infty})_{+}\quad \textrm{in}\ L^{2}(0,T; L^{2}(\mathbb{R}^{3})), \end{align}\\ for the Clebsch potentials $\phi_j, G_j$ of $b_j$ and $\phi, G$ of a weak ideal limit $b$. We show the following equalities of generalized magnetic helicity and generalized magnetic mean-square potential for axisymmetric Leray--Hopf solutions, \begin{align} \int_{\mathbb{R}^{3}}\Phi_{j,+}\frac{G_j}{r^{2}}\textrm{d} x +\mu_j\int_{0}^{t}\int_{\mathbb{R}^{3}}\nabla \times B_j\cdot B_j 1_{(0,\infty)}(\Phi_{j} )\textrm{d} x\textrm{d} s&=\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x, \\ \int_{\mathbb{R}^{3} }|\Phi_{j,+}|^{2}\textrm{d} x+2\mu_j \int_{0}^{t}\int_{\mathbb{R}^{3}}|\nabla \Phi_{j,+}|^{2}\textrm{d} x\textrm{d} s&=\int_{\mathbb{R}^{3} }|\Phi_{0,+}|^{2}\textrm{d} x, \end{align}\\ for $\Phi_j=\phi_j-\phi_{\infty}$ and $B_j=b_j+B_{\infty}$. They follow from the equations of the vector potentials and the drift--diffusion equation, \begin{align*} \partial_t \Phi_{j}+u_j\cdot \nabla \Phi_{j}=\mu_j\left(\Delta-\frac{2}{r}\partial_r \right)\Phi_j. \end{align*}\\ The Aubin--Lions lemma only implies the local convergence of the flux, $\Phi_{j,+}\to \Phi_{+}$ in $L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}))$. We strengthen this convergence to the global convergence (1.17) by using the conservation of generalized magnetic mean-square potential at the weak ideal limit, \begin{align} \int_{\mathbb{R}^{3} }|\Phi_{+}|^{2}\textrm{d} x=\int_{\mathbb{R}^{3} }|\Phi_{0,+}|^{2}\textrm{d} x. \end{align}\\ The equalities (1.19) and (1.20) imply the global convergence (1.17) and the conservation of generalized magnetic helicity at the weak ideal limit by passing to the limit in the equality (1.18). In vanishing viscosity limits, the vorticity of the 2D Euler equations in $\mathbb{R}^{2}$ exhibits a similar global convergence \cite[p.359]{LNM06}. The difference between the 2D vorticity equation and the present setting is that the drift $u$ is only in $ L^{\infty}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}))$ and lacks Sobolev regularity. The Sobolev regularity of the drift is critical for the renormalized property of transport equation solutions \cite[Corollary II.2]{DL89}, \cite[p.1213]{AC14} and is used in \cite{LNM06} for enstrophy conservation at vanishing viscosity limits, cf. \cite{CS15}. The conservation of (generalized) magnetic mean-square potential, by contrast, is due to the Sobolev regularity $r^{-1}\nabla \Phi\in L^{\infty}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}))$. This Sobolev regularity implies that the nonlinear term $u\cdot \nabla \Phi$ is integrable and that the flux solves the transport equation \begin{align*} \Phi_{t}+u\cdot \nabla \Phi=0. \end{align*}\\ This is demonstrated in \cite[Lemma 5.7]{FL20} for the flux of a 2D system.
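For orientation, here is the formal computation behind (1.19), a sketch valid for smooth, sufficiently decaying solutions (the rigorous version for Leray--Hopf solutions is given in Section 6): multiplying the drift--diffusion equation by $\Phi_{j,+}$ and integrating over $\mathbb{R}^{3}$,
\begin{align*}
\frac{1}{2}\frac{\textrm{d}}{\textrm{d} t}\int_{\mathbb{R}^{3}}|\Phi_{j,+}|^{2}\textrm{d} x
=-\int_{\mathbb{R}^{3}}u_j\cdot \nabla \frac{\Phi_{j,+}^{2}}{2}\textrm{d} x
+\mu_j\int_{\mathbb{R}^{3}}\left(\Delta-\frac{2}{r}\partial_r \right)\Phi_{j}\, \Phi_{j,+}\textrm{d} x
=-\mu_j\int_{\mathbb{R}^{3}}|\nabla \Phi_{j,+}|^{2}\textrm{d} x.
\end{align*}\\
The transport term vanishes since $\nabla \cdot u_j=0$, and the term $-(2/r)\partial_r$ contributes no boundary term because $\Phi_{j,+}=(\phi_j-\phi_{\infty})_{+}$ vanishes on the axis $r=0$ (the flux $\phi_j$ has zero trace there, while $\phi_{\infty}=\gamma\geq 0$). Integrating in time yields (1.19); a similar, slightly longer computation based on the vector potential equations gives (1.18).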
More specifically, in a 2D bounded domain, the work \cite{FL20} demonstrated that any solution to the transport equation with a divergence-free drift in $L^{\infty}_{t}L^{2}_x$ conserves magnetic mean-square potential, thanks to the Sobolev regularity of $\Phi$. We show that $\Phi_{+}^{2}$ satisfies the transport equation $\partial_t\Phi_{+}^{2}+u\cdot \nabla \Phi_{+}^{2}=0$ in the distributional sense and that (1.20) holds. Heuristically, the Sobolev regularity of $\Phi$ (in $x$ and $t$) implies the renormalized property \begin{align*} g(\Phi)_{t}+u\cdot \nabla g(\Phi)=0, \end{align*}\\ for regular $g(s)$ by the chain rule in Sobolev spaces. In weak ideal limits of axisymmetric Leray--Hopf solutions, the two points (i) and (ii) imply the stability of the set of minimizers $S_{h,W,\gamma}$ of $I_{h,W,\gamma}$ in (1.13) for $h\in \mathbb{R}$, $W>0$, and $\gamma\geq 0$ (Theorem 8.2). Theorem 1.3 is derived from the particular case $h=h_C$ in (1.11) and $\gamma=0$, for which the set of minimizers $S_{h_C,W,0}$ agrees with the orbit of the explicit solution $U_C$ for given $(W,\lambda)$, i.e., $S_{h_C,W,0}=\{U_C(\cdot +ze_z)-B_{\infty}\ |\ z\in \mathbb{R}\ \}$, due to the uniqueness for the Grad--Shafranov equation (1.16) (Theorem 8.3) and the explicit form of the constant $h_C$ (Proposition 8.4). The uniqueness is established by the moving plane method \cite{Fra92}, \cite{Fra00}. The scaling property of the minimum (1.13) is $I_{h,W,\gamma}=(W/2)^{2}I_{\tilde{h},2,\tilde{\gamma}}$, $\tilde{h}=(W/2)^{-2}h$, $\tilde{\gamma}=(W/2)^{-1}\gamma$. We will fix $W=2$ from Section 3 to Section 7 and deduce the results for $W>0$ from those for $W=2$ in Section 8.\\ This paper is organized as follows. In Section 2, we demonstrate the stability of Taylor states (Theorem 1.1). In Section 3, we establish the Clebsch representation and the well-definedness of generalized magnetic helicity (1.10) and generalized magnetic mean-square potential (1.14). Section 4 develops the variational problem (1.13) (for $W=2$) and establishes the strict subadditivity (1.15). Section 5 demonstrates the compactness of minimizing sequences of (1.13) by the concentration--compactness principle. In Section 6, we show the equalities (1.18) and (1.19) for axisymmetric Leray--Hopf solutions. In Section 7, we prove the conservation of generalized magnetic helicity at weak ideal limits. In Section 8, we prove the orbital stability of the set of minimizers of (1.13) and deduce Theorem 1.3 from a uniqueness theorem. \\ \noindent \textit{Acknowledgements.} This work was supported in part by JSPS through the Grant-in-Aid for Young Scientists 20K14347 and by the MEXT Promotion of Distinctive Joint Research Center Program JPMXP0619217849. \section{Linear force-free fields} In this section, we demonstrate the stability of Taylor states (Theorem 1.1). Using Rayleigh's formulas, we show that minimizers of Woltjer's principle (1.5) (Taylor states) are eigenfunctions of the rotation operator associated with the principal eigenvalues. We then prove the stability of Taylor states by a contradiction argument. We remark on Woltjer's principle and Taylor's conjecture for multiply connected domains in Remarks 2.1, 2.5, and 2.14. \subsection{Magnetic helicity} Let $\Omega\subset \mathbb{R}^{3}$ be a bounded and simply connected domain with a $C^{1,1}$-boundary $\partial\Omega$ consisting of the connected components $\Gamma_{1},\dots, \Gamma_{I}$.
We consider the direct sum decomposition \begin{equation} \begin{aligned} L^{p}(\Omega)&=L^{p}_{\sigma}(\Omega)\oplus G^{p} (\Omega),\quad 1<p<\infty,\\ L^{p}_{\sigma}(\Omega)&=\{u\in L^{p}(\Omega)\ |\ \nabla \cdot u=0\ \textrm{in}\ \Omega,\ u\cdot n=0\ \textrm{on}\ \partial\Omega\},\\ G^{p}(\Omega)&=\{\nabla q\in L^{p}(\Omega)\ |\ q\in L^{1}_{\textrm{loc}}(\Omega) \}, \end{aligned} \end{equation}\\ and the associated projection operator $\mathbb{P}: L^{p}(\Omega)\to L^{p}_{\sigma}(\Omega)$. We denote by $C_{c,\sigma}^{\infty}(\Omega)$ the space of all smooth solenoidal vector fields with compact support in $\Omega$. The $L^{p}$-closure of $C_{c,\sigma}^{\infty}(\Omega)$ is $L^{p}_{\sigma}(\Omega)$ and the $W^{1,p}$-closure of $C_{c,\sigma}^{\infty}(\Omega)$ is $L^{p}_{\sigma}\cap W^{1,p}_{0}(\Omega)$. The same decomposition and density properties hold also for $\Omega=\mathbb{R}^{3}$ \cite[Theorem III.1.2, 2.3, 5.1]{Gal}. Following \cite{ABDG} and \cite{FL20}, we define spaces of vector potentials. Let $H(\textrm{div},\Omega)=\{u\in L^{2}(\Omega)\ |\ \nabla \cdot u\in L^{2}(\Omega) \}$ and $H(\textrm{curl},\Omega)=\{u\in L^{2}(\Omega)\ |\ \nabla\times u\in L^{2}(\Omega) \}$. By the trace operator $T: H^{1}(\Omega)\ni \varphi\longmapsto T\varphi\in H^{1/2}(\partial\Omega)$, the tangential trace $u\times n$ for $u\in H(\textrm{curl},\Omega)$ and the normal trace $u\cdot n$ for $u\in H(\textrm{div},\Omega)$ are defined as elements of $H^{-1/2}(\partial\Omega)=H^{1/2}(\partial\Omega)^{*}$ in the sense that \begin{equation} \begin{aligned} <u\times n, \xi>_{\partial\Omega}&=\int_{\Omega}\nabla \times u\cdot \xi\textrm{d} x-\int_{\Omega}u\cdot \nabla \times \xi\textrm{d} x,\\ <u\cdot n, \varphi>_{\partial\Omega}&=\int_{\Omega}u\cdot \nabla \varphi\textrm{d} x+\int_{\Omega}\nabla \cdot u \varphi\textrm{d} x, \end{aligned} \end{equation}\\ for $\xi, \varphi\in H^{1}(\Omega)$. Here, $<\cdot,\cdot>_{\partial\Omega}$ denotes the duality pairing between $H^{-1/2}(\partial\Omega)$ and $H^{1/2}(\partial\Omega)$. The traces can be defined also in the $L^{p}$ setting \cite{FL20}. Let $X(\Omega)=H(\textrm{div},\Omega)\cap H(\textrm{curl},\Omega)$. Let $X_{N}(\Omega)=\{u\in X(\Omega)\ |\ u\times n=0\ \textrm{on}\ \partial\Omega \}$ and $X_{T}(\Omega)=\{u\in X(\Omega)\ |\ u\cdot n=0\ \textrm{on}\ \partial\Omega \}$. When $\Omega$ is $C^{1,1}$ or convex, $X_N(\Omega)$ and $X_T(\Omega)$ continuously embed into $H^{1}(\Omega)$ \cite[Theorems 2.9, 2.12, 2.17]{ABDG}, i.e., \begin{equation} X_N(\Omega), X_T(\Omega)\subset H^{1}(\Omega). \end{equation}\\ When $\Omega$ is Lipschitz, $X_N(\Omega)$ and $X_T(\Omega)$ continuously embed into $H^{s}(\Omega)$ for some $s>1/2$ and compactly embed into $L^{2}(\Omega)$ \cite[Proposition 3.7, Theorem 2.8]{ABDG}. We set $K_{N}(\Omega)=\{u\in X_N(\Omega)\ |\ \nabla \times u=0,\ \nabla \cdot u=0\ \textrm{in}\ \Omega \}$ and $K_{T}(\Omega)=\{u\in X_T(\Omega)\ |\ \nabla \times u=0,\ \nabla \cdot u=0\ \textrm{in}\ \Omega \}$. For a simply connected domain $\Omega$ whose boundary $\partial\Omega$ has the connected components $\Gamma_{1},\dots, \Gamma_{I}$, $\textrm{dim}\ K_{T}(\Omega)=0$ and $\textrm{dim}\ K_{N}(\Omega)=I-1$ \cite[Propositions 3.14, 3.18]{ABDG}. For $B\in L^{2}_{\sigma}(\Omega)$, there exists a unique $A\in X_N(\Omega)$ such that $\nabla \times A=B$, $\nabla \cdot A=0$ in $\Omega$ and $\int_{\Gamma_i}A\cdot n\textrm{d} H=0$ for $1\leq i\leq I$ \cite[Theorem 3.17]{ABDG}. We denote such $A$ by $A=\textrm{curl}^{-1}B$.
Magnetic helicity in a simply connected domain is gauge invariant in the sense that \begin{align*} \int_{\Omega}\tilde{A}\cdot B\textrm{d} x=\int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x, \end{align*}\\ for any vector potential $\tilde{A}$ such that $\nabla \times \tilde{A}=B$. Indeed, $\tilde{A}-\textrm{curl}^{-1}B$ is curl-free, hence a gradient $\nabla q$ in the simply connected domain $\Omega$, and $\int_{\Omega}\nabla q\cdot B\textrm{d} x=<B\cdot n, q>_{\partial\Omega}=0$ by $(2.2)_2$ and $B\cdot n=0$ on $\partial\Omega$. We use the integrand $\textrm{curl}^{-1}B \cdot B$ for the magnetic helicity. \subsection{Rayleigh's formulas} We recall the fact that the rotation operator $S=\nabla \times$ is a self-adjoint operator on $L^{2}_{\sigma}(\Omega)$ \cite[Theorem 1.1]{YG90}. Let $D(S)=\{u\in L^{2}_{\sigma}(\Omega)\ |\ S u\in L^{2}_{\sigma}(\Omega) \}$ and $Su=\nabla \times u$ for $u\in D(S)$. By the continuous embedding (2.3), $u\times n\in H^{1/2}(\partial\Omega)$ for $u\in D(S)\subset H^{1}(\Omega)$. The vector field $u$ is irrotational on $\partial\Omega$ for $u\in D(S)$, i.e., $d\alpha=0$ for the dual 1-form $\alpha$ of $u$ on $\partial\Omega$ and the exterior derivative $d$. Thus there exists a 0-form $\phi\in H^{3/2}(\partial\Omega)$ such that $\alpha=d \phi$. Let $D(S^{*})$ be the set of all $v\in L^{2}_{\sigma}(\Omega)$ for which there exists some $w\in L^{2}_{\sigma}(\Omega)$ such that $(Su,v)=(u,w)$ for all $u\in D(S)$, where $(\cdot,\cdot)$ is the inner product on $L^{2}_{\sigma}(\Omega)$. We set the adjoint operator by $S^{*}v=w$ for $v\in D(S^{*})$. For $u,v\in D(S)$, we denote by $\alpha$ and $\beta$ the dual 1-forms of $u$ and $v$ on $\partial\Omega$. Since $\alpha=d\phi$ and $d\beta=0$, \begin{align*} d(\phi\wedge \beta)=d\phi\wedge \beta-\phi\wedge d\beta=d\phi\wedge \beta=\alpha\wedge \beta. \end{align*}\\ By Stokes's theorem, \begin{align*} <n\times u,v>_{\partial\Omega}=\int_{\partial\Omega}u\times v\cdot n\textrm{d} H =\int_{\partial\Omega}\alpha\wedge \beta=\int_{\partial\Omega}d(\phi \wedge \beta)=0. \end{align*}\\ This implies that $(Su,v)=(u,Sv)$, $u,v\in D(S)$, and $D(S)\subset D(S^{*})$ by $(2.2)_1$. Thus $S$ is a symmetric operator. For $\tilde{u}\in C^{\infty}_{c}(\Omega)$, $u=\mathbb{P}\tilde{u}\in D(S)$ since $\nabla \times u=\nabla \times \tilde{u}$. By $(\nabla \times \tilde{u},v)=( \tilde{u},S^{*}v)$, $\nabla \times v=S^{*}v\in L^{2}_{\sigma}(\Omega)$ and $D(S^{*})\subset D(S)$. Thus $S$ is a self-adjoint operator. The paper \cite{YG90} assumes that $\partial\Omega$ is smooth; the same result holds for domains with $C^{1,1}$-boundaries by the continuous embedding (2.3). The operator $S: L^{2}_{\sigma}(\Omega)\supset D(S)\longrightarrow L^{2}_{\sigma}(\Omega)$ is injective since $\textrm{dim}\ K_T(\Omega)=0$. The operator $S$ is surjective since for $B\in L^{2}_{\sigma}(\Omega)$, there exists a unique $\hat{A}\in X_{T}(\Omega)$ such that $\nabla \times \hat{A}=B$, $\nabla \cdot \hat{A}=0$ in $\Omega$ \cite[Theorem 3.12]{ABDG}, i.e., $\hat{A}\in D(S)$. The inverse operator $S^{-1}: L^{2}_{\sigma}(\Omega)\ni B\longmapsto \hat{A}\in D(S)$ is compact by (2.3). Thus the spectrum of $S$ consists of countably many real eigenvalues. Each eigenspace is finite dimensional by the Fredholm alternative, and there exist eigenfunctions $\{\bold{B}_j\}$ forming a complete orthonormal basis of $L^{2}_{\sigma}(\Omega)$, with associated eigenvalues $\{\lambda_j\}$. We label $\{\bold{B}_j\}$ according to the multiplicity of $\{\lambda_j\}$ so that each eigenvalue $\lambda_j$ corresponds to one eigenfunction $\bold{B}_j$. We also denote $\{\lambda_j\}$ by $\cdots<f_{2}^{-}\leq f_{1}^{-}<0<f_{1}^{+}\leq f_{2}^{+}<\cdots$.
The eigenfunctions $\bold{B}_j$ and the vector potentials $\bold{A}_j=\textrm{curl}^{-1}\bold{B}_j$ satisfy \begin{align*} \int_{\Omega}\bold{A}_j\cdot \bold{B}_k\textrm{d} x=\frac{1}{\lambda_k}\int_{\Omega}\bold{A}_j\cdot \nabla \times \bold{B}_k\textrm{d} x=\frac{1}{\lambda_k}\int_{\Omega}\bold{B}_j\cdot \bold{B}_k\textrm{d} x=\frac{1}{\lambda_k}\delta_{j,k}. \end{align*}\\ The expansions of $B\in L^{2}_{\sigma}(\Omega)$ and $A=\textrm{curl}^{-1}B$ are \begin{align*} B=\sum_{j}c_j\bold{B}_j,\quad c_j=\int_{\Omega}B\cdot \bold{B}_{j}\textrm{d} x,\quad A=\sum_{j}c_j\bold{A}_{j}. \end{align*}\\ The magnetic helicity of $B$ is expressed as \begin{align*} \int_{\Omega}A\cdot B\textrm{d} x=\sum_{j}\frac{1}{\lambda_j}c_j^{2}=\sum_{j}\frac{1}{f_j^{+}}c_j^{2}+\sum_{j}\frac{1}{f_j^{-}}c_j^{2}. \end{align*}\\ This helicity expansion yields the best constants of Arnold's inequalities \cite[p.122]{AK98} \begin{equation} \begin{aligned} \int_{\Omega}A\cdot B\textrm{d} x&\leq \frac{1}{f_1^{+}}\int_{\Omega}|B|^{2}\textrm{d} x,\\ -\int_{\Omega}A\cdot B\textrm{d} x &\leq \frac{1}{-f_1^{-}}\int_{\Omega}|B|^{2}\textrm{d} x, \end{aligned} \end{equation} for $B\in L^{2}_{\sigma}(\Omega)$ and $A=\textrm{curl}^{-1}B$. The equalities hold for eigenfunctions associated with $f_1^{+}$ and $f_1^{-}$. Thus normalization implies Rayleigh's formulas \begin{equation} \begin{aligned} f_{1}^{+}&=\inf \left\{\int_{\Omega}|B|^{2}\textrm{d} x\ \middle|\ B\in L^{2}_{\sigma}(\Omega),\ \int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x=1 \right\}, \\ -f_{1}^{-}&=\inf \left\{\int_{\Omega}|B|^{2}\textrm{d} x\ \middle|\ B\in L^{2}_{\sigma}(\Omega),\ \int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x=-1 \right\}. \end{aligned} \end{equation} \begin{rem} When $\Omega$ is multiply connected, the rotation operator on $L^{2}_{\sigma}(\Omega)$ is not self-adjoint. Its spectrum consists entirely of point spectrum and coincides with the whole complex plane \cite[Theorem 2]{YG90}. When $\Omega=\mathbb{R}^{3}$, linear force-free fields do not exist in $L^{2}(\mathbb{R}^{3})$ since $-\Delta U=f^{2}U$ and the Laplace operator has purely essential spectrum. Nevertheless, there exist linear force-free fields with highly nontrivial topology of magnetic field lines, decaying at the rate $U=O(|x|^{-1})$ as $|x|\to\infty$ \cite{EP12}, \cite{EP15}. It is known \cite{Na14}, \cite{CC15} that either of the conditions $U\in L^{q}(\mathbb{R}^{3})$, $2\leq q\leq 3$, or $U=o(|x|^{-1})$ implies the non-existence of force-free fields in $\mathbb{R}^{3}$. \end{rem} \subsection{Woltjer's principle} According to \cite[p.1246]{Laurence91}, we consider Woltjer's principle with a prescribed constant $h\in \mathbb{R}$, \begin{align*} {\mathcal{I}}_h=\inf\left\{\frac{1}{2}\int_{\Omega}|B|^{2}\textrm{d} x\ \middle|\ B\in L^{2}_{\sigma}(\Omega),\ \int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x=h \right\}. \end{align*}\\ We denote by ${\mathcal{S}}_h$ the set of minimizers of $\mathcal{I}_h$. By the scaling $B=|h|^{1/2}\tilde{B}$, \begin{equation} \mathcal{I}_{h}= \begin{cases} \ h \mathcal{I}_{1} &h>0, \\ \ -h \mathcal{I}_{-1} &h<0,\\ \ 0 &h=0, \end{cases} \qquad {\mathcal{S}}_{h}= \begin{cases} \ h^{1/2} {\mathcal{S}}_{1} &h>0, \\ \ (-h)^{1/2} {\mathcal{S}}_{-1} &h<0,\\ \ \emptyset &h=0. \end{cases} \end{equation}\\ The minimum $\mathcal{I}_{h}$ is Lipschitz continuous and piecewise linear in $h\in \mathbb{R}$. Rayleigh's formulas (2.5) are expressed in terms of the minimum as \begin{align} f_{1}^{+}=2{\mathcal{I}}_1,\quad -f_{1}^{-}=2{\mathcal{I}}_{-1}.
\end{align}\\ We will identify the set of minimizers ${\mathcal{S}}_h$ with the set of finitely many eigenfunctions associated with $f_1^{+}$ for $h>0$ (resp. $f_1^{-}$ for $h<0$) by using (2.7) and a Lagrange multiplier \cite[8.4.1]{E}, cf. \cite[p.1247]{Laurence91}. \begin{prop} Suppose that $U\in L^{2}_{\sigma}(\Omega)$ is an eigenfunction of the rotation operator associated with the eigenvalue $f_1^{+}$ (resp. $f^{-}_{1}$) with helicity $1$ (resp. $-1$). Then, $\frac{1}{2}\int_{\Omega}|U|^{2}\textrm{d} x=\mathcal{I}_1$ (resp. $\frac{1}{2}\int_{\Omega}|U|^{2}\textrm{d} x=\mathcal{I}_{-1}$). \end{prop} \begin{proof} By the formulas (2.7), \begin{align*} \frac{1}{2}\int_{\Omega}|U|^{2}\textrm{d} x=\frac{1}{2}\int_{\Omega}U\cdot \nabla \times( \textrm{curl}^{-1}U)\textrm{d} x &=\frac{1}{2}\int_{\Omega}\nabla \times U\cdot \textrm{curl}^{-1}U\textrm{d} x \\ &=\frac{f_{1}^{+}}{2}\int_{\Omega} U\cdot \textrm{curl}^{-1}U\textrm{d} x ={\mathcal{I}}_1. \end{align*} \end{proof} \begin{prop} Suppose that $U\in L^{2}_{\sigma}(\Omega)$ satisfies $\frac{1}{2}\int_{\Omega}|U|^{2}\textrm{d} x={\mathcal{I}}_1$ with helicity $1$ (resp. $\frac{1}{2}\int_{\Omega}|U|^{2}\textrm{d} x={\mathcal{I}}_{-1}$ with helicity $-1$). Then, $U$ is an eigenfunction of the rotation operator associated with the eigenvalue $f_{1}^{+}$ (resp. $f_1^{-}$). \end{prop} \begin{proof} By $\int_{\Omega}\textrm{curl}^{-1}U\cdot U\textrm{d} x=1$, there exists $U_0\in L^{2}_{\sigma}(\Omega)$ such that $\int_{\Omega}\textrm{curl}^{-1}U \cdot U_0\textrm{d} x\neq 0$. We take an arbitrary $\tilde{U}\in L^{2}_{\sigma}(\Omega)$ and set \begin{align*} j(\tau,s)=\int_{\Omega}\textrm{curl}^{-1}(U+\tau \tilde{U}+s U_0)\cdot (U+\tau \tilde{U}+s U_0)\textrm{d} x. \end{align*}\\ The function $j(\tau,s)$ satisfies $j(0,0)=1$ and \begin{align*} \frac{\partial j}{\partial s}(0,0)=2\int_{\Omega}\textrm{curl}^{-1}U\cdot U_0\textrm{d} x\neq 0. \end{align*}\\ By the implicit function theorem, there exists a $C^{1}$-function $s=s(\tau)$ such that $j(\tau,s(\tau))=1$ for sufficiently small $|\tau|$. By differentiating this identity in $\tau$, $\dot{s}(0)=-\partial_{\tau} j(0,0)/\partial_{s} j(0,0)$. Since $U$ is a minimizer, \begin{align*} 0=\frac{\textrm{d}}{\textrm{d} \tau}\int_{\Omega}|U+\tau \tilde{U}+s(\tau)U_0|^{2}\textrm{d} x\Bigg|_{\tau=0} =2\int_{\Omega}U\cdot( \tilde{U}+\dot{s}(0) U_0) \textrm{d} x. \end{align*}\\ By substituting $\dot{s}(0)$ into the above, \begin{align*} \int_{\Omega}(U-f \textrm{curl}^{-1}U)\cdot \tilde{U}\textrm{d} x=0,\quad f=\frac{\int_{\Omega}U_0\cdot U\textrm{d} x}{\int_{\Omega}\textrm{curl}^{-1}U_0\cdot U\textrm{d} x}. \end{align*}\\ Since $\tilde{U}\in L^{2}_{\sigma}(\Omega)$ is arbitrary, by (2.1) there exists some $q$ such that $U- f \textrm{curl}^{-1}U=\nabla q$. Thus $\nabla \times U=f U$. By Rayleigh's formulas (2.7), \begin{align*} f_{1}^{+}=2\mathcal{I}_1=\int_{\Omega}|U|^{2}\textrm{d} x=\int_{\Omega}U\cdot \nabla \times (\textrm{curl}^{-1}U)\textrm{d} x=f\int_{\Omega}U\cdot \textrm{curl}^{-1}U\textrm{d} x= f. \end{align*}\\ Hence $U$ is an eigenfunction associated with the eigenvalue $f_1^{+}$. \end{proof} \begin{lem} Let $\{U_{j}^{+}\}_{j=1}^{N^{+}}$ (resp. $\{U_{j}^{-}\}_{j=1}^{N^{-}}$) be the eigenfunctions of the rotation operator associated with the eigenvalue $f_1^{+}$ with helicity $h>0$ (resp. $f_1^{-}$ with helicity $h<0$). Then, \begin{align} {\mathcal{S}}_h= \begin{cases} \ \{U_j^{+}\}_{j=1}^{N^{+}} & h>0,\\ \ \{U_j^{-}\}_{j=1}^{N^{-}} &h<0,\\ \ \emptyset &h=0.
\end{cases} \end{align} \end{lem} \begin{proof} By Propositions 2.2 and 2.3, \begin{align*} {\mathcal{S}}_1= \left\{U\in L^{2}_{\sigma}(\Omega)\ \middle|\ \nabla \times U=f_1^{+}U,\ \ \int_{\Omega}\textrm{curl}^{-1}U\cdot U\textrm{d} x=1 \right\}. \end{align*}\\ The multiplicity $N^{+}$ of $f_{1}^{+}$ is finite and the right-hand side consists of finitely many elements. By the scaling (2.6), $(2.8)_1$ follows. The same argument applies to the case $h<0$. \end{proof} \begin{rem}(Taylor states) Lemma 2.4 characterizes the lowest energy linear force-free fields of (1.3) (Taylor states) in a simply connected domain as eigenfunctions of the rotation operator associated with the principal eigenvalues. For multiply connected domains, neither Woltjer's principle nor Taylor states seem to be known; cf. \cite[p.1245]{Laurence91}, \cite[Theorem 2]{YG90}. The original application of Taylor's theory was for a periodic cylinder $\Omega=D\times [0, L]$, where $D\subset \mathbb{R}^{2}$ is a disk and $L>0$. This domain has a nontrivial first de Rham cohomology group, generated by the harmonic vector field $e_z$, and is simpler than multiply connected domains. Taylor states are computed in \cite{Reiman} without a variational principle, by using explicit solutions given by Bessel functions of the first kind under a flux condition on $\int_{D}B\cdot n\textrm{d} x'$; see also \cite[9.1.1]{Biskamp93} and \cite[Example 2.3]{Yeates}. It is known that for a given (sufficiently large) helicity, there exist multiple axisymmetric linear force-free fields of (1.3). The Taylor state is axisymmetric for small helicity. As the helicity increases and reaches a certain value, a bifurcation occurs, and beyond this value the Taylor state is helical. \end{rem} \subsection{Minimizing sequences} To prove the stability of the set ${\mathcal{S}}_h$, we use the compactness of minimizing sequences. \begin{lem} Let $h\in \mathbb{R}$. For a sequence $\{B_n\}\subset L^{2}_{\sigma}(\Omega)$ satisfying $\frac{1}{2}\int_{\Omega}|B_n|^{2}\textrm{d} x\to {\mathcal{I}}_h$ and $\int_{\Omega}\textrm{curl}^{-1}B_n\cdot B_n\textrm{d} x\to h$, there exist $\{n_k\}$ and $B\in {\mathcal{S}}_{h}$ such that $B_{n_k}\to B$ in $L^{2}(\Omega)$. \end{lem} \begin{proof} By (2.3) and the Rellich--Kondrakov theorem, there exist a subsequence (still denoted by $\{B_n\}$) and $B\in L^{2}_{\sigma}(\Omega)$ such that \begin{align*} B_n&\rightharpoonup B\quad \textrm{in}\ L^{2}(\Omega),\\ A_n=\textrm{curl}^{-1}B_n&\to A=\textrm{curl}^{-1}B \quad \textrm{in}\ L^{2}(\Omega). \end{align*}\\ They imply that \begin{align*} \left|\int_{\Omega} (A_n\cdot B_n-A\cdot B)\textrm{d} x\right| \leq ||A_n-A||_{L^{2}} \sup_n||B_n||_{L^{2}}+\left|\int_{\Omega} (A\cdot B_n-A\cdot B)\textrm{d} x\right|\to 0. \end{align*}\\ By $\int_{\Omega}A\cdot B\textrm{d} x=h$ and \begin{align*} {\mathcal{I}}_h\leq \frac{1}{2}\int_{\Omega}|B|^{2}\textrm{d} x\leq \liminf_{n\to\infty}\frac{1}{2}\int_{\Omega}|B_n|^{2}\textrm{d} x={\mathcal{I}}_h, \end{align*}\\ we conclude that $B\in {\mathcal{S}}_{h}$ and $B_n\to B$ in $L^{2}(\Omega)$. \end{proof} \vspace{15pt} For the application to stability, we state Lemma 2.6 in terms of the total energy. \vspace{15pt} \begin{thm} Let $h\in \mathbb{R}$. Let $\{(u_n,B_n)\}\subset L^{2}_{\sigma}(\Omega)$ be a sequence satisfying $\frac{1}{2}\int_{\Omega}(|u_n|^{2}+ |B_n|^{2})\textrm{d} x\to {\mathcal{I}}_h$ and $\int_{\Omega}\textrm{curl}^{-1}B_n\cdot B_n\textrm{d} x\to h$. Then, there exist $\{n_k\}\subset \mathbb{N}$ and $B \in {\mathcal{S}}_h$ such that $(u_{n_k}, B_{n_k})\to (0,B)$ in $L^{2}(\Omega)$.
\end{thm} \begin{proof} For $h_n=\int_{\Omega}\textrm{curl}^{-1}B_n\cdot B_n\textrm{d} x$, \begin{align*} {\mathcal{I}}_{h_n} \leq \frac{1}{2}\int_{\Omega}|B_n|^{2}\textrm{d} x\leq\frac{1}{2} \int_{\Omega}\left(|u_n|^{2}+ |B_n|^{2}\right)\textrm{d} x. \end{align*}\\ Since ${\mathcal{I}}_{h}$ is continuous in $h\in \mathbb{R}$, letting $n\to\infty$ implies that \begin{align*} \frac{1}{2}\int_{\Omega}|B_n|^{2}\textrm{d} x\to {\mathcal{I}}_h,\quad \int_{\Omega}|u_n|^{2}\textrm{d} x\to 0. \end{align*}\\ We apply Lemma 2.6 and conclude. \end{proof} \begin{rem} The assertion of Lemma 2.6 holds even for domains with Lipschitz boundaries since the embedding from $X_{N}(\Omega)$ into $L^{2}(\Omega)$ is compact. (The $C^{1}$-regularity of the boundary is assumed in \cite[p.1247]{Laurence91}.) The paper \cite{Laurence91} also studies minimization with an inhomogeneous boundary condition by using relative helicity \cite{BF84}, \cite{JC84}, \cite{FA85}. \end{rem} \subsection{Leray--Hopf solutions} To prove the stability of force-free fields in weak ideal limits (Theorem 1.1), we recall the definitions of Leray--Hopf solutions to (1.1)--(1.2) \cite[p.716]{FL20}, \cite[p.60]{GLL} and of their weak ideal limits \cite[p.709]{FL20}. The existence of Leray--Hopf solutions has been demonstrated for simply connected domains in \cite{DL72}, \cite[p.647]{ST83}, \cite[p.60]{GLL}, and \cite[Theorem 2.1]{FL20} and for multiply connected domains in \cite[Appendix A]{FL20}.\\ \begin{defn}[Leray--Hopf solutions] Let $u_0,B_0\in L^{2}_{\sigma}(\Omega)$. Let \begin{align*} u\in C_{w}([0,T]; L^{2}_{\sigma}(\Omega))\cap L^{2}(0,T; H^{1}_{0}(\Omega) ),\quad B\in C_{w}([0,T]; L^{2}_{\sigma}(\Omega))\cap L^{2}(0,T; H^{1}(\Omega) ). \end{align*}\\ Suppose that $u_t\in L^{1}(0,T; (L^{2}_{\sigma}\cap H^{1}_{0})(\Omega)^{*})$, $B_t\in L^{1}(0,T; (L^{2}_{\sigma}\cap H^{1})(\Omega)^{*})$ and \begin{align*} &<u_t,\xi>+\int_{\Omega}(u\cdot \nabla u-B\cdot \nabla B)\cdot \xi\textrm{d} x+\nu \int_{\Omega}\nabla u:\nabla \xi\textrm{d} x=0,\\ &<B_t,\zeta>+\int_{\Omega}(B\times u)\cdot \nabla \times \zeta\textrm{d} x +\mu \int_{\Omega}\nabla \times B\cdot \nabla \times \zeta \textrm{d} x=0, \end{align*}\\ for a.e. $t\in [0,T]$ and every $\xi\in L^{2}_{\sigma}\cap H^{1}_{0}(\Omega)$, $\zeta\in L^{2}_{\sigma}\cap H^{1}(\Omega)$. Suppose furthermore that $(u(\cdot ,0),B(\cdot ,0))=(u_0, B_0)$ and \begin{align*} \frac{1}{2}\int_{\Omega}\left(|u|^{2}+|B|^{2}\right)\textrm{d} x +\int_0^{t}\int_{\Omega}\left(\nu |\nabla u|^{2}+\mu |\nabla B|^{2}\right)\textrm{d} x\textrm{d} s \leq \frac{1}{2}\int_{\Omega}\left(|u_0|^{2}+|B_0|^{2}\right)\textrm{d} x, \end{align*}\\ for all $t\in [0,T]$. Then, we call $(u,B)$ a Leray--Hopf solution to (1.1)--(1.2).\\ \end{defn} \begin{thm} There exists a Leray--Hopf solution to (1.1)--(1.2). \\ \end{thm} \begin{defn}[Weak ideal limits] Let $(u_j,B_j)$ be a Leray--Hopf solution to (1.1)--(1.2) for $\nu_j,\mu_j>0$ and $(u_{0,j}, B_{0,j})$ such that $(u_{0,j},B_{0,j} )\rightharpoonup (u_0, B_0)$ in $L^{2}(\Omega)$ as $(\nu_j,\mu_j)\to (0,0)$. Assume that \begin{align*} (u_j, B_j)\overset{\ast}{\rightharpoonup} (u,B)\quad \textrm{in}\ L^{\infty}(0,T; L^{2}(\Omega) ). \end{align*}\\ Then, we call $(u,B)$ a weak ideal limit of $(u_j,B_j)$. If instead $\nu_j=\nu>0$ for every $j$ and $\mu_j\to 0$, we call $(u,B)$ a weak nonresistive limit of $(u_j,B_j)$.\\ \end{defn} By the equation $(1.1)_2$, the vector potential $A=\textrm{curl}^{-1}B$ and some scalar potential $Q$ satisfy \begin{align} A_t+B\times u+\nabla Q=-\mu\nabla \times B.
\end{align}\\ For smooth solutions, the magnetic helicity equality (1.12) follows by multiplying (2.9) by $B$ and $(1.1)_2$ by $A$, respectively, and integrating by parts. The equality (1.12) for Leray--Hopf solutions for all $t\in [0,T]$ is proved in \cite[Lemma 4.5]{FL20} by an approximation argument for the time variable. \subsection{Weak ideal limits} We state the magnetic helicity conservation at weak ideal limits \cite[Theorem 1.2]{FL20} in terms of the vector potential $\textrm{curl}^{-1}B$. \begin{thm} Suppose that $(u,B)$ is a weak ideal limit of Leray--Hopf solutions to (1.1)--(1.2). Then, \begin{align} \int_{\Omega}\textrm{curl}^{-1}B\cdot B\textrm{d} x=\int_{\Omega}\textrm{curl}^{-1}B_0\cdot B_0\textrm{d} x, \end{align} for a.e. $t\in [0,T]$. \end{thm} \begin{thm} There exists a weak ideal limit $(u,B)$ of Leray--Hopf solutions to (1.1)--(1.2) for $(u_0,B_0)\in L^{2}_{\sigma}(\Omega)$ satisfying (2.10) and \begin{align*} \int_{\Omega}\left(|u|^{2}+|B|^{2}\right) \textrm{d} x \leq \int_{\Omega}\left(|u_0|^{2}+|B_0|^{2}\right)\textrm{d} x, \end{align*}\\ for a.e. $t\in [0,T]$. \end{thm} \begin{proof} By Theorems 2.10 and 2.12 and the lower semi-continuity of the norm under the weak-star convergence in $L^{\infty}(0,T; L^{2}(\Omega))$, the assertion follows. \\ \end{proof} \begin{rem} When $\Omega$ is multiply connected, helicity conservation at weak ideal limits is demonstrated in \cite{FL22} by using a gauge-invariant definition of magnetic helicity \cite{MV19}, cf. \cite{FL20}. (The boundary regularity is reduced from $C^{1,1}$ to Lipschitz in \cite{MV19}.) \end{rem} The proof of Theorem 2.12 is based on the following approximation lemma and the Aubin--Lions lemma \cite[Lemmas 2.10, 2.12]{FL20}, \cite[Proposition 1.2.32]{TNVW}. We state these lemmas for later usage in Sections 6 and 7. Let $X$ be a Banach space. Let $0<\delta <T/2$. Let $g\in C^{\infty}_{c}(\mathbb{R})$ satisfy $\textrm{spt}\ g\subset (-\delta,\delta)$. For $f\in L^{1}(0,T; X)$, we denote the convolution in $(0,T)$ by $f*g=\int_{0}^{T}f(s)g(t-s)\textrm{d} s\in C^{\infty}((\delta,T-\delta); X )$. For $\chi\in C^{\infty}_{c}(\mathbb{R})$ satisfying $\textrm{spt}\ \chi\subset (-1,1)$ and $\int_{\mathbb{R}}\chi\textrm{d} t=1$, we set $\chi^{\varepsilon}(t)=\varepsilon^{-1}\chi(\varepsilon^{-1}t)$ for $\varepsilon>0$ and $f^{\varepsilon}=f*\chi^{\varepsilon}$. \begin{lem} Let $0<\delta<T/2$ and $1\leq p<\infty$. For $f\in L^{p}(0,T; X)$, $f^{\varepsilon}\to f$ in $L^{p}(\delta,T-\delta;X)$ as $\varepsilon\to 0$. \end{lem} \begin{lem}[Aubin--Lions lemma] Let $X,Y$, and $Z$ be reflexive Banach spaces such that $X$ embeds compactly into $Y$ and $Y$ embeds into $Z$. Let $1<p<\infty$ and $1\leq q\leq \infty$. The space $\{A\in L^{p}(0,T; X)\ |\ A_t\in L^{q}(0,T; Z)\ \}$ embeds compactly into $L^{p}(0,T; Y)$. \end{lem} A key part of the proof of Theorem 2.12 for a simply connected domain is the strong convergence of the vector potentials \begin{align*} A_j=\textrm{curl}^{-1}B_j\to A=\textrm{curl}^{-1}B\quad \textrm{in}\ L^{2}_{\textrm{loc}}(0,T; L^{2}(\Omega)), \end{align*}\\ for Leray--Hopf solutions $(u_j,B_j)$ and the weak ideal limit $(u,B)$, by an application of the Aubin--Lions lemma. This convergence implies (2.10) by passing to the limit $j\to\infty$ in the equality (1.12). To apply the Aubin--Lions lemma, the paper \cite{FL20} showed the uniform bound \begin{align*} \sup_{j} ||\partial_tA_{j}^{\varepsilon_j} ||_{L^{2}(\delta,T-\delta; (L^{2}_{\sigma}\cap W^{1,4}_{0})(\Omega)^{*} ) }<\infty.
\end{align*}\\ This uniform bound is demonstrated by the equation (2.9) and the Sobolev embedding $W^{1,4}_0(\Omega)\subset L^{\infty}(\Omega)$. The vector potential $A_j$ is approximated by $A^{\varepsilon_j}_{j}$ with $\varepsilon_j>0$ to deduce (2.9) from the definition of Leray--Hopf solutions (Definition 2.9). \begin{rem}[2D case] For 2D bounded and multiply connected domains, magnetic mean-square potential conservation at weak ideal limits of Leray--Hopf solutions is demonstrated in \cite[Theorem 5.4]{FL20}. For a simply connected domain, there exist a flux function $\phi$ and a stream function $\psi$ vanishing on $\partial\Omega$ such that $B=\nabla^{\perp}\phi $ and $u=\nabla^{\perp}\psi $ for $\nabla^{\perp}={}^{t}(\partial_{x_2},-\partial_{x_1})$. An equivalent equation to $(1.1)_2$ in this setting is \begin{align*} \phi_t+u\cdot \nabla \phi=\mu\Delta \phi. \end{align*}\\ Leray--Hopf solutions satisfy this equation in $L^{2}(\Omega)$ for a.e. $t\in [0,T]$, and by integration by parts, the equality \begin{align*} \int_{\Omega}|\phi|^{2}\textrm{d} x+2\mu\int_{0}^{t}\int_{\Omega}|\nabla \phi|^{2}\textrm{d} x\textrm{d} s=\int_{\Omega}|\phi_0|^{2}\textrm{d} x, \end{align*}\\ holds for $t\in [0,T]$. The paper \cite{FL20} showed the uniform bound \begin{align*} \sup_{j} ||\partial_t\phi_j||_{L^{1}(0,T; H^{1}_{0}(\Omega)^{*} ) }<\infty, \end{align*}\\ by applying the Hardy space theory of compensated compactness of Coifman et al. \cite{CLMS} and Fefferman's ${\mathcal{H}}^{1}-\textrm{BMO}$ duality \cite{FS72} to the zero extension of $(u,B)$ to $\mathbb{R}^{2}$. The uniform bound implies the strong convergence \begin{align*} \phi_j\to \phi\quad \textrm{in}\ L^{2}(0,T; L^{2}(\Omega) ), \end{align*}\\ and that the limit $\phi$ is a distributional solution to the transport equation \begin{align*} \phi_t+u\cdot \nabla \phi=0. \end{align*}\\ It is remarked that the convergence of $\phi_j$ implies $\phi_j^{2}\to \phi^{2}$ in $L^{2}(0,T; L^{2}(\Omega) )$ by the uniform bound of $\nabla \phi_j\in L^{\infty}(0,T; L^{2}(\Omega))$, and the limit $\phi^{2}$ satisfies the equation $\partial_t\phi^{2}+u\cdot \nabla \phi^{2}=0$ in the distributional sense. Magnetic mean-square potential conservation at weak ideal limits follows by integrating this equation in time. The paper \cite[Lemma 5.7]{FL20} showed magnetic mean-square potential conservation more generally for any transport equation solution under the conditions $u\in L^{\infty}(0,T; L^{2}(\Omega) )$ and $\nabla \phi\in C_w([0,T]; L^{2}(\Omega) )$ by an approximation argument for the spatial and time variables, cf. \cite{DL89}, \cite{AC14}. \end{rem} \subsection{Application to stability} Theorems 2.7 and 2.13 imply the stability of the set of minimizers $\mathcal{S}_{h}$. \begin{prop} Let $h\in \mathbb{R}$. Let ${\mathcal{S}}_h$ be as in (2.6). For $\varepsilon>0$, there exists $\delta>0$ such that for $u_0,B_0\in L^{2}_{\sigma}(\Omega)$ satisfying \begin{align*} ||u_0||_{L^{2}}+\inf_{U\in {\mathcal{S}}_h}||B_0-U||_{L^{2}}+\left|\int_{\Omega}\textrm{curl}^{-1}B_0\cdot B_0\textrm{d} x-h\right|\leq \delta, \end{align*}\\ there exists a weak ideal limit $(u,B)$ of Leray--Hopf solutions to (1.1)--(1.2) for $(u_0,B_0)$ such that \begin{align*} \textrm{ess sup}_{t>0} \left\{||u||_{L^{2}}+\inf_{U\in {\mathcal{S}}_h}||B-U||_{L^{2}} \right\}\leq \varepsilon. \end{align*} \end{prop} \begin{proof} We argue by contradiction: suppose that the assertion were false.
Then there exist $\varepsilon_0>0$ and, for each $n\geq 1$, data $u_{0,n}, B_{0,n}\in L^{2}_{\sigma}(\Omega)$ satisfying \begin{align*} ||u_{0,n}||_{L^{2}}+\inf_{U\in {\mathcal{S}}_h}||B_{0,n}-U||_{L^{2}}+\left|\int_{\Omega}\textrm{curl}^{-1}B_{0,n}\cdot B_{0,n}\textrm{d} x-h\right|\leq \frac{1}{n}, \end{align*}\\ such that the weak ideal limit $(u_n,B_n)$ of Leray--Hopf solutions in Theorem 2.13 satisfies \begin{align*} \textrm{ess sup}_{t>0} \left\{||u_n||_{L^{2}}+\inf_{U\in {\mathcal{S}}_h}||B_n-U||_{L^{2}} \right\}\geq \varepsilon_0>0. \end{align*}\\ We denote by $F_n$ the set of all points $t\in (0,\infty)$ such that \begin{align*} &\int_{\Omega}(|u_n|^{2}+|B_n|^{2}) \textrm{d} x \leq \int_{\Omega}(|u_{0,n}|^{2}+|B_{0,n}|^{2})\textrm{d} x, \\ &\int_{\Omega}\textrm{curl}^{-1}B_{n}\cdot B_{n}\textrm{d} x =\int_{\Omega}\textrm{curl}^{-1}B_{0,n}\cdot B_{0,n}\textrm{d} x. \end{align*}\\ The set $F_n^{c}=(0,\infty)\backslash F_n$ has measure zero. For $F=\cap_{n=1}^{\infty} F_n$, the complement $F^{c}$ has measure zero and the above relations hold for all $t\in F$ and $n\geq 1$. We take a point $t_n\in F$ such that \begin{align*} ||u_n||_{L^{2}}(t_n)+\inf_{U\in {\mathcal{S}}_h}||B_n-U||_{L^{2}}(t_n) \geq \frac{\varepsilon_0}{2}. \end{align*}\\ We write $(u_n,B_n)= (u_n,B_n)(\cdot,t_n)$ by suppressing $t_n$. For $h_n=\int_{\Omega}\textrm{curl}^{-1}B_{0,n}\cdot B_{0,n}\textrm{d} x$, \begin{align*} {\mathcal{I}}_{h_n}\leq\frac{1}{2}\int_{\Omega} |B_{0,n}|^{2}\textrm{d} x \leq \frac{1}{2} \left(\inf_{U\in {\mathcal{S}}_h}||B_{0,n}-U||_{L^{2}}+\sqrt{2}{\mathcal{I}}_h^{1/2}\right)^{2}. \end{align*}\\ By the continuity of ${\mathcal{I}}_h$ in $h\in \mathbb{R}$, letting $n\to\infty$ implies that \begin{align*} h_n&\to h,\\ \frac{1}{2}\int_{\Omega}\left(|u_{0,n}|^{2}+|B_{0,n}|^{2}\right)\textrm{d} x &\to {\mathcal{I}}_h. \end{align*}\\ By helicity conservation and the nonincreasing total energy, \begin{align*} &\int_{\Omega}\textrm{curl}^{-1}B_{n}\cdot B_{n}\textrm{d} x=\int_{\Omega}\textrm{curl}^{-1}B_{0,n}\cdot B_{0,n}\textrm{d} x=h_n, \\ &\mathcal{I}_{h_n}\leq \frac{1}{2}\int_{\Omega}\left(|u_{n}|^{2}+|B_{n}|^{2}\right)\textrm{d} x\leq \frac{1}{2}\int_{\Omega}\left(|u_{0,n}|^{2}+|B_{0,n}|^{2}\right)\textrm{d} x. \end{align*}\\ Letting $n\to\infty$ implies that \begin{align*} \frac{1}{2}\int_{\Omega}\left(|u_{n}|^{2}+|B_{n}|^{2}\right)\textrm{d} x \to {\mathcal{I}}_h. \end{align*}\\ By Theorem 2.7, there exist a subsequence (still denoted by $\{(u_n,B_n)\}$) and some $ B \in {\mathcal{S}}_h$ such that $(u_n,B_n)\to (0, B)$ in $L^{2}(\Omega)$. Thus \begin{align*} 0=\lim_{n\to\infty}\left\{ ||u_n||_{L^{2}}+||B_n-B||_{L^{2}} \right\} \geq\liminf_{n\to\infty}\left\{ ||u_n||_{L^{2}}+\inf_{U\in {\mathcal{S}}_h}||B_n-U||_{L^{2}}\right\} \geq \frac{\varepsilon_0}{2}>0. \end{align*}\\ This is a contradiction, and the proof is complete. \end{proof} \begin{proof}[Proof of Theorem 1.1] By the characterization of ${\mathcal{S}}_h$ in Lemma 2.4, we deduce Theorem 1.1 from Proposition 2.18. \end{proof} \begin{rem} It is emphasized that Theorem 1.1 asserts the stability of a set of minimizers $\{U_j\}_{j=1}^{N}$, which may contain $N\geq 2$ elements. For symmetric elliptic operators, the principal eigenvalue is simple, i.e., $N=1$, thanks to the strong maximum principle, e.g., \cite[6.5.1]{E}. On the other hand, two Taylor states can exist for a periodic cylinder due to the bifurcation noted in Remark 2.5.
\end{rem} \begin{rem} It is observed from the proof of Theorem 1.1 that the same stability result holds for weak nonresistive limits of Leray--Hopf solutions (with nonincreasing total energy) by their magnetic helicity conservation \cite[Theorem 1.5]{FL20}. The stability also holds for unique strong solutions to ideal MHD and nonresistive MHD up to the maximal existence time. \end{rem} \section{Nonlinear force-free fields} We apply the Clebsch representation to axisymmetric solenoidal vector fields in $L^{2}(\mathbb{R}^{3})$ and their vector potentials, and define generalized magnetic helicity (1.10) and generalized magnetic mean-square potential (1.14) with the function $\phi_{\infty}=r^{2}+\gamma$ for $\gamma\geq 0$. \subsection{Clebsch representation} The Clebsch representation is a generalization of the stream (flux) function to 3D symmetric solenoidal vector fields. For translationally symmetric solenoidal vector fields $b(x_1,x_2)={}^{t}(b_1(x_1,x_2),b_2(x_1,x_2),b_3(x_1,x_2))\in L^{2}_{\sigma}(\mathbb{R}^{2})$, the Clebsch representation is \begin{align*} b(x_1,x_2)=\nabla \times (\phi(x_1,x_2)\nabla z)+G(x_1,x_2)\nabla z. \end{align*}\\ Here, $\nabla=\nabla_x$ is the gradient in $x\in \mathbb{R}^{3}$ and $z=x_3$. We call $\phi$ and $G$ Clebsch potentials. The $L^{2}$-norm of $b$ is \begin{align*} \int_{\mathbb{R}^{2}}|b|^{2}\textrm{d} x'=\int_{\mathbb{R}^{2}}\left(|\nabla \phi|^{2}+|G|^{2}\right)\textrm{d} x'. \end{align*}\\ The flux function $\phi$ is unique up to a constant. The space $L^{2}_{\sigma}(\mathbb{R}^{2})$ of such three-component vector fields is identified with the product space $ \dot{H}^{1}(\mathbb{R}^{2})\times L^{2}(\mathbb{R}^{2})$. The Clebsch representation of the vector potential of $b$ is \begin{align*} a(x_1,x_2)=\nabla \times (\eta(x_1,x_2)\nabla z)+\phi(x_1,x_2)\nabla z, \end{align*}\\ where $\eta$ is a solution of the Poisson equation $-\Delta_{\mathbb{R}^{2}}\eta=G$ in $\mathbb{R}^{2}$. The vector field $a\in \textrm{BMO}(\mathbb{R}^{2})$ is the unique potential such that $\nabla \times a=b$ and $\nabla \cdot a=0$. We consider the Clebsch representation for axisymmetric solenoidal vector fields $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})=\{b\in L^{2}_{\sigma}(\mathbb{R}^{3})\ |\ b: \textrm{axisymmetric}\ \}$, \begin{align} b(x)=\nabla \times (\phi(z,r)\nabla \theta)+G(z,r)\nabla \theta. \end{align}\\ Here, $(r,\theta,z)$ denotes the cylindrical coordinates. The $L^{2}$-norm of $b$ is \begin{align} \int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x=2\pi\int_{\mathbb{R}^{2}_{+}}\left(|\nabla \phi|^{2}+|G|^{2}\right)\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align}\\ The Clebsch representation of the vector potential of $b$ is \begin{align} a(x)=\nabla \times (\eta(z,r)\nabla \theta)+\phi(z,r)\nabla \theta, \end{align}\\ where $\eta$ is a solution of the Dirichlet problem \begin{align} -L\eta=G\quad \textrm{in}\ \mathbb{R}^{2}_{+},\quad \eta=0\quad \textrm{on}\ \partial\mathbb{R}^{2}_{+}, \end{align}\\ for the operator $L=\Delta_{z,r}-r^{-1}\partial_r$. The Green function of this problem \cite[p.4]{FT81}, \cite[p.472]{Friedman82}, \cite[19.1]{SverakLec} is \begin{equation} \begin{aligned} &\eta(z,r)=\int_{\mathbb{R}^{2}_{+}}{\mathcal{G}}(z,r,z',r')\frac{G(z',r')}{r'}\textrm{d} z'\textrm{d} r',\\ &{\mathcal{G}}(z,r,z',r')= \frac{rr'}{2\pi}\int_{0}^{\pi}\frac{\cos\theta \textrm{d} \theta}{\sqrt{|z-z'|^{2}+r^{2}+r'^{2}-2rr'\cos\theta }}. \end{aligned} \end{equation}\\ We denote $(3.5)_1$ by $\eta=(-L)^{-1}G$. 
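For later reference, we record the computation behind (3.3): since $\eta$ and $\phi$ are independent of $\theta$, a direct computation in the cylindrical coordinates gives $\nabla \times (\nabla \times (\eta\nabla \theta))=-(L\eta)\nabla \theta$, and hence, by (3.4) and (3.1), \begin{align*} \nabla \times a=-(L\eta)\nabla \theta+\nabla \times (\phi\nabla \theta)=G\nabla \theta+\nabla \times (\phi\nabla \theta)=b. \end{align*}\\ The same identity, $\nabla \times (\nabla \times (\phi\nabla \theta))=-(L\phi)\nabla \theta$, is used again in Section 4. 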
The Green function can be estimated by asymptotic expansions of complete elliptic integrals of the first and the second kind, e.g., \cite{Friedman82}. Its higher-order estimates are derived in \cite[Corollary 2.9]{FengSverak} from the representation \begin{equation} \begin{aligned} {\mathcal{G}}(z,r,z',r')&=\frac{\sqrt{rr'}}{2\pi}F(s),\quad s= \frac{|z-z'|^{2}+|r-r'|^{2}}{rr'},\\ F(s)&=\int_{0}^{\pi}\frac{\cos\theta}{\sqrt{2(1-\cos\theta)+s}}\textrm{d} \theta. \end{aligned} \end{equation}\\ The function $F(s)$ has the asymptotics $F(s)=-(1/2)\log s+\log{8}-2+O(s\log s)$ as $s\to 0$ and $F(s)=O(s^{-3/2})$ as $s\to\infty$. The $k$-th derivative $F^{(k)}(s)$ is also estimated from the asymptotic expansion. They satisfy \begin{equation} \begin{aligned} &|F(s)|\lesssim \frac{1}{s^{\tau}},\quad 0<\tau\leq \frac{3}{2}, \\ &|F^{(k)}(s)|\lesssim \frac{1}{s^{k+\tau}},\quad 0\leq \tau\leq \frac{3}{2},\ k\in \mathbb{N}. \end{aligned} \end{equation}\\ \subsection{The weighted Hilbert space} Let $L^{2}(\mathbb{R}^{2}_{+};r^{-1})$ denote the weighted $L^{2}$ space on the cross section $\mathbb{R}^{2}_{+}=\{{}^{t}(z,r)\ |\ z\in \mathbb{R},\ r>0\}$ with the measure $r^{-1}\textrm{d} z\textrm{d} r$. Let $\dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1} )$ denote the homogeneous $L^{2}(\mathbb{R}^{2}_{+};r^{-1})$-Sobolev space of trace zero functions on $\partial\mathbb{R}^{2}_{+}$. We take the trace at $r=0$ in $H^{1/2}_{\textrm{loc}}(\mathbb{R})$ by the inclusion $L^{2}(\mathbb{R}^{2}_{+};r^{-1})\subset L^{2}_{\textrm{loc}}(\overline{\mathbb{R}^{2}_{+}})$. By the weighted Sobolev inequality \cite[Lemma 3]{Van13}, \begin{align} \left(\int_{\mathbb{R}^{2}_{+}}|\phi|^{p}\frac{1}{r^{2+p/2}}\textrm{d} z\textrm{d} r\right)^{1/p} \leq C \left(\int_{\mathbb{R}^{2}_{+}}|\nabla\phi|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\right)^{1/2},\quad 2\leq p<\infty, \end{align}\\ the space $\dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1} )$ continuously embeds into $L^{p}(\mathbb{R}^{2}_{+}; r^{-2-p/2})$. Moreover, the Rellich--Kondrachov theorem holds in the weighted space \cite[Lemma 3.1]{A8}, i.e., \begin{align} \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})\subset \subset L^{p}_{\textrm{loc}}(\overline{\mathbb{R}^{2}_{+}};r^{-1}),\quad 1\leq p <\infty. \end{align}\\ The space $\dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1} )$ is a Hilbert space isometrically isomorphic to $\dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})= \{ \varphi\in \dot{H}^{1}(\mathbb{R}^{5})\ | \ \varphi: \textrm{axisymmetric } \}$ \cite[Lemma 2.2]{AF88} by the transform \begin{align} \phi(z,r)\longmapsto \varphi(y)=\frac{\phi(z,r)}{r^{2}},\quad y={}^{t}(y_1,y'),\ y_1=z,\ |y'|=r, \end{align}\\ in the sense that \begin{equation} \begin{aligned} \int_{\mathbb{R}^{2}_{+}}\nabla \phi\cdot \nabla \tilde{\phi}\frac{2\pi^{2}}{r}\textrm{d} z\textrm{d} r =\int_{\mathbb{R}^{5}}\nabla_y\varphi \cdot \nabla_y\tilde{\varphi} \textrm{d} y,\quad \varphi=\frac{\phi}{r^{2}}, \tilde{\varphi}=\frac{\tilde{\phi}}{r^{2}}. \end{aligned} \end{equation}\\ The inverse transform of (3.10) induces the isometries (up to constants) from $L^{1}(\mathbb{R}^{5})$ into $L^{1}(\mathbb{R}^{3})$ and from $L^{2}(\mathbb{R}^{5})$ into $L^{2}(\mathbb{R}^{2}_{+};r^{-1})$, respectively, in the sense that \begin{equation} \begin{aligned} ||\phi||_{L^{1}(\mathbb{R}^{3})}&=\frac{1}{\pi} ||\varphi||_{L^{1}(\mathbb{R}^{5}) }, \\ ||\phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} )}&=\frac{1}{\sqrt{2}\pi} ||\varphi||_{L^{2}(\mathbb{R}^{5}) }. \end{aligned} \end{equation} \vspace{5pt} We show the Poincar\'e inequality with the weighted measure for later use in Section 5. 
\begin{prop} There exists a constant $C$ such that \begin{align} \int_{D(0,2R)\backslash D(0,R)}|\phi |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\leq CR^{2} \int_{D(0,2R)\backslash D(0,R)}|\nabla \phi |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r, \end{align}\\ for $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$, where $D(0,R)=\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+} \ |\ |z|^{2}+r^{2}<R^{2} \}$ and $R>0$. \end{prop} \vspace{5pt} \begin{proof} We reduce to the case $R=1$ by dilation. Suppose that (3.13) were false. Then there exists a sequence $\{\phi_n\}\subset \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ such that \begin{align*} \int_{D(0,2)\backslash D(0,1)}|\phi_n |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r=1,\quad \int_{D(0,2)\backslash D(0,1)}|\nabla \phi_n |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\to 0. \end{align*}\\ By the transform (3.10), $\varphi_n=\phi_n/r^{2}$ satisfies \begin{align*} \int_{B(0,2)\backslash B(0,1)}|\nabla \varphi_n |^{2}\textrm{d} y= \int_{D(0,2)\backslash D(0,1)}|\nabla \phi_n |^{2}\frac{2\pi^{2}}{r}\textrm{d} z\textrm{d} r, \end{align*}\\ where $B(0,R_0)$ denotes the open ball in $\mathbb{R}^{5}$ centered at the origin with radius $R_0>0$. By the Rellich--Kondrachov theorem, there exist $\{n_k\}$ and some axisymmetric $\varphi$ such that $\varphi_{n_{k}}\to \varphi$ in $L^{2}(B(0,2)\backslash B(0,1))$. Thus the function $\phi=r^{2}\varphi$ satisfies \begin{align*} \int_{D(0,2)\backslash D(0,1)}| \phi_{n_{k}}-\phi |^{2}\frac{2\pi^{2}}{r}\textrm{d} z\textrm{d} r =\int_{B(0,2)\backslash B(0,1)}| \varphi_{n_{k}}-\varphi |^{2}\textrm{d} y \to 0. \end{align*}\\ This implies that \begin{align*} \int_{D(0,2)\backslash D(0,1)}|\phi |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r=1. \end{align*}\\ Since $\nabla \phi_{n_{k}}\to 0$ in $L^{2}(D(0,2)\backslash D(0,1);r^{-1})$, the limit satisfies $\nabla \phi=0$ in $D(0,2)\backslash D(0,1)$; together with $\phi(z,0)=0$, we have $\phi\equiv 0$. We obtained a contradiction. \end{proof} \begin{rem} The weighted Sobolev inequality (3.8) holds also in a disk $D(0,R)$. For example, \begin{align} \left(\int_{D(0,R)}|\phi|^{10/3}\frac{1}{r^{11/3}}\textrm{d} z\textrm{d} r\right)^{3/10} \leq C \left(\int_{D(0,R)}|\nabla\phi|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\right)^{1/2}, \end{align}\\ holds for $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$. The Poincar\'e inequality $||\phi||_{L^{2}(D(0,1);r^{-1}) }\leq C ||\nabla \phi||_{L^{2}(D(0,1);r^{-1}) }$ can be demonstrated in a similar way to (3.13), and hence the homogeneous Sobolev inequality $||\varphi||_{L^{10/3}(B(0,1))}\leq C|| \nabla \varphi||_{L^{2}(B(0,1))}$ holds for $\varphi\in \dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})$ by the transform (3.10). The inverse transform of (3.10) and a dilation argument imply (3.14). \end{rem} \subsection{Anisotropic estimates} We demonstrate the Clebsch representation (3.1) and (3.3). \begin{lem} For $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, there exist unique $\phi\in \dot{H}^{1}_{0} (\mathbb{R}^{2}_{+};r^{-1})$ and $G\in L^{2}(\mathbb{R}^{2}_{+};r^{-1})$ such that (3.1) holds. \end{lem} \vspace{5pt} \begin{proof} For $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, we set $G(z,r)=r^{2} b\cdot \nabla \theta$. Then $b-G\nabla \theta$ is axisymmetric without swirl, and there exists an axisymmetric stream function $\phi(z,r)$ such that $b-G\nabla \theta=\nabla \times (\phi\nabla \theta)$. The functions $\nabla \phi$ and $G$ belong to $L^{2}(\mathbb{R}^{2}_{+};r^{-1})$ by (3.2). For smooth $b$, we may assume that $\phi(z,0)=0$ since $\nabla \phi$ vanishes on $\{r=0\}$ by \begin{align*} rb=\nabla \phi\times r\nabla \theta+G r\nabla \theta. 
\end{align*}\\ For general $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, we take a sequence $\{b_m\} \subset C^{\infty}\cap L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ such that $b_m\to b$ in $L^{2}(\mathbb{R}^{3})$. The potentials $\phi_m\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1}) $ and $G_m\in L^{2}(\mathbb{R}^{2}_{+};r^{-1})$ of $b_m$ satisfy \begin{align*} 0=\lim_{m\to\infty}\int_{\mathbb{R}^{3}}|b-b_m|^{2}\textrm{d} x=\lim_{m\to\infty}\int_{\mathbb{R}^{2}_{+}}\left(|\nabla (\phi-\phi_m) |^{2}+|G-G_m|^{2}\right)\frac{2\pi}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ Thus $\phi(z,0)=0$. The potentials $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ and $G \in L^{2}(\mathbb{R}^{2}_{+};r^{-1})$ of (3.1) are unique by the trace condition at $r=0$. \end{proof} \vspace{5pt} \begin{prop} For $G\in L^{2}(\mathbb{R}^{2}_{+};r^{-1})$, $\eta=(-L)^{-1}G$ satisfies \begin{align} ||\nabla_{z,r}\eta||_{L^{p}(\mathbb{R}) }(r)\lesssim r^{1/p+1/2}||G||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) },\quad r>0,\ 2\leq p<\infty. \end{align} \end{prop} \vspace{5pt} \begin{proof} By the 1-dimensional convolution $*_1$ in the $z$-variable, \begin{align*} \nabla_{z,r}\eta(z,r)=\int_{0}^{\infty}\nabla_{z,r}{\mathcal{G}}*_1G(\cdot,r')\frac{1}{r'}\textrm{d} r'. \end{align*}\\ By Young's inequality for $1/p=1/q-1/2$, $1\leq q<2$, \begin{align*} ||\nabla_{z,r}\eta||_{L^{p}(\mathbb{R}) }(r) &\leq \int_{0}^{\infty}||\nabla {\mathcal{G}}||_{L^{q}(\mathbb{R}) }(r,r') ||G ||_{L^{2}(\mathbb{R}) }\frac{1}{r'}\textrm{d} r' \\ &\leq \left(\int_{0}^{\infty}||\nabla {\mathcal{G}}||_{L^{q}(\mathbb{R}) }^{2}(r,r') \frac{1}{r'}\textrm{d} r'\right)^{1/2}||G||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) }. \end{align*}\\ By the pointwise estimates (3.7), \begin{align*} \left|\nabla_{z,r} {\mathcal{G}}(z,r,z',r')\right|\lesssim \frac{1}{s^{1/2+\tau}}+\sqrt{\frac{r'}{r}}\frac{1}{s^{\tilde{\tau}}},\quad 0<\tau\leq \frac{3}{2},\ 0\leq \tilde{\tau}\leq \frac{3}{2}. \end{align*}\\ By using \begin{align*} \left\|\frac{1}{s^{\alpha}}\right\|_{L^{q}(\mathbb{R}) }=C\frac{(rr')^{\alpha}}{|r-r' |^{2\alpha-1/q}},\quad \alpha>\frac{1}{2q}, \end{align*}\\ we estimate \begin{align*} \left\|\nabla_{z,r} {\mathcal{G}}\right\|_{L^{q}(\mathbb{R})}^{2}(r,r') \lesssim \left( \frac{(rr')^{2\tau+1}}{|r-r' |^{4\tau +2-2/q}}+\frac{r' (rr')^{2\tilde{\tau}}}{r |r-r' |^{4\tilde{\tau} -2/q}} \right),\quad \frac{1}{2q}- \frac{1}{2}<\tau\leq \frac{3}{2},\ \frac{1}{2q}<\tilde{\tau}\leq \frac{3}{2}. \end{align*}\\ For $1/(2q)-1/2<\tau\leq 3/2$, \begin{align*} \int_{\{|r-r'|<r/2\}}\frac{(rr')^{2\tau+1}}{|r-r' |^{4\tau +2-2/q}r'}\textrm{d} r'=Cr^{2/q}. \end{align*}\\ For $1/q-1/2<\tau\leq 3/2$, \begin{align*} \int_{\{|r-r'|\geq r/2\}}\frac{(rr')^{2\tau+1}}{|r-r' |^{4\tau +2-2/q}r'}\textrm{d} r'=Cr^{2/q}. \end{align*}\\ Thus \begin{align*} \int_{\mathbb{R}}\frac{(rr')^{2\tau+1}}{|r-r' |^{4\tau +2-2/q}r'}\textrm{d} r'=Cr^{2/q}. \end{align*}\\ By choosing different $\tilde{\tau}$ in $|r-r'|<r/2$ and in $|r-r'|\geq r/2$ in a similar way, \begin{align*} \int_{\mathbb{R}}\frac{(rr')^{2\tilde{\tau}}}{r|r-r' |^{4\tilde{\tau} -2/q}}\textrm{d} r'=Cr^{2/q}. \end{align*}\\ Thus (3.15) holds. \end{proof} \vspace{5pt} \begin{lem} The vector potential $a\in L^{6}(\mathbb{R}^{3})$ in (3.3) is the unique potential of $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ such that $\nabla \times a=b$ and $\nabla \cdot a=0$. \end{lem} \vspace{5pt} \begin{proof} By (3.8) and (3.15), $a$ is locally integrable in $\mathbb{R}^{3}$, and $\nabla \times a=b$ and $\nabla \cdot a=0$ hold. 
By $||\nabla a||_{L^{2}}=||\nabla \times a||_{L^{2}}=||b||_{L^{2}}$ and the Sobolev embedding, $a$ belongs to $L^{6}(\mathbb{R}^{3})$. The uniqueness follows from the Liouville theorem. \end{proof} \begin{rem} The condition $a\in L^{6}(\mathbb{R}^{3})$ implies that $\nabla_{z,r}\eta$ and $\phi$ belong to $L^{6}(\mathbb{R}^{2}_{+}; r^{-5})$. The weighted Sobolev inequality (3.8) implies the more general properties $\phi\in L^{p}(\mathbb{R}^{2}_{+}; r^{-2-p/2})$ for $2\leq p<\infty$, whereas (3.15) yields weaker integrability for $\nabla_{z,r}\eta$ at $r=0$ and $r=\infty$, i.e., \begin{align*} ||\nabla_{z,r} \eta||_{L^{p}(\mathbb{R})}^{p}\frac{1}{r^{2+p/2}}\leq \frac{C}{r}||G||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{p},\quad r>0. \end{align*}\\ The trace of $\nabla_{z,r} \eta$ at $r=0$ vanishes in $L^{p}(\mathbb{R})$ by (3.15). \end{rem} \begin{rem} The constant field $B_{\infty}=-2e_z$ can be expressed as $B_{\infty}=-\nabla \times (\phi_{\infty}\nabla \theta)$ with the function $\phi_{\infty}=r^{2}+\gamma$ for any $\gamma\geq 0$. The Clebsch representations (3.1) and (3.3) imply those for $B=b+B_{\infty}$ and its vector potential $A$, \begin{equation} \begin{aligned} B&=\nabla \times( \Phi\nabla \theta)+G\nabla \theta,\\ A&=\nabla \times (\eta\nabla \theta)+\Phi\nabla \theta, \\ \Phi&=\phi-\phi_{\infty}. \end{aligned} \end{equation}\\ The (pseudo-)scalar $A\cdot B$ is expressed as \begin{align*} A\cdot B=2\Phi \frac{G}{r^{2}}+\nabla \cdot ( \Phi\nabla \theta\times A ). \end{align*} \end{rem} \begin{prop} Let $1<q<3$ and $1/p=1/q-1/3$. For $b\in L^{q}_{\sigma}(\mathbb{R}^{3})$, there exists a unique $a\in L^{p}_{\sigma}(\mathbb{R}^{3})$ such that $\nabla a\in L^{q}(\mathbb{R}^{3})$ and $\nabla \times a=b$. \end{prop} \begin{proof} For $b\in C^{\infty}_{c,\sigma}(\mathbb{R}^{3})$, we set $a=E*(\nabla \times b)$ by the fundamental solution of the Laplace equation $E(x)=(4\pi)^{-1}|x|^{-1}$ and the convolution $*$ in $\mathbb{R}^{3}$. Then, $\nabla \times a=b$ and $\nabla \cdot a=0$. By the Calder\'on--Zygmund inequality for the Newton potential \cite[Theorem 9.9]{GT} and the Sobolev inequality, \begin{align*} ||\nabla a||_{L^{q}(\mathbb{R}^{3}) }+||a||_{L^{p}(\mathbb{R}^{3}) }\leq C||b||_{L^{q}(\mathbb{R}^{3}) }. \end{align*}\\ For $b\in L^{q}_{\sigma}(\mathbb{R}^{3})$, we take a sequence $\{b_n\}\subset C^{\infty}_{c,\sigma}(\mathbb{R}^{3})$ such that $b_n\to b$ in $L^{q}(\mathbb{R}^{3})$. Then, $a_n=E*(\nabla \times b_n)\to a$ in $L^{p}(\mathbb{R}^{3})$ and $\nabla a_n\to \nabla a$ in $L^{q}(\mathbb{R}^{3})$ for some $a$ by the above inequality. The limit $a$ satisfies the desired properties. The uniqueness follows from the Liouville theorem. \end{proof} \subsection{Generalized magnetic helicity} We define generalized magnetic helicity (1.10) and generalized magnetic mean-square potential (1.14) for any axisymmetric solenoidal vector fields in $L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. \begin{prop} Let $1\leq q<\infty$. Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. For $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+}; r^{-1})$, \begin{align} |\{x\in \mathbb{R}^{3}\ |\ \phi(x)>\phi_{\infty} \}| &\lesssim ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+}; r^{-1}) }^{2}, \\ ||(\phi-\phi_{\infty})_{+}||_{L^{q} (\mathbb{R}^{2}_{+};r^{-1}) } &\lesssim ||\nabla \phi||^{4/3+2/(3q) }_{L^{2}(\mathbb{R}^{2}_{+}; r^{-1}) }, \\ ||(\phi-\phi_{\infty})_{+}||_{L^{q}(\mathbb{R}^{3}) } &\lesssim ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+}; r^{-1}) }^{4/3+2/q}, \end{align}\\ where $|\cdot |$ denotes the Lebesgue measure in $\mathbb{R}^{3}$. 
\end{prop} \begin{proof} By the weighted Sobolev inequality (3.8), \begin{align*} |\{x\in \mathbb{R}^{3}\ |\ \phi(x)>\phi_{\infty} \}| =\int_{\{\phi>\phi_{\infty}\}}\textrm{d} x =2\pi \int_{\{\phi>\phi_{\infty}\}}r\,\textrm{d} z\textrm{d} r \leq 2\pi \int_{\{\phi>r^{2}\}}\phi^{2}\frac{1}{r^{3}}\textrm{d} z\textrm{d} r \lesssim ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{2}. \end{align*}\\ Thus (3.17) holds. For $1\leq q<\infty$, we set $p=l+q$ and $l=(q+2)/3$ to estimate \begin{align*} \int_{\mathbb{R}^{2}_{+}}(\phi-\phi_{\infty})_{+}^{q}\frac{1}{r}\textrm{d} z\textrm{d} r \lesssim \int_{\mathbb{R}^{2}_{+}}\phi^{l+q}\frac{1}{r^{2l+1}}\textrm{d} z\textrm{d} r= \int_{\mathbb{R}^{2}_{+}}\phi^{p}\frac{1}{r^{p/2+2}}\textrm{d} z\textrm{d} r\lesssim ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{p}. \end{align*}\\ Thus (3.18) holds. Similarly, we set $p=l+q$ and $l=q/3+2$ to estimate \begin{align*} \int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{q}\textrm{d} x\lesssim \int_{\mathbb{R}^{2}_{+}}\phi^{q+l}\frac{1}{r^{2l-1}}\textrm{d} z\textrm{d} r = \int_{\mathbb{R}^{2}_{+}}\phi^{p}\frac{1}{r^{p/2+2}}\textrm{d} z\textrm{d} r \lesssim ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{p}. \end{align*}\\ Thus (3.19) holds. \end{proof} \begin{lem}[Arnold-type inequality] \begin{align} \left|\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}G\frac{1}{r^{2}} \textrm{d} x\right| &\lesssim \left(\int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x\right)^{4/3}, \\ \left|\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2} \textrm{d} x\right| &\lesssim \left(\int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x\right)^{7/3}, \end{align}\\ for $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. \end{lem} \begin{proof} By H\"older's inequality, (3.18) and (3.2), \begin{align*} \left|\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}G\frac{1}{r^{2}}\textrm{d} x\right|=2\pi \left|\int_{\mathbb{R}^{2}_{+}}(\phi-\phi_{\infty})_{+}G\frac{1}{r}\textrm{d} z\textrm{d} r\right| &\leq 2\pi ||(\phi-\phi_{\infty})_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }||G||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \\ &\lesssim ||b||_{L^{2}(\mathbb{R}^{3}) }^{8/3}. \end{align*}\\ Thus (3.20) holds. The inequality (3.21) follows from (3.19) and (3.2). \end{proof} \begin{defn} Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. For $v, b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, we set \begin{align} E[b]&=\frac{1}{2}\int_{\mathbb{R}^{3}}|b|^{2} \textrm{d} x \hspace{50pt} \textrm{(magnetic energy),} \\ H_{\gamma}[b]&=2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}} \textrm{d} x\hspace{13pt} \textrm{(generalized magnetic helicity),} \\ M_{\gamma}[b]&=\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2} \textrm{d} x \hspace{32pt} \textrm{(generalized magnetic mean-square potential),}\\ {\mathcal{E}}[v,b]&=E[v]+E[b] \hspace{51pt} \textrm{(total energy)}. \end{align}\\ We write $H[\cdot]=H_\gamma[\cdot]$ and $M[\cdot]=M_\gamma[\cdot]$, suppressing $\gamma$. \end{defn} Magnetic helicity reflects the direction of vector fields since $\textrm{curl}^{-1}B\cdot B$ is a pseudo-scalar, i.e., $\textrm{curl}^{-1}\tilde{B}\cdot \tilde{B}(x)=-(\textrm{curl}^{-1}B\cdot B)(-x)$ for $\tilde{B}(x)=B(-x)$. 
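Indeed, if $A=\textrm{curl}^{-1}B$, then $\tilde{A}(x)=-A(-x)$ satisfies $\nabla \times \tilde{A}=\tilde{B}$ and $\nabla \cdot \tilde{A}=0$, so $\tilde{A}=\textrm{curl}^{-1}\tilde{B}$ by the uniqueness of the divergence-free potential, and \begin{align*} \textrm{curl}^{-1}\tilde{B}\cdot \tilde{B}(x)=-A(-x)\cdot B(-x)=-(\textrm{curl}^{-1}B\cdot B)(-x). \end{align*}\\ 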
A special property of generalized magnetic helicity is the following symmetry with respect to the sign of the $\theta$-component of axisymmetric solenoidal vector fields.\\ \begin{prop}[Symmetry of generalized magnetic helicity] \begin{align} H[\nabla \times (\phi\nabla \theta)+G\nabla \theta ]=-H[\nabla \times (\phi\nabla \theta)-G\nabla \theta ] \end{align}\\ for $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. \end{prop} \begin{proof} The flux function $\phi$, and hence $(\phi-\phi_{\infty})_{+}$, is unchanged under $G\mapsto -G$, so (3.26) follows from the definition (3.23). \end{proof} \begin{rem}[Gauge dependence] We will see in Section 6 that (3.23) and (3.24) are conserved for smooth solutions of ideal MHD (1.1) under the condition (1.7). The quantities (3.23) and (3.24) depend on the gauge $\gamma\geq 0$ since the potential $\phi_\infty=r^{2}+\gamma$ of $B_{\infty}=-2e_z$ has freedom in the choice of $\gamma$. \end{rem} \begin{rem} Generalized magnetic helicity (3.23) and generalized magnetic mean-square potential (3.24) are also well-defined with $\phi_{\infty}=Wr^{2}/2+\gamma$, $W>0$, and $\gamma\geq 0$ for $ b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. \end{rem} \section{The variational principle} We study the variational problem (1.13). Our goal is to demonstrate that the minimum (1.13) is symmetric and lower semi-continuous in $h\in \mathbb{R}$ and increasing and strictly subadditive in $|h|$, as stated in Lemma 4.10. We derive these properties from the existence of minimizers symmetric in the $z$-variable by using the Steiner symmetrization. In the next section, we apply these properties of the minimum to demonstrate the compactness of (non-symmetric) minimizing sequences. \subsection{Axisymmetric nonlinear force-free fields} We set up a variational problem and demonstrate that minimizers provide axisymmetric nonlinear force-free fields with discontinuous factors in $\mathbb{R}^{3}$. We also show that flux functions of minimizers are non-negative solutions to the Grad--Shafranov equation. \begin{defn} Let $h\in \mathbb{R}$. Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. Let $E[\cdot]$ and $H[\cdot]$ be as in (3.22) and (3.23). We set \begin{align} I_{h,\gamma}&=\inf \left\{E[b] \ \Bigg|\ b \in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3}),\ H[b]=h \right\},\\ S_{h,\gamma}&=\left\{b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})\ \middle|\ E[b]=I_{h,\gamma},\ H[b]=h \right\}. \end{align}\\ We write $I_{h}=I_{h,\gamma}$ and $S_{h}=S_{h,\gamma}$, suppressing $\gamma$. \end{defn} \begin{prop}[Lagrange multiplier] Let $h\in \mathbb{R}$ and $\gamma\geq 0$. Let $\phi_{\infty}=r^{2}+\gamma$. Let $b=\nabla\times( \phi \nabla \theta) + G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ satisfy $E[b]=I_{h}$ and $H[b]=h$. There exists a constant $\mu\in \mathbb{R}$ such that $G=\mu (\phi-\phi_\infty)_{+}$ and $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ is a weak solution of \begin{align} -L\phi =\mu^{2} (\phi -\phi_\infty)_{+}\quad \textrm{in}\ \mathbb{R}^{2}_{+},\quad \phi =0\quad \textrm{on}\ \partial \mathbb{R}^{2}_{+}, \end{align}\\ in the sense that \begin{align} \int_{\mathbb{R}^{2}_{+}}\nabla \phi \cdot \nabla \tilde{\phi} \frac{1}{r}\textrm{d} z\textrm{d} r =\mu^{2} \int_{\mathbb{R}^{2}_{+}} (\phi-\phi_{\infty})_{+}\tilde{\phi} \frac{1}{r}\textrm{d} z\textrm{d} r, \end{align}\\ for all $\tilde{\phi}\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$. The constant $\mu$ satisfies \begin{align} h=2\mu \int_{\mathbb{R}^{3}}(\phi-\phi_\infty)_{+}^{2}\frac{1}{r^{2}}\textrm{d} x. 
\end{align} \end{prop} \begin{proof} For $b_0, \tilde{b}\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, we set \begin{align*} j(\tau,s)=H[b+\tau \tilde{b}+sb_0],\quad \tau,s\in \mathbb{R}. \end{align*}\\ By differentiating in $s$, for $b_0=\nabla \times (\phi_0\nabla \theta)+G_0\nabla \theta$, \begin{align*} \frac{\partial j}{\partial s}(0,0)=2\int_{\mathbb{R}^{3}}(\phi_0 1_{(0,\infty)}(\phi-\phi_{\infty})G+(\phi-\phi_{\infty})_{+}G_0 )\frac{1}{r^{2}}\textrm{d} x. \end{align*}\\ Suppose that $\partial_s j(0,0)=0$ for all $b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. Then, by choosing $b_0=G_0\nabla \theta$ with arbitrary $G_0$, $(\phi-\phi_{\infty})_{+}=0$ and \begin{align*} h=H[b]=2\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}}\textrm{d} x=0. \end{align*}\\ By $0=I_0=E[b]$, we have $b= 0$. Thus the assertion holds with $\mu=0$. We may assume that $\partial_s j(0,0)\neq 0$ for some $b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. By the implicit function theorem, there exists a function $s=s(\tau)$ such that for small $|\tau|$, \begin{align*} j(\tau,s(\tau))=h. \end{align*}\\ By differentiating in $\tau$, \begin{align*} \dot{s}(0)=-\frac{\partial_{\tau}j(0,0)}{\partial_{s} j(0,0)}. \end{align*}\\ We set \begin{align*} \mu=\frac{2}{\partial_{s} j(0,0)}\left(\int_{\mathbb{R}^{3}}b\cdot b_0\textrm{d} x\right). \end{align*}\\ Since $b$ is a minimizer, \begin{align*} 0=\frac{\textrm{d}}{\textrm{d} \tau} E[b+\tau \tilde{b}+s(\tau)b_0]\Bigg|_{\tau=0} &=\int_{\mathbb{R}^{3}}b\cdot (\tilde{b}+\dot{s}(0) b_0 ) \textrm{d} x \\ &=\int_{\mathbb{R}^{3}}b\cdot \tilde{b}\textrm{d} x-\frac{1}{\partial_{s} j(0,0)}\left(\int_{\mathbb{R}^{3}}b\cdot b_0\textrm{d} x\right)\partial_{\tau}j(0,0), \\ &=\int_{\mathbb{R}^{3}}b\cdot \tilde{b}\textrm{d} x-\frac{\mu}{2} \partial_{\tau}j(0,0). \end{align*}\\ Thus \begin{align*} \int_{\mathbb{R}^{3}}\left(\nabla \phi \cdot \nabla \tilde{\phi}+G\tilde{G}\right)\frac{1}{r^{2}}\textrm{d} x =\mu \int_{\mathbb{R}^{3}}\left( \tilde{\phi} 1_{(0,\infty)}(\phi-\phi_{\infty})G+(\phi-\phi_{\infty})_{+}\tilde{G} \right)\frac{1}{r^{2}}\textrm{d} x. \end{align*}\\ Since $\tilde{b}\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ is arbitrary, $G=\mu (\phi-\phi_{\infty})_{+}$ and (4.4) holds. The identity (4.5) follows from $G=\mu (\phi-\phi_{\infty})_{+}$. \end{proof} \begin{lem} Let $B_{\infty}=-2e_z$. Let $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ be a minimizer of (4.1). Let $\phi$ and $\mu$ be as in Proposition 4.2. Then, $U=b+B_{\infty}$ is a nonlinear force-free field (1.8) for $f=\mu 1_{(0,\infty)}(\phi-\phi_\infty)$. \end{lem} \begin{proof} By $-L\phi_{\infty}=0$ and $B_{\infty}=-2e_{z}=-\nabla \times (\phi_{\infty}\nabla \theta )$, \begin{align*} U&=b+B_{\infty} =\nabla \times ((\phi-\phi_{\infty})\nabla \theta) +G\nabla \theta,\\ \nabla \times U&=\nabla \times (G\nabla \theta)-L\phi \nabla \theta. \end{align*}\\ By $G=\mu(\phi-\phi_{\infty})_{+}$ and (4.3), \begin{align*} \nabla \times U=\mu 1_{(0,\infty)}(\phi-\phi_{\infty} ) U. \end{align*}\\ Thus $U$ satisfies (1.8) for $f=\mu1_{(0,\infty)}(\phi-\phi_{\infty})$. \end{proof} To investigate the properties of the minimum in (4.1) for $h\in \mathbb{R}$, we show that the minimum $I_{h}$ is invariant under the restriction of admissible functions $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ to those with non-negative $\phi\geq 0$ and $G=\mu(\phi-\phi_{\infty})_{+}$. 
The constant $\mu$ is chosen so that $H[b]=h$, i.e., $h=2\mu \int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2}r^{-2}\textrm{d} x$. \begin{prop} The function $\phi$ in Proposition 4.2 is non-negative. \end{prop} \begin{proof} By (4.4) and the transform (3.10), $\varphi=\phi/r^{2}\in \dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})$ satisfies \begin{align*} \int_{\mathbb{R}^{5}}\nabla_y \varphi \cdot \nabla_y \tilde{\varphi}\textrm{d} y=\mu^{2}\int_{\mathbb{R}^{5}}\left(\varphi-1-\frac{\gamma}{r^{2}}\right)_{+}\tilde{\varphi}\textrm{d} y, \end{align*}\\ for $\tilde{\varphi}\in \dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})$. This equality extends to all $\tilde{\varphi}\in \dot{H}^{1}(\mathbb{R}^{5})$ since \begin{align*} \tilde{\varphi}=\fint_{S^{3}}\tilde{\varphi}\textrm{d} H+\left(\tilde{\varphi}-\fint_{S^{3}}\tilde{\varphi}\textrm{d} H\right)=\tilde{\varphi}_1+\tilde{\varphi}_2\in \dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})\oplus \dot{H}^{1}_{\textrm{axi}}(\mathbb{R}^{5})^{\perp}, \end{align*} and \begin{align*} &\int_{\mathbb{R}^{5}}\nabla_y \varphi \cdot \nabla_y \tilde{\varphi}_2\textrm{d} y =\int_{\mathbb{R}^{5}}\nabla_y \varphi \cdot \nabla_y \left(\fint_{S^{3}} \tilde{\varphi}_2\textrm{d} H\right)\textrm{d} y=0,\\ &\int_{\mathbb{R}^{5}}\left(\varphi-1-\frac{\gamma}{r^{2}}\right)_{+} \tilde{\varphi}_2\textrm{d} y =\int_{\mathbb{R}^{5}}\left(\varphi-1-\frac{\gamma}{r^{2}}\right)_{+} \left(\fint_{S^{3}} \tilde{\varphi}_2\textrm{d} H\right)\textrm{d} y=0. \end{align*}\\ Here, $S^{3}$ is the unit sphere in $\mathbb{R}^{4}$. Thus $\varphi$ is a weak solution of \begin{align*} -\Delta_y \varphi=\mu^{2}\left(\varphi-1-\frac{\gamma}{r^{2}}\right)_{+} \quad \textrm{in}\ \mathbb{R}^{5}. \end{align*}\\ By the Sobolev embedding, $\varphi\in L^{10/3} (\mathbb{R}^{5})$ and $(\varphi-1-\gamma/r^{2} )_{+}\in L^{10/3} (\mathbb{R}^{5})$. By differentiability of weak solutions to the Poisson equation \cite[Theorem 8.8]{GT}, $\varphi\in H^{2}_{\textrm{loc}}(\mathbb{R}^{5})$ and $-\Delta \varphi=\mu^{2}(\varphi-1-\gamma/r^{2})_{+}\in L^{10/3}(\mathbb{R}^{5})$. By the Calder\'on--Zygmund inequality \cite[Corollary 9.10]{GT}, \begin{align*} ||\nabla^{2}\varphi ||_{L^{10/3}(\mathbb{R}^{5}) }\lesssim ||\Delta \varphi ||_{L^{10/3}(\mathbb{R}^{5}) }. \end{align*}\\ Thus $\varphi$ is continuous and $\lim_{R\to \infty}\sup_{|x|\geq R}|\varphi(x)|=0$. Since $\varphi$ is superharmonic, by the maximum principle \cite[9.4 THEOREM]{LL01}, \begin{align*} \varphi(y)\geq \inf\left\{\varphi(y)\ | \ |y|=R\right\},\quad |y|\leq R. \end{align*}\\ By letting $R\to\infty$, $\varphi(y)=\phi(z,r)/r^{2}\geq 0$ follows. \end{proof} \begin{prop} Let $h\in \mathbb{R}$. Let $T_h$ be the set of all $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ such that \begin{equation} \phi\geq 0,\quad h=2\mu \int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r^{2}}\textrm{d} x,\quad G=\mu(\phi-\phi_{\infty})_{+}. \end{equation}\\ Then, $H[b]=h$ for $b\in T_h$ and \begin{equation} \begin{aligned} &S_h\subset T_h\subset \left\{b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})\ \middle|\ H[b]=h \right\},\\ &I_h=\inf\left\{E[b]\ \middle|\ b\in T_h \right\}. \end{aligned} \end{equation}\\ \end{prop} \begin{proof} By Propositions 4.2 and 4.4, $(4.7)_1$ follows. The property $(4.7)_2$ follows from $(4.7)_1$. 
\end{proof} \subsection{Symmetric minimizers} To prove the existence of minimizers symmetric in the $z$-variable, we recall the Steiner symmetrization (symmetric decreasing rearrangement in the variable $z$) \cite[Appendix I]{FB74}, \cite[p.293]{Friedman82}, \cite[Chapter 3]{LL01}. For a non-negative measurable function $\phi(z,r)$, the Steiner symmetrization $\phi^{*}$ satisfies \begin{equation} \begin{aligned} \phi^{*}(z,r)&\geq 0,\quad z\in \mathbb{R},\ r>0,\\ \phi^{*}(z,r)&=\phi^{*}(-z,r),\quad z\in \mathbb{R},\ r>0,\\ \phi^{*}(z_1,r)&\geq \phi^{*}(z_2,r),\quad 0\leq z_1\leq z_2,\ r>0, \\ \int_{\mathbb{R}^{2}_{+}}g(\phi(z,r),r)\textrm{d} z\textrm{d} r&=\int_{\mathbb{R}^{2}_{+}}g(\phi^{*}(z,r),r)\textrm{d} z\textrm{d} r,\\ \int_{\mathbb{R}^{2}_{+}}|\nabla \phi^{*}(z,r)|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r &\leq \int_{\mathbb{R}^{2}_{+}}|\nabla \phi(z,r)|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r, \end{aligned} \end{equation} for functions $g(s,r)$ increasing in $s\geq 0$ and satisfying $g(0,r)=0$. The property $(4.8)_4$ follows from the layer cake representation of $g(\phi,r)$. The property $(4.8)_5$ follows from the isometry (3.11) and Riesz's rearrangement inequality for the heat kernel \cite[THEOREM 1.13, 3.7]{LL01}. \begin{prop} Let $\tilde{T}_h$ denote the space of all $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in T_h$ such that $\phi(z,r)$ is non-negative, symmetric and nonincreasing in the sense of $(4.8)_1$--$(4.8)_3$. Then, \begin{align} I_h=\inf \left\{E[b]\ \middle|\ b\in \tilde{T}_h \right\}. \end{align} \end{prop} \vspace{5pt} \begin{proof} The left-hand side does not exceed the right-hand side by $\tilde{T}_h\subset T_h$ and $(4.7)_2$. For $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in T_h$, we take the Steiner symmetrization $\phi^{*}$. By $(4.8)_4$, \begin{align*} \int_{\mathbb{R}^{2}_{+}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r =\int_{\mathbb{R}^{2}_{+}}(\phi^{*}-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ With the constant $\mu$ in (4.6), we set $\tilde{G}=\mu(\phi^{*}-\phi_{\infty})_{+}$ and $\tilde{b}=\nabla \times (\phi^{*}\nabla \theta)+\tilde{G}\nabla \theta$. Then, $\tilde{b}\in \tilde{T}_{h}$ and, by $(4.8)_5$, \begin{align*} E[\tilde{b}]=\pi \int_{\mathbb{R}^{2}_{+}}\left(|\nabla \phi^{*}|^{2}+\mu^{2}(\phi^{*}-\phi_{\infty})_{+}^{2} \right)\frac{1}{r}\textrm{d} z\textrm{d} r \leq \pi \int_{\mathbb{R}^{2}_{+}}\left(|\nabla \phi|^{2}+\mu^{2}(\phi-\phi_{\infty})_{+}^{2} \right)\frac{1}{r}\textrm{d} z\textrm{d} r=E[b]. \end{align*}\\ By $(4.7)_2$, \begin{align*} I_h \leq \inf\left\{E[b]\ \middle|\ b\in \tilde{T}_h \right\} \leq \inf\left\{E[b]\ \middle|\ b\in T_h \right\} =I_h, \end{align*}\\ and (4.9) holds. \end{proof} We show the existence of symmetric minimizers to (4.9). The magnetic helicity of $b\in \tilde{T}_h$ concentrates in the region $Q=\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ |\ |z|<Z,\ r<R \}$ near the origin because of the symmetry and nonincreasing properties in the $z$-variable. We use the fact that the set $\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ |\ \phi(z,r)>\phi_{\infty}\}$ lies below the graph $r=C|z|^{-1/2}$ \cite[Lemma 4.6]{FT81}, and estimate magnetic helicity in $\mathbb{R}^{2}_{+}\backslash Q$ to reduce the compactness of minimizing sequences in $\tilde{T}_h$ to that in the bounded domain $Q$. 
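In quantitative form, the pointwise estimate (4.10) below shows that, for $\phi$ satisfying $(4.8)_1$--$(4.8)_3$, \begin{align*} \{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ |\ \phi(z,r)>\phi_{\infty} \}\subset \left\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ \middle|\ r\leq \frac{1}{\sqrt{2}|z|^{1/2}}||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1})} \right\}, \end{align*}\\ i.e., the graph bound holds with $C=||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1})}/\sqrt{2}$. 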
\begin{prop} For $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ satisfying $(4.8)_1$--$(4.8)_3$, \begin{align} \sqrt{2}\,r|z|^{1/2}1_{(0,\infty)}(\phi-\phi_{\infty}) \leq ||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }, \quad {}^{t}(z,r)\in \mathbb{R}^{2}_{+}. \end{align} \end{prop} \begin{proof} We take a point ${}^{t}(z,r)\in \mathbb{R}^{2}_{+}$ such that $\phi(z,r)>\phi_{\infty}$; we may assume $z>0$ by the symmetry $(4.8)_2$. Since $\phi$ is nonincreasing for $0\leq z'\leq z$ and vanishes on $\{r=0\}$, \begin{align*} r^{2}<\phi(z,r)\leq \phi(z',r)-\phi(z',0)=\int_{0}^{r}\partial_{r'}\phi(z',r')\textrm{d} r'. \end{align*}\\ By integrating both sides in $[0,z]$ and applying H\"older's inequality, \begin{align*} r^{2}z\leq \int_{0}^{z}\int_{0}^{r}\partial_{r'}\phi(z',r')\textrm{d} r'\textrm{d} z' &\leq \left(\int_{0}^{z}\int_{0}^{r}r'\textrm{d} r'\textrm{d} z'\right)^{1/2}\left(\int_{0}^{z}\int_{0}^{r}|\partial_{r'}\phi(z',r')|^{2}\frac{1}{r'}\textrm{d} r'\textrm{d} z'\right)^{1/2} \\ &\leq \frac{r\sqrt{z}}{\sqrt{2}}||\nabla \phi||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }. \end{align*}\\ Thus (4.10) holds. \end{proof} \begin{prop} Let $Z,R\geq 1$. Let $Q=\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ |\ |z|<Z,\ r<R \}$. There exists $C>0$ such that \begin{align} \int_{\mathbb{R}^{2}_{+}\backslash Q}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \leq \frac{C}{\min\{Z,R\}} || \nabla\phi||^{4}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } , \end{align}\\ for $\phi \in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ satisfying $(4.8)_1$--$(4.8)_3$. \end{prop} \begin{proof} We estimate \begin{align*} \int_{\mathbb{R}^{2}_{+}\backslash Q}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r =\int_{\{r\geq R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r+2\int_{\{z\geq Z, r<R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ By the weighted Sobolev inequality (3.8), \begin{align*} \int_{\{r\geq R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \leq \int_{\{r\geq R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{\phi^{2}}{r^{5}}\textrm{d} z\textrm{d} r \leq \frac{1}{R}\int_{\mathbb{R}^{2}_{+}}\phi^{4}\frac{1}{r^{4}}\textrm{d} z\textrm{d} r \lesssim \frac{1}{R}||\nabla \phi||^{4}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) }. \end{align*}\\ By the pointwise estimate (4.10) and (3.8), \begin{align*} \int_{\{z>Z, r<R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r &\lesssim ||\nabla \phi||^{2}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) }\int_{\{z>Z, r<R\}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{zr^{3}}\textrm{d} r \textrm{d} z \\ &\leq \frac{1}{Z}||\nabla \phi||^{2}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) }\int_{\mathbb{R}^{2}_{+}}\phi^{2}\frac{1}{r^{3}}\textrm{d} z\textrm{d} r \\ & \lesssim \frac{1}{Z}||\nabla \phi||^{4}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) }. \end{align*}\\ Thus (4.11) holds. \end{proof} \begin{thm}[Existence of symmetric minimizers] Let $h\in \mathbb{R}$. Let $\{b_n\}\subset \tilde{T}_h$ be a sequence such that $E[b_n]\to I_h$. Then, there exist $\{n_k\}$ and $b\in \tilde{T}_{h}\cap S_h$ such that $b_{n_{k}}\to b$ in $L^{2}(\mathbb{R}^{3})$. \end{thm} \begin{proof} By the Rellich--Kondrachov theorem in the weighted space (3.9), there exist a subsequence of $b_n=\nabla \times (\phi_n\nabla \theta)+G_n\nabla \theta$ (still denoted by $\{b_n\}$) and $b=\nabla\times (\phi\nabla \theta)+G\nabla \theta$ such that \begin{align*} b_n &\rightharpoonup b \quad \textrm{in}\ L^{2}(\mathbb{R}^{3}),\\ \phi_n &\to \phi \quad \textrm{in}\ L^{2}_{\textrm{loc}}(\overline{\mathbb{R}^{2}_{+}};r^{-1}). 
\end{align*}\\ For $Z,R\geq 1$ and $Q=\{ |z|<Z, r<R \}$, \begin{align*} H[b_n] &=4\pi \int_{\mathbb{R}^{2}_{+}}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r\\ &=4\pi \int_{Q}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r +4\pi \int_{\mathbb{R}^{2}_{+}\backslash Q}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ By H\"older's inequality, (4.11) and (3.2), \begin{align*} \left|\int_{\mathbb{R}^{2}_{+}\backslash Q}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r \right| &\leq \left|\int_{\mathbb{R}^{2}_{+}\backslash Q}(\phi_n-\phi_\infty)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \right|^{1/2}||G_n||_{L^{2}(\mathbb{R}^{2}_{+}\backslash Q;r^{-1} ) } \\ &\leq \frac{C}{\min\{Z,R\}^{1/2}}\sup_{n}||b_n||^{3}_{L^{2}(\mathbb{R}^{3}) }. \end{align*}\\ By $|\tau_+-s_{+}|\leq |\tau-s|$, H\"older's inequality and (3.2), \begin{align*} \left|\int_{Q}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r-\int_{Q}(\phi-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r \right|\leq ||\phi_n-\phi||_{L^{2}(Q;r^{-1})}\left(\sup_{n}||b_n||_{L^{2}(\mathbb{R}^{3}) }\right). \end{align*}\\ Thus \begin{align*} |H[b_n]-H[b]| &\leq 4\pi ||\phi_n-\phi||_{L^{2}(Q;r^{-1})}\left(\sup_{n}||b_n||_{L^{2}(\mathbb{R}^{3}) }\right) \\ &+4\pi \left|\int_{Q}(\phi-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r-\int_{Q}(\phi-\phi_\infty)_{+}G\frac{1}{r}\textrm{d} z\textrm{d} r \right| \\ &+ \frac{C}{\min\{Z,R\}^{1/2}}\left( \sup_{n}||b_n||^{3}_{L^{2}(\mathbb{R}^{3}) }+||b||^{3}_{L^{2}(\mathbb{R}^{3}) } \right). \end{align*}\\ Since $G_n$ weakly converges in $L^{2}(Q;r^{-1})$ and $H[b_n]=h$, letting $n\to\infty$ and then $Z,R\to\infty$ implies $h=H[b]$. By \begin{align*} I_h\leq E[b]\leq \liminf_{n\to\infty}E[b_n]=I_h, \end{align*}\\ the limit $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ is a minimizer of $I_h$. The convergence $\lim_{n\to\infty}||b_n||_{L^{2}(\mathbb{R}^{3})}=||b||_{L^{2}(\mathbb{R}^{3})}$, together with the weak convergence, implies that $b_n\to b$ in $L^{2}(\mathbb{R}^{3})$. The limit $b$ belongs to $\tilde{T}_{h}$ since $\phi$ satisfies $(4.8)_1$--$(4.8)_3$ and $b\in S_h\subset T_h$. \end{proof} \subsection{Properties of the minimum} We derive properties of the minimum $I_h$ for $h\in \mathbb{R}$ from (4.9) and the presence of symmetric minimizers $b\in \tilde{T}_{h}$ in Theorem 4.9. \begin{lem} \begin{align} I_h&=I_{-h}\geq I_{0}=0,\quad h\in \mathbb{R},\qquad (\textrm{symmetry}), \\ 0&<I_{h_1}<I_{h_2},\quad 0<h_1<h_2, \hspace{12pt} (\textrm{monotonicity}),\\ I_{\theta h}&<\theta I_{h},\quad \theta>1, h>0, \\ I_{h_1+h_2}&<I_{h_1}+I_{h_2},\quad h_1,h_2>0, \hspace{22pt} (\textrm{strict subadditivity}), \\ I_{h_1}&= \lim_{h\to h_1}I_h,\quad h_1> 0,\\ I_{h_1}&\leq \liminf_{h\to h_1}I_h,\quad h_1\in \mathbb{R}, \hspace{37pt} (\textrm{lower semi-continuity}). \end{align} \end{lem} \begin{proof} The symmetry (4.12) follows from (3.26). By Theorem 4.9, for $h>0$, there exists $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in S_h$ such that $I_h=E[b]$ and $h=H[b]$. If $I_h=0$, then $E[b]=0$, hence $b=0$ and $h=H[b]=0$, a contradiction. Thus $I_h>0$ for $h>0$. 
For $\tau>1$, \begin{align*} \tilde{b}=\nabla \times (\phi \nabla \theta)+\frac{1}{\tau}G\nabla \theta, \end{align*}\\ satisfies $H[\tilde{b}]=H[b]/\tau=h/\tau$ and \begin{align*} I_{h/\tau}\leq E[\tilde{b}]=\frac{1}{2}\int_{\mathbb{R}^{3}}|\nabla \phi|^{2}\frac{1}{r^{2}}\textrm{d} x+\frac{1}{2\tau^{2}}\int_{\mathbb{R}^{3}}|G|^{2}\frac{1}{r^{2}}\textrm{d} x =E[b]-\frac{1}{2}\left(1-\frac{1}{\tau^{2}}\right)\int_{\mathbb{R}^{3}}|G|^{2}\frac{1}{r^{2}}\textrm{d} x<I_h. \end{align*}\\ Here the last inequality is strict since $G\neq 0$ by $h=H[b]>0$. Applying this with $\tau=h_2/h_1$ for $0<h_1<h_2$ implies the monotonicity (4.13). By Proposition 4.2, for some $\mu\in \mathbb{R}$, \begin{align*} H[b]=2\mu\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r^{2}}\textrm{d} x. \end{align*}\\ Since $h>0$, $\mu>0$. For $t>1$, $H[tb]$ is continuous in $t$ and \begin{align*} H[tb]=2t^{2}\mu \int_{\mathbb{R}^{3}}\left(\phi-\frac{\phi_{\infty}}{t}\right)_{+}(\phi-\phi_{\infty})_{+}\frac{1}{r^{2}}\textrm{d} x >2t^{2}\mu \int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}^{2}\frac{1}{r^{2}}\textrm{d} x=t^{2}H[b]=t^{2}h. \end{align*}\\ For $\theta>1$, we take $t_1>\theta^{1/2}$ so that $H[t_1b]>t_1^{2}h>\theta h$. By the intermediate value theorem, there exists $1<t=t(\theta)<t_1$ such that $\theta h=H[t(\theta)b]$ and $\theta>t(\theta)^{2}$. Thus \begin{align*} I_{\theta h}\leq E[t(\theta)b]=t(\theta)^{2}E[b]=t(\theta)^{2}I_{h}<\theta I_h, \end{align*}\\ and (4.14) holds. We take $h_1,h_2>0$. We may assume that $h_1\leq h_2$. By (4.14) (applied with $\theta=h_2/h_1$ when $h_1<h_2$), $(h_1/h_2)I_{h_2}\leq I_{h_1}$. For $\vartheta=(h_1+h_2)/h_2>1$, \begin{align*} I_{h_1+h_2}=I_{\vartheta h_2}<\vartheta I_{h_2}=\left(\frac{h_1}{h_2}+1 \right)I_{h_2}\leq I_{h_1}+I_{h_2}. \end{align*}\\ Thus the strict subadditivity (4.15) holds. By (4.13) and (4.14), for $0<\varepsilon<h$, \begin{align*} &I_{h}<I_{h+\varepsilon},\quad I_{h}<\frac{h}{h-\varepsilon}I_{h-\varepsilon},\\ &I_{h-\varepsilon}<I_{h},\quad I_{h+\varepsilon}<\frac{h+\varepsilon}{h}I_{h}. \end{align*}\\ Letting $\varepsilon\to 0$ implies the continuity (4.16) for $h_1>0$. By (4.12) and (4.13), $I_h$ is lower semi-continuous at $h_1=0$ and (4.17) holds. \end{proof} \section{Minimizing sequences} We demonstrate the compactness of (non-symmetric) minimizing sequences of the variational problem (4.1) in $L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ up to translations in $z$. We apply the concentration--compactness principle in $\mathbb{R}^{2}_{+}$ \cite{Lions84a}, \cite{Lions84b} and exclude the possibilities of dichotomy and vanishing of minimizing sequences to obtain the desired compactness. The key part of the proof is the exclusion of dichotomy by application of the strict subadditivity, monotonicity, and lower semi-continuity of the minimum shown in Lemma 4.10. \subsection{Concentration--compactness lemma} We derive a concentration--compactness lemma for sequences in $L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ from the concentration--compactness lemma in $L^{1}(\mathbb{R}^{2}_{+})$ \cite{Lions84a}, \cite{Lions84b}. We then modify the derived lemma in the case of dichotomy so that the two sequences have disjoint supports by a cut-off function argument. \begin{prop} Let $\{\rho_n\} \subset L^{1}(\mathbb{R}^{2}_{+})$ be a sequence such that $\rho_n\geq 0$ and \begin{align*} \int_{\mathbb{R}^{2}_{+}}\rho_n\textrm{d} z\textrm{d} r \to l>0\quad \textrm{as}\ n\to\infty. 
\end{align*}\\ Then, there exists a subsequence $\{\rho_{n_k}\}$ such that one of the following holds: \noindent (i) Compactness: there exists a sequence $\{z_k\}\subset \mathbb{R}$ such that for $\varepsilon>0$ there exists $R_{\varepsilon}>0$ such that for $D(z_k,R_\varepsilon)=\{{}^{t}(z,r)\in \mathbb{R}^{2}_{+}\ |\ |z-z_k|^{2}+r^{2}<R^{2}_\varepsilon \}$, \begin{align*} \liminf_{k\to\infty}\int_{D(z_k,R_\varepsilon)} \rho_{n_{k}}\textrm{d} z\textrm{d} r \geq l-\varepsilon. \end{align*}\\ (ii) Vanishing: for each $R>0$, \begin{align*} \lim_{k\to\infty}\sup_{z_0\in \mathbb{R}}\int_{D(z_0,R) }\rho_{n_{k}}\textrm{d} z\textrm{d} r=0. \end{align*}\\ (iii) Dichotomy: there exists $\alpha \in (0,l)$ such that for $\varepsilon>0$ there exist $R_0>0$ and sequences $z_k \in \mathbb{R}$, $R_k\geq R_0$ such that $R_k\to\infty$ and \begin{equation*} \begin{aligned} \limsup_{k\to\infty}\left\{ \left| \int_{D(z_k,R_0)}\rho_{n_k}\textrm{d} z \textrm{d} r -\alpha \right| +\left|\int_{\mathbb{R}^{2}_{+}\backslash D(z_k,R_k)} \rho_{n_k}\textrm{d} z \textrm{d} r -(l-\alpha)\right|+\int_{D(z_k,R_k) \backslash D(z_k,R_0)} \rho_{n_k}\textrm{d} z \textrm{d} r\right\} \leq \varepsilon. \end{aligned} \end{equation*} \end{prop} \begin{proof} For the case of fixed mass, $l_n=l$ for $l_n=\int_{\mathbb{R}^{2}_{+}}\rho_n\textrm{d} z\textrm{d} r$, we apply a similar argument as in $\mathbb{R}^{2}$ to L\'evy's (partial) concentration function $Q_n(t)=\sup_{z\in \mathbb{R}}\int_{D(z,t)}\rho_n \textrm{d} z\textrm{d} r$ and conclude as in \cite[Lemma I.1]{Lions84a}, \cite[p.279]{Lions84b}. The case of varying mass $l_n\to l$ is reduced to the case of fixed mass by the normalization $\tilde{\rho}_n=\rho_n l/l_n$. \end{proof} \begin{lem} Let $\{b_n\}\subset L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ be a sequence such that \begin{align} \int_{\mathbb{R}^{3}}|b_n|^{2}\textrm{d} x \to l>0\quad \textrm{as}\ n\to\infty. \end{align}\\ There exists a subsequence $\{b_{n_k}\}$ such that one of the following holds: \noindent (i) There exists a sequence $\{z_k\}\subset \mathbb{R}$ such that for $\varepsilon>0$ there exists $R_{\varepsilon}>0$ such that for $B(z_ke_z,R_\varepsilon)=\{x\in \mathbb{R}^{3}\ |\ |x-z_ke_z|<R_\varepsilon \}$, \begin{align} \liminf_{k\to\infty}\int_{B(z_ke_z,R_\varepsilon)} |b_{n_{k} }|^{2}\textrm{d} x \geq l-\varepsilon. \end{align}\\ (ii) For each $R>0$, \begin{align} \lim_{k\to\infty}\sup_{z_0\in \mathbb{R}}\int_{B(z_0e_z,R) }|b_{n_{k}}|^{2} \textrm{d} x=0. \end{align}\\ (iii) There exists $\alpha \in (0,l)$ such that for $\varepsilon>0$ there exist $R_0>0$ and sequences $z_k\in \mathbb{R}$, $R_k\geq R_0$ such that $R_k\to\infty$ and \begin{equation} \begin{aligned} \limsup_{k\to\infty}\Bigg\{ &\left| ||b_{n_k}||_{L^{2}(B(z_ke_z,R_0)) }^{2} -\alpha \right| +\left| ||b_{n_k}||_{L^{2}(\mathbb{R}^{3}\backslash B(z_ke_z,R_k)) }^{2} -(l-\alpha)\right| \\ &+||b_{n_k}||_{L^{2}( B(z_ke_z,R_k)\backslash B(z_ke_z,R_0))}^{2} \Bigg\} \leq \varepsilon. \end{aligned} \end{equation} \end{lem} \begin{proof} We apply Proposition 5.1 to $b_n=\nabla \times (\phi_n\nabla \theta)+G_n\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ and \begin{align*} \rho_n=\left(|\nabla \phi_n|^{2}+|G_n|^{2}\right)\frac{2\pi}{r}. 
\end{align*} \end{proof} \begin{prop} In the case of the dichotomy (iii) in Lemma 5.2, there exist $b_{i,n_k}=\nabla \times (\phi_{i,n_k}\nabla \theta)+G_{i,n_k}\nabla \theta \in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, $i=1,2$, such that \begin{equation} \begin{aligned} &\textrm{spt}\ \phi_{1,n_k},\textrm{spt}\ G_{1,n_k}\subset D(z_k,2R_0), \\ &\textrm{spt}\ \phi_{2,n_k},\textrm{spt}\ G_{2,n_k}\subset \mathbb{R}^{2}_{+}\backslash \overline{D(z_k,R_k/2)}, \\ &\limsup_{k\to\infty}\left\{ \left| \|b_{1,n_k}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -\alpha \right| +\left| \|b_{2,n_k}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -(l-\alpha) \right| +\|b_{n_k}-b_{1,n_k}-b_{2,n_k}\|^{2}_{L^{2}(\mathbb{R}^{3}) } \right\} \\ &\leq C\varepsilon. \end{aligned} \end{equation} \end{prop} \begin{proof} We construct $b_{i,n_k}\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3}) $ satisfying (5.5) by a cut-off function argument. We may assume $z_k=0$ by translation. For simplicity of notation, we denote $b_{n_k}$ by $b_n$. We take a function $\chi\in C^{\infty}_{c}[0,\infty)$ such that $\chi= 1$ in $[0,1]$ and $\chi= 0$ in $[2,\infty)$ and set $\chi_{R_0}(z,r)=\chi(R_0^{-1}\sqrt{|z|^{2}+|r|^{2}})$ so that $\chi_{R_0}\in C^{\infty}_c(\mathbb{R}^{2})$ satisfies $\chi_{R_0}= 1$ in $D(0,R_0)$ and $\chi_{R_0}= 0$ in $\mathbb{R}^{2}_{+}\backslash \overline{D(0,2R_0)}$. We set $b_{1,n}=\nabla \times (\phi_{1,n}\nabla \theta)+G_{1,n}\nabla \theta$ by \begin{align*} \phi_{1,n}=\phi_{n}\chi_{R_0},\quad G_{1,n}=G_{n}\chi_{R_0}, \end{align*}\\ so that $(5.5)_1$ holds. By the Poincar\'e inequality (3.13), \begin{align} \int_{D(0,2R_0)\backslash D(0,R_0)}|\phi_{n}|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\leq CR_0^{2} \int_{D(0,2R_0)\backslash D(0,R_0)}|\nabla \phi_n|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r, \end{align}\\ and we estimate \begin{align*} \int_{\mathbb{R}^{2}_{+}}|\nabla \phi_{1,n}|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r-\int_{D(0,R_0)}|\nabla \phi_{n}|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r &=\int_{D(0,2R_0)\backslash D(0,R_0)}|\nabla \phi_{n}\chi_{R_0}+\phi_{n}\nabla \chi_{R_0} |^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \\ &\leq C\int_{D(0,2R_0)\backslash D(0,R_0)}|\nabla \phi_{n}|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ For $\rho_{1,n}=\left(|\nabla \phi_{1,n}|^{2}+|G_{1,n}|^{2}\right)2\pi r^{-1}$ and $\rho_{n}=\left(|\nabla \phi_{n}|^{2}+|G_{n}|^{2}\right)2\pi r^{-1}$, \begin{align*} \int_{\mathbb{R}^{2}_{+}}\rho_{1,n}\textrm{d} z\textrm{d} r -\int_{D(0,R_0)}\rho_{n}\textrm{d} z\textrm{d} r \leq C\int_{D(0,2R_0)\backslash D(0,R_0)}\rho_{n}\textrm{d} z\textrm{d} r. \end{align*}\\ In terms of $b_{1,n}$ and $b_{n}$, \begin{align*} ||b_{1,n}||^{2}_{L^{2}( \mathbb{R}^{3}) }-||b_{n}||^{2}_{L^{2}( B(0,R_0)) }\leq C ||b_{n}||^{2}_{L^{2}( B(0,2R_0)\backslash B(0,R_0)) }. \end{align*}\\ Applying (5.4) to \begin{align*} \left| \|b_{1,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -\alpha\right| &\leq \left|\|b_{1,n}\|^{2}_{L^{2}(B(0,2R_0)) } - ||b_{n}||^{2}_{L^{2}( B(0,R_0)) } \right|+\left| ||b_{n}||^{2}_{L^{2}( B(0,R_0)) }-\alpha\right| \\ &\leq C ||b_{n}||^{2}_{L^{2}( B(0,2R_0)\backslash B(0,R_0)) }+\left| ||b_{n}||^{2}_{L^{2}( B(0,R_0)) }-\alpha\right|, \end{align*}\\ we have \begin{align*} \limsup_{n\to\infty}\left| \|b_{1,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -\alpha\right|\leq C\varepsilon. \end{align*}\\ Similarly, we set $b_{2,n}=\nabla \times (\phi_{2,n}\nabla \theta)+G_{2,n}\nabla \theta$ by \begin{align*} \phi_{2,n}=\phi_{n}(1-\chi_{R_n/2}),\quad G_{2,n}=G_{n}(1-\chi_{R_n/2}), \end{align*}\\ so that $(5.5)_2$ holds. 
By (3.13), \begin{align} \int_{D(0,R_n)\backslash D(0,R_n/2)}|\phi_n|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\leq CR_n^{2} \int_{D(0,R_n)\backslash D(0,R_n/2)}|\nabla \phi_n|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align}\\ In the same way as for $b_{1,n}$, by (5.4) and \begin{align*} \left| \|b_{2,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -(l-\alpha)\right| &\leq \left|\|b_{2,n}\|^{2}_{L^{2}(\mathbb{R}^{3}\backslash B(0,R_n/2)) } - ||b_{n}||^{2}_{L^{2}(\mathbb{R}^{3}\backslash B(0,R_n)) } \right|+\left| ||b_{n}||^{2}_{L^{2}(\mathbb{R}^{3}\backslash B(0,R_n)) }-(l-\alpha)\right| \\ &\leq C ||b_{n}||^{2}_{L^{2}( B(0,R_n)\backslash B(0,R_n/2)) }+\left| ||b_{n}||^{2}_{L^{2}(\mathbb{R}^{3}\backslash B(0,R_n)) }-(l-\alpha)\right|, \end{align*}\\ we obtain \begin{align*} \limsup_{n\to\infty}\left| \|b_{2,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -(l-\alpha)\right|\leq C\varepsilon. \end{align*}\\ By using (5.6) and (5.7) for \begin{align*} b_n-b_{1,n}-b_{2,n}=\nabla \times (\phi_n(\chi_{R_n/2}-\chi_{R_0})\nabla \theta )+G_n(\chi_{R_n/2}-\chi_{R_0})\nabla \theta, \end{align*}\\ we estimate \begin{align*} ||b_n-b_{1,n}-b_{2,n}||_{L^{2}(\mathbb{R}^{3}) }^{2} \leq C\int_{D(0,R_n)\backslash D(0,R_0) }(|\nabla \phi_n |^{2}+|G_n|^{2})\frac{1}{r}\textrm{d} z\textrm{d} r=C||b_n||_{L^{2}(B(0,R_n)\backslash B(0,R_0)) }^{2}. \end{align*}\\ By (5.4), \begin{align*} \limsup_{n\to\infty} \left\{ \left| \|b_{1,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -\alpha\right| +\left| \|b_{2,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } -(l-\alpha)\right| +\|b_n-b_{1,n}-b_{2,n}\|^{2}_{L^{2}(\mathbb{R}^{3}) } \right\} \leq C\varepsilon. \end{align*}\\ Thus $(5.5)_3$ holds. \end{proof} \subsection{Lipschitz estimates} Proposition 5.3 states that $||b_{n_k}||_{L^{2}(\mathbb{R}^{3})}^{2}\approx l$ can be divided into two parts \begin{align*} ||b_{1,n_k}||_{L^{2}(\mathbb{R}^{3})}^{2}\approx \alpha,\quad ||b_{2,n_k}||_{L^{2}(\mathbb{R}^{3})}^{2}\approx l-\alpha. \end{align*}\\ We further show that $H[b_{n_k}]$ can be divided into two parts $H[b_{1,n_k}]$ and $H[b_{2,n_k}]$. Recall that $H[\cdot ]: L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})\to \mathbb{R}$ is locally bounded by the Arnold-type inequality (3.20), \begin{align*} |H[b]|\lesssim ||b||_{L^{2}(\mathbb{R}^{3})}^{8/3}. \end{align*}\\ We extend this estimate to the following Lipschitz estimate and derive the desired decomposition property for magnetic helicity. \begin{prop} \begin{align} \left|H[b_1] -H[b_2] \right|&\lesssim \left(\max_{i=1,2} ||b_i||_{L^{2}(\mathbb{R}^{3}) } ^{5/3}\right) ||b_1-b_2||_{L^{2}(\mathbb{R}^{3}) }, \quad \textrm{for}\ b_1,b_2\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3}), \end{align} \begin{align} \begin{aligned} \left| H[b_0]-H[b_1]-H[b_2]\right| &\lesssim \left(\max_{0\leq i\leq 2}||b_i||_{L^{2}(\mathbb{R}^{3})}^{5/3}\right) ||b_0-b_1-b_2||_{L^{2}(\mathbb{R}^{3}) } , \end{aligned} \end{align}\\ for $b_i=\nabla \times (\phi_i\nabla \theta)+G_i\nabla \theta\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, $i=0, 1,2$, such that $\textrm{spt}\ \phi_i\cap \textrm{spt}\ G_j=\emptyset$, $i\neq j$, $i,j=1,2$. \end{prop} \begin{proof} By the fundamental theorem of calculus and the monotonicity of the indicator function, \begin{equation} \begin{aligned} |(\phi_1-\phi_{\infty})_{+}-(\phi_2-\phi_{\infty})_{+}| &=\left|\int_{0}^{1}\frac{\textrm{d}}{\textrm{d} \tau}(\tau \phi_1+(1-\tau)\phi_2-\phi_{\infty})_{+}\textrm{d} \tau\right| \\ &=\left|\int_{0}^{1}(\phi_1-\phi_2)1_{(0,\infty)}(\tau \phi_1+(1-\tau)\phi_2-\phi_{\infty})\textrm{d} \tau\right| \\ &\leq |\phi_1-\phi_2| 1_{(0,\infty)}(|\phi_1|+|\phi_2|-\phi_{\infty}). 
\end{aligned} \end{equation}\\ By H\"older's inequality, (5.10), (3.2) and (3.18), \begin{align*} |H[b_1]-H[b_2]| &\leq 2\int_{\mathbb{R}^{3}} |((\phi_1-\phi_{\infty})_{+}-(\phi_2-\phi_{\infty})_{+})G_1|\frac{1}{r^{2}}\textrm{d} x+\int_{\mathbb{R}^{3}} (\phi_2-\phi_{\infty})_{+}|G_1-G_2|\frac{1}{r^{2}}\textrm{d} x \\ &\lesssim ||(\phi_1-\phi_{\infty})_{+}-(\phi_2-\phi_{\infty})_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } ||G_1||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \\ &+||(\phi_2-\phi_{\infty})_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }\ ||G_1-G_2||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \\ &\lesssim ||(\phi_1-\phi_2) 1_{(0,\infty)}(|\phi_1|+|\phi_2|-\phi_{\infty})||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }\ ||b_1||_{L^{2}(\mathbb{R}^{3}) } \\ &+ ||b_2||^{5/3}_{L^{2}(\mathbb{R}^{3}) }\ ||b_1-b_2||_{L^{2}(\mathbb{R}^{3}) }. \end{align*}\\ By H\"older's inequality, (3.8) and (3.2), \begin{align*} &||(\phi_1-\phi_2) 1_{(0,\infty)}(|\phi_1|+|\phi_2|-\phi_{\infty})||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{2} \\ &\leq \int_{\mathbb{R}^{2}_{+}} |\phi_1-\phi_2|^{2}\left(\frac{|\phi_1|+|\phi_2|}{\phi_{\infty}}\right)^{4/3}\frac{1}{r}\textrm{d} z\textrm{d} r \\ & \leq \int_{\mathbb{R}^{2}_{+}} |\phi_1-\phi_2|^{2}\frac{1}{r^{11/5}} (|\phi_1|+|\phi_2|)^{4/3}\frac{1}{r^{22/15}}\textrm{d} z\textrm{d} r \\ &\leq \left(\int_{\mathbb{R}^{2}_{+}} |\phi_1-\phi_2|^{10/3}\frac{1}{r^{2+5/3}}\textrm{d} z\textrm{d} r \right)^{3/5} \left(\int_{\mathbb{R}^{2}_{+}}(|\phi_1|+|\phi_2|)^{10/3}\frac{1}{r^{2+5/3}}\textrm{d} z\textrm{d} r \right)^{2/5} \\ &\lesssim ||b_1-b_2||^{2}_{L^{2}(\mathbb{R}^{3}) } \left(\max_{i=1,2} ||b_i||_{L^{2}(\mathbb{R}^{3})} \right)^{4/3}. \end{align*}\\ Thus (5.8) holds. In a similar way as (5.10), \begin{align*} |(\phi_1+\phi_2-\phi_{\infty})_{+}-(\phi_1-\phi_{\infty})_{+}| \leq |\phi_2| 1_{(0,\infty)}(|\phi_1+\phi_2|+|\phi_1|-\phi_{\infty}). \end{align*}\\ Since $\textrm{spt}\ \phi_2\cap \textrm{spt}\ G_1=\emptyset $, \begin{align*} &\left|\int_{\mathbb{R}^{3}}( \phi_1+\phi_2-\phi_\infty)_{+}G_1\frac{1}{r^{2}}\textrm{d} x -\int_{\mathbb{R}^{3}}( \phi_1-\phi_\infty)_{+}G_1\frac{1}{r^{2}}\textrm{d} x\right|\\ &\leq \int_{\mathbb{R}^{3}}|\phi_2| 1_{(0,\infty)}(|\phi_1+\phi_2|+|\phi_1|-\phi_{\infty})|G_1|\frac{1}{r^{2}}\textrm{d} x=0. \end{align*}\\ Thus \begin{align*} \int_{\mathbb{R}^{3}}( \phi_1+\phi_2-\phi_\infty)_{+}G_1\frac{1}{r^{2}}\textrm{d} x =\int_{\mathbb{R}^{3}}( \phi_1-\phi_\infty)_{+}G_1\frac{1}{r^{2}}\textrm{d} x. \end{align*}\\ In a similar way, by $\textrm{spt}\ \phi_1\cap \textrm{spt}\ G_2=\emptyset $, \begin{align*} \int_{\mathbb{R}^{3}}( \phi_1+\phi_2-\phi_\infty)_{+}G_2\frac{1}{r^{2}}\textrm{d} x =\int_{\mathbb{R}^{3}}( \phi_2-\phi_\infty)_{+}G_2\frac{1}{r^{2}}\textrm{d} x. \end{align*}\\ Thus $H[b_1+b_2]=H[b_1]+H[b_2]$. By applying (5.8) to $H[b_0]-H[b_1]-H[b_2]=H[b_0]-H[b_1+b_2]$, (5.9) follows. \end{proof} \subsection{Compactness} We now demonstrate the compactness of minimizing sequences for the variational problem (4.1) by using Lemma 4.10. \begin{thm} Let $h\in \mathbb{R}$. Let $\{b_n\}\subset L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ be a sequence such that $E[b_n]\to I_h$ and $H[b_n]\to h$. There exists $\{n_k\}\subset \mathbb{N}$, $\{z_k\}\subset \mathbb{R}$ and $b \in S_h$ such that $b_{n_k}(\cdot+z_{k}e_z)\to b$ in $L^{2}(\mathbb{R}^{3})$. \end{thm} \begin{proof} For $h=0$, the assertion holds for $b=0$ since $I_0=0$. We may assume that $h>0$ by the symmetry (4.12). For a sequence $\{b_n\}$ satisfying $E[b_n]\to I_h$ and $H[b_n]\to h$, we apply Lemma 5.2 with $l=2I_h$. 
Then, by choosing a subsequence (still denoted by $\{b_n\}$), one of the 3 cases -- compactness, vanishing, dichotomy -- should occur.\\ \noindent \textit{Case 1}: dichotomy. \\ By Proposition 5.3, there exist $b_{1,n}, b_{2,n}\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ such that (5.5) holds. By $(5.5)_3$ and (3.20), $E[b_{i,n}]$ and $H[b_{i,n}]$ are uniformly bounded for $n$ and $\varepsilon$. By choosing a subsequence, we may assume that $h_{i,n}=H[b_{i,n}]\to \bar{h}_i$ for some $\bar{h}_i\in \mathbb{R}$ as $n\to\infty$ and $\varepsilon\to0$. By $(5.5)_1$ and $(5.5)_2$, we apply Proposition 5.4 to estimate \begin{align*} &|H[b_n]-H[b_{1,n}]-H[b_{2,n}]|\\ &\lesssim \max\left\{||b_{1,n}||_{L^{2}(\mathbb{R}^{3})},||b_{2,n}||_{L^{2}(\mathbb{R}^{3})},||b_{n}||_{L^{2}(\mathbb{R}^{3})}\right\}^{5/3}||b_{n}-b_{1,n}-b_{2,n}||_{L^{2}(\mathbb{R}^{3})}. \end{align*}\\ By $(5.5)_3$, \begin{align*} \limsup_{n\to \infty}|H[b_n]-H[b_{1,n}]-H[b_{2,n}]| \leq C\varepsilon^{1/2}. \end{align*}\\ Letting $\varepsilon\to 0$ implies that $h=\bar{h}_1+\bar{h}_2>0$. Since $\textrm{spt}\ b_{1,n}\cap \textrm{spt}\ b_{2,n}=\emptyset$, \begin{align*} E[b_n] &=E[b_{1,n}]+E[b_{2,n}]+E[b_{n}-b_{1,n}-b_{2,n}]+\int_{\mathbb{R}^{3}} (b_{1,n}+b_{2,n})\cdot (b_{n}-b_{1,n}-b_{2,n} )\textrm{d} x \\ &\geq E[b_{1,n}]+E[b_{2,n}]-\left(\sup_{i,n} ||b_{i,n}||_{L^{2}(\mathbb{R}^{3}) }\right) ||b_{n}-b_{1,n}-b_{2,n} ||_{L^{2}(\mathbb{R}^{3}) } \\ &\geq I_{h_{1,n}}+I_{h_{2,n}}-\left(\sup_{i,n} ||b_{i,n}||_{L^{2}(\mathbb{R}^{3}) } \right)||b_{n}-b_{1,n}-b_{2,n} ||_{L^{2}(\mathbb{R}^{3}) }. \end{align*}\\ By the lower semi-continuity of the minimum (4.17) and (5.5), letting $n\to\infty$ and $\varepsilon\to 0$ imply \begin{align*} I_h\geq I_{\bar{h}_1}+I_{\bar{h}_2}. \end{align*}\\ If $\bar{h}_1>0$ and $\bar{h}_2>0$, this contradicts the strict subadditivity (4.15). Thus $\bar{h}_i\leq 0$ for $i=1$ or $i=2$. We may assume $\bar{h}_1\leq 0$. Since \begin{align*} E[b_n] \geq E[b_{1,n}]+I_{h_{2,n}}-\left(\sup_{i,n}||b_{i,n}||_{L^{2}(\mathbb{R}^{3}) }\right) ||b_{n}-b_{1,n}-b_{2,n} ||_{L^{2}(\mathbb{R}^{3}) }, \end{align*}\\ and $\limsup_{n\to\infty}|2E[b_{1,n}]-\alpha|\leq C \varepsilon$ by (5.5), letting $n\to\infty$ and $\varepsilon\to0$ imply \begin{align*} I_{h}\geq \frac{\alpha}{2}+I_{\bar{h}_2}>I_{\bar{h}_2}. \end{align*}\\ Since $0<h=\bar{h}_1+\bar{h}_2\leq \bar{h}_2$, this contradicts the monotonicity (4.13). We conclude that dichotomy does not occur.\\ \noindent \textit{Case 2}: vanishing. \\ By H\"older's inequality and (3.2), \begin{align*} |H[b_n]|=4\pi\left|\int_{\mathbb{R}^{2}_{+}}(\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r \right| &\leq 4\pi ||(\phi_n-\phi_\infty)_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } ||G_n||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \\ &\leq 4\pi ||(\phi_n- r^{2} )_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \left(\sup_{n} ||b_n||_{L^{2}(\mathbb{R}^{3}) }\right). \end{align*}\\ For arbitrary $R>0$, we estimate \begin{align*} \int_{\mathbb{R}^{2}_{+}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r =\int_{\{r<R\}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r+\int_{\{r\geq R\}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r. 
\end{align*}\\ By the weighted Sobolev inequality (3.8), \begin{align*} &\int_{\{r\geq R\}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \leq \frac{1}{R} \int_{\{r\geq R\}}\phi_n^{4}\frac{1}{r^{4}}\textrm{d} z\textrm{d} r \leq \frac{C}{R}\sup_{n}||\nabla \phi_n||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) }^{4},\\ &\int_{\{r<R\}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \leq \int_{\{r<R\}}\phi_n^{10/3}\frac{1}{r^{11/3}}\textrm{d} z\textrm{d} r. \end{align*}\\ For $z_0\in \mathbb{R}$ and $R'>R$, we apply the weighted Sobolev inequality in $D(z_0,R')$ (3.14) to estimate \begin{align*} \int_{D(z_0,R')}|\phi_n|^{10/3}\frac{1}{r^{11/3}}\textrm{d} z\textrm{d} r \leq C \left(\int_{D(z_0,R')}|\nabla \phi_n|^{2}\frac{1}{r}\textrm{d} z\textrm{d} r \right)^{5/3} =C ||\nabla \phi_n||_{L^{2}(D(z_0,R');r^{-1}) }^{10/3}. \end{align*}\\ By summing up $D(z_0,R')$ for countable points $z_0\in \mathbb{R}$, \begin{align*} \int_{\{r<R\}}|\phi_n|^{10/3}\frac{1}{r^{11/3}}\textrm{d} z\textrm{d} r \leq C\left(\sup_n||\nabla \phi_n||^{2}_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } \right)\left(\sup_{z_0\in \mathbb{R}}||\nabla \phi_n||^{4/3}_{L^{2}(D(z_0,R') ;r^{-1}) } \right). \end{align*}\\ By (3.2), \begin{align*} \int_{\{r<R\}}|\phi_n|^{10/3}\frac{1}{r^{11/3}}\textrm{d} z\textrm{d} r \leq C\left(\sup_n||b_n||^{2}_{L^{2}(\mathbb{R}^{3}) } \right)\left(\sup_{z_0\in \mathbb{R}}||b_n||^{4/3}_{L^{2}(B(z_0e_z,R'))} \right). \end{align*}\\ The right-hand side vanishes as $n\to\infty$ by (5.3). Thus \begin{align*} \limsup_{n\to\infty}\int_{\mathbb{R}^{2}_{+}}\left(\phi_n-r^{2}\right)_{+}^{2}\frac{1}{r}\textrm{d} z\textrm{d} r\leq \frac{C}{R} \left(\sup_{n}||b_n||_{L^{2}(\mathbb{R}^{3} ) }^{4}\right). \end{align*}\\ Letting $R\to\infty$ implies that $\lim_{n\to\infty}H[b_n]=0$. This contradicts $H[b_n]\to h>0$. We conclude that vanishing does not occur.\\ \noindent \textit{Case 3}: compactness. \\ It remains to show compactness of the sequence $\{b_n\}$ satisfying (5.2). By translation, we may assume that (5.2) holds for $z_n=0$. We may assume that for all $n$, \begin{align} 2\pi \int_{\mathbb{R}^{2}_{+}\backslash D(0,R_{\varepsilon})}\left(|\nabla \phi_n|^{2}+|G_n|^{2}\right)\frac{1}{r}\textrm{d} z\textrm{d} r\leq \varepsilon. \end{align}\\ By the Rellich--Kondrakov theorem in the weighted space (3.9), there exists a subsequence and $b=\nabla \times ( \phi\nabla \theta)+G\nabla \theta$ such that $b_n\rightharpoonup b$ in $L^{2}(\mathbb{R}^{3})$ and $\phi_n\to \phi$ in $L^{2}_{\textrm{loc}}(\overline{\mathbb{R}^{2}_{+}}; r^{-1})$. For $D=D(0, R_{\varepsilon} )$, \begin{align*} H[b_n]=4\pi \int_{D}(\phi_n-\phi_\infty)_+G_n\frac{1}{r}\textrm{d} z\textrm{d} r+4\pi \int_{\mathbb{R}^{2}_{+}\backslash D}(\phi_n-\phi_\infty)_+G_n\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ In a similar way as we proved Theorem 4.9, \begin{align*} \lim_{n\to\infty}\int_{D}(\phi_n-\phi_\infty)_+G_n\frac{1}{r}\textrm{d} z\textrm{d} r= \int_{D}(\phi-\phi_\infty)_+G\frac{1}{r}\textrm{d} z\textrm{d} r. \end{align*}\\ By H\"older's inequality, (3.18), (3.2) and (5.11), \begin{align*} \left|\int_{\mathbb{R}^{2}_{+}\backslash \overline{D}} (\phi_n-\phi_\infty)_{+}G_n\frac{1}{r}\textrm{d} z\textrm{d} r\right| \leq ||(\phi_n-\phi_\infty)_{+}||_{L^{2}(\mathbb{R}^{2}_{+};r^{-1}) } ||G_n||_{L^{2}(\mathbb{R}^{2}_{+}\backslash \overline{D}; r^{-1}) } \lesssim \left(\sup_{n }||b_n||^{5/3}_{L^{2}(\mathbb{R}^{3}) }\right)\varepsilon^{1/2}. 
\end{align*}\\ Thus \begin{align*} |H[b]-h|=\lim_{n\to\infty}\left|H[b]-H[b_n] \right|\leq \left(\sup_{n }||b_n||^{5/3}_{L^{2}(\mathbb{R}^{3}) }+||b||^{5/3}_{L^{2}(\mathbb{R}^{3}) } \right)\varepsilon^{1/2}. \end{align*}\\ By letting $\varepsilon\to 0$, $H[b]=h$ and \begin{align*} I_h \leq E[b]\leq \liminf_{n\to\infty}E[b_n]=I_h. \end{align*}\\ Thus $b$ is a minimizer of $I_h$. By $\lim_{n\to \infty}||b_n||_{L^{2}(\mathbb{R}^{3}) }= ||b||_{L^{2}(\mathbb{R}^{3}) }$, $b_n\to b$ in $L^{2}(\mathbb{R}^{3})$. The proof is now complete. \end{proof} \vspace{5pt} \begin{thm} Let $h\in \mathbb{R}$. Let $\{(v_n,b_n)\}\subset L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ be a sequence such that ${\mathcal{E}}[v_n,b_n]\to I_h$ and $H[b_n]\to h$. There exist $\{n_k\}\subset \mathbb{N}$, $\{z_k\}\subset \mathbb{R}$ and $b \in S_h$ such that $(v_{n_k}, b_{n_k}(\cdot+z_ke_z))\to (0,b)$ in $L^{2}(\mathbb{R}^{3})$. \end{thm} \vspace{5pt} \begin{proof} For $h_n=H[b_n]$, \begin{align*} I_{h_n}\leq E[b_n]\leq {\mathcal{E}}[v_n,b_n]. \end{align*}\\ By the lower semi-continuity of the minimum (4.17), letting $n\to\infty$ implies that $E[b_n]\to I_h$ and $E[v_n]\to 0$. We apply Theorem 5.5 and conclude. \end{proof} \vspace{5pt} \begin{rem} The variational problem (1.13) is also well-defined with the function $\phi_\infty=Wr^{2}/2+\gamma$ for $W>0$ and $\gamma\geq 0$ as noted in Remark 3.14. The same compactness theorem as Theorem 5.6 holds for minimizing sequences to (1.13). Namely, for given $h\in \mathbb{R}$, $W>0$, $\gamma\geq 0$, and a sequence $\{(v_n,b_n)\}\subset L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ satisfying \begin{align*} &\frac{1}{2}\int_{\mathbb{R}^{3}}\left(|v_n|^{2}+|b_n|^{2} \right)\textrm{d} x\to I_{h,W,\gamma},\\ &2\int_{\mathbb{R}^{3}}(\phi_n-\phi_\infty )_+\frac{G_n}{r^{2}}\textrm{d} x\to h, \end{align*}\\ there exist $\{n_k\}$, $\{z_k\}$ and a minimizer $b\in S_{h,W,\gamma}$ of $I_{h,W,\gamma}$ such that $(v_{n_k},b_{n_k}(\cdot+z_ke_z) )\to (0,b)$ in $L^{2}(\mathbb{R}^{3})$. In fact, for $\tilde{h}=(W/2)^{-2}h$ and $\tilde{\gamma}=(W/2)^{-1}\gamma$, $b\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ and $\tilde{b}=(W/2)^{-1}b$ satisfy \begin{align*} \int_{\mathbb{R}^{3}}|\tilde{b}|^{2}\textrm{d} x &=\left(\frac{W}{2}\right)^{-2}\int_{\mathbb{R}^{3}}|b|^{2}\textrm{d} x,\\ \int_{\mathbb{R}^{3}}\left(\tilde{\phi}-r^{2}-\tilde{\gamma} \right)_+\frac{\tilde{G}}{r^{2}}\textrm{d} x &=\left(\frac{W}{2}\right)^{-2}\int_{\mathbb{R}^{3}}\left(\phi-\frac{W}{2}r^{2}-\gamma \right)_+\frac{G}{r^{2}}\textrm{d} x. \end{align*}\\ Minimizers of $I_{h,W,\gamma}$ are those of $I_{\tilde{h},2,\tilde{\gamma}}$ and vice versa. The scaling of the minimum is \begin{align*} I_{h,W,\gamma}=\left(\frac{W}{2}\right)^{-2}I_{\tilde{h},2,\tilde{\gamma}}. \end{align*}\\ The compactness of minimizing sequences to $I_{h,W,\gamma}$ is derived from that for $I_{\tilde{h},2,\tilde{\gamma}}$ by Theorem 5.5. \end{rem} \section{Leray--Hopf solutions} We show that axisymmetric Leray--Hopf solutions to (1.9) satisfy the equalities of generalized magnetic helicity (1.18) and generalized magnetic mean-square potential (1.19) by using the vector potential equations. We provide proof of the existence of axisymmetric Leray--Hopf solutions in Appendix A. \subsection{Existence} Leray--Hopf solutions to viscous and resistive MHD in $\mathbb{R}^{3}$ can be defined similarly as those for the Navier--Stokes equations, e.g., \cite[p.718]{SSS96}, cf. \cite{Masuda1984}, \cite{Sohr}, \cite{Lemarie}. 
We define Leray--Hopf solutions to (1.9) with $u_{\infty}, B_{\infty}\in \mathbb{R}^{3}$ and their weak ideal limits by adapting the definitions for the case of bounded domains in Definitions 2.9 and 2.11. \begin{defn}[Leray--Hopf solutions] Let $u_{\infty}, B_{\infty}\in \mathbb{R}^{3}$. Let $v_0,b_0\in L^{2}_{\sigma}(\mathbb{R}^{3})$. Let \begin{align*} (v, b)\in C_{w}([0,T]; L^{2}_{\sigma}(\mathbb{R}^{3}) )\cap L^{2}(0,T; H^{1}(\mathbb{R}^{3}) ). \end{align*}\\ Suppose that $(v_t, b_t) \in L^{1}(0,T; (L^{2}_{\sigma}\cap H^{1})(\mathbb{R}^{3})^{*})$ and \begin{align*} &\langle v_t,\xi\rangle+\int_{\mathbb{R}^{3}}((v+u_{\infty})\cdot \nabla v-(b+B_{\infty})\cdot \nabla b)\cdot \xi\textrm{d} x+\nu \int_{\mathbb{R}^{3}}\nabla v:\nabla \xi\textrm{d} x=0,\\ &\langle b_t,\zeta\rangle+\int_{\mathbb{R}^{3}}(b\times v+B_{\infty}\times v+b\times u_{\infty} )\cdot \nabla \times \zeta\textrm{d} x +\mu \int_{\mathbb{R}^{3}}\nabla \times b\cdot \nabla \times \zeta \textrm{d} x=0, \end{align*}\\ for a.e. $t\in [0,T]$ and every $\xi, \zeta\in L^{2}_{\sigma}\cap H^{1} (\mathbb{R}^{3})$. Suppose furthermore that $(v(\cdot,0),b(\cdot,0))=(v_0,b_0)$ and \begin{align} \frac{1}{2}\int_{\mathbb{R}^{3}}\left(|v|^{2}+|b|^{2}\right) \textrm{d} x +\int_{0}^{t}\int_{\mathbb{R}^{3}}\left(\nu |\nabla v|^{2}+\mu |\nabla b|^{2}\right) \textrm{d} x\textrm{d} s \leq \frac{1}{2}\int_{\mathbb{R}^{3}}\left( |v_0|^{2}+|b_0|^{2} \right) \textrm{d} x, \end{align}\\ for all $t\in [0,T]$. Then, we call $(v,b)$ a Leray--Hopf solution to (1.9). \end{defn} \begin{defn}[Weak ideal limits] Let $(v_j,b_j)$ be a Leray--Hopf solution to (1.9) for $\nu_j,\mu_j>0$ and $(v_{0,j},b_{0,j})$ such that $(v_{0,j},b_{0,j})\rightharpoonup (v_{0},b_{0})$ in $L^{2}(\mathbb{R}^{3})$ as $(\nu_j,\mu_j)\to (0,0)$. Assume that \begin{align} (v_j,b_j)\overset{\ast}{\rightharpoonup} (v,b) \quad \textrm{in}\ L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) ). \end{align}\\ Then, we call $(v,b)$ a weak ideal limit of $(v_j,b_j)$. If instead $\nu_j=\nu>0$ for every $j$ and $\mu_j\to 0$, we call $(v,b)$ a weak nonresistive limit of $(v_j,b_j)$. \end{defn} We construct Leray--Hopf solutions to (1.9) by Leray's method in Appendix A. They are axisymmetric for constants $u_{\infty},B_{\infty}$ parallel to $e_z$ and axisymmetric data $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$. \begin{thm} Let $B_{\infty}=-2e_{z}$. Let $u_{\infty}$ be a constant parallel to $e_z$. For $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$, there exists an axisymmetric Leray--Hopf solution to (1.9). \end{thm} \subsection{Vector potential equations} For a smooth solution $(v,b)$ to (1.9), $(u,B)=(v+u_\infty,b+B_{\infty})$ is a solution to (1.1) and (1.7). We apply the Clebsch representation (3.16) for $B$ and its vector potential $A$ with the function $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. By the equation $(1.1)_2$, the vector potential equations can be expressed, for some potential $Q$, as \begin{equation} \begin{aligned} A_{t}+B\times u+\nabla Q&=-\mu \nabla \times B. \end{aligned} \end{equation}\\ The $\theta$-component of this equation is the drift-diffusion equation for $\Phi=\phi-\phi_{\infty}$, where $\phi$ is the flux function of $b$, \begin{align} \Phi_{t}+u\cdot \nabla \Phi&=\mu\left(\Delta -\frac{2}{r}\partial_r\right)\Phi. \end{align}\\ The quantities \begin{align} \int_{\mathbb{R}^{3}}g(\Phi)\frac{G}{r^{2}}\textrm{d} x,\quad \int_{\mathbb{R}^{3}}g(\Phi)\textrm{d} x, \end{align}\\ are conserved in the zero-resistivity limit $\mu=0$ for arbitrary regular functions $g=g(s)$ due to the transport equation (6.4). 
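For instance, for the second quantity in (6.5), conservation at $\mu=0$ can be seen formally as follows (a heuristic computation for smooth, sufficiently decaying solutions): multiplying (6.4) with $\mu=0$ by $\dot{g}(\Phi)$ gives $\partial_t g(\Phi)+u\cdot \nabla g(\Phi)=0$, and hence, since $\nabla \cdot u=0$, \begin{align*} \frac{\textrm{d}}{\textrm{d} t}\int_{\mathbb{R}^{3}}g(\Phi)\textrm{d} x=-\int_{\mathbb{R}^{3}}u\cdot \nabla g(\Phi)\textrm{d} x=-\int_{\mathbb{R}^{3}}\nabla \cdot \left(g(\Phi)u\right)\textrm{d} x=0. \end{align*}\\ Conservation of the first quantity in (6.5) uses in addition the identity (6.6) below. 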
We apply the identity for axisymmetric magnetic fields \begin{align} A_t\cdot B\dot{g}(\Phi)=\partial_t \left(g(\Phi)\frac{G}{r^{2}}\right)+\nabla \cdot (g(\Phi)\nabla \theta\times A_t). \end{align}\\ This identity follows from (3.16) and $B\dot{g}(\Phi)=\nabla \times (g(\Phi)\nabla \theta)+G\dot{g}(\Phi)\nabla \theta$. We can observe generalized magnetic helicity conservation from (6.6) by multiplying the solenoidal $B\dot{g}(\Phi)$ by (6.3) and integration by parts. We take $g(s)=2s_+$ for generalized magnetic helicity and $g(s)=s_{+}^{2}$ for generalized magnetic mean-square potential. Their equalities with resistivity $\mu>0$ are \begin{align} \int_{\mathbb{R}^{3}}\Phi_{+}\frac{G}{r^{2}}\textrm{d} x +\mu \int_{0}^{t}\int_{\mathbb{R}^{3}}\nabla \times B\cdot B1_{(0,\infty)}(\Phi) \textrm{d} x\textrm{d} s &=\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x, \\ \int_{\mathbb{R}^{3}}\Phi^{2}_{+}\textrm{d} x +2\mu \int_{0}^{t}\int_{\mathbb{R}^{3}} |\nabla \Phi_{+}|^{2} \textrm{d} x\textrm{d} s &= \int_{\mathbb{R}^{3}}\Phi^{2}_{0,+} \textrm{d} x. \end{align}\\ We demonstrate (6.7) and (6.8) for axisymmetric Leray--Hopf solutions by showing that the equations (6.3) hold on $L^{2}(\mathbb{R}^{3})$ for a.e. $t\in [0,T]$. In terms of the projection operator $\mathbb{P}$, associated with (2.1) in $\Omega=\mathbb{R}^{3}$, the equations (6.3) can be expressed as \begin{align*} A_t+\mathbb{P}(B\times u)=-\mu\nabla \times B\quad \textrm{on}\ L^{2}_{\sigma}(\mathbb{R}^{3}). \end{align*}\\ At the heuristic level, the conditions $B\times u, \nabla \times B=\nabla \times b\in L^{2}(\mathbb{R}^{3})$ imply that $\nabla Q=-(1-\mathbb{P})(B\times u)\in L^{2}(\mathbb{R}^{3})$. Hence a distributional solution $A$ of (6.3) satisfies $A_t\in L^{2}(\mathbb{R}^{3})$ and (6.3) holds on $L^{2}(\mathbb{R}^{3})$. We approximate $A$ for the time variable and derive (6.3) from the definition of Leray--Hopf solutions. \begin{prop} For axisymmetric Leray--Hopf solutions $(v,b)$ to (1.9) for $B_{\infty}=-2e_z$ and $u_{\infty}$ parallel to $e_z$, $(u,B)=(v+u_{\infty},b+B_{\infty})$ satisfies \begin{align} \begin{aligned} B\times u\in L^{4/3}(0,T; L^{2}(\mathbb{R}^{3})), \\ \nabla \times B\in L^{2}(0,T; L^{2}(\mathbb{R}^{3})). \end{aligned} \end{align}\\ Moreover, for $A$ defined by (3.16), there exists some $Q$ such that \begin{align} A_t, \nabla Q\in L^{4/3}(0,T; L^{2}(\mathbb{R}^{3})), \end{align}\\ and (6.3) holds on $L^{2}(\mathbb{R}^{3})$ for a.e. $t\in [0,T]$. \end{prop} \begin{proof} For the vector potential $a$ of $b$ defined by (3.3), \begin{align*} A_t&=a_t,\\ B\times u&=b\times v+b\times u_{\infty}+B_{\infty}\times v, \\ \nabla \times B&=\nabla \times b. \end{align*}\\ By $L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}))\cap L^{2}(0,T; H^{1}(\mathbb{R}^{3}) )\subset L^{8/3}(0,T; L^{4} (\mathbb{R}^{3}))$, (6.9) holds. By the definition of Leray--Hopf solutions, \begin{align*} b_t+\nabla \times (B\times u)+\mu \nabla \times (\nabla \times b)=0\quad \textrm{on}\ (L^{2}_{\sigma}\cap H^{1}) (\mathbb{R}^{3})^{*}, \end{align*}\\ for a.e. $t\in [0,T]$. Thus \begin{align*} \nabla \times (a_{t}+(B\times u)+\mu \nabla \times b)=0\quad \textrm{on}\ (L^{2}_{\sigma}\cap H^{1}) (\mathbb{R}^{3})^{*}. \end{align*}\\ We take arbitrary $0<\delta<T/2$ and $\varepsilon<\delta$. By the mollifier in Lemma 2.15, \begin{align*} \nabla \times (a_{t}^{\varepsilon}+(B\times u)^{\varepsilon}+\mu \nabla \times b^{\varepsilon})=0\quad \textrm{on}\ (L^{2}_{\sigma}\cap H^{1}) (\mathbb{R}^{3})^{*}, \end{align*}\\ for $t\in (\delta, T-\delta)$. 
By Lemma 3.5, $a\in L^{\infty}(0,T; L^{6}(\mathbb{R}^{3}))$ and $a_{t}^{\varepsilon}\in C^{\infty}(\delta,T-\delta; L^{6}(\mathbb{R}^{3}))$. For an arbitrary $\zeta\in L^{2}_{\sigma} \cap H^{1}(\mathbb{R}^{3})$ satisfying $\nabla \times \zeta \in L^{6/5}(\mathbb{R}^{3})$, by integration by parts, \begin{align*} \int_{\mathbb{R}^{3}}(a_{t}^{\varepsilon}+(B\times u)^{\varepsilon}+\mu \nabla \times b^{\varepsilon} ) \cdot \nabla \times \zeta \textrm{d} x=0. \end{align*}\\ We take an arbitrary $\xi \in L^{6/5}\cap L^{2}(\mathbb{R}^{3})$ and apply Proposition 3.8 to take $\zeta\in L^{2}_{\sigma}\cap L^{6}(\mathbb{R}^{3})$ such that $\nabla \zeta\in L^{6/5}\cap L^{2}(\mathbb{R}^{3})$ and $\nabla \times \zeta=\mathbb{P}\xi$. Then, for $\nabla Q^{\varepsilon}=-(I-\mathbb{P})(B\times u)^{\varepsilon}$, \begin{align*} 0=\int_{\mathbb{R}^{3}}(a_{t}^{\varepsilon}+(B\times u)^{\varepsilon}+\mu \nabla \times b^{\varepsilon} ) \cdot \mathbb{P}\xi \textrm{d} x &=\int_{\mathbb{R}^{3}}(a_{t}^{\varepsilon}+\mathbb{P}(B\times u)^{\varepsilon}+\mu \nabla \times b^{\varepsilon} ) \cdot \xi \textrm{d} x \\ &=\int_{\mathbb{R}^{3}}(a_{t}^{\varepsilon}+(B\times u)^{\varepsilon}+\nabla Q^{\varepsilon}+\mu \nabla \times b^{\varepsilon} ) \cdot \xi \textrm{d} x. \end{align*}\\ Since $(B\times u)^{\varepsilon}\in L^{2}(\mathbb{R}^{3})$ for $t\in (\delta,T-\delta)$ and $\xi \in L^{6/5}\cap L^{2}(\mathbb{R}^{3})$ is arbitrary, $\nabla Q^{\varepsilon}, a^{\varepsilon}_{t}\in L^{2}(\mathbb{R}^{3})$ and \begin{align*} a^{\varepsilon}_{t}+(B\times u)^{\varepsilon}+\nabla Q^{\varepsilon}=-\mu\nabla \times b^{\varepsilon}\quad \textrm{on}\ L^{2}(\mathbb{R}^{3}). \end{align*} \\ By Lemma 2.15, letting $\varepsilon\to0$ implies that \begin{align*} (B\times u)^{\varepsilon}&\to B\times u\quad \textrm{in}\ L^{4/3}(\delta,T-\delta; L^{2}(\mathbb{R}^{3}) ),\\ \nabla \times b^{\varepsilon}&\to \nabla \times b \quad \textrm{in}\ L^{2}(\delta,T-\delta; L^{2}(\mathbb{R}^{3}) ). \end{align*}\\ Thus $\nabla Q=-(1-\mathbb{P})(B\times u)$ and $a_{t}=A_t$ satisfy (6.10). Since $0<\delta<T/2$ is arbitrary, (6.3) holds for a.e. $t\in [0,T]$. \end{proof} \begin{prop} The equation (6.4) holds on $L^{2}_{\textrm{loc}}(\mathbb{R}^{3})$ for a.e. $t\in [0,T]$ for axisymmetric Leray--Hopf solutions to (1.9) for $B_{\infty}=-2e_z$ and $u_\infty$ parallel to $e_z$. Moreover, \begin{align} \partial_t\Phi_{+}^{2}+u\cdot \nabla \Phi_{+}^{2}=\mu\left(\Delta-\frac{2}{r}\partial_r \right)\Phi_{+}^{2}-2\mu |\nabla \Phi_{+}|^{2} \quad \textrm{on}\ L^{1}_{\textrm{loc}}(\mathbb{R}^{3}). \end{align} \end{prop} \begin{proof} By (3.16), \begin{align*} A_t\cdot r\nabla \theta&=\frac{1}{r} \Phi_t,\\ (B\times u)\cdot r\nabla \theta&=\frac{1}{r} u\cdot \nabla \Phi, \\ \nabla \times B\cdot r\nabla \theta&=-\frac{1}{r}\left(\Delta-\frac{2}{r}\partial_r \right)\Phi. \end{align*}\\ Each term belongs to $L^{2}(\mathbb{R}^{3})$ by (6.9) and (6.10). By multiplying $r\nabla \theta$ by (6.3), \begin{align*} \frac{1}{r}\left(\Phi_t+u\cdot \nabla \Phi-\mu \left(\Delta -\frac{2}{r}\partial_r\right)\Phi\right)=0\quad \textrm{on}\ L^{2}(\mathbb{R}^{3}). \end{align*}\\ Thus (6.4) holds on $L^{2}_{\textrm{loc}}(\mathbb{R}^{3})$. Since $\Phi_{+}\in L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}))$ by (3.19) and (3.2), by multiplying $2\Phi_{+}$ by the above equation, (6.11) follows. \end{proof} \subsection{Equalities} We show the equalities (6.7) and (6.8) for axisymmetric Leray--Hopf solutions to (1.9). 
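Before giving the rigorous proofs, we record the formal computation behind them. Since $(B\times u)\cdot B=0$ and $B1_{(0,\infty)}(\Phi)$ is solenoidal, testing (6.3) with $B1_{(0,\infty)}(\Phi)$ and using the identity (6.6) with $g(s)=s_{+}$ formally yield \begin{align*} \frac{\textrm{d}}{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_{+}\frac{G}{r^{2}}\textrm{d} x=-\mu \int_{\mathbb{R}^{3}}\nabla \times B\cdot B1_{(0,\infty)}(\Phi)\textrm{d} x, \end{align*}\\ while integrating (6.11) over $\mathbb{R}^{3}$, with $\nabla \cdot u=0$ and $\Phi_{+}=0$ on $\{r=0\}$, formally yields \begin{align*} \frac{\textrm{d}}{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_{+}^{2}\textrm{d} x=-2\mu \int_{\mathbb{R}^{3}}|\nabla \Phi_{+}|^{2}\textrm{d} x. \end{align*}\\ Integration in time gives (6.7) and (6.8); the propositions below justify these computations for axisymmetric Leray--Hopf solutions. 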
\begin{prop} For axisymmetric Leray--Hopf solutions to (1.9) for $B_{\infty}=-2e_z$ and $u_\infty$ parallel to $e_z$, \begin{align} \frac{\textrm{d} }{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_+\frac{G}{r^{2}}\textrm{d} x =-\mu \int_{\mathbb{R}^{3}} \nabla \times B\cdot B 1_{(0,\infty)}(\Phi)\textrm{d} x, \end{align}\\ in the sense of distributions. \end{prop} \begin{proof} The field $B1_{(0,\infty)}(\Phi)=\nabla \times (\Phi_+\nabla \theta)+G1_{(0,\infty)}(\Phi)\nabla \theta$ is solenoidal. By (3.17) and (3.2), \begin{align} ||B1_{(0,\infty)}(\Phi)||_{L^{2}(\mathbb{R}^{3})}=||(b+B_\infty)1_{(0,\infty)}(\Phi)||_{L^{2}(\mathbb{R}^{3})}\lesssim ||b||_{L^{2}(\mathbb{R}^{3})}. \end{align}\\ Since $(B\times u)\cdot B=0$, multiplying (6.3) by $B1_{(0,\infty)}(\Phi)\in L^{\infty}(0,T; L^{2}_{\sigma}(\mathbb{R}^{3}))$ gives \begin{align*} A_{t}\cdot B1_{(0,\infty)}(\Phi)+\nabla Q\cdot B1_{(0,\infty)}(\Phi)=-\mu \nabla \times B\cdot B1_{(0,\infty)}(\Phi)\quad \textrm{on}\ L^{1}(\mathbb{R}^{3}), \end{align*}\\ for a.e. $t\in [0,T]$. Since $B1_{(0,\infty)}(\Phi)$ is solenoidal, the term involving $\nabla Q$ integrates to zero, and thus \begin{align*} \int_{\mathbb{R}^{3}}A_{t}\cdot B1_{(0,\infty)}(\Phi)\textrm{d} x =-\mu \int_{\mathbb{R}^{3}}\nabla \times B\cdot B1_{(0,\infty)}(\Phi) \textrm{d} x. \end{align*}\\ For $\chi\in C^{\infty}_{c}[0,\infty)$ satisfying $\chi=1$ in $[0,1]$ and $\chi=0$ in $[2,\infty)$, we set $\chi_R(x)=\chi(R^{-1}|x|)$. For arbitrary $\rho\in C^{\infty}_{c}(0,T)$, multiplying (6.6) with $g(s)=s_{+}$ by $\chi_R\rho$ and integrating by parts give \begin{align*} \int_{0}^{T}\int_{\mathbb{R}^{3}}A_t \cdot B1_{(0,\infty)}(\Phi)\chi_R\rho\textrm{d} x\textrm{d} t =-\int_{0}^{T}\int_{\mathbb{R}^{3}}\Phi_+ \frac{G}{r^{2}}\chi_R\dot{\rho} \textrm{d} x\textrm{d} t-\int_{0}^{T}\int_{\mathbb{R}^{3}}( \Phi_+\nabla \theta \times A_t)\cdot \nabla \chi_R\rho \textrm{d} x\textrm{d} t. \end{align*}\\ Since $\Phi_+\nabla \theta\in L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}))$ and $A_t\in L^{4/3}(0,T; L^{2}(\mathbb{R}^{3}) )$ by (3.18), (3.2) and (6.10), $\Phi_+\nabla\theta \times A_t \in L^{4/3}(0,T; L^{1}(\mathbb{R}^{3}) )$. The last term vanishes as $R\to\infty$. Since $\rho\in C^{\infty}_{c}(0,T)$ is arbitrary, \begin{align*} \frac{\textrm{d} }{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_+\frac{G}{r^{2}}\textrm{d} x =-\mu \int_{\mathbb{R}^{3}}\nabla \times B\cdot B1_{(0,\infty)}(\Phi) \textrm{d} x, \end{align*}\\ in the sense of distributions. Thus (6.12) holds. \end{proof} \begin{lem} The equality (6.7) holds for axisymmetric Leray--Hopf solutions to (1.9) for $B_\infty=-2e_z$ and $u_\infty$ parallel to $e_z$ and for all $t\in [0,T]$. Moreover, \begin{equation} \begin{aligned} \left\|\frac{\textrm{d} }{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_+\frac{G}{r^{2}}\textrm{d} x\right\|_{L^{2}(0,T)} \leq C\mu^{1/2}\left(||v_0||_{L^{2}(\mathbb{R}^{3}) }^{2}+||b_0||_{L^{2}(\mathbb{R}^{3}) }^{2} \right). \end{aligned} \end{equation} \end{lem} \begin{proof} By (6.13), \begin{align*} \left|\int_{\mathbb{R}^{3}} \nabla \times B\cdot B 1_{(0,\infty)}(\Phi)\textrm{d} x\right| =\left|\int_{\mathbb{R}^{3}} \nabla \times b\cdot B 1_{(0,\infty)}(\Phi)\textrm{d} x\right| \lesssim ||\nabla \times b||_{L^{2}} || b||_{L^{2}}. \end{align*}\\ By (6.12) and (6.1), \begin{align*} \left\|\frac{\textrm{d} }{\textrm{d} t}\int_{\mathbb{R}^{3}}\Phi_{+}\frac{G}{r^{2}}\textrm{d} x\right\|_{L^{2}(0,T) } \lesssim \mu ||\nabla \times b||_{L^{2}(0,T; L^{2}(\mathbb{R}^{3}))} || b||_{L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}))} \lesssim \mu^{1/2}\left(||v_0||_{L^{2}(\mathbb{R}^{3}) }^{2}+||b_0||_{L^{2}(\mathbb{R}^{3}) }^{2} \right). \end{align*}\\ Thus (6.14) holds. 
The equality (6.7) follows by integrating (6.12) in time. \end{proof} \begin{lem} The equality (6.8) holds for axisymmetric Leray--Hopf solutions to (1.9) for $B_\infty=-2e_z$ and $u_\infty$ parallel to $e_z$ and for all $t\in [0,T]$. \end{lem} \begin{proof} We take nonincreasing $\chi\in C^{\infty}_{c}[0,\infty)$ such that $\chi=1$ in $[0,1]$ and $\chi=0$ in $[2,\infty)$ and set $\chi_R(x)=\chi(R^{-1}|x|)$. Since $\Phi_+=(-\gamma)_+=0$ on $\{r=0\}$, multiplying (6.11) by $\chi_R$ and integrating by parts give \begin{align*} &\int_{\mathbb{R}^{3}}\Phi_{+}^{2}\chi_R\textrm{d} x-\int_{\mathbb{R}^{3}}\Phi_{0,+}^{2}\chi_R\textrm{d} x +2\mu \int_{0}^{t}\int_{\mathbb{R}^{3}} |\nabla \Phi_{+}|^{2}\chi_R \textrm{d} x\textrm{d} s \\ &=\mu\int_{0}^{t}\int_{\mathbb{R}^{3}} \Phi_{+}^{2}\left(\Delta +\frac{2}{r}\partial_r\right)\chi_R\textrm{d} x\textrm{d} s+\int_{0}^{t}\int_{\mathbb{R}^{3}}u\cdot \nabla \chi_R \Phi_{+}^{2}\textrm{d} x\textrm{d} s. \end{align*}\\ Since $\Phi_{+}\in L^{\infty}(0,T; L^{q}(\mathbb{R}^{3}) )$ for $1\leq q<\infty$ by (3.19) and $u=v+u_{\infty}$ for $v\in L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) )$, the right-hand side vanishes as $R\to\infty$. The equality (6.8) follows from the monotone convergence theorem. \end{proof} \section{Weak ideal limits} We demonstrate generalized magnetic helicity conservation at weak ideal limits of axisymmetric Leray--Hopf solutions to (1.9) for fixed initial data in Theorem 7.5. We show local convergence of the flux by the vector potential equations (6.3) and the Aubin--Lions lemma, and conservation of generalized magnetic mean-square potential at weak ideal limits. The equality of generalized magnetic mean-square potential for axisymmetric Leray--Hopf solutions (6.8) strengthens the flux convergence from local to global. By taking the limit in the equality of generalized magnetic helicity for axisymmetric Leray--Hopf solutions (6.7), we show the desired generalized magnetic helicity conservation at weak ideal limits. \subsection{Local convergence} We estimate the time derivative of the vector potential by the equations (6.3) and apply the Aubin--Lions lemma. \begin{prop} Let $(v_j,b_j)$ be an axisymmetric Leray--Hopf solution to (1.9) for fixed $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ with $(\nu_j,\mu_j)$, $B_{\infty}=-2e_z$, and $u_{\infty}$ parallel to $e_z$. Let $(v,b)$ be a weak ideal limit of $(v_j,b_j)$ as $(\nu_j,\mu_j)\to (0,0)$. Let $A_j$ and $A$ be vector potentials of $B_j=b_j+B_{\infty}$ and $B=b+B_{\infty}$ defined by (3.16). There exists a subsequence such that \begin{align} A_j\to A\quad \textrm{in}\ L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}) ). \end{align}\\ In particular, for flux functions $\phi_j$ of $b_j$ and $\phi$ of $b$, \begin{align} \frac{1}{r}\phi_j\to \frac{1}{r}\phi\quad \textrm{in}\ L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}) ). \end{align} \end{prop} \begin{proof} By Proposition 6.4, the vector potential $a_j$ of $b_j$ defined by (3.3) and $u_j=v_j+u_{\infty}$ satisfy \begin{align*} \partial_t a_j+\mathbb{P}(B_j\times u_j)=-\mu_j\nabla \times b_j\quad \textrm{on}\ L^{2}_{\sigma}(\mathbb{R}^{3}), \end{align*}\\ for a.e. $t\in [0,T]$, where $\mathbb{P}$ is the projection operator associated with (2.1). We take an open ball $B(0,R)\subset \mathbb{R}^{3}$ with radius $R>0$ and an arbitrary $\xi\in W^{1,4}_{0}(B(0,R))$. We extend $\xi$ to $\mathbb{R}^{3}$ by the zero extension and denote it by the same symbol. 
By multiplying the above equation by $\xi$ and integrating by parts, \begin{align*} \int_{B(0,R)}\partial_t a_{j}\cdot \xi\textrm{d} x+\int_{B(0,R)}B_j\times u_j\cdot \mathbb{P} \xi\textrm{d} x =-\mu_j\int_{B(0,R)}b_j\cdot \nabla \times \xi\textrm{d} x. \end{align*}\\ By the Sobolev embedding and boundedness of $\mathbb{P}$ on $L^{4}(\mathbb{R}^{3})$, \begin{align*} ||\mathbb{P} \xi||_{L^{\infty}(\mathbb{R}^{3})}\lesssim ||\mathbb{P} \xi||_{W^{1,4}(\mathbb{R}^{3})}\lesssim || \xi||_{W^{1,4}(\mathbb{R}^{3})}=|| \xi||_{W^{1,4}(B(0,R) )}. \end{align*}\\ Thus \begin{align*} \left|\int_{B(0,R)}\partial_t a_{j}\cdot \xi\textrm{d} x\right| &\lesssim ||B_j\times u_j||_{L^{1}(B(0,R)) }|| \xi||_{W^{1,4}(B(0,R))} +\mu_j ||b_j||_{L^{2}(\mathbb{R}^{3}) }||\nabla \xi||_{L^{2}(B(0,R))} \\ &\lesssim (||B_j\times u_j||_{L^{1}(B(0,R)) } +\mu_j ||b_j||_{L^{2}(\mathbb{R}^{3}) })|| \xi||_{W^{1,4}(B(0,R))}. \end{align*}\\ By taking the supremum over $\xi$, \begin{align*} ||\partial_t a_j||_{W^{1,4}_{0}(B(0,R))^{*} }\lesssim ||B_j\times u_j||_{L^{1}(B(0,R)) } +\mu_j ||b_j||_{L^{2}(\mathbb{R}^{3}) }. \end{align*}\\ Since $B_j\times u_j=b_j\times v_j+B_{\infty}\times v_j+b_j\times u_{\infty}$ is uniformly bounded in $L^{\infty}(0,T; L^{1}(B(0,R)) )$ by (6.1), so is $\partial_t a_j$ in $L^{\infty}(0,T; W^{1,4}_0(B(0,R))^{*} )$. By Lemma 2.16, there exists a subsequence such that for the vector potential $a$ of $b$ defined by (3.3), \begin{align*} a_j\to a\quad \textrm{in}\ L^{2}(0,T; L^{2}(B(0,R))). \end{align*}\\ Since $R>0$ is arbitrary, by a diagonal argument, \begin{align*} a_j\to a\quad \textrm{in}\ L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3})). \end{align*}\\ Thus (7.1) holds for $A_j=a_j-\phi_{\infty}\nabla \theta$, $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. By taking the $\theta$-component $a_j\cdot r\nabla \theta=r^{-1} \phi_j$, (7.2) follows. \end{proof} \subsection{Mean-square potential conservation} We show that the convergence (7.2) implies that for $u=v+u_{\infty}$, $\Phi=\phi-\phi_{\infty}$ is a distributional solution to \begin{align} \Phi_{t}+u\cdot \nabla \Phi&=0\quad \textrm{in}\ \mathbb{R}^{3}\times (0,T). \end{align}\\ Furthermore, we show that $\Phi_{+}^{2}$ is a distributional solution to \begin{align} \partial_t \Phi^{2}_{+}+u\cdot \nabla \Phi^{2}_{+}&=0\quad \textrm{in}\ \mathbb{R}^{3}\times (0,T), \end{align}\\ and conserves generalized magnetic mean-square potential. \begin{prop} Let $\phi_j, \phi, v$ and $u_{\infty}$ be as in Proposition 7.1. Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. For $\Phi_j=\phi_j-\phi_{\infty}$ and $\Phi=\phi-\phi_{\infty}$, \begin{align} \Phi_j\to \Phi \quad \textrm{in}\ L^{2} (0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3}) ). \end{align}\\ For $u=v+u_{\infty}$, $\Phi$ is a distributional solution to (7.3). Moreover, \begin{align} \Phi_{j,+}&\to \Phi_{+}\quad \textrm{in}\ L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3})), \\ \Phi_{j,+}^{2}&\to \Phi_{+}^{2}\quad \textrm{in}\ L^{2}(0,T; L^{2}_{\textrm{loc}}(\mathbb{R}^{3})). \end{align}\\ The function $\Phi_{+}^{2}$ is a distributional solution to (7.4) and satisfies \begin{align} \int_{\mathbb{R}^{3}}\Phi_{+}^{2}\textrm{d} x=\int_{\mathbb{R}^{3}}\Phi_{0,+}^{2}\textrm{d} x, \end{align}\\ for a.e. $t\in [0,T]$. \end{prop} \begin{proof} The convergence (7.2) implies (7.5). For an arbitrary $\varphi\in C^{\infty}_{c}(\mathbb{R}^{3}\times [0,T))$, we take $R>0$ such that $\textrm{spt}\ \varphi\subset B(0,R)\times [0,T]$. 
The function $\Phi_j$ satisfies (6.4) on $L^{2}_{\textrm{loc}}(\mathbb{R}^{3})$ for a.e. $t\in [0,T]$ by Proposition 6.5. By multiplying (6.4) by $\varphi$ and integrating by parts, \begin{align*} \int_0^{T}\int_{\mathbb{R}^{3}}\Phi_j(\partial_t \varphi +(v_j+u_{\infty})\cdot \nabla \varphi) \textrm{d} x\textrm{d} s +\int_{\mathbb{R}^{3}} \Phi_0\varphi_0\textrm{d} x =-\mu_j\int_0^{T}\int_{\mathbb{R}^{3}} \left( \Phi_j \Delta \varphi-\frac{2}{r}\partial_r \Phi_j\varphi\right)\textrm{d} x\textrm{d} s. \end{align*}\\ The function $\Phi_j$ is uniformly bounded in $L^{2}(0,T; L^{2}(B(0,R)) )$. By (3.2) and (6.1), $r^{-1}\nabla \Phi_j$ is uniformly bounded in $L^{\infty}(0,T; L^{2}(B(0,R)) )$. Thus the right-hand side vanishes as $j\to\infty$. The field $v_j$ is uniformly bounded in $L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) )$ by (6.1). By \begin{align*} \left|\int_{0}^{T}\int_{\mathbb{R}^{3}}(\Phi_j v_{j}-\Phi v) \cdot \nabla \varphi \textrm{d} x\textrm{d} s\right| &\leq \sup_j ||v_{j}||_{L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) ) } ||\Phi_j-\Phi||_{L^{1}(0,T; L^{2}(B) ) }||\nabla \varphi||_{L^{\infty}(\mathbb{R}^{3}\times (0,T)) } \\ &+\left|\int_{0}^{T}\int_{\mathbb{R}^{3}}( v_{j}- v) \Phi\cdot \nabla \varphi \textrm{d} x\textrm{d} s\right|\to 0, \end{align*}\\ the limit $\Phi$ satisfies \begin{align*} \int_0^{T}\int_{\mathbb{R}^{3}}\Phi(\partial_t \varphi +(v+u_{\infty})\cdot \nabla \varphi) \textrm{d} x\textrm{d} s =-\int_{\mathbb{R}^{3}} \Phi_0\varphi_0\textrm{d} x, \end{align*}\\ and (7.3) holds in the distributional sense. The convergence (7.5) implies (7.6) since $|\tau_+-s_{+}|\leq |\tau-s|$ for $\tau,s\in \mathbb{R}$. The function $\Phi_{j,+}$ is uniformly bounded in $L^{\infty}(0,T; L^{q}(\mathbb{R}^{3}) )$ for $1\leq q<\infty$ by (3.19) and (6.1). By H\"older's inequality, for $l>4$ and $\theta\in (0,1)$ satisfying $1/4=\theta/2+(1-\theta)/l$, \begin{align*} ||\Phi_{j,+}-\Phi_{+} ||_{L^{4}(B(0,R)) }\leq||\Phi_{j,+}-\Phi_{+} ||_{L^{2}(B(0,R)) }^{\theta}||\Phi_{j,+}-\Phi_{+} ||_{L^{l}(B(0,R)) }^{1-\theta}. \end{align*}\\ This implies that $\Phi_{j,+}\to \Phi_{+}$ in $L^{2} (0,T; L^{4}_{\textrm{loc}}(\mathbb{R}^{3}) )$ since $2\theta<1$. By applying H\"older's inequality to $\Phi_{j,+}^{2}-\Phi_{+}^{2}=(\Phi_{j,+} -\Phi_{+} )( \Phi_{j,+} +\Phi_{+})$, \begin{align*} || \Phi_{j,+}^{2}-\Phi_{+}^{2}||_{L^{2}(B(0,R)) } \leq || \Phi_{j,+}-\Phi_{+}||_{L^{4}(B(0,R)) }|| \Phi_{j,+}+\Phi_{+}||_{L^{4}(B(0,R)) }. \end{align*}\\ Thus (7.7) holds. By Proposition 6.5, $\Phi_{j,+}^{2}$ satisfies (6.11). For arbitrary $\varphi\in C^{\infty}_{c}(\mathbb{R}^{3}\times [0,T))$, \begin{align*} &\int_0^{T}\int_{\mathbb{R}^{3}}\Phi_{j,+}^{2}(\partial_t \varphi +(v_j+u_{\infty})\cdot \nabla \varphi) \textrm{d} x\textrm{d} s +\int_{\mathbb{R}^{3}} \Phi_{0,+}^{2}\varphi_0\textrm{d} x \\ &=-\mu_j\int_0^{T}\int_{\mathbb{R}^{3}} \left( \Phi_{j,+}^{2} \Delta \varphi-\frac{2}{r}\partial_r \Phi_{j,+}^{2}\varphi -2|\nabla \Phi_{j,+}|^{2} \varphi\right)\textrm{d} x\textrm{d} s. \end{align*}\\ We take $R>0$ such that $\textrm{spt}\ \varphi\subset B(0,R)\times [0,T]$. The functions $\Phi_{j,+}$ and $r^{-1}\partial_r \Phi_j$ are uniformly bounded in $L^{\infty}(0,T; L^{2}(B(0,R)))$. Hence $r^{-1}\partial_r \Phi_{j,+}^{2}=2r^{-1}\partial_r \Phi_{j} \Phi_{j,+}$ is uniformly bounded in $L^{\infty}(0,T; L^{1}( B(0,R)))$. By $|\nabla \Phi_{j,+}|=|\nabla \Phi_j|1_{(0,\infty)}(\Phi_j)\leq |\nabla \Phi_j|$, $|\nabla \Phi_{j,+}|^{2}$ is uniformly bounded in $L^{\infty}(0,T; L^{1}(B(0,R)))$. Thus the right-hand side vanishes as $j\to\infty$. 
By (7.7) and (6.2), in a similar way as we showed (7.3), letting $j\to\infty$ implies that \begin{align*} \int_0^{T}\int_{\mathbb{R}^{3}}\Phi_{+}^{2}(\partial_t \varphi +(v+u_{\infty})\cdot \nabla \varphi) \textrm{d} x\textrm{d} s= -\int_{\mathbb{R}^{3}} \Phi_{0,+}^{2}\varphi_0\textrm{d} x. \end{align*}\\ Thus $\Phi_{+}^{2}$ is a distributional solution to (7.4). We take $\chi\in C^{\infty}_{c}[0,\infty)$ such that $\chi=1$ in $[0,1]$ and $\chi=0$ on $[2,\infty)$ and set $\chi_R(x)=\chi(R^{-1}|x|)$. For arbitrary $\rho\in C^{\infty}_{c}[0,T)$, substituting $\varphi=\chi_{R} \rho$ into the above and letting $R\to\infty$ imply that \begin{align*} \int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}\Phi_{+}^{2}\textrm{d} x\right)\textrm{d} t&=-\rho(0) \int_{\mathbb{R}^{3}}\Phi_{0,+}^{2}\textrm{d} x. \end{align*}\\ Thus (7.8) holds for a.e. $t\in [0,T]$. \end{proof} \subsection{Global convergence} We strengthen the local convergence (7.6) to the following global convergence (7.9) by using the equalities (6.8) and (7.8). We then adjust it to the desired form by the isometries (3.12) to demonstrate generalized magnetic helicity conservation at weak ideal limits. \begin{prop} \begin{align} \Phi_{j,+}\to \Phi_{+}\quad \textrm{in}\ L^{2}(0,T; L^{2}(\mathbb{R}^{3})). \end{align} \end{prop} \begin{proof} By (3.19), (3.2) and (6.1), $\Phi_{j,+}$ is uniformly bounded in $L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) )$. By choosing a subsequence, $\Phi_{j,+}\rightharpoonup \Phi_{+}$ in $L^{2}(0,T; L^{2}(\mathbb{R}^{3}) )$. By (6.8) and (7.8), \begin{align*} T\int_{\mathbb{R}^{3}}\Phi_{0,+}^{2}\textrm{d} x=\int_{0}^{T}\int_{\mathbb{R}^{3}}\Phi_+^{2}\textrm{d} x\textrm{d} t \leq \liminf_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{3}}\Phi_{j,+}^{2}\textrm{d} x\textrm{d} t\leq T\int_{\mathbb{R}^{3}}\Phi_{0,+}^{2}\textrm{d} x. \end{align*}\\ Thus $\lim_{j\to \infty}||\Phi_{j,+}||_{L^{2}(0,T; L^{2}(\mathbb{R}^{3}) )}=||\Phi_{+}||_{L^{2}(0,T; L^{2}(\mathbb{R}^{3}) )}$ and (7.9) holds. \end{proof} \begin{prop} \begin{align} \Phi_{j,+}\to \Phi_{+}\quad \textrm{in}\ L^{7}(0,T; L^{2}(\mathbb{R}^{2}_{+};r^{-1})). \end{align} \end{prop} \begin{proof} The function $\Phi_{j,+}-\Phi_{+}$ is supported in $\{x\in \mathbb{R}^{3}\ |\ \Phi_j(x)>0\}\cup \{x\in \mathbb{R}^{3}\ |\ \Phi(x)>0\}$, and \begin{align*} |\Phi_{j,+}-\Phi_{+}| &= |\Phi_{j,+}-\Phi_{+}|(1_{(0,\infty)}(\Phi_j)+1_{(0,\infty)}(\Phi)-1_{(0,\infty)}(\Phi_j)1_{(0,\infty)}(\Phi) ) \\ &\leq |\Phi_{j,+}-\Phi_{+}|(1_{(0,\infty)}(\Phi_j)+1_{(0,\infty)}(\Phi) ). \end{align*}\\ By (3.17), (3.2) and (6.1), $1_{(0,\infty)}(\Phi_j)$ is uniformly bounded in $L^{\infty}(0,T; L^{2}(\mathbb{R}^{3}) )$. The convergence (7.9) implies that \begin{align*} \Phi_{j,+}\to \Phi_{+}\quad \textrm{in}\ L^{2}(0,T; L^{1}(\mathbb{R}^{3})). \end{align*}\\ By the isometry $(3.12)_1$ for $\varphi_j=\phi_j/r^{2}$ and $\varphi=\phi/r^{2}$, \begin{align*} \left(\varphi_j-1-\frac{\gamma}{r^2}\right)_{+}\to \left(\varphi-1-\frac{\gamma}{r^2}\right)_{+} \quad \textrm{in}\ L^{2}(0,T; L^{1}(\mathbb{R}^{5})). \end{align*}\\ By (3.11), (3.2) and (6.1), $\varphi_j$ is uniformly bounded in $L^{\infty}(0,T; \dot{H}^{1}(\mathbb{R}^{5}))$. By $(\varphi_j-1-\gamma/r^{2})_{+}\leq |\varphi_j|$ and $\dot{H}^{1}(\mathbb{R}^{5})\subset L^{10/3}(\mathbb{R}^{5})$, $(\varphi_j-1-\gamma/r^{2})_{+}$ is uniformly bounded in $L^{\infty}(0,T; L^{10/3}(\mathbb{R}^{5}) )$. 
By H\"older's inequality, \begin{align*} ||\varsigma||_{L^{7}(0,T; L^{2}(\mathbb{R}^{5}) ) } \leq ||\varsigma||_{L^{\infty}(0,T; L^{10/3}(\mathbb{R}^{5}) ) } ^{5/7}||\varsigma||_{L^{2}(0,T; L^{1}(\mathbb{R}^{5}) ) } ^{2/7}, \end{align*}\\ $\varsigma=(\varphi_j-1-\gamma/r^{2})_{+}-(\varphi-1-\gamma/r^{2})_{+}$ converges to zero in $L^{7}(0,T; L^{2}(\mathbb{R}^{5}) )$. By the isometry $(3.12)_2$, (7.10) holds. \end{proof} \subsection{Helicity conservation} We now demonstrate generalized magnetic helicity conservation at weak ideal limits for axisymmetric Leray--Hopf solutions. \begin{thm} Let $B_{\infty}=-2e_z$. Let $u_{\infty}$ be a constant parallel to $e_z$. Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. Suppose that $(v,b)$ is a weak ideal limit of axisymmetric Leray--Hopf solutions to (1.9) for fixed $(v_0,b_0)$. Then, for the Clebsch potentials $\phi,G$ of $b$ and $\Phi=\phi-\phi_{\infty}$, \begin{align} \int_{\mathbb{R}^{3}}\Phi_{+}\frac{G}{r^{2}}\textrm{d} x =\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x, \end{align}\\ for a.e. $t\in [0,T]$. \end{thm} \begin{proof} By the uniform bound (6.14), \begin{align*} &\sup_{0\leq t\leq T}\left|\int_{\mathbb{R}^{3}}\Phi_{j,+}\frac{G_j}{r^{2}}\textrm{d} x-\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x \right| \leq \int_{0}^{T}\left|\frac{\textrm{d}}{\textrm{d} t} \left(\int_{\mathbb{R}^{3}}\Phi_{j,+} \frac{G_j}{r^{2}} \textrm{d} x\right)\right| \textrm{d} t \\ &\lesssim T^{1/2}\mu^{1/2}_{j}\left(||v_0||_{L^{2}(\mathbb{R}^{3}) }^{2}+||b_0||_{L^{2}(\mathbb{R}^{3}) }^{2} \right). \end{align*}\\ Thus \begin{align} \lim_{j\to\infty}\int_{\mathbb{R}^{3}}\Phi_{j,+}\frac{G_j}{r^{2}}\textrm{d} x=\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x\quad \textrm{uniformly in}\ [0,T]. \end{align}\\ For an arbitrary $\rho\in C_{c}^{\infty}[0,T)$, \begin{align*} \int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}\Phi_{j,+} \frac{G_j}{r^{2}} \textrm{d} x \right)\textrm{d} t-\int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}\Phi_{+} \frac{G}{r^{2}} \textrm{d} x \right)\textrm{d} t &=\int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}( \Phi_{j,+}-\Phi_{+}) \frac{G_j}{r^{2}} \textrm{d} x \right)\textrm{d} t \\ &+\int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}\Phi_{+}(G_j-G) \frac{1}{r^{2}} \textrm{d} x \right)\textrm{d} t. \end{align*}\\ The last term converges to zero as $j\to\infty$ since $G_j\rightharpoonup G$ weakly-star in $L^{\infty}(0,T;L^{2}(\mathbb{R}^{2}_{+};r^{-1}))$ by (6.2) and $\Phi_{+}\in L^{\infty}(0,T; L^{2}(\mathbb{R}^{2}_{+};r^{-1}) )$ by (3.18). By (7.10), \begin{align*} &\left| \int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}( \Phi_{j,+}-\Phi_{+}) \frac{G_j}{r^{2}} \textrm{d} x \right)\textrm{d} t \right| \\ &=2\pi \left| \int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{2}_{+}}( \Phi_{j,+}-\Phi_{+}) \frac{G_j}{r} \textrm{d} z\textrm{d} r \right)\textrm{d} t \right| \\ &\leq 2\pi ||\dot{\rho}||_{L^{7/6}(0,T) }||\Phi_{j,+}-\Phi_{+}||_{L^{7}(0,T; L^{2}(\mathbb{R}^{2}_{+};r^{-1} ) ) }\left(\sup_{j}||G_{j}||_{L^{\infty}(0,T; L^{2}(\mathbb{R}^{2}_{+};r^{-1}) ) }\right)\to 0. \end{align*}\\ By the uniform convergence (7.12), \begin{align*} \int_{0}^{T}\dot{\rho}(t)\left(\int_{\mathbb{R}^{3}}\Phi_{+}\frac{G}{r^{2}}\textrm{d} x\right) \textrm{d} t=-\rho(0)\int_{\mathbb{R}^{3}}\Phi_{0,+}\frac{G_0}{r^{2}}\textrm{d} x. \end{align*}\\ Thus (7.11) holds for a.e. $t\in [0,T]$. \end{proof} \begin{thm} Let $B_{\infty}=-2e_z$. Let $u_{\infty}$ be a constant parallel to $e_z$. 
Let $\phi_{\infty}=r^{2}+\gamma$ and $\gamma\geq 0$. There exists a weak ideal limit $(v,b)$ of axisymmetric Leray--Hopf solutions to (1.9) for $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}}(\mathbb{R}^{3})$ satisfying \begin{align} &\int_{\mathbb{R}^{3}}\left(|v|^{2}+|b|^{2}\right) \textrm{d} x \leq \int_{\mathbb{R}^{3}}\left(|v_0|^{2}+|b_0|^{2}\right) \textrm{d} x, \\ &\int_{\mathbb{R}^{3}}(\phi-\phi_{\infty})_{+}\frac{G}{r^{2}}\textrm{d} x =\int_{\mathbb{R}^{3}}(\phi_0-\phi_{\infty})_{+}\frac{G_0}{r^{2}}\textrm{d} x, \end{align}\\ for a.e. $t\in [0,T]$, where $\phi$, $G$ are the Clebsch potentials of $b$. \end{thm} \begin{proof} We take $(\nu_j,\mu_j)$ such that $(\nu_j,\mu_j)\to (0,0)$. By Theorem 6.3, there exists an axisymmetric Leray--Hopf weak solution $(v_j,b_j)$ to (1.9) for $(v_0,b_0)$. By (6.1), there exists a subsequence and $(v,b)$ such that (6.2) holds. The limit $(v,b)$ satisfies (7.13) by the lower semicontinuity of the norm for the weak-star convergence (6.2). The conservation (7.14) follows from Theorem 7.5. \end{proof} \begin{rem} The results in Sections 6 and 7 hold also for $B_{\infty}=-We_{z}$, $\phi_{\infty}=Wr^{2}/2+\gamma$, $W>0$, and $\gamma\geq 0$. \end{rem} \section{Orbital stability} We complete the proof of Theorem 1.3. We first show the stability of a set of minimizers to the variational problem (4.1) with parameters $(h,2,\gamma)$ in weak ideal limits of axisymmetric Leray--Hopf solutions by the compactness result in Theorem 5.6 and the existence result for weak ideal limits in Theorem 7.6. We then extend the result for general parameters $(h, W,\gamma)$. We derive Theorem 1.3 from the uniqueness of the Grad--Shafranov equation (4.3) for $\gamma=0$ and the explicit form of the constant $h_C$ in (1.11). \subsection{Stability of minimizers} We denote the suppressed constant $\gamma\geq 0$ explicitly for $H[\cdot]=H_{\gamma}[\cdot]$, $I_{h}=I_{h,\gamma}$ and $S_{h}=S_{h,\gamma}$ in (3.26), (4.1), and (4.2). \begin{prop} Let $h\in \mathbb{R}$ and $\gamma\geq 0$. Let $\phi_{\infty}=r^{2}+\gamma$ and $B_{\infty}=-2e_{z}$. Let $u_{\infty}$ be a constant parallel to $e_z$. The set $S_{h,\gamma}$ in (4.2) is orbitally stable in weak ideal limits of axisymmetric Leray--Hopf solutions to (1.9) in the sense that for arbitrary $\varepsilon >0$ there exists $\delta >0$ such that for $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}} (\mathbb{R}^{3})$ satisfying \begin{align*} ||v_0||_{L^{2}(\mathbb{R}^{3})} +\inf\left\{ ||b_0-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma}\ \right\}+\left|H_{\gamma}[b_0]-h\right| \leq \delta, \end{align*}\\ there exists a weak ideal limit $(v,b)$ of axisymmetric Leray--Hopf solutions to (1.9) for $(v_0,b_0)$ such that \begin{align*} \textrm{ess sup}_{t>0} \left(||v||_{L^{2}(\mathbb{R}^{3})} +\inf\left\{ ||b-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma}\ \right\} \right)\leq \varepsilon. \end{align*} \end{prop} \vspace{5pt} \begin{proof} Suppose that the assertion were false. 
Then, there exists $\varepsilon_0>0$ such that for any $n\geq 1$, there exists $(v_{0,n},b_{0,n})$ satisfying \begin{align*} ||v_{0,n}||_{L^{2}(\mathbb{R}^{3})}+\inf\left\{ ||b_{0,n}-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma}\ \right\}+|H_{\gamma}[b_{0,n}]-h| \leq \frac{1}{n}, \end{align*}\\ and the weak ideal limit $(v_n,b_n)$ in Theorem 7.6 satisfying \begin{align*} \textrm{ess sup}_{t>0}\left( ||v_n||_{L^{2}(\mathbb{R}^{3})} +\inf\left\{ ||b_n-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma}\ \right\}\right)\geq \varepsilon_0. \end{align*}\\ We denote by $F_n$ the set of all points $t\in (0,\infty)$ such that \begin{align*} &{\mathcal{E}}[v_n,b_n]\leq {\mathcal{E}}[v_{0,n},b_{0,n}], \\ &H_{\gamma}[b_n]=H_{\gamma}[b_{0,n}]. \end{align*}\\ The set $F_n^{c}$ has measure zero. For $F=\cap_{n=1}^{\infty} F_n$, $F^{c}$ has measure zero and the above inequality and equality hold for all $t\in F$ and $n\geq 1$. We take a point $t_n\in F$ such that \begin{align*} ||v_n||_{L^{2}(\mathbb{R}^{3})}(t_n) +\inf\left\{ ||b_n-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma} \right\}(t_n) \geq \frac{\varepsilon_0}{2}>0, \end{align*}\\ and write $(v_n,b_n)=(v_n,b_n)(\cdot,t_n)$ by suppressing $t_n$. For $h_n=H_{\gamma}[b_{0,n}]$, \begin{align*} I_{h_n,\gamma}\leq \frac{1}{2}\int_{\mathbb{R}^{3}}|b_{0,n}|^{2}\textrm{d} x\leq \frac{1}{2}\left(\inf\left\{ ||b_{0,n}-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,\gamma} \right\}+\sqrt{2}I_{h,\gamma}^{1/2} \right)^{2}. \end{align*}\\ By the lower semi-continuity (4.17), letting $n\to\infty$ implies that \begin{align*} H_{\gamma}[b_{0,n}]\to h,\quad {\mathcal{E}}[v_{0,n},b_{0,n}]\to I_{h,\gamma}. \end{align*}\\ By helicity conservation and nonincreasing total energy, \begin{align*} &H_{\gamma}[b_n]=H_{\gamma}[b_{0,n}]=h_n,\\ &I_{h_n,\gamma}\leq {\mathcal{E}}[v_n,b_n]\leq {\mathcal{E}}[v_{0,n},b_{0,n}]. \end{align*}\\ Letting $n\to\infty$ implies that \begin{align*} H_{\gamma}[b_n]\to h,\quad {\mathcal{E}}[v_n,b_n]\to I_{h,\gamma}. \end{align*}\\ By Theorem 5.6, there exists $\{n_k\}$, $\{z_{k}\}$ and some $b\in S_{h,\gamma}$ such that \begin{align*} (v_{n_k},b_{n_k}(\cdot +z_{k}e_z))\to (0,b) \quad \textrm{in}\ L^{2}(\mathbb{R}^{3}). \end{align*}\\ Thus \begin{align*} 0&=\lim_{k\to\infty}\left\{ ||v_{n_k}||_{L^{2}(\mathbb{R}^{3})}+||b_{n_k}(\cdot+z_{k}e_z)-b||_{L^{2}(\mathbb{R}^{3})} \right\} \\ &\geq\liminf_{k\to\infty}\left( ||v_{n_k}||_{L^{2}(\mathbb{R}^{3})}+\inf\left\{\ ||b_{n_k}-\tilde{b}||_{L^{2}(\mathbb{R}^{3})} \ \middle|\ \tilde{b}\in S_{h,\gamma} \right\}\right) \geq \frac{\varepsilon_0}{2}>0. \end{align*}\\ We obtained a contradiction. The proof is complete. \end{proof} \begin{thm}[Stability of nonlinear force-free fields with discontinuous factors] Let $h\in \mathbb{R}$, $W>0$ and $\gamma\geq 0$. Let $\phi_{\infty}=Wr^{2}/2+\gamma$ and $B_{\infty}=-We_{z}$. Let $u_{\infty}$ be a constant parallel to $e_z$. 
The set of minimizers $S_{h,W,\gamma}$ to $I_{h,W,\gamma}$ in (1.13) is orbitally stable in weak ideal limits of axisymmetric Leray--Hopf solutions to (1.9) in the sense that for arbitrary $\varepsilon >0$ there exists $\delta >0$ such that for $v_0,b_0\in L^{2}_{\sigma,\textrm{axi}} (\mathbb{R}^{3})$ satisfying \begin{align*} ||v_0||_{L^{2}(\mathbb{R}^{3})} +\inf\left\{ ||b_0-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,W,\gamma}\ \right\}+\left|2\int_{\mathbb{R}^{3}}(\phi_0-\phi_{\infty})_+\frac{G_0}{r^{2}}\textrm{d} x -h\right| \leq \delta, \end{align*}\\ for the Clebsch potentials $\phi_0,G_0$ of $b_0$, there exists a weak ideal limit $(v,b)$ of axisymmetric Leray--Hopf solutions to (1.9) for $(v_0,b_0)$ such that \begin{align*} \textrm{ess sup}_{t>0}\left( ||v||_{L^{2}(\mathbb{R}^{3})} +\inf\left\{ ||b-\tilde{b}||_{L^{2}(\mathbb{R}^{3}) }\ \big|\ \tilde{b}\in S_{h,W,\gamma}\ \right\}\right) \leq \varepsilon. \end{align*} \end{thm} \vspace{5pt} \begin{proof} The assertion follows from the same argument as the proof of Proposition 8.1 for $W=2$ by the compactness of minimizing sequences to $I_{h, W,\gamma}$ (Remark 5.7) and the existence of weak ideal limits conserving generalized magnetic helicity (Remark 7.7). \end{proof} \subsection{Uniqueness} For given constants $(W,\lambda)$ and the constant $h_C$ in (1.11), we show that the set of minimizers $S_{h_C,W,0}$ consists of translations of $U_C-B_{\infty}$ for the explicit solution $U_C$ in (1.6). The following uniqueness result is due to Fraenkel \cite[Theorem 4]{Fra92}, \cite[Exercise 4.23]{Fra00}. \begin{thm} Let $W>0$, $\mu\in \mathbb{R}$ and $\phi_{\infty}=Wr^{2}/2$. Let $\phi\in \dot{H}^{1}_{0}(\mathbb{R}^{2}_{+};r^{-1})$ be a weak solution to (4.3). There exists $z_0\in \mathbb{R}$ such that $\phi-\phi_{\infty}=\Phi_{C}(\cdot+ z_0e_{z})$ for $\Phi_C$ with $(W,\mu^{2})$ in (1.6).\\ \end{thm} The proof of Theorem 8.3 is based on the moving plane method. As shown in the proof of Proposition 4.4, $\varphi=\phi/r^{2}$ is a continuous decaying solution to \begin{align} -\Delta_{y} \varphi=\mu^{2}\left(\varphi-\frac{W}{2} \right)_{+}\quad \textrm{in}\ \mathbb{R}^{5}. \end{align}\\ The decay implies that the set $\{y\in \mathbb{R}^{5}\ |\ \varphi>W/2 \}$ is compact in $\mathbb{R}^{5}$. Therefore, $\varphi$ is expressed in terms of the Newton potential, \begin{align*} \varphi(y)=\frac{1}{8\pi^{2}}\int_{\{\varphi>W/2\}}\frac{\mu^{2}}{|y-w|^{3}}\left(\varphi-\frac{W}{2} \right)_{+}\textrm{d} w. \end{align*}\\ This potential representation implies that $\varphi$ is a positive solution to (8.1) satisfying an admissible asymptotic behavior as $|y|\to\infty$, which enables one to apply the moving plane method \cite[Theorem 4.2]{Fra00} to deduce that a translation of $\varphi$ in the $y_1$-direction is radially symmetric and decreasing. The symmetry of $\varphi$ implies the explicit form (1.6) for $\lambda=\mu^{2}$ \cite{Moffatt}, cf. \cite{Fra92}, \cite{Fra00}. The equation of $\varphi=\varphi(\rho)$ for $\rho=\sqrt{z^{2}+r^{2}}$ is \begin{align*} -\left(\partial_{\rho}^{2}+\frac{4}{\rho}\partial_{\rho} \right)\varphi=\mu^{2}\left(\varphi-\frac{W}{2}\right)_{+},\quad \rho>0. \end{align*}\\ Since $\varphi$ is decreasing, there exists a unique $R>0$ such that $\varphi(R)=W/2$. The function $\tilde{\varphi}=\varphi-W/2$ satisfies \begin{align*} -\left(\partial_{\rho}^{2}+\frac{4}{\rho}\partial_{\rho} \right)\tilde{\varphi}=\mu^{2}\tilde{\varphi}_{+},\quad \rho>0. 
\end{align*}\\ The scaled $\hat{\varphi}=\tilde{\varphi}(\rho/|\mu|)$ satisfies $-(\partial_{\rho}^{2}+4\rho^{-1}\partial_{\rho} )\hat{\varphi}=\hat{\varphi}$ and $\hat{\varphi}>0$ for $0<\rho<R_0=|\mu| R$ and $\hat{\varphi}(R_0)=0$. By the transform $\kappa=\rho^{3/2}\hat{\varphi}$, this equation reduces to Bessel's differential equation, \begin{align*} \ddot{\kappa}+\frac{1}{\rho}\dot{\kappa}+\left(1-\frac{(3/2)^{2} }{\rho^{2}}\right)\kappa=0,\quad \kappa&>0, \quad 0<\rho<R_0,\\ \kappa(R_0)&=0. \end{align*}\\ By the boundedness of $\kappa$ at $\rho=0$, $\kappa$ is a constant multiple of the Bessel function of the first kind of order $3/2$, i.e., for some constant $C_1$, \begin{align*} \kappa&=C_1J_{3/2}(\rho),\\ R_0&=c_{3/2}. \end{align*}\\ The function $\tilde{\varphi}$ is harmonic for $\rho>R$ and expressed as $\tilde{\varphi}=C_2+C_3/\rho^{3}$ for $\rho>R$ with $C_2=-W/2$ and $C_3=WR^{3}/2$ by the boundary conditions $\tilde{\varphi }\to -W/2$ as $\rho\to\infty$ and $\tilde{\varphi}(R)=0$. By continuity of $\partial_{\rho}\tilde{\varphi}$ at $\rho=R$ and $\dot{J}_{3/2}(c_{3/2})=-J_{5/2}(c_{3/2})$, \begin{align*} C_1=\frac{3}{2}W\frac{c_{3/2}^{1/2} }{|\mu|^{3}J_{5/2}(c_{3/2}) }. \end{align*}\\ This implies the explicit form (1.6) with $(W,\mu^{2})$, i.e., $\Phi=\Phi_{C}$, since $\tilde{\varphi}=\Phi/r^{2}$ for $\Phi=\phi-\phi_{\infty}$.\\ The constant $h_C$ in (1.11) was computed by Moffatt \cite[p.128]{Moffatt}. We restate it in our notation. \begin{prop} The constant $h_C$ in (1.11) is \begin{align} h_C=\left(\frac{W}{\lambda}\right)^{2}\frac{12\pi c_{3/2}}{J_{5/2}^{2}(c_{3/2}) }\int_{0}^{c_{3/2}}\rho J_{3/2}^{2}(\rho)\textrm{d} \rho. \end{align} \end{prop} \begin{proof} By (1.11), for $\phi=\Phi_C+Wr^{2}/2$ and $\varphi=\phi/r^{2}$, \begin{align*} h_C =2\lambda^{1/2}\int_{\mathbb{R}^{3}}\Phi_{C,+}^{2}\frac{1}{r^{2}}\textrm{d} x =2\lambda^{1/2}\int_{\mathbb{R}^{3}}\left(\phi-\frac{W}{2}r^{2} \right)_{+}^{2}\frac{1}{r^{2}}\textrm{d} x &=\frac{2\lambda^{1/2}}{\pi}\int_{\mathbb{R}^{5}}\left(\varphi-\frac{W}{2} \right)_{+}^{2}\textrm{d} y\\ &=\frac{2}{\pi\lambda^{2}}\int_{\mathbb{R}^{5}}\left(\varphi\left(\frac{w}{\lambda^{1/2}}\right)-\frac{W}{2} \right)_{+}^{2}\textrm{d} w. \end{align*}\\ The function $\varphi_1(w)=\varphi(w/\lambda^{1/2})$ is a solution to (8.1) for $\mu=1$. By $\varphi_1(w)-W/2=C_1J_{3/2}(\rho)\rho^{-3/2}$ for $\rho=|w|<c_{3/2}$ with $C_1=3Wc_{3/2}^{1/2}/ (2J_{5/2}(c_{3/2}))$ and \begin{align*} \int_{\mathbb{R}^{5}}\left(\varphi_1(w)-\frac{W}{2} \right)_{+}^{2}\textrm{d} w =\frac{8\pi^{2}}{3}C_1^{2}\int_{0}^{c_{3/2}}\rho J_{3/2}^{2}(\rho)\textrm{d} \rho, \end{align*}\\ the constant $h_C$ is given by (8.2). \end{proof} \begin{thm} Let $W>0$ and $\lambda>0$. Let $U_C$ be as in (1.6). Let $h_C>0$ be as in (1.11). Let $B_{\infty}=-We_z$. Let $S_{h_C,W,0}$ be as in Theorem 8.2. Then, \begin{align} S_{h_C,W,0}=\left\{ U_{C}(\cdot +ze_z)-B_{\infty}\ \big|\ z\in \mathbb{R}\ \right\}. \end{align} \end{thm} \vspace{5pt} \begin{proof} We take an arbitrary $b=\nabla \times (\phi\nabla \theta)+G\nabla \theta\in S_{h_C,W,0}$. By Proposition 4.2, $G=\mu(\phi-\phi_{\infty})_{+}$ for $\phi_{\infty}=Wr^{2}/2$ and some $\mu>0$, and $\phi$ is a weak solution to (4.3). By Theorem 8.3, $\phi-\phi_{\infty}$ is a translation of the explicit solution (1.6) with parameters $W>0$ and $\tilde{\lambda}=\mu^{2}$. By (8.2), the generalized magnetic helicity of $b$ is $C(W/\tilde{\lambda})^{2}$ for an explicit constant $C$. Since $b\in S_{h_C,W,0}$, the generalized magnetic helicity of $b$ is $h_C=C(W/\lambda)^{2}$. 
Thus $\tilde{\lambda}=\lambda$ and $\phi-\phi_{\infty}=\Phi_C(\cdot+z_0e_z)$, $G=G_{C}(\cdot +z_0e_z)$ for $\Phi_C$ and $G_C$ in (1.11) and some $z_0\in \mathbb{R}$. By $B_{\infty}=-\nabla \times (\phi_{\infty}\nabla \theta)$, \begin{align*} U_C=\nabla \times (\Phi_C\nabla \theta)+G_C\nabla \theta =b(\cdot -z_0e_z)+B_{\infty}. \end{align*}\\ We proved that $S_{h_C,W,0}\subset \left\{ U_{C}(\cdot +ze_z)-B_{\infty}\ \big|\ z\in \mathbb{R}\ \right\}$. The translations of $b=U_C(\cdot+z_0e_z)-B_{\infty}$ are also elements of $S_{h_C,W,0}$. Thus the equality (8.3) holds. \end{proof} \begin{proof}[Proof of Theorem 1.3] By Theorem 8.5 for $h=h_{C}$ and $\gamma=0$, \begin{align*} \inf\left\{||b_0-\tilde{b}||_{L^{2}(\mathbb{R}^{3})}\ |\ \tilde{b}\in S_{h_C,W,0}\ \right\} &=\inf\left\{||b_0-\tilde{b}||_{L^{2}(\mathbb{R}^{3})}\ \middle|\ \tilde{b}=U_C(\cdot +ze_z)-B_{\infty},\ z\in \mathbb{R} \right\} \\ &=\inf\left\{||b_0+B_{\infty}-U_C(\cdot +ze_z)||_{L^{2}(\mathbb{R}^{3})}\ \middle|\ z\in \mathbb{R} \right\}. \end{align*}\\ The assertion follows from Theorem 8.2. \end{proof} \begin{rem} The existence and the stability of axisymmetric nonlinear force-free fields with continuous factors $f\in C(\mathbb{R}^{3})$ may be studied by replacing $g(s)=2s_{+}$ of generalized magnetic helicity in (6.5) with sufficiently regular $g(s)$, e.g., $g(s)=2s_+^{\alpha}$, $\alpha>1$, cf. \cite{A8}. \end{rem}
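As a quick numerical sanity check of Proposition 8.4 (not part of the proofs), one can evaluate the Bessel quantities in (8.2) with SciPy. The script below is a sketch: it assumes SciPy is available, the values of $W$ and $\lambda$ are arbitrary samples, and the asserted identity is the standard Bessel norm relation $\int_{0}^{c_{\nu}}\rho J_{\nu}^{2}(\rho)\textrm{d} \rho=\frac{c_{\nu}^{2}}{2}J_{\nu+1}^{2}(c_{\nu})$ at a zero $c_{\nu}$ of $J_{\nu}$, which simplifies (8.2) to $h_C=6\pi c_{3/2}^{3}(W/\lambda)^{2}$.
\begin{verbatim}
# Numerical check of (8.2); assumes SciPy. W, lam are sample values.
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq
from scipy.integrate import quad

# c_{3/2}: first positive zero of J_{3/2} (approximately 4.4934).
c = brentq(lambda x: jv(1.5, x), 4.0, 5.0)
I = quad(lambda r: r * jv(1.5, r) ** 2, 0.0, c)[0]
# Bessel norm identity at a zero of J_{3/2}.
assert abs(I - 0.5 * c ** 2 * jv(2.5, c) ** 2) < 1e-8

W, lam = 2.0, 1.0  # sample parameters (W, lambda)
h_C = (W / lam) ** 2 * 12 * np.pi * c / jv(2.5, c) ** 2 * I
print(h_C, 6 * np.pi * c ** 3 * (W / lam) ** 2)  # the two values agree
\end{verbatim}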
\section{Introduction} While supervised learning has been successful in a variety of computer vision tasks~\cite{krizhevsky2012imagenet,he2016deep,c3d,ren2015faster}, self-supervised representation learning from unlabeled data has attracted increasing attention in recent years~\cite{caron2018deep,misra2016shuffle}. In self-supervised learning, a model is first pre-trained on a large amount of unlabeled data with a surrogate loss. The fine-tuning process further specializes the pre-trained model to downstream tasks. Recently, there has been rapid progress in self-supervised representation learning for texts~\cite{devlin2018bert,yang2019xlnet}, where the Bidirectional Encoder Representations from Transformers (BERT) model~\cite{devlin2018bert} generalizes remarkably to many natural language tasks, \eg, question answering~\cite{alberti2019bert}. Motivated by BERT's success in self-supervised training, we aim to learn an analogous model for video and text joint modeling. We exploit video-text relations based on narrated instructional videos, where the aligned texts are detected by off-the-shelf automatic speech recognition (ASR) models. These instructional videos serve as natural sources for video-text relationship studies. First, they are widely available and freely accessible on YouTube and other platforms~\cite{miech2019howto100m,sun2019videobert}. Second, the visual frames are aligned with the instructional narrations. The text narrations not only explicitly cover the objects in the scene but also identify the salient action in the video clip. To generalize BERT to video-and-language tasks, Sun~\etal~\cite{sun2019videobert} extended the BERT model by learning from quantized video frame features. The original BERT takes discrete elements as inputs and predicts the corresponding tokens as the output. In contrast, visual features are real-valued distributed representations, which cannot be directly categorized into discrete labels for ``visual token'' prediction. Sun~\etal~\cite{sun2019videobert} discretized visual features into visual words via clustering. These visual tokens can be directly passed to the original BERT model. However, detailed local information, \eg, interacting objects and human actions, can be lost during clustering. This prevents the model from uncovering fine-grained relations between video and text. In this paper, we propose ActBERT\xspace to learn a joint video-text representation that uncovers global and local visual clues from paired video sequences and text descriptions. Both the global and the local visual signals interact mutually with the semantic stream. ActBERT\xspace leverages profound contextual information and exploits fine-grained relations for video-text joint modeling. First, ActBERT\xspace incorporates global actions, local regional objects and text descriptions in a joint framework. Actions, \eg, ``cut'', ``rotate'', ``slice'', are essential to various video-related downstream tasks. The recognition of human actions can demonstrate the model's capacity for motion understanding and complex human-intention reasoning. It could be beneficial to explicitly model human actions during model pre-training. Furthermore, long-term action sequences capture the temporal dependencies within an instructional task. Though action clues are important, they are largely ignored in previous self-supervised video-text training~\cite{sun2019videobert,miech2019howto100m}, where actions are treated identically to objects. 
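To make the clustering-based tokenization discussed above concrete, the following is a minimal sketch of VideoBERT-style visual-word quantization; the feature file, the vocabulary size, and the function name are illustrative rather than those of the original implementation.
\begin{verbatim}
# Sketch of visual-word quantization via clustering (illustrative).
import numpy as np
from sklearn.cluster import KMeans

# (num_clips, feat_dim) array of precomputed clip features (hypothetical).
features = np.load("clip_features.npy")
kmeans = KMeans(n_clusters=1024, random_state=0).fit(features)

def visual_tokens(x):
    """Map real-valued clip features to discrete visual-word ids."""
    return kmeans.predict(x)  # ids can be fed to BERT like word tokens
\end{verbatim}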
To model human actions, we first extract verbs from the text descriptions and construct an action classification dataset from the original dataset. Then, a 3D convolutional network is trained to predict the action labels. The features from the optimized network are used as the action embedding. In this way, clip-level actions are represented, and the corresponding action label is inserted. Besides global action information, we incorporate local regional information to provide fine-grained visual cues~\cite{lu2019vilbert,tan2019lxmert,su2019vl,li2019unicoder,chen2019uniter}. Object regions provide detailed visual clues about the whole scene, including the regional object features and the positions of the objects. The language model can benefit from the regional information for better language-and-visual alignment. Second, we introduce a TaNgled Transformer block (TNT) to encode features from three sources, \ie, global actions, local regional objects, and linguistic tokens. Previous studies \cite{lu2019vilbert,tan2019lxmert} consider two modalities when designing new transformer layers, \ie, fine-grained object information from images and natural language. Lu \etal~\cite{lu2019vilbert} introduced a co-attentional transformer layer, where the key-value pairs from one modality are passed to the other modality's attention block to act as the new key-value pairs. However, in our scenario, there are three sources of inputs. Two of the sources, \ie, local regional features and linguistic texts, offer detailed descriptions of the occurring event in the clip. The third, the global action feature, captures human intention over time and provides a straightforward clue for contextual inference. We design a new tangled transformer block for cross-modality feature learning from three sources. To enhance the interactions between the two visual cues and the linguistic features, we use a separate transformer block~\cite{vaswani2017attention} to encode each modality. Mutual cross-modal communication is then enhanced with two additional multi-head attention blocks, where the action feature catalyzes the mutual interactions. With guidance from the action features, we inject visual information into the linguistic transformer, and incorporate linguistic information into the visual transformers. The tangled transformer dynamically selects judicious cues from its context to facilitate the target prediction. Furthermore, we design four surrogate tasks to train ActBERT\xspace, \ie, masked language modeling with global and local visual cues, masked action classification, masked object classification and cross-modal matching. The pre-trained ActBERT\xspace is transferred to five video-related downstream tasks, \ie, video captioning, action segmentation, text-video clip retrieval, action step localization, and video question answering. We quantitatively show that ActBERT\xspace achieves state-of-the-art performance by a clear margin. \section{Related Work} \noindent\textbf{Video and language.} There are many existing video-and-language tasks to evaluate a model's capacity for joint video-text representation learning, \eg, video question answering \cite{tapaswi2016movieqa,jang2017tgif,lei2018tvqa,zhu2017uncovering}, video captioning~\cite{yao2015describing,zhou2018end}, text-video retrieval~\cite{yu2018joint,wang2019learning,miech2018learning}, and video grounding~\cite{zhou2019grounded}. 
In video and language modeling, it can be difficult to learn relations between ordered video frames and their corresponding descriptions, where temporal information and the spatio-temporal interactions between multiple objects need to be incorporated. The dominant approach for multi-modal modeling is to leverage Recurrent Neural Networks (RNNs) and their variants, \eg, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to model sequence relations, \eg, \cite{pan2016jointly,zhu2017bidirectional}. Zhou~\etal~\cite{zhou2018end} leveraged masked transformers in both the encoder and the decoder for dense video captioning. Most of these works are conducted on well-annotated datasets where the descriptions are manually generated, requiring considerable human effort. There are other works that learn video representations from limited annotated data \cite{zhu2018compound}. Video data is a natural source for learning cross-modal representations. The text descriptions are automatically generated by off-the-shelf automatic speech recognition (ASR) models, which is more scalable and better suited to deployment in real-world applications. In this paper, we focus on learning a joint video-text representation in a self-supervised way. \noindent\textbf{Cross-modal pre-training.} In the past year, many works extended BERT to model cross-modal data~\cite{lu2019vilbert,su2019vl,tan2019lxmert,chen2019uniter,li2019unicoder,sun2019videobert}. The recent BERT model for video-text modeling~\cite{sun2019videobert} introduces visual words for video frame encoding, where local regional information is largely ignored. The synchronized video-audio signal is also a good test-bed for cross-modal representation learning~\cite{arandjelovic2018objects,korbar2018cooperative}. However, these works leveraged low-level audio signals and only considered the synchronization nature of video data. In this work, we focus on video-text joint representation learning. Our ActBERT\xspace leverages multi-source information and achieves remarkable performance on many downstream video-text tasks. \noindent\textbf{Instructional videos.} Learning from instructional videos is challenging due to the data complexity across various tasks~\cite{damen2018scaling,alayrac2016unsupervised,zhou2018towards,miech2019howto100m}. These videos are collected from many domains, \eg, cooking, sports, gardening. Many works also regard the transcriptions generated from instructional videos as a source of supervision~\cite{alayrac2016unsupervised,zhou2018towards,miech2019howto100m}. In contrast, we employ ActBERT\xspace to explicitly model human actions and local regions in a unified framework. We improve on \cite{miech2019howto100m} with more specific relation modeling between videos and their descriptions. We quantitatively demonstrate that ActBERT\xspace is more suitable for unsupervised video-text modeling. \section{Model Architecture} \subsection{Preliminary} \label{sec:preliminary} We first review the original BERT~\cite{devlin2018bert} model. BERT~\cite{devlin2018bert} pre-trains a language model on large corpora in an unsupervised way. The pre-trained model is found to be flexible and beneficial to a variety of downstream tasks, \eg, question answering~\cite{alberti2019bert}. In BERT \cite{devlin2018bert}, the input entities are processed by a multi-layer bidirectional transformer~\cite{vaswani2017attention}. The embeddings of each input are processed with stacked self-attention layers to aggregate contextual features. 
The attention weights are adaptively generated, and the output features contain contextual information about the original input sequence. In self-attention, the generated features are independent of the input sequence order, which makes the output representation permutation-invariant: the output is unchanged when the input sequence is shuffled. A position embedding is therefore commonly applied to each input entity to incorporate sequential order clues. In the original BERT, Devlin \etal introduced two tasks for pre-training. In the task of masked language modeling (MLM), a portion of the input words are randomly masked out. These masked-out words are replaced by a special token ``[MASK]''. The task is to predict the masked words based on observations of the contextual contents, \ie, the unmasked elements, which provide relevant cues for the prediction of the masked word. The other task, \ie, Next Sentence Prediction (NSP), models order information between two sentences. Two sentences are sampled from a document, and NSP aims to identify whether the second sentence follows the first sentence in the correct order. The two sentences are concatenated via a token ``[SEP]'', so that the model is aware that the inputs are separate sentences. The prediction is made upon the output features of the first token ``[CLS]''. This is a binary classification problem, solved with a simple sigmoid classifier; a prediction of ``$1$'' indicates that the sentences are consecutive, \ie, the second sentence directly follows the first. \subsection{ActBERT\xspace} \subsubsection{Input Embeddings} There are four types of input elements in ActBERT\xspace: actions, image regions, linguistic descriptions and special tokens. Special tokens are used to distinguish different inputs. Each input sequence starts with a special token ``[CLS]'' and ends with another token ``[SEP]''. We put the linguistic descriptions after ``[CLS]''; the action inputs come next, followed by the local regional features. We denote the action features as $a_1, \ldots, a_L$, the frame region features as $r_1, \ldots, r_M$, and the sequential text descriptions as $w_1, \ldots, w_N$. The whole sequence is denoted as $\{\text{[CLS]}, w_1, \ldots, w_N, \text{[SEP]}, a_1, \ldots, a_L, \text{[SEP]},$ $r_1, \ldots, r_M, \text{[SEP]}\}$. ``[SEP]'' is also inserted between different sentences. We additionally insert ``[SEP]'' between regions that come from different clips, which helps the model identify the clip boundaries. For each input step, the final embedding feature is the sum of four different embeddings, \ie, the position embedding, segment embedding, token embedding, and visual feature embedding. We add a few new tokens to distinguish action features and regional object features, and the visual embedding is introduced to carry visual and action information. These embeddings are summed to form the final input feature of ActBERT\xspace. We explain them in detail as follows. \noindent\textbf{Position embedding.} Following~\cite{devlin2018bert}, we incorporate a learnable position embedding for every input in the sequence. Since self-attention does not consider order information, position encoding offers a flexible way to embed a sequence when the sequence order matters. For actions in different clips, the position embeddings are different, as the video clips are ordered. For regions extracted from the same frame, we use the same position embedding; to distinguish such regions, we add a spatial position embedding for different spatial positions, as described under ``Visual (action) embedding'' below. 
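For concreteness, the composition of the four embeddings can be sketched in PyTorch as follows; the module name, the default sizes, and the final layer normalization are illustrative assumptions rather than exact implementation details.
\begin{verbatim}
import torch.nn as nn

class ActBERTInputEmbedding(nn.Module):
    """Sketch: final feature = token + position + segment + visual embedding."""
    def __init__(self, vocab_size=30000, max_len=512, n_segments=10, dim=768):
        super().__init__()
        self.token = nn.Embedding(vocab_size, dim)    # words, [ACT], [REGION], ...
        self.position = nn.Embedding(max_len, dim)    # learnable position embedding
        self.segment = nn.Embedding(n_segments, dim)  # one id per video clip
        self.norm = nn.LayerNorm(dim)

    def forward(self, token_ids, position_ids, segment_ids, visual_feats):
        # visual_feats holds the projected action/region features at visual
        # positions and zeros at pure-text positions.
        out = (self.token(token_ids) + self.position(position_ids)
               + self.segment(segment_ids) + visual_feats)
        return self.norm(out)
\end{verbatim}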
\noindent\textbf{Segment embedding.} We consider multiple video clips for long-term video context modeling. Each video clip or video segment has a corresponding segment embedding. The elements within the same video clip, \ie, action inputs, regional object inputs, and linguistic descriptions, share the same segment embedding. \noindent\textbf{Token embedding.} Each word is embedded with WordPiece embeddings~\cite{wu2016google} using a 30,000-token vocabulary. In addition to the special tokens mentioned above (``[CLS]'', ``[MASK]'', ``[SEP]''), we introduce ``[ACT]'' and ``[REGION]'' to represent the action features and the region features extracted from video frames, respectively. Note that all action inputs share an identical token embedding, which reveals the modality of the inputs. \noindent\textbf{Visual (action) embedding.} We now explain the visual (action) embedding in detail. We first illustrate the procedure to obtain the action embedding. For each video clip, we extract verbs from its corresponding descriptions. For simplicity, we remove clips that do not contain any verbs. We then build a vocabulary from all the extracted verbs. After verb vocabulary construction, each video clip has one or multiple category labels. We train a 3D convolutional neural network on this constructed dataset. The inputs to the 3D network are tensors with an additional temporal dimension. We place a softmax classifier on top of the convolutional neural network. For clips with multiple labels, we normalize the one-hot labels with the $\ell_1$-norm, so that the scores over all labels sum to $1$. After the model is trained, we extract the features after global average pooling as the \textbf{action features}. These features represent the actions occurring in the video clip. To obtain regional object features, we extract bounding boxes and the corresponding visual features from a pre-trained object detection network. Similar to Lu~\etal~\cite{lu2019vilbert}, we utilize a pre-trained Faster R-CNN network \cite{ren2015faster} to extract the categorical distribution under the COCO vocabulary \cite{lin2014microsoft}. The image region features offer detailed visual information for visual and text relation modeling. For each region, the visual feature embedding is the feature vector before the output layer of the pre-trained network. Following~\cite{lu2019vilbert}, we incorporate spatial position embeddings to represent region locations with a 5-D vector. This vector consists of the four box coordinates and the fraction of the frame area covered by the region. Specifically, we denote the vector as $(\frac{x_1}{W}, \frac{y_1}{H}, \frac{x_2}{W}, \frac{y_2}{H}, \frac{(x_2-x_1)(y_2-y_1)}{WH})$, where $W$ is the frame width, $H$ is the frame height, and $(x_1, y_1)$ and $(x_2, y_2)$ are the top-left and bottom-right coordinates, respectively. This vector is then embedded to match the dimension of the visual feature. The final regional object feature is the summation of the spatial position embedding and the object detection feature. 
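The 5-D vector and its combination with the detection feature can be computed directly from the box coordinates; in the following sketch, the linear projection and the feature dimension are assumptions for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

def spatial_position_vector(box, frame_w, frame_h):
    """5-D descriptor: normalized corner coordinates plus area fraction."""
    x1, y1, x2, y2 = box
    return torch.tensor([x1 / frame_w, y1 / frame_h,
                         x2 / frame_w, y2 / frame_h,
                         (x2 - x1) * (y2 - y1) / (frame_w * frame_h)])

embed_spatial = nn.Linear(5, 1024)  # hypothetical projection to the visual dim

def region_feature(det_feature, box, frame_w, frame_h):
    # det_feature: RoI-pooled Faster R-CNN feature of matching dimension.
    spatial = embed_spatial(spatial_position_vector(box, frame_w, frame_h))
    return det_feature + spatial  # summation, as described above
\end{verbatim}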
\subsubsection{Tangled Transformer} We design a TaNgled Transformer (TNT) to better encode the three sources of information, \ie, action features, regional object features and linguistic features. Instead of using only one transformer that treats the visual and text features equally, our tangled transformer consists of three transformers, which take the three sources of features, respectively. To enhance the interactions between visual and linguistic features, we propose to inject visual information into the linguistic transformer and incorporate linguistic information into the visual transformers. With these cross-modal interactions, the tangled transformer can dynamically select judicious cues for target prediction. We denote the intermediate representations at transformer block $l$ as $h^l=\{(h_{w_0}^l, $ $\ldots, h_{w_N}^l), (h_{a_0}^l, \ldots, h_{a_L}^l), (h_{r_0}^l, \ldots, h_{r_M}^l)\}$. For simplicity, we denote $h_{w}^l=\{h_{w_0}^l, \ldots, h_{w_N}^l\}$, $h_{a}^l=\{h_{a_0}^l, \ldots, h_{a_L}^l\}$, and $h_{r}^l=\{h_{r_0}^l, \ldots, h_{r_M}^l\}$, which are processed by the $w$-transformer, $a$-transformer, and $r$-transformer, respectively (Figure~\ref{fig:our_transformer}). Besides the standard multi-head attention encoding features from the same modality, we leverage two additional multi-head attention blocks to enhance the mutual interactions between the transformer blocks. Specifically, we utilize $h_a^l$ to catalyze the mutual interactions. We denote multi-head attention as $output=Multihead(Q, K, V)$, where $Q$ is the query, $K$ is the key, and $V$ is the value. The details of multi-head attention can be found in \cite{vaswani2017attention}. We use $h_a^l$ as a query to attend to judicious cues from $h_w^l$ and $h_r^l$: \begin{align} c_w &= Multihead(W_{q}^1 h_a^l, W_{k}^w h_w^l, W_{v}^w h_w^l), \\ c_r &= Multihead(W_{q}^2 h_a^l, W_{k}^r h_r^l, W_{v}^r h_r^l), \end{align} where $W^{*}_{*}$ are learnable weights. $c_w$ is the blended feature from the linguistic representations, while $c_r$ is the guided feature from the regional object representations. We then generate a new key-value pair from $c_w$ using a linear layer. This generated key-value pair is stacked with the key-value pairs of the original $a$-transformer and $r$-transformer. Similarly, we generate a new key-value pair from $c_r$, which is stacked with the key-value pairs of the $w$-transformer. With this form of tangled transformer, visual and linguistic features are further associated. Note that our tangled transformer differs from the co-attentional transformer block in~\cite{lu2019vilbert} in several ways. First, the co-attentional transformer block simply passes the keys and values from one modality to the other modality's attention block, without further pre-processing. Second, \cite{lu2019vilbert} treats the two modalities equally, while our tangled block utilizes a global cue to guide the selection of local hints from the linguistic and visual features. Third, in \cite{lu2019vilbert} the keys and values from different modalities replace the original key-values, while our tangled transformer stacks the new key-value pairs with the original ones. In this way, both the linguistic and visual features are incorporated during transformer encoding. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{./transformer.pdf} \caption{\textbf{Our tangled transformer} takes three sources of information as inputs, which enhances the interactions between linguistic features and visual features.} \label{fig:our_transformer} \end{figure} 
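The action-guided attention above and the key-value stacking can be sketched as follows; the hidden size, the number of heads, the use of a standard multi-head attention module, and the single linear layer producing the new key-value pairs are assumptions of this sketch, not a definitive implementation.
\begin{verbatim}
import torch
import torch.nn as nn

dim, heads = 768, 12
attn_w = nn.MultiheadAttention(dim, heads, batch_first=True)  # h_a queries h_w
attn_r = nn.MultiheadAttention(dim, heads, batch_first=True)  # h_a queries h_r
to_kv_from_cw = nn.Linear(dim, 2 * dim)  # new key-value pair from c_w
to_kv_from_cr = nn.Linear(dim, 2 * dim)  # new key-value pair from c_r

def tangle(h_w, h_a, h_r):
    """One tangling step in which action features guide the injection."""
    c_w, _ = attn_w(h_a, h_w, h_w)  # blended linguistic feature
    c_r, _ = attn_r(h_a, h_r, h_r)  # guided regional feature
    k_vis, v_vis = to_kv_from_cw(c_w).chunk(2, dim=-1)  # for a-/r-transformers
    k_txt, v_txt = to_kv_from_cr(c_r).chunk(2, dim=-1)  # for the w-transformer
    # The new pairs are stacked with (not replacing) the original key-value
    # sequences, e.g. for the w-transformer whose keys/values come from h_w:
    keys_w = torch.cat([h_w, k_txt], dim=1)
    values_w = torch.cat([h_w, v_txt], dim=1)
    return (keys_w, values_w), (k_vis, v_vis)
\end{verbatim}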
\subsubsection{ActBERT\xspace Training} \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{./framework-backup-re.pdf} \caption{\textbf{Our ActBERT\xspace framework}. We incorporate three sources of information during pre-training, \ie, global actions, local regional objects, and text descriptions. The yellow grid indicates that the action or the regional object is masked out. } \label{fig:framework} \end{figure*} We introduce four tasks for ActBERT\xspace pre-training. Our framework is presented in Figure \ref{fig:framework}. We naturally extend masked language modeling to our cross-modal setting. There are existing extensions for image and language pre-training~\cite{lu2019vilbert,sun2019videobert}, and for video and language pre-training~\cite{sun2019videobert}. Compared to \cite{sun2019videobert}, we explicitly model actions and regional information in a unified framework. \noindent\textbf{Masked Language Modeling with Global and Local Visual Cues.} We extend the Masked Language Modeling (MLM) task in BERT to our setting. We leverage visual cues from local regional objects and global actions to uncover the relationships between visual and linguistic entities. As described in Section~\ref{sec:preliminary}, each word in the input sentence is randomly masked with a fixed probability. The task forces the model to learn from the contextual descriptions and, at the same time, to extract relevant visual features to facilitate prediction. When a verb is masked out, the model should exploit the action features for a more accurate prediction. When the description of an object is masked out, the local regional features can provide more contextual information. Thus, a strong model needs to align visual and linguistic inputs both locally and globally. The output feature is fed to a softmax classifier over the whole linguistic vocabulary. \noindent\textbf{Masked Action Classification.} Similarly, in Masked Action Classification, the action features are masked out. The task is to predict the masked action label based on the linguistic features and object features. Explicit action prediction can be beneficial from two perspectives. First, sequential action cues can be exploited over the long term. For example, for a video with the action sequence ``get into'', ``rotate'', ``add'', this task can better exploit the temporal order information of the instructional task. Second, the regional objects and linguistic texts are leveraged for better cross-modality modeling. Note that in Masked Action Classification, the goal is to predict the categorical label of the masked-out action feature. This task can enhance the action recognition capability of the pre-trained model, which generalizes to many downstream tasks, \eg, video question answering. \noindent\textbf{Masked Object Classification.} In Masked Object Classification, the regional object features are randomly masked out. We follow~\cite{lu2019vilbert} to predict a distribution over a fixed vocabulary for the masked-out image region. The target distribution of the masked-out region is the softmax activation obtained by forwarding the region through the same pre-trained detection model used in the feature extraction stage. The KL divergence between the two distributions is minimized. \noindent\textbf{Cross-modal matching.} Similar to the Next Sentence Prediction (NSP) task, we apply a linear layer on top of the output of the first token ``[CLS]''. It is followed by a sigmoid classifier, which produces the relevance score of the linguistic sentences and the visual features. If the score is high, the text describes the video clips well. The model is optimized via a binary cross-entropy loss. To train this cross-modal matching task, we sample negative video-text pairs from the unlabeled dataset, following \cite{miech2019howto100m} for sampling positive and negative pairs. 
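Two of these objectives can be written down almost directly; the following sketch assumes a feature dimension of 768 and hypothetical tensor names.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

match_head = nn.Linear(768, 1)  # linear layer on the "[CLS]" output feature

def masked_object_loss(pred_logits, detector_probs):
    """KL divergence between the predicted distribution over the detector
    vocabulary and the soft target from the pre-trained detector."""
    return F.kl_div(F.log_softmax(pred_logits, dim=-1),
                    detector_probs, reduction="batchmean")

def cross_modal_matching_loss(cls_feature, is_matched):
    """Sigmoid relevance classifier trained with binary cross-entropy."""
    logit = match_head(cls_feature).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logit, is_matched.float())
\end{verbatim}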
\section{Experiments} \label{sec:expr} In this section, we evaluate ActBERT\xspace on multiple downstream video-and-language tasks. We quantitatively evaluate the generalization capability of ActBERT\xspace on five challenging tasks, \ie, text-video clip retrieval, video captioning, video question answering, action segmentation, and action step localization. \subsection{ActBERT\xspace implementation details} \paragraph{HowTo100M.} We pre-train ActBERT\xspace on the HowTo100M dataset~\cite{miech2019howto100m}. The HowTo100M dataset is constructed by querying the YouTube API, keeping the top 200 search results. The dataset covers a total of 23,611 tasks, \eg, maintenance and repair, animal rescue, food preparation. It is biased towards actions, with verbs like ``go'', ``make'' and ``come'' being the most frequent. The nouns also follow a long-tailed distribution, with objects like ``water'' and ``cup'' ranked at the top. Each video has a corresponding narration extracted from the video subtitles. As the association between video clips and texts is not manually annotated, the video-text connection can sometimes be weak; there are cases of noisy correspondence, where the actors talk about unrelated things. Though the data is noisy, we find that pre-training on HowTo100M still significantly improves the performance of downstream tasks. \noindent\textbf{Pre-training details.} To construct video-text inputs for ActBERT\xspace pre-training, we sample video clips from the HowTo100M dataset. Instead of using only one clip for video-text joint training, we leverage multiple adjacent clips to cover a longer context. This enables ActBERT\xspace to model relations across different segments. We sample 10 adjacent video clips, and the temporally-aligned linguistic tokens are extracted to form a video-text pair. To obtain the local regional features, we use Faster R-CNN pre-trained on the Visual Genome~\cite{krishna2017visual} dataset following \cite{lu2019vilbert}. The backbone is ResNet-101~\cite{he2016deep}. We extract the regional features at a frame rate of 1 FPS. Each region feature is RoI-pooled from the convolutional feature map over that region. We set the detection confidence threshold to 0.4, and each frame contains at most five boxes. Transformer and co-attentional transformer blocks in the visual stream have a hidden state size of 1024 and 8 attention heads. To obtain the action features, we first construct an action classification dataset. We sample frames at 8 FPS. For each clip, we extract the verbs from its text descriptions. Then, we train a ResNet-3D~\cite{tran2018closer} network with a softmax classification loss. We initialize the weights of the ResNet-3D model from a model pre-trained on Kinetics~\cite{kay2017kinetics}. The Kinetics dataset covers 400 actions from YouTube videos, and the 3D convolutional network converges faster when it is pre-trained on Kinetics. The input clip length to ResNet-3D is 32 frames, covering a 4-second video duration. The spatial shape of the input frames is 224$\times$224. The initial learning rate is set to 0.001 and the batch size is 16. We decay the learning rate by 0.1 at iteration 100,000, and the total number of training iterations is 1,000,000. We keep other training settings unchanged following~\cite{tran2018closer}. During feature extraction, we sample the central clip, and each frame is centrally cropped. We use the feature after global average pooling as the clip representation. 
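As an illustration of the verb-based label construction, the sketch below uses spaCy part-of-speech tags and produces the $\ell_1$-normalized multi-label targets described above; the choice of spaCy is an assumption, as no specific tagger is prescribed here.
\begin{verbatim}
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed POS tagger; any tagger would do

def clip_action_target(caption, verb_vocab):
    """Multi-label verb target, l1-normalized so the scores sum to 1."""
    verbs = {tok.lemma_ for tok in nlp(caption) if tok.pos_ == "VERB"}
    target = np.zeros(len(verb_vocab))
    for v in verbs:
        if v in verb_vocab:            # verb_vocab maps lemma -> index
            target[verb_vocab[v]] = 1.0
    total = target.sum()
    return target / total if total > 0 else target  # verb-less clips removed
\end{verbatim}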
During ActBERT\xspace pre-training, 15\% of the input features are randomly masked out. ActBERT\xspace has 12 layers of transformer blocks, each with a hidden unit size of 768. We initialize the linguistic transformer with the BERT model pre-trained on the BookCorpus \cite{zhu2015aligning} and English Wikipedia. The other two transformers are randomly initialized. The network is optimized with the Adam optimizer and a learning rate of $10^{-5}$. We train the model for five epochs, given the large scale of the data. \subsection{Results on video-and-text tasks} We evaluate ActBERT\xspace on five downstream tasks, \ie, action step localization, action segmentation, text-video clip retrieval, video captioning, and video question answering. We evaluate the five tasks on CrossTask~\cite{zhukov2019cross}, COIN~\cite{tang2019coin}, YouCook2~\cite{zhou2018towards}, MSR-VTT~\cite{xu2016msr} and LSMDC \cite{lsmdc}. Videos from the test sets of these datasets are removed during pre-training on HowTo100M. \subsubsection{Datasets} \paragraph{CrossTask:} We evaluate action step localization on the CrossTask~\cite{zhukov2019cross} dataset, which contains 83 tasks and 4.7k videos related to cooking, car maintenance, crafting, etc. We use the recall metric described in~\cite{zhukov2019cross}, defined as the number of step assignments that fall into the corresponding ground-truth interval, divided by the total number of steps in the video (a sketch of this metric is given after the dataset descriptions). \textbf{COIN:} We evaluate the action segmentation task on the recent COIN~\cite{tang2019coin} dataset. COIN~\cite{tang2019coin} contains 180 tasks, 11,827 videos and 46,354 annotated segments. The videos are collected from YouTube. \textbf{YouCook2:} We evaluate text-video clip retrieval and video captioning on YouCook2, a cooking video dataset collected from YouTube covering a large variety of cooking styles, methods, ingredients and cookware~\cite{zhou2018towards}. YouCook2 contains 89 types of recipes and a total of 14k clips described with linguistic texts. Following~\cite{miech2019howto100m}, we evaluate the text-video clip retrieval task on the validation clips of YouCook2. \textbf{MSR-VTT:} We evaluate text-video clip retrieval and video question answering on MSR-VTT. The MSR-VTT dataset \cite{xu2016msr} is a general video dataset collected from YouTube with text descriptions. For the video question answering task, we evaluate multiple-choice VideoQA following~\cite{yu2018joint}. There are 2,990 questions in total for testing. Each test video is associated with a ground-truth caption, a correct answer, and four mismatched descriptions. For text-video clip retrieval, following~\cite{yu2018joint}, we use 1,000 text-video pairs for evaluation. \textbf{LSMDC}: We evaluate fill-in-the-blank video question answering on LSMDC \cite{lsmdc}. The task is to predict a single answer given a video clip and a sentence with a blank in it. The LSMDC fill-in-the-blank task provides 296,960 training question-answer pairs, 21,689 validation pairs, and 30,349 testing pairs on the public test set, on which the accuracy is reported. 
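The CrossTask recall metric referenced above can be sketched as follows, assuming one predicted time-stamp per step; the variable names are ours.
\begin{verbatim}
def crosstask_recall(step_times, gt_intervals):
    """Fraction of predicted step time-stamps that fall inside the
    ground-truth interval of the corresponding step."""
    hits = sum(lo <= t <= hi
               for t, (lo, hi) in zip(step_times, gt_intervals))
    return hits / len(gt_intervals)
\end{verbatim}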
\subsubsection{Video captioning} We compare ActBERT\xspace to VideoBERT \cite{sun2019videobert} on the video captioning task. We take the pre-trained action transformer as the video encoder. We follow the setup of \cite{zhou2018end}, which takes video clips from YouCook2 \cite{zhou2018towards} as input and uses a transformer decoder to decode videos into captions. We do not use the regional object transformer, for a fair comparison to \cite{sun2019videobert}. Similar to \cite{sun2019videobert}, we cross-validate the hyper-parameters on the training set. We report the standard captioning evaluation metrics, \ie, BLEU, METEOR, ROUGE-L and CIDEr, on the validation set. The model is optimized with the Adam optimizer for 40k iterations. We set the initial learning rate to $1.0\times10^{-3}$ and the batch size to 128. The results are shown in Table~\ref{tab:youcook_captioning}. We outperform VideoBERT \cite{sun2019videobert} across all metrics, achieving a 1.36 improvement on METEOR. This demonstrates that our pre-trained transformer learns a better video representation and indicates the effectiveness of ActBERT\xspace in modeling video sequences by considering both global and local video cues. \begin{table} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{ lccccc } \toprule Method & BLEU-3 & BLEU-4 & METEOR & ROUGE-L & CIDEr \\ \midrule Zhou \etal~\cite{zhou2018end} & 7.53 & 3.84 & 11.55 & 27.44 & 0.38 \\ S3D~\cite{xie2018rethinking} & 6.12 & 3.24 & 9.52 & 26.09 & 0.31 \\ VideoBERT \cite{sun2019videobert} & 6.80 & 4.04 & 11.01 & 27.50 & 0.49\\ VideoBERT + S3D \cite{sun2019videobert}& {7.59} & {4.33} & {11.94} & {28.80} & {0.55}\\ \midrule ActBERT\xspace & \textbf{8.66} & \textbf{5.41} & \textbf{13.30} & \textbf{30.56} & \textbf{0.65} \\ \bottomrule \end{tabular} } \end{center} \caption{\textbf{Video captioning} results on YouCook2. We outperform VideoBERT \cite{sun2019videobert} across all the metrics. 
} \label{tab:youcook_captioning} \end{table} \begin{table}[tb] \small \centering \begin{tabular}{l c } \toprule Method & Frame Accuracy (\%) \\ \midrule NN-Viterbi~\cite{richard2018neuralnetwork} & 21.17\\ VGG~\cite{simonyan2014very} & 25.79 \\ TCFPN-ISBA~\cite{ding2018weakly} & 34.30\\ \midrule ActBERT\xspace w/o region cues & 52.10 \\ ActBERT\xspace & \textbf{56.95} \\ \bottomrule \end{tabular} \caption{\textbf{Action segmentation} results on COIN.} \label{tab:action_segmentation} \end{table} \begin{table*}[t] \resizebox{\textwidth}{!}{ \begin{tabular}{lc@{~~~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}|c} \toprule & \rotatebox{90}{\small Make} \rotatebox{90}{\small Kimchi Rice} & \rotatebox{90}{\small Pickle} \rotatebox{90}{\small Cucumber} & \rotatebox{90}{\small Make Banana} \rotatebox{90}{\small Ice Cream} & \rotatebox{90}{\small Grill} \rotatebox{90}{\small Steak} & \rotatebox{90}{\small Jack Up } \rotatebox{90}{\small Car} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Jello Shots} & \rotatebox{90}{\small Change } \rotatebox{90}{\small Tire} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Lemonade} & \rotatebox{90}{\small Add Oil } \rotatebox{90}{\small to Car} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Latte} & \rotatebox{90}{\small Build } \rotatebox{90}{\small Shelves} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Taco Salad} & \rotatebox{90}{\small Make } \rotatebox{90}{\small French Toast} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Irish Coffee} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Strawberry Cake} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Pancakes} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Meringue} & \rotatebox{90}{\small Make } \rotatebox{90}{\small Fish Curry} & \rotatebox{90}{\small Average } \\ \midrule Alayrac \etal \cite{alayrac2016unsupervised} & 15.6 & 10.6 & 7.5 & 14.2 & 9.3 & 11.8 & 17.3 & 13.1 & 6.4 & 12.9 & 27.2 & 9.2 & 15.7 & 8.6 & 16.3 & 13.0 & 23.2 & 7.4 & 13.3 \\ Zhukov \etal \cite{zhukov2019cross} & 13.3 & 18.0 & 23.4 & 23.1 & 16.9 & 16.5 & 30.7 & 21.6 & 4.6 & 19.5 & 35.3 & 10.0 & 32.3 & 13.8 & 29.5 & 37.6 & {43.0} & 13.3 & 22.4 \\ Supervised \cite{zhukov2019cross} & 19.1 & 25.3 & 38.0 & 37.5 & 25.7 & 28.2 & \textbf{54.3} & 25.8 & 18.3 & 31.2 & 47.7 & 12.0 & 39.5 & 23.4 & 30.9 & 41.1 & \textbf{53.4} & 17.3 & 31.6 \\ TVJE~\cite{miech2019howto100m} & {33.5} & {27.1} & {36.6} & {37.9} & {24.1} & {35.6} & {32.7} & {35.1} & {30.7} & {28.5} & {43.2} & {19.8} & {34.7} & {33.6} & {40.4} & {41.6} & 41.9 & {27.4} & {33.6} \\ \midrule ActBERT\xspace w/o region cues & {37.4} & {29.5} & {39.0} & {42.2} & {29.8} & {37.5} & {35.5} & {37.8} & {33.2} & {32.8} & {48.4} & {25.2} & {37.4} & {35.6} & {42.4} & {47.0} & {46.1} & {30.4} & {37.1} \\ ActBERT\xspace & \textbf{41.8} & \textbf{33.6} & \textbf{42.7} & \textbf{46.8} & \textbf{33.4} & \textbf{43.0} & \textbf{40.8} & \textbf{41.8} & \textbf{38.3} & \textbf{37.4} & \textbf{52.5} & \textbf{30.1} & \textbf{41.2} & \textbf{40.4} & \textbf{46.1} & \textbf{51.0} & \textbf{49.7} & \textbf{35.1} & \textbf{41.4} \\ \bottomrule \end{tabular} } \caption{\textbf{Action step localization} results on CrossTask~\cite{zhukov2019cross}.} \label{table:action_step_localization} \end{table*} \begin{table}[t] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lccccc} \toprule Method & Dataset & R@1 & R@5 & R@10 & Median R \\ \midrule HGLMM~\cite{klein2015associating} & YouCook2 & 4.6 & 14.3 & 21.6 & 75 \\ 
TVJE~\cite{miech2019howto100m} & YouCook2 & 4.2 & 13.7 & 21.5 & 65 \\ TVJE +FT~\cite{miech2019howto100m} & YouCook2 & {8.2} & {24.5} & {35.3} & {24} \\ \midrule ActBERT\xspace & YouCook2 & {9.6} & {26.7} & {38.0} & {19} \\ \midrule \midrule C+LSTM+SA~\cite{torabi2016learning} & MSR-VTT & 4.2 & 12.9 & 19.9 & 55 \\ VSE-LSTM~\cite{kiros2014unifying} & MSR-VTT & 3.8 & 12.7 & 17.1 & 66 \\ SNUVL~\cite{yu2016video} & MSR-VTT & 3.5 & 15.9 & 23.8 & 44\\ Kaufman \etal~\cite{kaufman2017temporal} & MSR-VTT & 4.7 & 16.6 & 24.1 & 41\\ CT-SAN~\cite{yu2017end} & MSR-VTT & 4.4 & 16.6 & 22.3 & 35\\ JSFusion~\cite{yu2018joint} & MSR-VTT & 10.2 & 31.2 & 43.2 & 13 \\ TVJE \cite{miech2019howto100m} & MSR-VTT & 7.5 & 21.2 & 29.6 & 38 \\ TVJE +FT \cite{miech2019howto100m} & MSR-VTT & {14.9} & {40.2} & {52.8} & {9} \\ \midrule ActBERT\xspace & MSR-VTT & 8.6 & 23.4 & 33.1 & 36 \\ ActBERT\xspace +FT & MSR-VTT & 16.3 & 42.8 & 56.9 & 10 \\ \bottomrule \end{tabular} } \caption{\textbf{Text-video clip retrieval} results on YouCook2 and MSR-VTT. ``FT'' denotes fine-tuning on the training set.} \label{tab:retrieval_youcook} \end{table} \subsubsection{Action segmentation} The action segmentation task on COIN is to assign an action label to each frame of a video. To apply ActBERT\xspace to action segmentation, we fine-tune ActBERT\xspace by adding a linear classifier upon the output features for dense frame labeling. We do not feed the text descriptions during the fine-tuning process. The results are shown in Table~\ref{tab:action_segmentation}. The baseline results are taken from~\cite{tang2019coin}. Notably, ActBERT\xspace significantly outperforms the baselines, with more than 20\% improvement. It shows that the pre-trained ActBERT\xspace can deal with visual-only inputs when linguistic descriptions are absent. When we remove the regional information, we observe a performance drop compared to our full model, showing that detailed local cues are important for the dense frame labeling task. \subsubsection{Action step localization} \label{sec:action_step_localization} We evaluate action step localization on CrossTask. To fairly compare to \cite{miech2019howto100m}, we do not fine-tune on the target dataset. We regard the step action label as the text description and directly feed the text-video pair to ActBERT\xspace. We regard the prediction for the first token ``[CLS]'' as the relevance score of the clip belonging to that label, and choose the action with the maximum relevance score as the final prediction. The results are shown in Table~\ref{table:action_step_localization}. ActBERT\xspace significantly outperforms TVJE~\cite{miech2019howto100m} by a large margin, \ie, an average improvement of 7\%. We even surpass the supervised baseline. We remove the region cues for a fair comparison to~\cite{miech2019howto100m}, as \cite{miech2019howto100m} does not use object detection features for video and text matching. The results of ``ActBERT\xspace w/o region cues'' also substantially outperform \cite{miech2019howto100m}, demonstrating the effectiveness of ActBERT\xspace pre-training. Our full ActBERT\xspace model further improves the performance by 4\%. This validates that regional information is an important source of detailed local object features for text-and-video matching. 
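This zero-shot scoring procedure can be sketched as follows, where the relevance function is a hypothetical wrapper returning the ``[CLS]'' matching score of a text-video pair.
\begin{verbatim}
def localize_step(clip, step_labels, relevance):
    """Assign the clip to the action step whose description matches best."""
    scores = [relevance(text=label, video=clip) for label in step_labels]
    return max(range(len(scores)), key=scores.__getitem__)
\end{verbatim}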
\subsubsection{Text-video clip retrieval} We evaluate ActBERT\xspace on the task of video clip retrieval with natural language queries. Given a linguistic query, the goal is to rank the video clips from a gallery video set. We use the following metrics for evaluation~\cite{miech2019howto100m}, \ie, Recall@1 (R@1), Recall@5 (R@5), Recall@10 (R@10) and the median rank (Median R). We evaluate ActBERT\xspace on YouCook2 and MSR-VTT. We follow \cite{miech2019howto100m} to conduct the YouCook2 evaluation. The results are shown in Table~\ref{tab:retrieval_youcook}. ActBERT\xspace significantly outperforms TVJE \cite{miech2019howto100m}, which trains a ranking loss on the HowTo100M dataset, and the other baselines, showing that ActBERT\xspace is a better pre-training framework for video-text joint representation learning. Notably, our pre-trained model achieves better retrieval performance than the fine-tuned TVJE model (``TVJE +FT'') on YouCook2, which shows the superiority of ActBERT\xspace in self-supervised video-text representation learning. On MSR-VTT, ActBERT\xspace outperforms TVJE by 1.1\% on R@1 when no labeled data is accessed. Note that JSFusion \cite{yu2018joint} is a supervised method that leverages labeled video and text pairs for training. \begin{table}[tb] \centering \small \begin{tabular}{lc} \toprule Method & {\footnotesize Accuracy} \\ \midrule LSTM-fusion \cite{yu2018joint} & 38.3 \\ C+LSTM+SA-FC7 \cite{torabi2016learning} & 60.2 \\ VSE-LSTM \cite{kiros2014unifying} & 67.3 \\ SNUVL \cite{yu2016video} & 65.4 \\ EITanque \cite{kaufman2017temporal} & 65.5 \\ CT-SAN \cite{yu2017end} & 66.4 \\ MLB \cite{kim2016hadamard} & 76.1 \\ JSFusion \cite{yu2018joint} & 83.4 \\ \midrule ActBERT\xspace & \textbf{85.7} \\ \bottomrule \end{tabular} \caption{ \textbf{Video question answering (multiple-choice)} results on MSR-VTT. } \label{tbl:results_mc} \end{table} \begin{table} \centering \small \begin{tabular}{lc } \toprule Method & {\footnotesize Accuracy} \\ \midrule Text-only BLSTM \cite{maharaj2017dataset} & 32.0 \\ Text-only Human \cite{maharaj2017dataset} & 30.2 \\ GoogleNet-2D + C3D \cite{maharaj2017dataset} & 35.7 \\ Merging-LSTM \cite{mazaheri2016video} & 34.2 \\ SNUVL \cite{yu2016video} & 38.0 \\ CT-SAN \cite{yu2017end} & 41.9 \\ LR/RL LSTMs \cite{mazaheri2017video} & 40.9 \\ JSFusion \cite{yu2018joint} & {45.5} \\ \midrule ActBERT\xspace & \textbf{48.6} \\ \bottomrule \end{tabular} \caption{\textbf{Video question answering (fill-in-the-blank)} results on LSMDC. } \label{tbl:results_mcfib} \end{table} \subsubsection{Video question answering} We evaluate ActBERT\xspace on VideoQA tasks. For multiple-choice VideoQA, we fine-tune the pre-trained ActBERT\xspace on the MSR-VTT training set. The video-text pairs are fed to ActBERT\xspace, and a linear classifier is applied to the output feature. We train with the Adam optimizer and a small learning rate of 0.0001. At inference time, we feed each candidate answer with the video clip to ActBERT\xspace, and the final choice is the candidate with the maximum matching score. The results are shown in Table~\ref{tbl:results_mc}. We compare to many baselines on this task. Without sophisticated joint modeling, ActBERT\xspace significantly outperforms JSFusion \cite{yu2018joint} by 2.3\%, showing ActBERT\xspace's strong generalization from a large-scale dataset. We additionally evaluate on another VideoQA task on LSMDC, \ie, fill-in-the-blank VideoQA. We report the prediction accuracy on the public test set; the results are shown in Table \ref{tbl:results_mcfib}. They show that ActBERT\xspace learns generalizable features, achieving considerable gains even when the target video domain is movies. 
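The multiple-choice inference described above reduces to scoring each candidate answer against the clip; in this sketch, the scoring function stands for ActBERT\xspace together with the fine-tuned linear classifier and is a hypothetical name.
\begin{verbatim}
def answer_multiple_choice(clip, candidates, match_score):
    """Feed each candidate answer with the clip; keep the highest score."""
    scores = [match_score(text=c, video=clip) for c in candidates]
    return scores.index(max(scores))
\end{verbatim}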
\section{Conclusion} In this paper, we introduce ActBERT\xspace for joint video-text modeling in a self-supervised way. ActBERT\xspace directly models both global and local visual cues for fine-grained visual and linguistic relation learning, taking three sources of information as input, \ie, global actions, local regional objects, and linguistic descriptions. The novel tangled transformer further enhances the communication among the three sources. Quantitative results on five video-text benchmarks demonstrate the effectiveness of ActBERT\xspace. In the future, we will consider evaluating ActBERT\xspace on video action recognition and detection, and will improve ActBERT\xspace by designing more powerful modules for video and text modeling. \noindent\textbf{Acknowledgements.} This work is supported by ARC DP200100938. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:Introduction} The observed baryon asymmetry in the Universe requires the violation of the charge-conjugation parity ({\ensuremath{C\!P}}\xspace) symmetry~\cite{Sakharov:1967dj}. Within the Standard Model (SM), {\ensuremath{C\!P}}\xspace violation occurs due to an irreducible complex phase in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which describes the transitions between quarks~\cite{Cabibbo:1963yz,Kobayashi:1973fv}. However, the size of {\ensuremath{C\!P}}\xspace violation in the SM appears to be too small to explain the observed baryon asymmetry, suggesting that there may be additional sources of {\ensuremath{C\!P}}\xspace violation beyond the SM~\cite{Cohen:1993nk,Riotto:1999yt,Hou:2008xd}. Such sources can be associated with new heavy particles, giving further motivation for {\ensuremath{C\!P}}\xspace violation searches. Although {\ensuremath{C\!P}}\xspace violation in the $b$- and $s$-quark sectors has been established for some time~\cite{BaBar:2004gyj,Belle:2004nch,LHCb-PAPER-2013-018,LHCb-PAPER-2012-001}, its observation in the charm sector was only accomplished in 2019~\cite{LHCb-PAPER-2019-006}. The charm sector provides a unique opportunity to study {\ensuremath{C\!P}}\xspace violation in decays of up-type quarks. Charge-parity violation in charm decays occurs through the interference of tree- and loop-level diagrams in the Cabibbo-suppressed quark transitions $c\ensuremath{\rightarrow}\xspace\bar ddu$ and $c\ensuremath{\rightarrow}\xspace \bar ssu$. Standard Model calculations of {\ensuremath{C\!P}}\xspace violation in the charm sector are difficult due to the presence of low-energy strong-interaction effects; as a consequence, predictions vary by orders of magnitude~\cite{Grossman:2006jg,Golden:1989qx,Buccella:1994nf,Bianco:2003vb,Artuso:2008vf}. The study of {\ensuremath{C\!P}}\xspace asymmetries in several decays related by flavour symmetry can provide insight into the origin of the observed {\ensuremath{C\!P}}\xspace violation~\cite{Pirtskhalava:2011va,Cheng:2012wr,Feldmann:2012js,Li:2012cfa,Franco:2012ck,Brod:2012ud,Atwood:2012ac,Hiller:2012xm,Muller:2015rna}. Of particular interest are two-body decays of {\ensuremath{\D^+}}\xspace and {\ensuremath{\D^+_\squark}}\xspace mesons, such as \ensuremath{\Dsp \to \etazpr \pip}\xspace (Cabibbo-favoured) and \ensuremath{\Dp \to \etazpr \pip}\xspace (singly Cabibbo-suppressed).\footnote{Charge-conjugate decays are implied throughout this article, except when discussing asymmetries.} The {\ensuremath{C\!P}}\xspace asymmetries for these channels have been measured by the CLEO~\cite{Mendez:2009aa,Onyisi:2013bjt}, Belle~\cite{Won:2011ku,Belle:2021ygw} and LHCb~\cite{LHCb-PAPER-2016-041,LHCb-PAPER-2021-001} collaborations. No evidence for {\ensuremath{C\!P}}\xspace violation has been observed, within uncertainties of a few per mille. This article presents measurements of {\ensuremath{C\!P}}\xspace asymmetries for the modes \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace and \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace at the LHCb experiment, using proton-proton ($pp$) collisions recorded during the period 2015--2018 at a centre-of-mass energy of $\sqrt{s}=13\aunit{Te\kern -0.1em V}\xspace$, corresponding to an integrated luminosity of $6\ensuremath{\fb^{-1}}\xspace$. 
The \ensuremath{\ensuremath{\upeta}\xspace}\xspace and \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace mesons are both reconstructed in the final state $\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$. The two charged particles in the final state allow the reconstruction of the \ensuremath{\Peta^{(\prime)}}\xspace decay vertex. The {\ensuremath{C\!P}}\xspace asymmetry is defined as \begin{equation} {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) \equiv \frac{\Gamma({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) - \Gamma({\ensuremath{\D^-_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^-)}{\Gamma({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) + \Gamma({\ensuremath{\D^-_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^-)}, \end{equation} where $f$ is the considered final state and $\Gamma$ is the partial decay width. Experimentally, the raw asymmetry {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace is measured using the event yields $N$ as \begin{equation} {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) \equiv \frac{N({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) - N({\ensuremath{\D^-_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^-)}{N({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) + N({\ensuremath{\D^-_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^-)}. \end{equation} The difference between {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace and {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace arises from asymmetries in the production of positively and negatively charged $D_{(s)}^{\pm}$ mesons~\cite{LHCb-PAPER-2018-010,LHCb-PAPER-2012-026} and in the detection efficiency of the corresponding final states. For small asymmetries, {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace may be approximated to first order as \begin{equation} {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) \approx {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace f^+) + {\ensuremath{{\mathcal{A}}^{\mathrm{ prod}}}}\xspace({\ensuremath{\D^+_{(\squark)}}}\xspace) + {\ensuremath{{\mathcal{A}}^{\mathrm{ det}}}}\xspace(f^+), \end{equation} where ${\ensuremath{{\mathcal{A}}^{\mathrm{ prod}}}}\xspace = \frac{\sigma({\ensuremath{\D^+_{(\squark)}}}\xspace)-\sigma({\ensuremath{\D^-_{(\squark)}}}\xspace)}{\sigma({\ensuremath{\D^+_{(\squark)}}}\xspace)+\sigma({\ensuremath{\D^-_{(\squark)}}}\xspace)}$ is the production asymmetry and ${\ensuremath{{\mathcal{A}}^{\mathrm{ det}}}}\xspace = \frac{\epsilon(f^+)-\epsilon(f^-)}{\epsilon(f^+)+\epsilon(f^-)}$ is the detection asymmetry. Here $\sigma({\ensuremath{\D^{\pm}_{(\squark)}}}\xspace)$ is the production cross section for {\ensuremath{\D^{\pm}_{(\squark)}}}\xspace mesons and $\epsilon(f^{\pm})$ is the efficiency for detecting the final state $f^{\pm}$. The detection asymmetry can arise due to instrumental effects, such as different interaction cross-sections of positive and negative particles with the detector material, or a small charge-dependence of the reconstruction algorithms. For $f^{\pm} = \ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^\pm}}\xspace$, {\ensuremath{{\mathcal{A}}^{\mathrm{ det}}}}\xspace is due only to the pion, since the decays $\ensuremath{\Peta^{(\prime)}}\xspace\ensuremath{\rightarrow}\xspace\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ are charge symmetric. 
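In terms of event yields, {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace and its statistical uncertainty can be sketched as follows; this is an illustration assuming background-free yields with a simple binomial uncertainty, not the analysis code.
\begin{verbatim}
import numpy as np

def raw_asymmetry(n_plus, n_minus):
    """A_raw = (N+ - N-) / (N+ + N-), with a binomial uncertainty."""
    n = n_plus + n_minus
    a = (n_plus - n_minus) / n
    return a, np.sqrt((1.0 - a * a) / n)
\end{verbatim}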
The production and detection asymmetries are subtracted using the control channels \ensuremath{\Ds \to \phiz \pip}\xspace and \ensuremath{\Dp \to \phiz \pip}\xspace, followed by the decay $\ensuremath{\Pphi}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace$. The production and detection asymmetries are the same for the control channels and the signal channels, after accounting for small differences in the kinematic properties, and are therefore eliminated in the difference of the raw asymmetries \begin{align} {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{\Dp \to \etazpr \pip}\xspace) - {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{\Dp \to \phiz \pip}\xspace) &= {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \etazpr \pip}\xspace) - {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \phiz \pip}\xspace), \label{eq:araw1}\\ {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{\Dsp \to \etazpr \pip}\xspace) - {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{\Ds \to \phiz \pip}\xspace) &= {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dsp \to \etazpr \pip}\xspace). \label{eq:araw2} \end{align} The Cabibbo-favoured channel \ensuremath{\Ds \to \phiz \pip}\xspace is assumed to have \mbox{${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace=0$}, based on theoretical SM expectations~\cite{Bianco:2003vb}. For the singly Cabibbo-suppressed control channel, the value of \mbox{${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \phiz \pip}\xspace)= (0.005 \pm 0.051)\%$~\cite{LHCb-PAPER-2019-002}} is taken as an external input. Its uncertainty is small compared to the sensitivity of this measurement. 
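Schematically, the {\ensuremath{C\!P}}\xspace asymmetry of a signal mode follows from Eqs.~(\ref{eq:araw1}) and~(\ref{eq:araw2}) by subtracting the raw asymmetries and, for the {\ensuremath{\D^+}}\xspace case, adding back the external input; the sketch below assumes uncorrelated uncertainties for illustration.
\begin{verbatim}
import numpy as np

def acp_signal(a_raw_sig, a_raw_ctrl, acp_ctrl=(0.0, 0.0)):
    """Each argument is a (value, uncertainty) pair.
    A_CP(signal) = A_raw(signal) - A_raw(control) + A_CP(control)."""
    value = a_raw_sig[0] - a_raw_ctrl[0] + acp_ctrl[0]
    error = np.sqrt(a_raw_sig[1]**2 + a_raw_ctrl[1]**2 + acp_ctrl[1]**2)
    return value, error

# D+ case, with A_CP(D+ -> phi pi+) = (0.005 +/- 0.051)% as external input:
# acp_dp = acp_signal(a_raw_sig, a_raw_ctrl, acp_ctrl=(5e-5, 5.1e-4))
\end{verbatim}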
\section{Detector and simulation} \label{sec:Detector} The \mbox{LHCb}\xspace detector~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002} is a single-arm forward spectrometer covering the \mbox{pseudorapidity} range $2<\eta <5$, designed for the study of particles containing {\ensuremath{\Pb}}\xspace or {\ensuremath{\Pc}}\xspace quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of the momentum, \ensuremath{p}\xspace, of charged particles with a relative uncertainty that varies from 0.5\% at low momentum to 1.0\% at 200\aunit{Ge\kern -0.1em V}\xspace.\footnote{Natural units, with $c=\hbar=1$, are used throughout.} The minimum distance of a track to a primary $pp$ collision vertex (PV), the impact parameter (IP), is measured with a resolution of $(15+29/\ensuremath{p_{\mathrm{T}}}\xspace)\ensuremath{\,\upmu\nospaceunit{m}}\xspace$, where \ensuremath{p_{\mathrm{T}}}\xspace is the component of the momentum transverse to the beam, in\,\aunit{Ge\kern -0.1em V}\xspace. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a trigger, which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by two software stages, which apply a full event reconstruction. Detector calibration and alignment are performed after the first software stage and used in the second stage. Simulation is required to model the effects of the detector acceptance and the imposed selection requirements, and to study background contributions. In the simulation, $pp$ collisions are generated using \mbox{\textsc{Pythia}}\xspace~\cite{Sjostrand:2007gs,*Sjostrand:2006za} with a specific \mbox{LHCb}\xspace configuration~\cite{LHCb-PROC-2010-056}. Decays of unstable particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:2001uf}, in which final-state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{davidson2015photos}. The interaction of the generated particles with the detector, and its response, are implemented using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve, *Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}. \section{Event selection} \label{sec:selection} The online selection of signal candidates is performed by the trigger. The hardware trigger selects candidates where one or more of the {\ensuremath{\D^+_{(\squark)}}}\xspace decay products produces a significant energy deposit in the calorimeter. Events where particles that are not used to reconstruct the {\ensuremath{\D^+_{(\squark)}}}\xspace candidate satisfy the hardware trigger are also accepted. The first software stage requires at least one pion of the \ensuremath{\Peta^{(\prime)}}\xspace decay to have high transverse momentum and to be well detached from all primary $pp$ interaction vertices. In the second software stage, each selected event is required to have at least one fully reconstructed \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etazpr \pip}\xspace candidate. Offline, the \ensuremath{\etazpr\to\gamma\pip\pim}\xspace decays are reconstructed from two oppositely charged high-quality tracks and a high-quality photon candidate. The charged tracks must have transverse momentum $\ensuremath{p_{\mathrm{T}}}\xspace > 500 \aunit{Me\kern -0.1em V}\xspace$ and momentum $p>1000 \aunit{Me\kern -0.1em V}\xspace$. Furthermore, they must have particle-identification characteristics compatible with the pion hypothesis. The pion candidates must be well-detached from all primary vertices in the event. Photon candidates must satisfy $\ensuremath{p_{\mathrm{T}}}\xspace>1000 \aunit{Me\kern -0.1em V}\xspace$. For \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace decays, the invariant mass of the pion pair must be greater than $600 \aunit{Me\kern -0.1em V}\xspace$, which selects $\rho^0$ mesons from the dominant decay process $\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace\ensuremath{\rightarrow}\xspace\rho^0\gamma$, followed by $\rho^0\ensuremath{\rightarrow}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$. 
The \ensuremath{\Peta^{(\prime)}}\xspace candidates are combined with an additional high-quality track that is consistent with being a pion, denoted the ``companion'' pion, to form {\ensuremath{\D^+_{(\squark)}}}\xspace candidates. The companion pion must satisfy $ 1 < \ensuremath{p_{\mathrm{T}}}\xspace < 20 \aunit{Ge\kern -0.1em V}\xspace$, have pseudorapidity between 2 and 5, and be significantly detached from all primary vertices. Furthermore, it is required to satisfy fiducial requirements designed to avoid areas of the detector with large detection asymmetries. The {\ensuremath{\D^+_{(\squark)}}}\xspace candidates must satisfy $\ensuremath{p_{\mathrm{T}}}\xspace > 2000\aunit{Me\kern -0.1em V}\xspace$ and have proper decay time \mbox{$\tau>0.25 \aunit{ps}$}. Furthermore, the invariant mass of the three pion candidates combined must satisfy $m_{3\pi}<1825\aunit{Me\kern -0.1em V}\xspace$, which removes background ${\ensuremath{\D^+_{(\squark)}}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^+}}\xspace$ decays. Each {\ensuremath{\D^+_{(\squark)}}}\xspace candidate must be consistent with having originated at its associated primary vertex. A kinematic fit is performed constraining the three pion candidates to arise from a common vertex and the {\ensuremath{\D^+_{(\squark)}}}\xspace candidate to originate at the primary vertex~\cite{Hulsbergen:2005pu}. Selected candidates are required to have a good \ensuremath{\chi^2}\xspace value for this fit. The invariant mass of the selected candidates is calculated by repeating the kinematic fit while constraining the \ensuremath{\Peta^{(\prime)}}\xspace particle to its known mass~\cite{PDG2020}. Multiple candidates are found in 5.5\% of the events; in events with more than one candidate, one of the candidates is selected randomly. The data is divided into eight subsamples, separated by year of data taking, magnet polarity and whether the event has multiple primary vertices. The small amount of data from 2015 is combined with the 2016 data. Furthermore, for the years 2015--2017, the dedicated second-level software trigger only accepted events with one primary vertex, while for 2018 this requirement was removed. The analysis is performed independently on each subsample and a weighted sum is performed to determine the result for the complete dataset. 
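For illustration, the main offline requirements can be expressed as boolean masks over arrays of candidate variables; the column names and the array layout are assumptions of this sketch, not the actual analysis software (momenta in \aunit{Me\kern -0.1em V}\xspace, decay time in \aunit{ps}).
\begin{verbatim}
import numpy as np

def selection_mask(c, eta_prime=True):
    """Offline selection for D+(s) -> eta(') pi+ candidates (sketch)."""
    mask = (
        (c["pi1_pt"] > 500) & (c["pi2_pt"] > 500)           # eta(') daughters
        & (c["pi1_p"] > 1000) & (c["pi2_p"] > 1000)
        & (c["gamma_pt"] > 1000)                            # photon candidate
        & (c["comp_pi_pt"] > 1000) & (c["comp_pi_pt"] < 20000)
        & (c["comp_pi_eta"] > 2) & (c["comp_pi_eta"] < 5)   # companion pion
        & (c["d_pt"] > 2000) & (c["d_tau"] > 0.25)          # D candidate
        & (c["m_3pi"] < 1825)                               # removes D -> 3 pions
    )
    if eta_prime:
        mask = mask & (c["m_pipi"] > 600)                   # selects the rho0
    return mask
\end{verbatim}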
Such decays can bias the {\ensuremath{C\!P}}\xspace asymmetry measurement, because the combined production and decay asymmetry of the {\ensuremath{\Pb}}\xspace hadron may differ from the production asymmetry of prompt {\ensuremath{\D^+_{(\squark)}}}\xspace mesons. The secondary background is estimated from simulation to be 11\% for the {\ensuremath{\D^+}}\xspace sample and 13\% for the {\ensuremath{\D^+_\squark}}\xspace sample. The potential bias introduced by the secondary background is treated as a source of systematic uncertainty (Sec.~\ref{sec:systematics}). Additionally, background arising from the decay channel ${\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace$, followed by the decay \mbox{$\phi\ensuremath{\rightarrow}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^0}}\xspace$} (henceforth denoted \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \phiz_{3\pi} \pip}\xspace), contributes to the channels with \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace in the final state. This can occur when the {\ensuremath{\pion^0}}\xspace meson is misreconstructed as a photon. A dedicated component for this background is included in the fit, as discussed in Sec.~\ref{sec:fit}. Another background contribution comes from the signal decay \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace, followed by $\ensuremath{\ensuremath{\upeta}\xspace}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^0}}\xspace$, with the {\ensuremath{\pion^0}}\xspace meson reconstructed as a photon. This decay is visible in the two-dimensional distribution of $m(\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$ versus $m(\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace)$, as seen in Fig.~\ref{fig:m2d}. This background is well separated from the signal and furthermore does not peak in $m(\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace)$. \begin{figure} \centering \includegraphics[width=.48\textwidth]{fig1_left.pdf} \includegraphics[width=.48\textwidth]{fig1_right.pdf} \caption{Two-dimensional distributions of $m(\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$ versus $m(\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace)$ for (left) the \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^+}}\xspace final state and (right) the \ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace final state. The background candidates at low values of $m(\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$ for the \ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace final state arise from $\ensuremath{\ensuremath{\upeta}\xspace}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^0}}\xspace$ decays. These distributions correspond to a subset of the signal data sample. } \label{fig:m2d} \end{figure} It should be noted that this decay mode could be used to measure ${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace)$ with a dedicated analysis.
Other physics background sources from partially or misreconstructed decays, such as \mbox{${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace p$}, ${\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace\mu^+\nu_\mu$, ${\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^0}}\xspace$, and ${\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^+}}\xspace ~({\rm with}~\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$, \mbox{$\ensuremath{\ensuremath{\upeta}\xspace}\xspace\ensuremath{\rightarrow}\xspace\gamma\gamma)$}, have been studied and found to contribute negligibly in the mass range of interest. The latter two are considered when evaluating the systematic uncertainty associated to the fit model. \section{Determining the raw asymmetry} \label{sec:fit} The distributions of the invariant mass $m(\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^\pm}}\xspace)$ of the ${\ensuremath{\D^{\pm}_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^\pm}}\xspace$ candidates, separated by charge, are shown in Fig.~\ref{fig:Mass_EtapPi} and the corresponding distributions for the \mbox{${\ensuremath{\D^{\pm}_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^\pm}}\xspace$} candidates are shown in Fig.~\ref{fig:Mass_EtaPi}. \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{fig2.pdf} \caption{Distribution of the invariant mass $m(\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^+}}\xspace)$ of (left) \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace candidates and (right) $D^{-}_{(s)}\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^-}}\xspace$ candidates, summed over the eight subsamples. Candidates have \ensuremath{m(\gamma\pip\pim)}\xspace in the range 936--976\aunit{Me\kern -0.1em V}\xspace. The results of the fit described in the text are superimposed.} \label{fig:Mass_EtapPi} \end{figure} For each data subsample (described in Sec.~\ref{sec:selection}), the raw asymmetry is independently determined using a simultaneous, binned, extended maximum-likelihood fit to the $m(\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace)$ distributions of positively and negatively charged {\ensuremath{\D^{\pm}_{(\squark)}}}\xspace candidates in the mass range 1770--2060\aunit{Me\kern -0.1em V}\xspace. This fit to the {\ensuremath{\D^+_{(\squark)}}}\xspace invariant mass is performed simultaneously in 4\aunit{Me\kern -0.1em V}\xspace intervals of the unconstrained \ensuremath{m(\gamma\pip\pim)}\xspace mass, in the mass range 936--976\aunit{Me\kern -0.1em V}\xspace for the \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace channel or 526--570\aunit{Me\kern -0.1em V}\xspace for the \ensuremath{\ensuremath{\upeta}\xspace}\xspace channel.
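For orientation, the \ensuremath{m(\gamma\pip\pim)}\xspace interval scheme and the raw-asymmetry arithmetic can be sketched as follows. The sketch assumes the conventional definition of the raw asymmetry in terms of the fitted yields of positively and negatively charged candidates, ${\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace = (N^{+}-N^{-})/(N^{+}+N^{-})$, and the yield values below are invented for illustration.
\begin{verbatim}
import numpy as np

# 4 MeV intervals of m(gamma pi+ pi-): 10 for the eta' channel, 11 for eta.
edges_etap = np.arange(936, 976 + 4, 4)
edges_eta  = np.arange(526, 570 + 4, 4)
assert len(edges_etap) - 1 == 10 and len(edges_eta) - 1 == 11

def raw_asymmetry(n_pos, n_neg):
    """(N+ - N-)/(N+ + N-) with uncorrelated Poisson uncertainties."""
    n_tot = n_pos + n_neg
    a = (n_pos - n_neg) / n_tot
    return a, np.sqrt((1.0 - a * a) / n_tot)

a, sig = raw_asymmetry(543.2e3, 542.5e3)   # hypothetical fitted yields
print(f"A_raw = {a:.5f} +/- {sig:.5f}")
\end{verbatim}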
The distributions of the unconstrained \ensuremath{m(\gamma\pip\pim)}\xspace mass for \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace and \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace candidates are shown in Fig.~\ref{fig:Mass_EtapEta}. The total charge-integrated {\ensuremath{\D^+_\squark}}\xspace and {\ensuremath{\D^+}}\xspace yields in each \ensuremath{m(\gamma\pip\pim)}\xspace bin are also determined in the fit. \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{fig3.pdf} \caption{Distribution of the invariant mass $m(\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace)$ of (left) \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace candidates and (right) $D^{-}_{(s)}\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^-}}\xspace$ candidates, summed over the eight subsamples. Candidates have \ensuremath{m(\gamma\pip\pim)}\xspace in the range 526--570\aunit{Me\kern -0.1em V}\xspace. The results of the fit described in the text are superimposed.} \label{fig:Mass_EtaPi} \end{figure} \begin{figure} \centering \includegraphics[width=.48\textwidth]{fig4_left.pdf} \includegraphics[width=.48\textwidth]{fig4_right.pdf} \caption{Distributions of the invariant mass \ensuremath{m(\gamma\pip\pim)}\xspace for (left) \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace candidates and \mbox{(right) \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace candidates}, with the projections of the fit superimposed, summed over the subsamples and the {\ensuremath{\D^+_{(\squark)}}}\xspace charges. In this figure, the \ensuremath{m(\gamma\pip\pim)}\xspace mass ranges are enlarged with respect to the baseline fit. The default mass ranges are indicated by the vertical dashed lines.} \label{fig:Mass_EtapEta} \end{figure} The {\ensuremath{\D^+_\squark}}\xspace and {\ensuremath{\D^+}}\xspace mass peaks are each described with a Johnson SU function~\cite{ref:JohnsonSU}, while the combinatorial background component is modelled by a third-order Chebyshev polynomial. The mean of the Johnson SU function corresponding to the {\ensuremath{\D^+_\squark}}\xspace peak is allowed to vary quadratically as a function of \ensuremath{m(\gamma\pip\pim)}\xspace, allowing the {\ensuremath{\D^+_{(\squark)}}}\xspace invariant-mass distribution to depend on the reconstructed $\gamma{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ mass. Such a variation is expected owing to the small correlations between these two quantities. The mean of the Johnson SU function in the $i^{\rm th}$ \ensuremath{m(\gamma\pip\pim)}\xspace interval, $\mu_{\ensuremath{\D^+_{(\squark)}}}\xspace^i$, is given by \begin{equation} \label{eq:quadratic} \mu_{\ensuremath{\D^+_{(\squark)}}}\xspace^i = \mu_{\ensuremath{\D^+_{(\squark)}}}\xspace^p + a[M^i-M^p] + b[M^i-M^p]^2, \end{equation} where $M$ is shorthand for \ensuremath{m(\gamma\pip\pim)}\xspace, and the superscript $p$ refers to the value in the interval corresponding to the peak of the \ensuremath{m(\gamma\pip\pim)}\xspace distribution. The values of $\mu_{\ensuremath{\D^+_{(\squark)}}}\xspace^p$, $a$ and $b$ are determined in the fit. The fit also determines $\Delta m \equiv m({\ensuremath{\D^+_\squark}}\xspace)-m({\ensuremath{\D^+}}\xspace)$, which is allowed to vary linearly as a function of \ensuremath{m(\gamma\pip\pim)}\xspace.
Similarly, the widths of both peaks are determined by the fit and allowed to vary quadratically as a function of \ensuremath{m(\gamma\pip\pim)}\xspace. The choice of the allowed variation (linear or quadratic) was made empirically, choosing the simplest functional form that describes the data. These means and widths may be different for positively and negatively charged candidates. The two parameters describing the shape of the tails of the Johnson SU functions are shared among the {\ensuremath{\D^+_\squark}}\xspace and {\ensuremath{\D^+}}\xspace peaks, the positively and negatively charged candidates, and the \ensuremath{m(\gamma\pip\pim)}\xspace intervals. The signal and background yields and the parameters describing the combinatorial background are determined independently in intervals of \ensuremath{m(\gamma\pip\pim)}\xspace and for positively and negatively charged candidates. For the \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace channels, a dedicated fit component for the background due to \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \phiz_{3\pi} \pip}\xspace decays is included. The shape of this component is obtained from simulated events. The overall yield of this background is left free in the fit, while the relative contribution arising from {\ensuremath{\D^+}}\xspace and {\ensuremath{\D^+_\squark}}\xspace decays is fixed to the value estimated from known branching fractions and the relative efficiency determined from simulation. The results of the fit, summed over the eight subsamples, for the \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace and \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace channels are shown in Figs.~\ref{fig:Mass_EtapPi} and \ref{fig:Mass_EtaPi}, respectively. The total charge-integrated signal yields for the \ensuremath{\Dsp \to \etapr \pip}\xspace and \ensuremath{\Dp \to \etapr \pip}\xspace channels are $(1\,085.7 \pm 1.2)\times 10^3$ and $(555.4 \pm 0.9)\times 10^3$, respectively. The obtained yields for the \ensuremath{\Dsp \to \etaz \pip}\xspace and \ensuremath{\Dp \to \etaz \pip}\xspace channels are $(135.8 \pm 0.7)\times 10^3$ and $(110.8 \pm 0.7)\times 10^3$, respectively. The projections of the fit results on \ensuremath{m(\gamma\pip\pim)}\xspace, showing the contribution determined in the fit for each component in each \ensuremath{m(\gamma\pip\pim)}\xspace interval, are shown in Fig.~\ref{fig:Mass_EtapEta}. A small peaking contribution is visible in the combinatorial component, indicating the presence of correctly reconstructed \ensuremath{\Peta^{(\prime)}}\xspace candidates paired with random companion pions. For the \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace channels, the large shoulder at low values of \ensuremath{m(\gamma\pip\pim)}\xspace, attributed by the fit to the combinatorial component, is due to ${\ensuremath{\D^+_{(\squark)}}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace$ decays, followed by the decay $\ensuremath{\ensuremath{\upeta}\xspace}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^0}}\xspace$, as discussed in Sec.~\ref{sec:backgrounds}. This figure was made using wider \ensuremath{m(\gamma\pip\pim)}\xspace mass ranges than those used for the baseline fit.
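To make the fit-model description concrete, the following sketch composes the per-interval probability density from a Johnson SU signal peak, with its mean displaced according to Eq.~(\ref{eq:quadratic}), plus a third-order Chebyshev combinatorial component. This is a minimal sketch, not the analysis code, and all parameter values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import johnsonsu
from numpy.polynomial.chebyshev import chebval

def mu_in_interval(M_i, mu_p, a, b, M_p):
    # Quadratic drift of the peak position with m(gamma pi pi);
    # cf. the quadratic parameterisation given in the text.
    return mu_p + a * (M_i - M_p) + b * (M_i - M_p) ** 2

def model_pdf(m, f_sig, mu, width, skew, tail, cheb, lo=1770.0, hi=2060.0):
    """Johnson SU signal peak plus third-order Chebyshev background."""
    sig = johnsonsu.pdf(m, skew, tail, loc=mu, scale=width)
    u = 2.0 * (np.asarray(m) - lo) / (hi - lo) - 1.0   # map mass to [-1, 1]
    grid = np.linspace(-1.0, 1.0, 2001)
    # Normalise the background shape to unit area over [lo, hi].
    norm = chebval(grid, cheb).sum() * (grid[1] - grid[0]) * (hi - lo) / 2.0
    return f_sig * sig + (1.0 - f_sig) * chebval(u, cheb) / norm

# Placeholder parameters for one m(gamma pi pi) interval:
m = np.linspace(1770.0, 2060.0, 581)
mu = mu_in_interval(M_i=956.0, mu_p=1968.0, a=0.1, b=0.01, M_p=956.0)
pdf = model_pdf(m, f_sig=0.4, mu=mu, width=8.0, skew=0.1, tail=2.0,
                cheb=[1.0, -0.2, 0.05, -0.01])
\end{verbatim}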
\section{Control channel analysis} As described in Sec.~\ref{sec:Introduction}, the control channels \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \phiz \pip}\xspace, followed by the decay $\phi\ensuremath{\rightarrow}\xspace{\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace$, are used to cancel production and detection asymmetries. The triggering and selection of the candidates of the control channels follow closely those of Ref.~\cite{LHCb-PAPER-2019-002}. In particular, the kaon daughters of the \Pphi meson are required to satisfy particle-identification criteria compatible with being kaons and the sum of their \ensuremath{p_{\mathrm{T}}}\xspace must be greater than 2\aunit{Ge\kern -0.1em V}\xspace. Their invariant mass $m(K^+K^-)$ must lie in the range 1010--1030\aunit{Me\kern -0.1em V}\xspace. The pion candidate must have particle-identification information compatible with the pion hypothesis and $\ensuremath{p_{\mathrm{T}}}\xspace > 1 \aunit{Ge\kern -0.1em V}\xspace$. As for the signal channels, the pion candidate must satisfy fiducial requirements that remove tracks from areas of the detector with large detection asymmetries. Each {\ensuremath{\D^+_{(\squark)}}}\xspace candidate must have a large flight distance and a large flight-distance significance. Furthermore, the direction of the momentum of the {\ensuremath{\D^+_{(\squark)}}}\xspace candidate must be compatible with the direction defined by its production and decay vertices. The \ensuremath{\chi^2}\xspace per degree of freedom of the vertex fit for the {\ensuremath{\D^+_{(\squark)}}}\xspace candidate must be small. The {\ensuremath{\D^+_{(\squark)}}}\xspace candidate transverse momentum must satisfy \mbox{$\ensuremath{p_{\mathrm{T}}}\xspace>2700 \aunit{Me\kern -0.1em V}\xspace$}. These channels have significantly lower background than the signal modes owing to the absence of neutral particles in the final state. To achieve satisfactory cancellation of production and detection asymmetries, the control sample candidates are weighted such that the distributions of kinematic and trigger variables simultaneously match the distributions observed in the signal sample. Specifically, the distributions of the transverse momentum and pseudorapidity of the companion pion and the {\ensuremath{\D^+_{(\squark)}}}\xspace candidates, and of the type of trigger decision (whether independent of the final state particles or not) are matched to the signal distributions. The algorithm to determine the weights~\cite{Rogozhnikov:2016bdpnew} is trained on distributions from signal and control sample data candidates, as sketched below. The signal distributions are background-subtracted using the {\em sPlot} technique~\cite{Pivk:2004ty}, with $m(\ensuremath{\Peta^{(\prime)}}\xspace{\ensuremath{\pion^+}}\xspace)$ as the discriminating variable; the background in the control sample is negligible. The control channel data are divided into eight subsamples, as is done for the signal. The invariant mass distribution $m({\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^\pm}}\xspace)$ for one subsample is shown in Fig.~\ref{fig:cc_fit}. For each subsample, the raw asymmetry is independently determined using a simultaneous binned \ensuremath{\chi^2}\xspace fit to the weighted positive and negative candidates, in the range 1820--1920\aunit{Me\kern -0.1em V}\xspace for {\ensuremath{\D^+}}\xspace modes and 1920--2020\aunit{Me\kern -0.1em V}\xspace for {\ensuremath{\D^+_\squark}}\xspace modes.
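The weighting step could be realised, for example, with the gradient-boosted reweighting algorithm of Ref.~\cite{Rogozhnikov:2016bdpnew}, as implemented in the \texttt{hep\_ml} package. The sketch below is schematic only: the arrays and their contents are hypothetical stand-ins for the five matching variables, the hyperparameter values are untuned placeholders, and in the analysis the target (signal) candidates carry \emph{sPlot} weights~\cite{Pivk:2004ty}.
\begin{verbatim}
import numpy as np
from hep_ml.reweight import GBReweighter

# Hypothetical (n_candidates x 5) feature arrays: pT and eta of the
# companion pion and of the D candidate, plus the trigger category.
rng = np.random.default_rng(0)
control = rng.normal(size=(10_000, 5))         # control-channel candidates
signal = rng.normal(loc=0.1, size=(5_000, 5))  # background-subtracted signal
sweights = np.ones(len(signal))                # placeholder sPlot weights

reweighter = GBReweighter(n_estimators=50, max_depth=3)  # untuned values
reweighter.fit(original=control, target=signal, target_weight=sweights)
control_weights = reweighter.predict_weights(control)    # per-candidate weights
\end{verbatim}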
Each control sample channel is fitted twice, employing the weighting appropriate for either the $\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^+}}\xspace$ or the $\ensuremath{\ensuremath{\upeta}\xspace}\xspace{\ensuremath{\pion^+}}\xspace$ final state. The fit model consists of the sum of a Johnson SU function and a Gaussian function for the {\ensuremath{\D^+_{(\squark)}}}\xspace peak, and a linear function for the combinatorial background. The parameters describing the tail of the Johnson SU function, the widths of the Johnson SU and Gaussian functions and the relative proportion of the signal described by a Gaussian component are shared among positive and negative candidates. All other fitted parameters are determined separately for positive and negative candidates. The fits used to determine {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace for one control channel subsample, for both the {\ensuremath{\D^+_\squark}}\xspace and the {\ensuremath{\D^+}}\xspace decays, are shown in Fig.~\ref{fig:cc_fit}. \begin{figure} \centering \includegraphics[width=.9\textwidth]{fig5_top.pdf} \includegraphics[width=.9\textwidth]{fig5_bottom.pdf} \caption{Mass distributions with corresponding fits of (top) ${\ensuremath{\D^+_\squark}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace$ candidates and \mbox{(bottom) ${\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace$} candidates. The candidates are weighted to match the signal distributions for the $\ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace{\ensuremath{\pion^+}}\xspace$ channels. The positively charged candidates are shown on the left and the negatively charged candidates on the right. The blue curves represent the total fit, while the red curves show the very small background component. The data shown are from the 2017 magnet-up polarity subsample. } \label{fig:cc_fit} \end{figure} The total charge-integrated signal yields for the \ensuremath{\Ds \to \phiz \pip}\xspace and \ensuremath{\Dp \to \phiz \pip}\xspace channels are $(27\,928\pm5)\times 10^3$ and $(16\,276\pm4)\times 10^3$, respectively. \section{Systematic uncertainties} \label{sec:systematics} Several sources of systematic uncertainty on the {\ensuremath{C\!P}}\xspace asymmetry measurements are presented in Table~\ref{tab:syst}. \begin{table}[t] \centering \caption{Systematic uncertainties associated to values of {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace (\%). } \label{tab:syst} \begin{tabular}{lcccc} \hline\hline Source & \ensuremath{\Dsp \to \etapr \pip}\xspace & \ensuremath{\Dp \to \etapr \pip}\xspace & \ensuremath{\Dsp \to \etaz \pip}\xspace & \ensuremath{\Dp \to \etaz \pip}\xspace \\ \hline Fit model, signal & 0.04 & 0.04 & 0.10 & 0.16\\ Fit bias & 0.01 & 0.01 & 0.01 & 0.02 \\ Secondary decays & 0.06 & 0.03 & 0.06 & 0.03 \\ {\ensuremath{{\mathcal{A}}^{\mathrm{ KK}}}}\xspace & 0.01 & 0.02 & 0.01 & 0.02 \\ Fit model, control & 0.03 & 0.00 & 0.03 & 0.00 \\ Weighting & 0.01 & 0.01 & 0.02 & 0.00 \\ \hline Total & 0.08 & 0.06 & 0.12 & 0.16 \\ \hline\hline \end{tabular} \end{table} The choice of the fit model to describe the signal mass distributions is a significant source of systematic uncertainty. This is evaluated by fitting, with the baseline model, pseudoexperiments generated according to several alternative models, the parameters of which have been previously determined from the experimental data.
In particular, the background model is altered by using a higher-order polynomial function to represent the combinatorial component, including additional background components for partially or misreconstructed decays, or varying their parameterisation, while a sum of two Gaussian functions is used as an alternative parameterisation for the signal. The uncertainty caused by a possible bias of the fitting procedure is estimated using pseudoexperiments. A significant contribution to the systematic uncertainty also arises from secondary decays, where the {\ensuremath{\D^+_{(\squark)}}}\xspace meson is produced in the decay of a {\ensuremath{\Pb}}\xspace hadron. Such decays have a slightly different value of {\ensuremath{{\mathcal{A}}^{\mathrm{ prod}}}}\xspace, causing a shift in the measured value of {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace. To estimate this contribution, the fraction of secondary background in the signal is derived from simulation, while the production asymmetry of secondary decays is determined from data using a control sample enhanced in such decays. Sources of systematic uncertainty associated to the analysis of the control channels are also included. The uncertainty due to the fit model of the control channels is determined, as for the signal channels, by considering alternative fit models. Detection asymmetries due to a momentum asymmetry between the kaons in the $\phi\ensuremath{\rightarrow}\xspace{\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace$ decay have a small associated uncertainty~\cite{LHCb-PAPER-2019-002}. The systematic uncertainty associated to the weighting procedure is evaluated by repeating the fits to the control channels without performing the weighting; the resulting shift in {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace is assigned as the uncertainty. The systematic uncertainties described here were determined for the full data set, \mbox{\itshape i.e.}\xspace not separately for the different subsamples, and hence are applicable to the final {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace values given in the next section. The systematic uncertainties for the signal channels and the control channels are added in quadrature. \section{Results} The {\ensuremath{C\!P}}\xspace asymmetry is obtained by subtracting the value of {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace of the control channel from that of the signal channel.
For the case of {\ensuremath{\D^+}}\xspace decays, which are Cabibbo-suppressed, the previously measured value of ${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace)$ is taken into account, while for Cabibbo-favoured {\ensuremath{\D^+_\squark}}\xspace decays, ${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Ds \to \phiz \pip}\xspace) = 0$ is assumed, resulting in \begin{align*} {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+_\squark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace\pi) & = {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+_\squark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace\pi) - {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+_\squark}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace), \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace\pi) & = {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\Peta^{(\prime)}}\xspace\pi) - {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace) + {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace), \end{align*} where ${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace) = (0.005 \pm 0.051)\%$~\cite{LHCb-PAPER-2019-002}. The results for $\Delta{\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace = {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etazpr \pip}\xspace) - {\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace(\ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \phiz \pip}\xspace)$ for the separate subsamples are shown in Fig.~\ref{fig:subsample_acp_Eta}. \begin{figure} \centering \includegraphics[width=\textwidth]{fig6_top.pdf} \includegraphics[width=\textwidth]{fig6_bottom.pdf} \caption{Measured values of $\Delta{\ensuremath{{\mathcal{A}}^{\mathrm{ raw}}}}\xspace$ for the individual subsamples for (upper left) the \ensuremath{\Dp \to \etaz \pip}\xspace channel, (upper right) the \ensuremath{\Dsp \to \etaz \pip}\xspace channel, (lower left) the \ensuremath{\Dp \to \etapr \pip}\xspace channel and (lower right) the \ensuremath{\Dsp \to \etapr \pip}\xspace channel. The subsample labels indicate year of data taking (15/16 = 2015 + 2016), magnet polarity (``Up'' or ``Dn'') and number of PVs (``1PV'' for one primary vertex and ``NPV'' for $N_{\rm PV}>1$). 
The vertical lines and the grey bands indicate the weighted averages and the corresponding statistical uncertainties.} \label{fig:subsample_acp_Eta} \end{figure} The value of {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace is determined independently for each of the eight subsamples and a weighted average is taken to obtain the overall value of {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace, yielding \begin{align*} {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \etaz \pip}\xspace) & = (0.34 \pm 0.66 \pm 0.16 \pm 0.05)\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dsp \to \etaz \pip}\xspace) & = (0.32 \pm 0.51 \pm 0.12)\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \etapr \pip}\xspace) & = (0.49 \pm 0.18 \pm 0.06 \pm 0.05)\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dsp \to \etapr \pip}\xspace) & = (0.01 \pm 0.12 \pm 0.08)\%, \end{align*} where the first uncertainty is statistical, the second is systematic and the third, relevant for the {\ensuremath{\D^+}}\xspace channels, is due to the uncertainty on ${\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace({\ensuremath{\D^+}}\xspace\ensuremath{\rightarrow}\xspace\phi{\ensuremath{\pion^+}}\xspace)$. The use of a single control channel for two signal channels, \mbox{\itshape e.g.}\xspace, \ensuremath{\Dp \to \phiz \pip}\xspace for \ensuremath{\Dp \to \etaz \pip}\xspace and \ensuremath{\Dp \to \etapr \pip}\xspace, introduces a small correlation between the corresponding {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace measurements. This correlation is found to be less than 1\%. These results are combined with previous \mbox{LHCb}\xspace measurements based on an independent data sample~\cite{LHCb-PAPER-2016-041} or a different \ensuremath{\ensuremath{\upeta}\xspace}\xspace decay channel~\cite{LHCb-PAPER-2021-001}. A weighted average, with weights determined using only the statistical uncertainties, is performed. The systematic uncertainties are uncorrelated between previous and current measurements. The results of the combination are \begin{align*} {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \etaz \pip}\xspace) & = \phantom{+}(0.13 \pm 0.50 \pm 0.18 )\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dsp \to \etaz \pip}\xspace) & = \phantom{+}(0.48 \pm 0.42 \pm 0.17)\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dp \to \etapr \pip}\xspace) & = \phantom{+}(0.43 \pm 0.17 \pm 0.10)\%, \\ {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace(\ensuremath{\Dsp \to \etapr \pip}\xspace) & = (-0.04 \pm 0.11 \pm 0.09)\%, \end{align*} where the first uncertainty is statistical and the second uncertainty is systematic, which includes the uncertainty on the externally input values of {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace of the control channels. In summary, searches for {\ensuremath{C\!P}}\xspace violation in the decays \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etaz \pip}\xspace and \ensuremath{{\ensuremath{\D^+_{(\squark)}}}\xspace \to \etapr \pip}\xspace are performed using $pp$ collision data collected by the \mbox{LHCb}\xspace experiment at a centre-of-mass energy of $\sqrt{s}=13\aunit{Te\kern -0.1em V}\xspace$, corresponding to an integrated luminosity of 6\ensuremath{\fb^{-1}}\xspace.
The results are consistent with the absence of {\ensuremath{C\!P}}\xspace violation in these decay modes and with previous measurements~\cite{Won:2011ku,Belle:2021ygw,LHCb-PAPER-2016-041,LHCb-PAPER-2021-001}. The measurements are the most precise to date for the \ensuremath{\Dp \to \etaz \pip}\xspace, \ensuremath{\Dp \to \etapr \pip}\xspace and \ensuremath{\Dsp \to \etapr \pip}\xspace channels.
\section{Introduction} \label{intro} Eigenvalue distributions of Wishart random matrices arise in many fields. Prominent examples include wireless communication systems \cite{telatar,Shijin,Alouini,Matthew,MatthewIT,Chi,PeterJ}, synthetic aperture radar (SAR) signal processing \cite{Martinez}, econometrics \cite{Stock}, statistical physics \cite{Bronk,Wig}, and multivariate statistical analysis \cite{James1964,Chuk,Sengupta,Guptanew}. In many cases, the Wishart matrices of interest are complex \cite{Good}, correlated, and non-central. Such matrices arise, for example, in multiple-input multiple-output (MIMO) communication channels characterized by line-of-sight components (i.e.,\ Rician fading) with spatial correlation amongst the antenna elements \cite{MatthewIT}. In this paper, a main focus is on the distributions of the \emph{extreme} eigenvalues (i.e., maximum and minimum) of Wishart matrices, which arise in many areas. For example, in the context of contemporary wireless communication systems, the maximum eigenvalue distribution is instrumental to the analysis of MIMO multi-channel beamforming systems \cite{Shijin} and the analysis of MIMO maximal ratio combining receivers \cite{Alouini,Matthew}, whereas the minimum eigenvalue distribution is important for the design and analysis of adaptive MIMO multiplexing-diversity switching systems \cite{Robert1}, as well as the analysis of linear MIMO receiver structures \cite{Nara}. In the context of econometrics, the minimum eigenvalue of a non-central Wishart matrix is important for characterizing the weak instrument asymptotic distribution of the Cragg-Donald statistic \cite{Stock}. In statistical physics, information pertaining to the nature of entanglement of a random pure quantum state can be obtained from the two extreme eigenvalue densities of Wishart matrices \cite{Satya}. Moreover, the maximal and minimal height distributions of $N$ non-intersecting fluctuating interfaces at thermal equilibrium in a certain external potential are also related to the extreme eigenvalues of a Wishart matrix \cite{Nadal}. As a final example, in SAR signal processing, the probability density of the maximum eigenvalue of a Wishart matrix is an important parameter for target detection and analysis \cite{Martinez}. We focus primarily on correlated complex non-central Wishart matrices, as well as another important and closely related class of random matrices, which we refer to as \emph{gamma-Wishart}. Such matrices arise in the context of MIMO land mobile satellite (LMS) communication systems \cite{Alfano}, and correspond to non-central Wishart matrices with a random non-centrality matrix having a distribution which is intimately related to the matrix-variate gamma distribution. As discussed in \cite{Alfano}, the eigenvalues of gamma-Wishart random matrices are important for the design and analysis of MIMO LMS systems; for example, the maximum eigenvalue density determines the performance of beamforming transmission techniques, whereas the minimum eigenvalue density is closely related to the performance of linear reception techniques. Recently, the marginal eigenvalue distributions of random matrices have received much attention; for surveys, see \cite{Verdu,Edleman,McKayThesis}.
For the extreme eigenvalues, distributional results are now available for correlated central, uncorrelated central, and uncorrelated non-central complex Wishart matrices (see, for example, \cite{Khatri1964,Khatri1968,Matthew,Chi,Forrester,Forrester2,ChenManning,Aris,Alouini,Shijin,Ranjan1,Maraf,Zanella,Kov1,Kov2,Rathna1}). Far less is known for gamma-Wishart matrices, other than the results in \cite{Alfano}, which deal exclusively with uncorrelated matrices. In the majority of cases, the standard approach has been to integrate the respective joint eigenvalue densities over suitably chosen multi-dimensional regions. For the more general class of complex non-central Wishart and gamma-Wishart matrices with \emph{non-trivial correlation}, however, there appear to be no tractable existing results. For these matrices, as we will show, the joint eigenvalue densities are extremely complicated, and it seems that this direct approach cannot be easily undertaken to yield meaningful results. In this paper, by employing an alternative derivation technique (also considered in \cite{Davis1979,Muirhead,Const,Mathai,Rathna,Kov1}) which allows us to deal with the joint matrix-variate density rather than the density of the eigenvalues, we derive new exact expressions for the cumulative distribution functions (c.d.f.s) of the minimum and maximum eigenvalues of correlated complex non-central Wishart and correlated gamma-Wishart random matrices. In both cases, whilst a general theory which accounts for all matrix dimensions and distributional parameters appears intractable, we are able to derive solutions for various important scenarios. Specifically, for correlated non-central Wishart matrices, we derive expressions for the minimum eigenvalue c.d.f.s when the matrix dimensionality and the number of degrees of freedom are equal. We also derive results for some specific scenarios for which they are not equal, and present some analogous results for the maximum eigenvalue c.d.f. For tractability, we focus on matrices with a rank-one non-centrality parameter, which is practical for various applications; most notably, MIMO communication systems with a direct line-of-sight path between the transmitter and receiver. Given the overwhelming complexity of the underlying joint eigenvalue distribution, these extreme eigenvalue c.d.f.\ expressions are remarkably simple, involving infinite series with fast convergence, and they can be easily and efficiently computed. For the case of gamma-Wishart matrices, we focus on scenarios for which the underlying matrix-variate gamma distribution has an integer parameter. The implications of this assumption from a telecommunications engineering perspective are discussed in \cite{Alfano}. As for the non-central Wishart case, we derive exact expressions for the minimum and maximum eigenvalue distributions for certain gamma-Wishart particularizations. Whilst previous expressions pertaining to the non-central Wishart case have been reported in \cite{Davis1979,Rathna,Mathai}, those are very complicated, involving either infinite series with inner summations over partitions, with each term involving invariant zonal polynomials (cf.\ Section \ref{sec:Prelim}), or infinite series with special functions of matrix arguments \cite{Davis1979,Mathai}. As such, those previous results have limited utility from a numerical computation perspective.
\section{Preliminaries and New Matrix Integrals} \label{sec:Prelim} \subsection{Preliminaries} In this section, we provide some preliminary results and definitions in random matrix theory which will be useful in the subsequent derivations. The following notation is used throughout the paper. Matrices are represented in uppercase boldface, and vectors in lowercase boldface. The superscript $(\cdot)^H$ indicates the Hermitian transpose. $\mathbf{I}_p$ denotes a $p\times p$ identity matrix. We use $|\cdot|$ to represent the determinant of a square matrix, $\text{tr}(\cdot)$ to represent the trace, and $\text{etr}(\cdot)$ stands for $\exp\left(\text{tr}(\cdot)\right)$. The set of complex Hermitian $m\times m$ matrices is denoted by $\mathcal{H}_m$ and the set of Hermitian positive definite matrices is denoted by $\mathcal{H}_m^+$. For $\mathbf{A},\mathbf{B}\in\mathcal{H}_m$, $\mathbf{A}>0$ is used to indicate positive definiteness, and $\mathbf{A}>\mathbf{B}$ denotes $\mathbf{A}-\mathbf{B}\in\mathcal{H}_m^+$. $\mathbf{A}\geq 0$ is used to indicate that $\mathbf{A}$ is non-negative definite. $\mathbf{A}_{j,k}$ represents the $j,k$th element of matrix $\mathbf{A}$. $\left\lceil x\right\rceil$ is the ceiling function, defined as $\left\lceil x\right\rceil=\min\left\{n\in \mathbb{Z}|n\geq x\right\}$. Finally, the $k$th derivative of a function $f(y)$ is represented as $f^{(k)}(y)$ for all $k\in\mathbb{Z}^+$, with $f^{(0)}(y):=f(y)$. \begin{definition} The generalized hypergeometric function of one matrix argument can be defined as\footnote{The convergence of the infinite zonal series is discussed in \cite{Muirhead,Rathna}.} \begin{equation} \label{hypo} {}_p\widetilde{F}_q\left(a_1,a_2,\ldots,a_p;b_1,b_2,\ldots,b_q;\mathbf{Y}\right)=\sum_{k=0}^\infty \sum_{\kappa} \frac{[a_1]_\kappa[a_2]_\kappa\cdots[a_p]_\kappa}{[b_1]_\kappa[b_2]_\kappa\cdots[b_q]_\kappa} \frac{C_{\kappa}(\mathbf{Y})}{k!} \end{equation} where $\mathbf{Y}\in \mathcal{H}_m$, $[a]_\kappa=\displaystyle \prod_{j=1}^m(a-j+1)_{k_j}$, $\kappa=\left(k_1,k_2,\ldots,k_m\right)$ is a partition of $k$ such that $k_1\geq k_2\geq\ldots\geq k_m\geq 0$ and $\sum_{i=1}^mk_i=k$, and $(a)_k=a(a+1)\cdots (a+k-1)$. Also, the complex zonal polynomial $C_{\kappa}(\mathbf{Y})$ is defined in \cite{James1964}. \end{definition} \begin{remark} Note that the infinite zonal polynomial expansion given in (\ref{hypo}) reduces to a finite series if at least one of the $a_i$s is a negative integer. As such, when $N\in\mathbb{Z}^+$ we have \begin{equation} \label{hyptrk} {}_p\widetilde{F}_q\left(-N,a_2,\ldots,a_p;b_1,b_2,\ldots,b_q;\mathbf{Y}\right)=\sum_{k=0}^{mN} \widetilde \sum_{\kappa} \frac{[-N]_\kappa[a_2]_\kappa\cdots[a_p]_\kappa}{[b_1]_\kappa[b_2]_\kappa\cdots[b_q]_\kappa} \frac{C_{\kappa}(\mathbf{Y})}{k!} \end{equation} where $\widetilde \sum_{\kappa}$ denotes the summation over all partitions $\kappa=\left(k_1,k_2,\ldots,k_m\right)$ of $k$ with $k_1\leq N$. \end{remark} For more properties of zonal polynomials, see \cite{James1968,Takemura,Caro}. \begin{definition}{\bf{(Non-Central Wishart Distribution)}} Let $\mathbf{X}$ be an $n\times m$ ($n \geq m$) random matrix distributed as $\mathcal{CN}_{n,m}\left(\boldsymbol{\Upsilon},\mathbf{I}_n\otimes \boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Sigma}\in \mathcal{H}_m^+$ and $\boldsymbol{\Upsilon}\in \mathbb{C}^{n\times m}$.
Then $\mathbf{W}=\mathbf{X}^H\mathbf{X}\in\mathcal{H}_m^+$ has a complex non-central Wishart distribution $\mathcal{W}_m\left(n,\boldsymbol{\Sigma},\boldsymbol{\Theta}\right)$ with density function \cite{James1964} \begin{equation} \label{wishart} \begin{split} f_{\mathbf{W}}\left(\mathbf{W}\right)& = \frac{\mathrm{etr}\left(-\boldsymbol{\Theta}\right)|\mathbf{W}|^{n-m}}{\tilde{\Gamma}_m(n)|\boldsymbol{\Sigma}|^{n}} \mathrm{etr}\left(-\boldsymbol{\Sigma}^{-1}\mathbf{W}\right){}_0\widetilde{F}_1\left(n;\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}\mathbf{W}\right) \end{split} \end{equation} where $\boldsymbol{\Theta}=\boldsymbol{\Sigma}^{-1}\boldsymbol{\Upsilon}^H\boldsymbol{\Upsilon}$ is the \textit{non-centrality} parameter and $\tilde{\Gamma}_m(\cdot)$ represents the complex multivariate gamma function defined as \begin{equation*} \tilde{\Gamma}_m(n)\stackrel{ \Delta}{=}\pi^{\frac{m(m-1)}{2}}\prod_{j=1}^{m}\Gamma(n-j+1) \end{equation*} with $\Gamma(\cdot)$ denoting the classical gamma function. \end{definition} \begin{definition}{\bf{(Matrix Variate Gamma Distribution)}} Let $\alpha\geq m$ and $\boldsymbol{\Omega}\in\mathcal{H}_m^+$. The random matrix $\mathbf{M}\in\mathcal{H}_m^+$ has a matrix-variate complex gamma distribution $\Gamma_m\left(\alpha,\boldsymbol{\Omega}\right)$ if its density is as given in \cite[Def.~6.3]{Mathai2}. \end{definition} \begin{definition}{\bf{(Gamma-Wishart Distribution)}} Let us construct an $n\times m$ matrix $\widetilde{\mathbf{X}}$ such that \begin{equation} \widetilde{\mathbf{X}}=\widehat{\mathbf{X}}+\overline{\mathbf{X}} \end{equation} where $\widehat{\mathbf{X}}\sim\mathcal{CN}_{n,m}\left(\mathbf{0},\mathbf{I}_n\otimes \boldsymbol{\Sigma}\right)$ and $\overline{\mathbf{X}}^H\overline{\mathbf{X}}\sim \Gamma_m\left(\alpha,\boldsymbol{\Omega}\right)$ are independent. Then $\mathbf{V}=\widetilde{\mathbf{X}}^H\widetilde{\mathbf{X}}\in\mathcal{H}_m^+$ follows a gamma-Wishart distribution $\Gamma {\cal W}_m (n, \alpha, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$ given by \cite{Alfano} \begin{equation} \label{Gram} f_{\mathbf{V}}(\mathbf{V})=\frac{\mathrm{etr}\left(-\boldsymbol{\Sigma}^{-1}\mathbf{V}\right)|\mathbf{V}|^{n-m}|\boldsymbol{\Omega}|^{\alpha}} {\tilde{\Gamma}_m(n)|\boldsymbol{\Sigma}|^{n}\left|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}\right|^{\alpha}} {}_1\widetilde{F}_1\left(\alpha;n;\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}\right)^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{V}\right). \end{equation} Note that for $\alpha=n$, (\ref{Gram}) reduces to the density of $\mathcal{W}_{m}\left(n,\boldsymbol{\Sigma}+\boldsymbol{\Omega}^{-1}\right)$. \end{definition} In addition to zonal polynomials, non-central distributional problems in multivariate statistics commonly give rise to other classes of invariant polynomials \cite{Chikuse1986}. The next lemma presents the joint eigenvalue distribution of a gamma-Wishart matrix, in terms of the invariant polynomials defined in \cite{Davis1979,Davis1980,Rathna}. The proof of this lemma follows similar steps to the proof of the correlated non-central Wishart joint eigenvalue density, $g_{\boldsymbol{\Lambda}}\left(\boldsymbol{\Lambda}\right)$, in \cite[Eq. 5.4]{Rathna} and is thus omitted.
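Before proceeding, we note that the above constructions are straightforward to sample numerically, which is useful for Monte Carlo checks of the eigenvalue results derived later. The following minimal sketch, assuming only the definitions above (all parameter values are illustrative), draws a correlated non-central Wishart matrix $\mathbf{W}\sim\mathcal{W}_2\left(4,\boldsymbol{\Sigma},\boldsymbol{\Theta}\right)$ with a rank-one mean matrix.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 2
Sigma = np.array([[1.0, 0.3 + 0.2j],
                  [0.3 - 0.2j, 0.8]])       # Hermitian positive definite
L = np.linalg.cholesky(Sigma)               # Sigma = L L^H
u = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
v = rng.normal(size=(1, m)) + 1j * rng.normal(size=(1, m))
Upsilon = u @ v                             # rank-one mean matrix

def sample_wishart():
    # Rows of Z are i.i.d. CN(0, I_m), so X ~ CN_{n,m}(Upsilon, I_n (x) Sigma).
    Z = (rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))) / np.sqrt(2)
    X = Upsilon + Z @ L.conj().T
    return X.conj().T @ X                   # W = X^H X

W = sample_wishart()
Theta = np.linalg.inv(Sigma) @ Upsilon.conj().T @ Upsilon  # non-centrality
print("lambda_min(W) =", np.linalg.eigvalsh(W).min())
\end{verbatim}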
\begin{lemma} \label{lem:eigwgamma} The joint density of the ordered eigenvalues ${\lambda}_1>{\lambda}_2> \cdots >{\lambda}_m>0$ of the matrix $\mathbf{V}$ in (\ref{Gram}) is given by \begin{align} \label{eigenpdfwg} g_{\widetilde{\boldsymbol{\Lambda}}}\left({\boldsymbol{\Lambda}}\right) &= \frac{\pi^{m(m-1)}|\boldsymbol{\Omega}|^\alpha}{\tilde{\Gamma}_m(n)\tilde{\Gamma}_m(m)|\boldsymbol{\Sigma}|^{n} |\boldsymbol{\Omega}+\boldsymbol{\Sigma}^{-1}|^\alpha} \prod_{k=1}^{m}{\lambda}_k^{n-m}\prod_{k<l}^m\left({\lambda}_k-{\lambda}_l\right)^2\nonumber\\ & \times \sum_{k,s=0}^{\infty}\sum_{\kappa,\sigma;\phi\in\kappa.\sigma} \frac{[\alpha]_\sigma C_{\phi}^{\kappa,\sigma}\left(-\boldsymbol{\Sigma}^{-1},\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{\Omega}+\boldsymbol{\Sigma}^{-1}\right)^{-1} \boldsymbol{\Sigma}^{-1} \right) C_{\phi}^{\kappa,\sigma}\left({\boldsymbol{\Lambda}},{\boldsymbol{\Lambda}}\right)}{k!s![n]_{\sigma}C_{\phi}\left(\mathbf{I}_m\right)} \end{align} where ${\boldsymbol{\Lambda}}$ is a diagonal matrix containing the eigenvalues of $\mathbf{V}$ along the main diagonal. \end{lemma} The following technical lemma is proved in \ref{ap:A}. \begin{lemma}\label{lem:factorize} Let $x_1,x_2$ be the two distinct eigenvalues of $\mathbf{X}\in\mathcal{H}_2^+$. Then, for all $n\in\mathbb{Z}^+$, \begin{equation} \frac{x_1^n-x_2^n}{x_1-x_2}= \sum_{i=0}^{\left\lceil \frac{n-2}{2}\right\rceil} (-1)^i4^ie_i^n|\mathbf{X}|^i\mathrm{tr}^{n-1-2i}(\mathbf{X}) \end{equation} where $e_i^n$ denotes the $i$th elementary symmetric function of the parameters \begin{equation} \mathcal{S}^n := \left\{\cos^2 \left(\frac{\pi}{n}\right),\cos^2 \left(\frac{2\pi}{n}\right),\ldots,\cos^2\left( \left\lceil\frac{n-2}{2}\right\rceil\frac{\pi}{n}\right)\right\}. \end{equation} \end{lemma} \subsection{New Matrix Integrals} Here we present some new matrix integral results which will be important in the derivations of the extreme eigenvalue distributions given in the following sections. \begin{lemma}\label{lem:1f1} Let $\mathbf{A}\in\mathcal{H}_2^+$ and $\mathbf{B}\in\mathcal{H}_2$ with $\mathbf{B}\geq 0$. Also, define $x_1(y)$ and $x_2(y)$ as the eigenvalues of $\mathbf{A}+\mathbf{B}y$. Then, $\forall p\in\mathbb{Z}^+_0$ and $\Re(a)>1$, \begin{equation} \label{incomgammaint} \int_{\mathbf{0}}^{\mathbf{I}_2} |\mathbf{X}|^{a-2} \mathrm{etr}\left(\mathbf{AX}\right) \mathrm{tr}^{p}\left(\mathbf{BX}\right)d\mathbf{X}=\frac{\tilde \Gamma_2(a)\tilde \Gamma_2(2)}{\tilde \Gamma_2(a+2)} \phi^{(p)}_{\mathbf{A},\mathbf{B},a}(0) \end{equation} where $ \phi^{(p)}_{\mathbf{A},\mathbf{B},a}(0)$ is calculated recursively via \begin{equation} \label{sub1} \phi^{(p)}_{\mathbf{A},\mathbf{B},a}(0)=\frac{1}{h_{\mathbf{A},\mathbf{B}}(0)} \left(\Delta^{(p)}_{\mathbf{A},\mathbf{B},a}(0)-\sum_{j=1}^p\binom{p}{j}\phi^{(p-j)}_{\mathbf{A},\mathbf{B},a}(0)h^{(j)}_{\mathbf{A},\mathbf{B}}(0)\right) \end{equation} with initial condition \begin{equation} \label{initialcond} \phi^{(0)}_{\mathbf{A},\mathbf{B},a}(0)=\phi_{\mathbf{A},\mathbf{B},a}(0)=\frac{\Delta_{\mathbf{A},\mathbf{B},a}(0)}{x_1(0)-x_2(0)}\;.
\end{equation} Here, \begin{align} & \Delta_{\mathbf{A},\mathbf{B},a}(y)= x_1(y){}_1F_1\left(a;a+2;x_1(y)\right) {}_1F_1\left(a-1;a+1;x_2(y)\right) \nonumber \\ & \hspace*{3cm} - x_2(y){}_1F_1\left(a;a+2;x_2(y)\right) {}_1F_1\left(a-1;a+1;x_1(y)\right) \end{align} and \begin{equation} h^{(j)}_{\mathbf{A},\mathbf{B}}(0)=x^{(j)}_1(0)-x^{(j)}_2(0), \end{equation} with \begin{equation} \label{x1def} x^{(j)}_1(0)=\left\{\begin{array}{cl} \displaystyle \frac{x_1(0)\mathrm{tr}(\mathbf{B})-|\mathbf{A}|\mathrm{tr}\left(\mathbf{BA}^{-1}\right)}{x_1(0)-x_2(0)} & \mathrm{if}\; j=1\\ \displaystyle \frac{2\left(x_1^{(1)}(0)x_2^{(1)}(0)-|\mathbf{B}|\right)}{x_1(0)-x_2(0)} & \mathrm{if}\; j=2\\ \displaystyle \frac{\sum_{k=1}^{j-1}\binom{j}{k}x_1^{(j-k)}(0)x_2^{(k)}(0)}{x_1(0)-x_2(0)} & \mathrm{if}\; j\geq 3\;, \end{array}\right. \end{equation} \begin{equation} \label{x2def} x^{(j)}_2(0)=\left\{\begin{array}{cl} \displaystyle \frac{|\mathbf{A}|\mathrm{tr}\left(\mathbf{BA}^{-1}\right)-x_2(0)\mathrm{tr}(\mathbf{B})}{x_1(0)-x_2(0)} & \mathrm{if}\; j=1\\ -x_1^{(j)}(0)& \mathrm{if}\; j\geq 2\; . \\ \end{array}\right. \end{equation} \end{lemma} \begin{proof} See \ref{ap:B}. \end{proof} \begin{lemma} \label{lem:trace} Let $\mathbf{A}\in\mathcal{H}_m^+$ and let $\mathbf{R}\in\mathcal{H}_m$ with unit rank. Then, for $t\in \mathbb{Z}^{+}_0$ and $\Re(a)>m-1$, \begin{align} \label{theq1} \int_{\mathbf{X}\in\mathcal{H}_m^+} \mathrm{etr}& \left(-\mathbf{A}\mathbf{X}\right)\mathrm{tr}\left(\mathbf{X}\right)|\mathbf{X}|^{a-m}\mathrm{tr}^t\left(\mathbf{R}\mathbf{X}\right)d\mathbf{X} = \nonumber\\ &(a)_t\tilde \Gamma_m(a)\mathrm{tr}^{t}\left(\mathbf{R}\mathbf{A}^{-1}\right)|\mathbf{A}|^{-a} \left(t\; \frac{\mathrm{tr}\left(\mathbf{R}\left(\mathbf{A}^{-1}\right)^{2}\right)}{\mathrm{tr}\left(\mathbf{R}\mathbf{A}^{-1}\right)}+a\;\mathrm{tr}(\mathbf{A}^{-1})\right). \end{align} \end{lemma} \begin{proof} See \ref{ap:C}. \end{proof} When the matrices are of size $2\times 2$, we can obtain the following general result: \begin{lemma}\label{lem:2by2tracep} Let $\mathbf{A}\in \mathcal{H}_2^+$ and let $\mathbf{R}\in\mathcal{H}_2$ with unit rank. Then, for $p$, $t\in\mathbb{Z}^{+}_0$ and $\Re(a)>1$, \begin{align} \int_{\mathbf{X}\in\mathcal{H}_2^+} \mathrm{etr}& \left(-\mathbf{A}\mathbf{X}\right)\mathrm{tr}^p\left(\mathbf{X}\right) \left|\mathbf{X}\right|^{a-2}\mathrm{tr}^t\left(\mathbf{R}\mathbf{X}\right)d\mathbf{X} = \nonumber\\ &p!\frac{(a)_t\tilde \Gamma_2(a)}{|\mathbf{A}|^{a+\frac{p}{2}}}\sum_{k=0}^{\min(p, t)}\frac{(-1)^k\binom{t}{k}}{|\mathbf{A}|^{\frac{k}{2}}}\mathrm{tr}^{t-k}\left(\mathbf{R}\mathbf{A}^{-1}\right)\mathrm{tr}^{k}\left(\mathbf{R}\right) \mathcal{C}^{a+t}_{p-k}\left(\frac{\mathrm{tr}\left(\mathbf{A}\right)}{2\sqrt{\left|\mathbf{A}\right|}}\right) \end{align} where $\mathcal{C}_n^\nu(\cdot)$ denotes an ultraspherical (Gegenbauer) polynomial. \end{lemma} \begin{proof} See \ref{ap:D}. \end{proof} \begin{lemma}\label{lem:3by4rank1} Let $\mathbf{A}\in\mathcal{H}_3^+$ and let $\mathbf{R}\in\mathcal{H}_3$ with $\mathbf{R}\geq 0$ and unit rank. Then, for $t\in \mathbb{Z}^{+}_0$, \begin{align} \label{110thoerem} \int_{\mathbf{X}\in\mathcal{H}_3^+} \mathrm{etr}& \left(-\mathbf{A}\mathbf{X}\right) \mathrm{tr}^t(\mathbf{RX})C_{1,1,0}(\mathbf{X})d\mathbf{X} =\nonumber\\ & \tilde \Gamma_3(4)|\mathbf{A}|^{-4}\left( (4)_t \mathrm{tr}^t\left(\mathbf{R}\mathbf{A}^{-1}\right)\mathrm{tr}(\mathbf{A})+t(4)_{t-1} \mathrm{tr}^{t-1}\left(\mathbf{R}\mathbf{A}^{-1}\right)\mathrm{tr}(\mathbf{R})\right). \end{align} \end{lemma} \begin{proof} See \ref{ap:E}. \end{proof}
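As a quick numerical sanity check of Lemma~\ref{lem:factorize} (whose elementary symmetric functions also enter Lemma~\ref{lem:tracegamma} below), the following sketch compares both sides of the identity for a randomly generated $\mathbf{X}\in\mathcal{H}_2^+$; it is illustrative only.
\begin{verbatim}
import numpy as np
from itertools import combinations
from math import prod, ceil, cos, pi

def elem_sym(vals, i):
    # e_0 = 1; e_i = sum over all i-element subsets of the product.
    return 1.0 if i == 0 else sum(prod(c) for c in combinations(vals, i))

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
X = A @ A.conj().T + 2.0 * np.eye(2)        # Hermitian positive definite
x1, x2 = np.linalg.eigvalsh(X)
detX, trX = np.linalg.det(X).real, np.trace(X).real

for n in range(2, 9):
    lhs = (x1**n - x2**n) / (x1 - x2)
    S = [cos(k * pi / n) ** 2 for k in range(1, ceil((n - 2) / 2) + 1)]
    rhs = sum((-1)**i * 4**i * elem_sym(S, i) * detX**i * trX**(n - 1 - 2*i)
              for i in range(ceil((n - 2) / 2) + 1))
    assert np.isclose(lhs, rhs)
print("Lemma verified numerically for n = 2,...,8")
\end{verbatim}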
\begin{lemma}\label{lem:tracegamma} Let $\mathbf{A},\mathbf{B}\in\mathcal{H}_2^+$. Then, for $p,t \in \mathbb{Z}^+_0$ and $\Re(a)>1$, we have \begin{align} \label{twotracetheo} \int_{\mathbf{X}\in\mathcal{H}_2^+} & \mathrm{etr}\left(-\mathbf{A}\mathbf{X}\right)\mathrm{tr}^p\left(\mathbf{BX}\right) \mathrm{tr}^t\left(\mathbf{X}\right)\left|\mathbf{X}\right|^{a-2}d\mathbf{X}\nonumber\\ &\quad =p!t!|\mathbf{A}|^{-a}\tilde \Gamma_2(a)\displaystyle \sum_{t_1=\left\lceil\frac{t}{2}\right\rceil}^t \frac{(a)_{t_1}(a)_{t-t_1}\left(2t_1+1-t\right)}{\left(t_1+1\right)!\left(t-t_1\right)!} \displaystyle \sum_{i=0}^{\left\lceil\frac{2t_1-t-1}{2}\right\rceil}\mathcal{B}_{\tau,p,i} \end{align} where \begin{align*} \mathcal{B}_{\tau,p,i}=\sum_{k=0}^{\min(p,\;\varepsilon_{t_1,i})} (-1)^{k+i}4^i e_i^{\tau} \binom{\varepsilon_{t_1,i}}{k} \mathrm{tr}^{\varepsilon_{t_1,i}-k}\left(\mathbf{A}\right) & \mathrm{tr}^{k}\left(\mathbf{B}\right) |\mathbf{A}|^{-\varepsilon_{t_1}-\frac{p-k}{2}} |\mathbf{B}|^{\frac{p-k}{2}} \\ & \quad \times \mathcal{C}^{\varepsilon_{t_1}+a}_{p-k}\left(\frac{\mathrm{tr} \left(\mathbf{A}^{-1}\mathbf{B}\right)}{2\sqrt{\left|\mathbf{A}^{-1}\mathbf{B}\right|}}\right), \end{align*} $\varepsilon_{t_1,i}=2t_1-t-2i,\;\varepsilon_{t_1}=t_1-i$, and $\tau=\left(t_1,t-t_1\right)$ is a partition of $t$ such that $\left\lceil\frac{t}{2}\right\rceil\leq t_1\leq t$. Moreover, $e_i^\tau$ denotes the $i$th elementary symmetric function of the parameters \begin{align} \mathcal{S}^{\tau}& := \left\{\cos^2\left(\frac{\pi}{2t_1-t+1}\right),\cos^2\left(\frac{2\pi}{2t_1-t+1}\right),\ldots\ldots\right.\nonumber\\ &\hspace{4.5cm}\quad \left.\ldots,\cos^2\left(\left\lceil\frac{2t_1-t-1}{2}\right\rceil\frac{\pi}{2t_1-t+1}\right)\right\}. \end{align} \end{lemma} \begin{proof} See \ref{ap:F}. \end{proof} Armed with the new results in this section, we are now in a position to derive the extreme eigenvalue distributions of both correlated complex non-central Wishart and gamma-Wishart matrices. These key results are the focus of the following two sections. \section{New Minimum Eigenvalue Distributions} In this section, we consider the minimum eigenvalue distribution. To evaluate this, the most direct approach is to integrate the joint eigenvalue probability density function (p.d.f.) as follows: \begin{align} F_{min} (x) &= 1 - P( \lambda_1 > \cdots > \lambda_m > x ) \nonumber \\ &= 1 - \int_{\mathcal{D}} g(\boldsymbol{\Lambda}) d \lambda_1 \cdots d \lambda_m \end{align} where $\mathcal{D} = \{ x < \lambda_m < \cdots < \lambda_1 \}$ and $g(\boldsymbol{\Lambda})\in\left\{g_{\boldsymbol{\Lambda}}(\boldsymbol{\Lambda}), g_{ \boldsymbol{\widetilde \Lambda}}(\boldsymbol{\Lambda}) \right\}$. This direct approach, however, is difficult for two main reasons: (i) the presence of the invariant polynomials in the joint eigenvalue densities, and (ii) the unbounded upper limit of the integrals, which makes term-by-term integration intractable. To circumvent these complexities, in the following we adopt an alternative derivation approach based on integrating directly over the matrix-variate distribution itself, rather than the distribution of the eigenvalues. To highlight the approach, consider $\mathbf{Y}\in\mathcal{H}_m^+$ with minimum eigenvalue $\lambda_{\text{min}}(\mathbf{Y})$ having c.d.f.\ \begin{equation} \label{cdf} F_{{min}}(x)=P\left(\lambda_{{min}}(\mathbf{Y})\leq x\right)=1-P\left(\lambda_{{min}}(\mathbf{Y})>x\right) \; .
\end{equation} The key idea is to invoke the obvious relation\footnote{This relation has also been employed previously in \cite{Davis1979,Muirhead,Const,Mathai,Rathna,Kov1}.} \begin{equation} \label{eq:minEVRelation} P\left(\lambda_{{min}}(\mathbf{Y})>x\right) = P\left(\mathbf{Y}> x\mathbf{I}_m \right) \end{equation} which allows one to deal purely with the distribution of $\mathbf{Y}$, rather than the distribution of its eigenvalues. \subsection{Correlated Non-Central Wishart Matrices} For the non-central Wishart scenario, we deal with the matrix $\mathbf{W}$ with joint density given in (\ref{wishart}). Thus, with (\ref{eq:minEVRelation}), we have \begin{align} \label{matrixcdf} P\left(\lambda_{{min}}(\mathbf{W})>x\right) &= \int_{\mathbf{W}>x\mathbf{I}_m }f_{\mathbf{W}}\left(\mathbf{W}\right)d\mathbf{W} \nonumber\\ &= \frac{\exp\left(-\eta\right)}{\tilde{\Gamma}_m(n)\left|\boldsymbol{\Sigma}\right|^n} \int_{\mathbf{W}-x\mathbf{I}_m\in\mathcal{H}_m^+} \left|\mathbf{W}\right|^{n-m} \text{etr}\left(-\boldsymbol{\Sigma}^{-1}\mathbf{W}\right)\nonumber\\ & \hspace{5cm} \times {}_0\widetilde{F}_1\left(n;\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}\mathbf{W}\right)d\mathbf{W} \end{align} where $\eta=\text{tr}(\boldsymbol{\Theta})$. Applying the change of variables $\mathbf{W}=x\left(\mathbf{I}_m+\mathbf{Y}\right)$ with $d\mathbf{W}=x^{m^2}d\mathbf{Y}$ yields \begin{align*} P\left(\lambda_{{min}}(\mathbf{W})>x\right)&= \frac{x^{mn}\exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde{\Gamma}_m(n)\left|\boldsymbol{\Sigma}\right|^n} \int_{\mathbf{Y}\in\mathcal{H}_m^+} \left|\mathbf{I}_m+\mathbf{Y}\right|^{n-m}\nonumber\\ & \qquad \quad \times \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right) {}_0\widetilde{F}_1\left(n;x\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}\left(\mathbf{I}_m+\mathbf{Y}\right)\right)d\mathbf{Y}. \end{align*} It is now convenient to expand the hypergeometric function with its equivalent zonal polynomial series expansion (\ref{hypo}) to give \begin{align} \label{zonal} & P\left(\lambda_{{min}}(\mathbf{W})>x\right)= \frac{x^{mn}\exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde{\Gamma}_m(n)\left|\boldsymbol{\Sigma}\right|^n} \sum_{k=0}^\infty\sum_{\kappa}\frac{1}{k![n]_{\kappa}} \nonumber\\ & \qquad\times \int_{\mathbf{Y}\in\mathcal{H}_m^+} \left|\mathbf{I}_m+\mathbf{Y}\right|^{n-m} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right) C_{\kappa}\left(x\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}\left(\mathbf{I}_m+\mathbf{Y}\right)\right)d\mathbf{Y} \end{align} where $\kappa=\left(\kappa_1,\ldots,\kappa_m\right)$ is a partition of $k$ into not more than $m$ parts such that $\kappa_1\geq\cdots\geq\kappa_m\geq 0$ and $\sum_{i=1}^m\kappa_i=k$. Since $\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}$ is Hermitian non-negative definite with rank one, it can be represented via its eigen decomposition as \begin{equation} \label{eigendecom} \boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}=\mu \boldsymbol{\alpha}\boldsymbol{\alpha}^H \end{equation} where $\boldsymbol{\alpha}\in\mathbb{C}^{m\times 1}$ and $\boldsymbol{\alpha}^H\boldsymbol{\alpha}=1$.
Recalling that zonal polynomials depend only on the \emph{eigenvalues} of their matrix arguments, and noting that $\boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1}\left(\mathbf{I}_m+\mathbf{Y}\right)$ is also rank one, we can write (\ref{zonal}) with the aid of (\ref{eigendecom}) as \begin{align} \label{zonaltrace} & P\left(\lambda_{{min}}(\mathbf{W})>x\right)= \frac{x^{mn}\exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde{\Gamma}_m(n)\left|\boldsymbol{\Sigma}\right|^n} \sum_{k=0}^\infty\sum_{\kappa}\frac{1}{k![n]_{\kappa}} \nonumber\\ &\quad\times \int_{\mathbf{Y}\in\mathcal{H}_m^+} \left|\mathbf{I}_m+\mathbf{Y}\right|^{n-m} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right) C_{\kappa}\left(x\mu\boldsymbol{\alpha}^H\left(\mathbf{I}_m+\mathbf{Y}\right)\boldsymbol{\alpha}\right)d\mathbf{Y}. \end{align} Applying the complex analogue of \cite[Corollary 7.2.4]{Muirhead}, since $\boldsymbol{\alpha}^H\left(\mathbf{I}_m+\mathbf{Y}\right)\boldsymbol{\alpha}$ is rank one, it follows that $C_{\kappa}\left(x\mu\boldsymbol{\alpha}^H\left(\mathbf{I}_m+\mathbf{Y}\right)\boldsymbol{\alpha}\right)=0$ for all partitions $\kappa$ having more than one non-zero part. Hence, for the single-part partition $\kappa=(k,0,\ldots,0)$, \begin{align} \label{zonaldef} C_{\kappa}\left(x\mu\boldsymbol{\alpha}^H\left(\mathbf{I}_m+\mathbf{Y}\right)\boldsymbol{\alpha}\right) &= (x \mu)^k \sum_{t=0}^{k}\binom{k}{t}\text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) \end{align} and (\ref{zonaltrace}) can be written as \begin{align} \label{cdfintegral} P\left(\lambda_{{min}}(\mathbf{W})>x\right)= \frac{x^{mn}\exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde{\Gamma}_m(n)\left|\boldsymbol{\Sigma}\right|^n} \sum_{k=0}^\infty\frac{\left(x\mu\right)^k}{k!(n)_{k}}\sum_{t=0}^k\binom{k}{t}\mathcal{Q}^t_{m,n}(x) \end{align} where \begin{equation} \label{finalmatintegra} \mathcal{Q}^t_{m,n}(x)=\int_{\mathbf{Y}\in\mathcal{H}_m^+} \left|\mathbf{I}_m+\mathbf{Y}\right|^{n-m} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right) \text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}. \end{equation} Unfortunately, it appears that this integral is not solvable in closed form for \emph{arbitrary} values of $m$ and $n$. However, as we now show, it can be solved in closed form for various important configurations, thus yielding exact expressions for the minimum eigenvalue distributions. These results are presented in three key theorems. In each of these, we recall the notation \begin{align} \mu = {\rm tr} \left( \boldsymbol{\Theta}\boldsymbol{\Sigma}^{-1} \right) , \quad \quad \eta = {\rm tr} \left( \boldsymbol{\Theta} \right) . \end{align} The theorem below gives the exact minimum eigenvalue distribution for ``square'' Wishart matrices: \begin{theorem} \label{th:MainResult} Let $\mathbf{X}\sim \mathcal{CN}_{m,m}\left(\boldsymbol{\Upsilon},\mathbf{I}_m\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{m\times m}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. Then the c.d.f.\ of $\lambda_{\text{min}}(\mathbf{W})$ is given by \begin{equation} \label{cdfans} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \sum_{k=0}^{\infty} \frac{\left(x\mu\right)^k}{k!(m)_{k}} {}_1F_1\left(m;m+k;\eta\right).
\end{equation} \end{theorem} \begin{proof} Substituting $m=n$ into (\ref{cdfintegral}) and (\ref{finalmatintegra}) yields \begin{align} \label{cdfintmm} P\left(\lambda_{{min}}(\mathbf{W})>x\right)=& \frac{x^{m^2}\exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde{\Gamma}_m(m)\left|\boldsymbol{\Sigma}\right|^m} \sum_{k=0}^\infty\frac{\left(x\mu\right)^k}{k!(m)_{k}}\sum_{t=0}^k\binom{k}{t}\mathcal{Q}^t_{m,m}(x) \end{align} where \begin{equation} \mathcal{Q}^t_{m,m}(x)=\int_{\mathbf{Y}\in\mathcal{H}_m^+} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right) C_{\tau}\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}. \end{equation} This matrix integral can be solved using \cite[Eq. 6.1.20]{Mathai2} to give \begin{equation} \label{intfinalsol} \mathcal{Q}^t_{m,m}(x)=\frac{\tilde{\Gamma}_m(m)(m)_t \left|\boldsymbol{\Sigma}\right|^m}{x^{m^2}}C_{\tau}\left(\frac{\boldsymbol{\Theta}}{\mu x}\right)=\frac{\tilde{\Gamma}_m(m)(m)_t\left|\boldsymbol{\Sigma}\right|^m}{x^{m^2}}\left(\frac{\eta}{x\mu}\right)^t \; \end{equation} where we have applied (\ref{eigendecom}) to arrive at the argument of the zonal polynomial. Substituting (\ref{intfinalsol}) into (\ref{cdfintmm}) with some manipulation yields \begin{equation} \label{befresum} P\left(\lambda_{{min}}(\mathbf{W})>x\right)= \exp\left(-\eta\right)\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \sum_{k=0}^{\infty} \frac{\left(x\mu\right)^k}{k!(m)_{k}} \sum_{t=0}^k\binom{k}{t}(m)_t\left(\frac{\eta}{x\mu}\right)^t. \end{equation} To obtain a power series in $x$, we re-sum the infinite series as follows \begin{align} \label{resum} \sum_{k=0}^{\infty} \frac{\left(x\mu\right)^k}{k!(m)_{k}} \sum_{t=0}^k\binom{k}{t}(m)_t\left(\frac{\eta}{x\mu}\right)^t= \sum_{k=0}^\infty\frac{\left(x\mu\right)^{k}}{k!(m)_k}{}_1F_1\left(m;m+k;\eta\right). \end{align} Finally, using (\ref{resum}) in (\ref{befresum}) with (\ref{cdf}) gives the result in (\ref{cdfans}). \end{proof} \begin{remark} An alternative expression for the c.d.f. can be obtained by observing the fact that \begin{align} \sum_{k=0}^{\infty} \frac{\left(x\mu\right)^k}{k!(m)_{k}} \sum_{t=0}^k\binom{k}{t}(m)_t\left(\frac{\eta}{x\mu}\right)^t&=\sum_{t=0}^\infty\sum_{k=0}^\infty \frac{(m)_t}{(m)_{t+k}t!k!}\eta^t\left(x\mu\right)^k\nonumber\\ & = \Phi_3\left(m,m,\eta,x\mu\right) \end{align} where $\Phi_3(a,b,x,y)$ is the confluent hypergeometric function of two variables \cite[Eq. 5.7.1.23 ]{Erdelyi}. Thus, we can write the minimum eigenvalue c.d.f. as \begin{equation} F_{{min}}(x)=1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)\Phi_3\left(m,m,\eta,x\mu\right). \end{equation} \end{remark} The theorem below gives the exact minimum eigenvalue distribution for $2 \times 2$ Wishart matrices with \emph{arbitrary} degrees of freedom: \begin{theorem}\label{th:nby2wishart} Let $\mathbf{X}\sim \mathcal{CN}_{n,2}\left(\boldsymbol{\Upsilon},\mathbf{I}_n\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{n\times 2}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. 
Then the c.d.f.\ of $\lambda_{\text{min}}(\mathbf{W})$ is given by \begin{align} \label{cdfn2} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\frac{\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)}{\tilde \Gamma_2(n)\left|\boldsymbol{\Sigma}\right|^{n-2}} \sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{k!(n)_k} \sum_{t=0}^k \binom{k}{t}\left(\frac{\eta}{x\mu}\right)^t\rho(t,x) \end{align} where \begin{align*} \rho(t,x)=\sum_{i=0}^{n-2}\sum_{j=0}^i\sum_{l=0}^{\min(j,t)} &(-1)^l\binom{n-2}{i}\binom{i}{j}\binom{t}{l} j!(\omega_{i,j})_t\tilde \Gamma_2\left(\omega_{i,j}\right)\left(\frac{\mu}{\eta}\right)^l \\ & \times \left|\boldsymbol{\Sigma}\right|^{i+l/2-j/2}\mathcal{C}_{j-l}^{\omega_{i,j}+t}\left(\frac{1}{2}\mathrm{tr} \left(\boldsymbol{\Sigma}^{-1}\right)\sqrt{\left|\boldsymbol{\Sigma}\right|}\right)x^{2n+j-2i-4}\;, \end{align*} and $\omega_{i,j}=i-j+2$. \end{theorem} \begin{proof} We begin by substituting $m=2$ into (\ref{cdfintegral}) and (\ref{finalmatintegra}) to yield \begin{align*} \label{ncdfexp} P\left(\lambda_{min}(\mathbf{W})>x\right)=\frac{\exp(-\eta)}{\tilde \Gamma_2(n)\left|\boldsymbol{\Sigma}\right|^n}x^{2n}\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)\sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{k!(n)_k}\sum_{t=0}^k \binom{k}{t}\mathcal{Q}^t_{2,n}(x). \end{align*} Now we use the determinant expansion \begin{equation} \label{ncdfdetexp} \left|\mathbf{I}_2+\mathbf{Y}\right|^{n-2}=\sum_{i=0}^{n-2}\sum_{j=0}^i \binom{n-2}{i}\binom{i}{j} \text{tr}^j\left(\mathbf{Y}\right)\left|\mathbf{Y}\right|^{i-j} \end{equation} to write $\mathcal{Q}^t_{2,n}(x)$ as \begin{align} \mathcal{Q}^t_{2,n}(x)=\sum_{i=0}^{n-2}\sum_{j=0}^i \binom{n-2}{i}\binom{i}{j} \int_{\mathbf{Y}\in\mathcal{H}_2^+} \text{tr}^j\left(\mathbf{Y}\right)\left|\mathbf{Y}\right|^{i-j} & \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)\nonumber\\ & \hspace*{-1cm} \times \mathrm{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}. \end{align} Lemma \ref{lem:2by2tracep} can be used to solve the above integral in closed form and subsequent use of (\ref{cdf}) followed by some algebraic manipulations gives (\ref{cdfn2}). \end{proof} Although the c.d.f.\ result in Theorem \ref{th:nby2wishart} is seemingly complicated, it can be evaluated numerically for any value of $n$. Moreover, for specific values of $n$ it often gives simplified solutions. Some examples are shown in the following corollaries. \begin{corollary}\label{cor:3by2wishart} Let $\mathbf{X}\sim \mathcal{CN}_{3,2}\left(\boldsymbol{\Upsilon},\mathbf{I}_3\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{3\times 2}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. Then the c.d.f.\ of $\lambda_{\text{min}}(\mathbf{W})$ is given by \begin{equation} \begin{split} \label{cdfans3} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) & \sum_{k=0}^\infty\frac{\left(x\mu\right)^k}{k!(3)_k}\mathcal{F}_{3,2}(k,\eta,x) \end{split} \end{equation} where \begin{equation*} \mathcal{F}_{3,2}(k,\eta,x)=\varrho_1(x){}_1F_1\left(3;3+k;\eta\right)+\varrho_2(x){}_1F_1\left(2;3+k;\eta\right), \end{equation*} \begin{align*} \varrho_1(x)=1+\left(\mathrm{tr}\left(\boldsymbol{\Sigma}^{-1}\right)-\frac{\mu}{\eta}\right)x,\;\;\text{and}\;\; \varrho_2(x)=\frac{\mu}{\eta}x+\frac{x^2}{2|\boldsymbol{\Sigma}|}\;. \end{align*} \end{corollary} \begin{remark} An alternative expression for the above c.d.f.
can be written based on the confluent hypergeometric function of two arguments as \begin{align*} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) & \left(\varrho_1(x)\Phi_3\left(3,3,\eta,x\mu\right)\right.\\ & \qquad \qquad \left. +\; \varrho_2(x)\Phi_3\left(2,3,\eta,x\mu\right)\right). \end{align*} \end{remark} \begin{corollary}\label{cor:4by2wishart} Let $\mathbf{X}\sim \mathcal{CN}_{4,2}\left(\boldsymbol{\Upsilon},\mathbf{I}_4\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{4\times 2}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. Then the c.d.f.\ of $\lambda_{\text{min}}(\mathbf{W})$ is given by \begin{equation} \begin{split} \label{cdfans4} F_{{\text{min}}}(x)= 1-& \exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \sum_{k=0}^\infty\frac{\left(x\mu\right)^k}{k!(4)_k} \mathcal{F}_{4,2}(k,\eta,x) \end{split} \end{equation} where \begin{align*} \mathcal{F}_{4,2}(k,\eta,x)= \nu_1 (x){}_1F_1\left(4;4+k;\eta\right)+\nu_2 (x)&{}_1F_1\left(3;4+k;\eta\right)\\ & + \nu_3 (x) {}_1F_1\left(2;4+k;\eta\right), \end{align*} \begin{align*} \nu_1 (x)=& 1+a_1x+\frac{a_1}{2}x^2,\\ \nu_2 (x)= & \frac{\mu}{\eta}x+\left(\frac{1}{3}+\frac{a_2}{3}+\frac{2}{3}\mathrm{tr}\left(\boldsymbol{\Sigma}^{-1}\right)a_1-a_1^2\right)x^2 +\frac{a_1}{3|\boldsymbol\Sigma|}x^3,\\ \nu_3 (x)=& \left(\frac{a_1^2}{2}-\frac{2}{3}a_1\mathrm{tr}\left(\boldsymbol{\Sigma}^{-1}\right)-\frac{a_2}{3}+ \frac{\mathrm{tr}^2\left(\boldsymbol{\Sigma}^{-1}\right)}{3}+\frac{\mathrm{tr}\left(\boldsymbol{\Sigma}^{-2}\right)}{6}\right)x^2\\ & \hspace{7.5cm}+\frac{\mu x^3}{3\eta |\boldsymbol{\Sigma}|}+\frac{x^4}{12|\boldsymbol{\Sigma}|^2}, \end{align*} $a_1=\mathrm{tr}\left(\boldsymbol{\Sigma}^{-1}\right)-\frac{\mu}{\eta}$, and $a_2=\mathrm{tr}^2\left(\boldsymbol{\Sigma}^{-1}\right)-\frac{2}{|\boldsymbol{\Sigma}|}-\frac{\mu}{\eta}$. \end{corollary} \begin{remark} An alternative expression for the above c.d.f. can be written as \begin{align*} F_{{\text{min}}}(x)= 1-& \exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \left(\nu_1 (x)\Phi_3(4,4,\eta,x \mu) \right.\\ & \qquad \qquad \left.+ \;\nu_2 (x) \Phi_3(3,4,\eta,x \mu)+\nu_3 (x) \Phi_3(2,4,\eta,x \mu)\right). \end{align*} \end{remark} The theorem below gives the exact minimum eigenvalue distribution for $3 \times 3$ Wishart matrices with $4$ degrees of freedom: \begin{theorem}\label{th:4by3wishart} Let $\mathbf{X}\sim \mathcal{CN}_{4,3}\left(\boldsymbol{\Upsilon},\mathbf{I}_4\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{4\times 3}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. Then the c.d.f.\ of $\lambda_{\text{min}}(\mathbf{W})$ is given by \begin{align} \label{34} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{k!(4)_k}\mathcal{F}_{4,3}(k,\eta,x) \end{align} where \begin{equation*} \mathcal{F}_{4,3}(k,\eta,x)=\rho_1(x){}_1F_1\left(4;4+k;\eta \right)+\rho_2(x){}_1F_1\left(3;4+k;\eta \right), \end{equation*} \begin{align*} \rho_1(x)& = 1+\left(\mathrm{tr}\left(\boldsymbol{\Sigma}^{-1}\right)-\frac{\mu}{\eta}\right)x+ \frac{\mathrm{tr}\left(\boldsymbol{\Theta\Sigma}\right)}{2\eta|\boldsymbol{\Sigma}|}x^2,\\ \rho_2(x)& = \frac{\mu}{\eta}x+\frac{1}{2|\boldsymbol{\Sigma}|}\left(\mathrm{tr}\left(\boldsymbol{\Sigma}\right)-\mathrm{tr}\left(\boldsymbol{\Theta\Sigma}\right) \frac{1}{\eta}\right)x^2+\frac{x^3}{6|\boldsymbol{\Sigma}|}. 
\end{align*} \end{theorem} \begin{proof} We can write (\ref{cdfintegral}) and (\ref{finalmatintegra}) in the case of $m=3$ and $n=4$ as \begin{align} \label{34cdfexp} P\left(\lambda_{min}(\mathbf{W})>x\right)=\frac{\exp(-\eta)}{\tilde \Gamma_3(4)\left|\boldsymbol{\Sigma}\right|^4}x^{12}\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)\sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{k!(4)_k}\sum_{t=0}^k \binom{k}{t}\mathcal{Q}^t_{3,4}(x). \end{align} Following the identity \begin{equation} \label{det3ident} \left|\mathbf{I}_3+\mathbf{Y}\right|=1+\text{tr}(\mathbf{Y})+|\mathbf{Y}|+C_{1,1,0}(\mathbf{Y}), \end{equation} we can write $\mathcal{Q}^t_{3,4}(x)$ as \begin{align} \mathcal{Q}^t_{3,4}(x)=\int_{\mathbf{Y}\in\mathcal{H}_3^+} &\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)\text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}\nonumber\\ & +\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)\text{tr}(\mathbf{Y})\text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}\nonumber\\ & + \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)|\mathbf{Y}|\text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}\nonumber\\ & + \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)C_{1,1,0}(\mathbf{Y})\text{tr}^t\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right) d\mathbf{Y}. \end{align} These matrix integrals can be solved with the aid of \cite[Eq. 6.1.20]{Mathai2}, Lemma \ref{lem:trace}, and Lemma \ref{lem:3by4rank1} to yield \begin{align} \label{34Qans} \mathcal{Q}^t_{3,4}(x)=\frac{|\boldsymbol{\Sigma}|^4\tilde \Gamma_3(4)}{x^{12}}\left(\frac{\eta}{x\mu}\right)^t \left(\rho_1(x)(4)_t+\rho_2(x) (3)_t\right) \end{align} where we have used the relations $t(3)_t=3(4)_t-3(3)_t$ and $t(4)_{t-1}=(4)_t-(3)_t$. Substituting (\ref{34Qans}) into (\ref{34cdfexp}), we obtain \begin{align*} P\left(\lambda_{min}(\mathbf{W})>x\right){=}\exp(-\eta)& \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right) \left(\hspace{-1mm}\rho_1(x)\hspace{-1.5mm}\sum_{k=0}^\infty\hspace{-1mm} \frac{\left(x\mu\right)^k}{k!(4)_k}\hspace{-1mm}\sum_{t=0}^k \hspace{-1mm}\binom{k}{t}(4)_t\hspace{-1mm}\left(\frac{\eta}{x\mu}\right)^t\right.\nonumber\\ & \qquad \left.+\;\rho_2(x)\sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{k!(4)_k}\sum_{t=0}^k \binom{k}{t}(3)_t\left(\frac{\eta}{x\mu}\right)^t\right). \end{align*} Finally, we re-sum the infinite series as power series in $x$ and use (\ref{cdf}) to arrive at the result in (\ref{34}). \end{proof} \begin{remark} An alternative form of the c.d.f.\ above can be written as \begin{align*} F_{{\text{min}}}(x)= 1-\exp\left(-\eta\right)\mathrm{etr}\left(-x\boldsymbol{\Sigma}^{-1}\right)&\left( \rho_1(x)\Phi_3(4,4,\eta,x \mu)\right.\\ & \qquad \left.+ \rho_2(x)\Phi_3(3,4,\eta,x \mu)\right). \end{align*} \end{remark} We now present some simulation results to verify the validity of our new minimum eigenvalue distributions. 
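Such a comparison is straightforward to script. The following minimal Python sketch is our own construction (the function names are illustrative, it relies only on NumPy/SciPy, and it assumes the standard non-centrality matrix $\boldsymbol{\Theta}=\boldsymbol{\Sigma}^{-1}\boldsymbol{\Upsilon}^H\boldsymbol{\Upsilon}$ entering the density (\ref{wishart})); it evaluates the truncated series in (\ref{cdfans}) and draws Monte Carlo samples of $\lambda_{min}(\mathbf{X}^H\mathbf{X})$ against which the analytical c.d.f.\ can be checked:
\begin{verbatim}
import numpy as np
from scipy.special import hyp1f1

def cdf_min_square(x, Sigma, Theta, kmax=20):
    # Series of Theorem 1 (m = n), truncated after kmax terms.
    m = Sigma.shape[0]
    eta = np.trace(Theta).real                        # eta = tr(Theta)
    mu = np.trace(Theta @ np.linalg.inv(Sigma)).real  # mu = tr(Theta Sigma^{-1})
    etr = np.exp(-x * np.trace(np.linalg.inv(Sigma)).real)  # etr(-x Sigma^{-1})
    s, poch, fact = 0.0, 1.0, 1.0                     # (m)_0 = 1, 0! = 1
    for k in range(kmax + 1):
        s += (x * mu) ** k / (fact * poch) * hyp1f1(m, m + k, eta)
        poch *= m + k                                 # (m)_{k+1}
        fact *= k + 1                                 # (k+1)!
    return 1.0 - np.exp(-eta) * etr * s

def sample_min_eigs(Upsilon, Sigma, trials=20000):
    # lambda_min of W = X^H X, with X ~ CN(Upsilon, I_n (x) Sigma).
    n, m = Upsilon.shape
    L = np.linalg.cholesky(Sigma)                     # Sigma = L L^H
    Z = (np.random.randn(trials, n, m)
         + 1j * np.random.randn(trials, n, m)) / np.sqrt(2)
    X = Upsilon + Z @ L.conj().T
    W = np.swapaxes(X.conj(), -1, -2) @ X
    return np.linalg.eigvalsh(W)[:, 0]                # ascending order
\end{verbatim}
Comparing \texttt{np.mean(sample\_min\_eigs(U, S) < x)} with \texttt{cdf\_min\_square(x, S, Theta)} then reproduces, in essence, the comparison reported below; as noted there, about 20 series terms suffice.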
We construct the covariance matrix $\boldsymbol{\Sigma}$ with $(j,k)$th element \begin{equation} \label{covmatrix} \boldsymbol{\Sigma}_{j,k}=\exp\left(-\frac{\pi^3}{32}(j-k)^2\right),\;\; 1\leq j,k\leq m \end{equation} and the mean matrix $\boldsymbol{\Upsilon}$ as \begin{equation} \boldsymbol{\Upsilon}=\mathbf{a}^H\mathbf{b} \end{equation} where \begin{align*} \mathbf{a}&=\left[ 1\;\exp\left(2i\pi\cos \theta\right)\; \exp\left(4i\pi\cos \theta\right)\ldots\; \exp\left(2(n-1)i\pi\cos \theta\right)\right]\\ \mathbf{b}&=\left[ 1\;\exp\left(2i\pi\cos \theta\right)\; \exp\left(4i\pi\cos \theta\right)\ldots\; \exp\left(2(m-1)i\pi\cos \theta\right)\right] \end{align*} with $\theta=\pi/4$ and $i=\sqrt{-1}$. Note that these particular constructions for the covariance and mean matrices are employed since they are reasonable for modeling practical correlated Rician MIMO channels \cite{MatthewIT,Bol}. Fig. \ref{fig1} compares our analytical results with simulated data. The analytical curves for the cases $m=n$ were calculated based on Theorem \ref{th:MainResult}, while for the cases $m=2$ and $m=3$, they were calculated based on Theorems \ref{th:nby2wishart} and \ref{th:4by3wishart}, respectively. The accuracy of our results is clearly evident from the figure. Note that in evaluating these analytical curves, the infinite summations in (\ref{cdfans}), (\ref{cdfn2}), and (\ref{34}) were truncated to a maximum of 20 terms, thereby demonstrating a fast convergence rate for each series. \begin{figure} \centering \vspace*{-1.0cm} \subfigure[$n=m$]{ \includegraphics[width=.8\textwidth]{fig11.eps}} \subfigure[$m=2,3$]{ \includegraphics[width=.8\textwidth]{fig12.eps}}\\ \caption{Comparison of the analytical minimum eigenvalue c.d.f.s with simulated data points for correlated non-central Wishart matrices of various dimensions.} \vspace*{0.6cm} \label{fig1} \end{figure} \subsection{Gamma-Wishart Matrices} We now turn to the analysis of the minimum eigenvalue distribution of gamma-Wishart random matrices. In this case, we deal with the matrix $\mathbf{V}$ with joint density given in (\ref{Gram}). Thus, with (\ref{eq:minEVRelation}), we have \begin{equation*} \label{cdfmin} P\left( \lambda_{{min}}(\mathbf{V})>x\right)= \mathcal{K}_{m,n} \int_{\mathbf{V}-x\mathbf{I}_m\in\mathcal{H}_m^+} \left|\mathbf{V}\right|^{n-m}\text{etr}\left(-\boldsymbol{\Sigma}^{-1}\mathbf{V}\right) {}_1\widetilde{F}_1\left(\alpha;n;\mathbf{S}\mathbf{V}\right)d\mathbf{V} \end{equation*} where $\mathbf{S}=\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}\right)^{-1} \boldsymbol{\Sigma}^{-1}$ and $\mathcal{K}_{m,n}=\frac{|\boldsymbol{\Omega}|^{\alpha}} {\tilde \Gamma_m(n)|\boldsymbol{\Sigma}|^{n}\left|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}\right|^{\alpha}}$.
Applying the change of variables $\mathbf{V}=x\left(\mathbf{I}_m+\mathbf{Y}\right)$ and using the Kummer relation \cite{James1964} \begin{equation*} {}_1\tilde{F}_1\left(\alpha;n;x\mathbf{S}\left(\mathbf{I}_m+\mathbf{Y}\right)\right)= \text{etr}\left(x\mathbf{S}\left(\mathbf{I}_m+\mathbf{Y}\right)\right) {}_1\widetilde{F}_1\left(n-\alpha;n;-x\mathbf{S}\left(\mathbf{I}_m+\mathbf{Y}\right)\right) \end{equation*} yields \begin{align} \label{cdfseed} P\left(\lambda_{{min}}(\mathbf{V})>x\right)=\mathcal{K}_{m,n} x^{mn}\text{etr}\left(-x\mathbf{Q}\right) &\int_{\mathbf{Y}\in\mathcal{H}_m^+} \left|\mathbf{I}_m+\mathbf{Y}\right|^{n-m} \text{etr}\left(-x\mathbf{Q}\mathbf{Y}\right)\nonumber\\ & \hspace*{-1cm} \times {}_1\widetilde{F}_1\left(n-\alpha;n;-x\mathbf{S}\left(\mathbf{I}_m+\mathbf{Y}\right)\right)d\mathbf{Y} \end{align} where $\mathbf{Q}=\boldsymbol{\Sigma}^{-1}-\mathbf{S}$. This integral seems intractable for arbitrary values of $m$, $n$, and $\alpha$. However, as we now show, it can be solved in closed form for some important configurations, thus yielding new exact expressions for the minimum eigenvalue distributions. The theorem below gives the exact minimum eigenvalue distribution for $2 \times 2$ gamma-Wishart matrices with arbitrary degrees of freedom (i.e., arbitrary $n$). \begin{theorem}\label{th:wishgamnby2} Let $\mathbf{V}\sim \Gamma {\cal W}_2 (n, \alpha, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$, with $\alpha \in \mathbb{Z}^{+}$ such that $\alpha>n\geq 2$. Then the c.d.f.\ of $\lambda_{min}(\mathbf{V})$ is given by \begin{align} \label{wishgamnby2} F_{{{min}}}(x)=1-\mathcal{K}_{2,n}x^{2n}\mathrm{etr}\left(-x\mathbf{Q}\right) \sum_{k=0}^{2(\alpha-n)} \sum_{k_1=\left\lceil\frac{k}{2}\right\rceil}^{\min\left(k,\alpha-n\right)} d_1^{k_1}\sum_{l=0}^{\left\lceil\frac{2k_1-k-1}{2}\right\rceil}d_2^{\kappa,l}\mathcal{I}_{k_1,l}(x)x^k \end{align} where \begin{align*}d_1^{k_1}&=\frac{(\alpha-n)!(\alpha-n+1)!\left(2k_1-k+1\right)}{(\alpha-n-k_1)! (\alpha-n+1+k_1-k)!\left(k_1+1\right)!\left(k-k_1\right)!(n)_{k_1}(n-1)_{k-k_1}}\\ d_2^{\kappa,l}&=(-1)^l4^l e^{\kappa}_l|\mathbf{S}|^{k-k_1+l} . \end{align*} Also, \begin{align*} \mathcal{I}_{k_1,l}(x)=\sum_{p=0}^{\varepsilon_{k_1,l}} \sum_{j=0}^{\nu_{k_1,l}} p! \binom{\varepsilon_{k_1,l}}{p} \binom{\nu_{k_1,l}}{j} \frac{ \mathrm{tr}^{\varepsilon_{k_1,l}-p}(\mathbf{S}) } { |\mathbf{Q}|^{j+2} x^{2(j+2)+p} } \sum_{t=0}^j \frac{j!}{(j-t)!} |\mathbf{Q}|^t\mathcal{J}_{t,p,j}x^{t}\;, \end{align*} with \begin{align*} &\mathcal{J}_{t,p,j}=\sum_{t_1=\left\lceil\frac{t}{2}\right\rceil}^{t} \tilde \Gamma_2(\omega_{j,t})\frac{\left(\omega_{j,t}\right)_{t_1}\left(\omega_{j,t}\right)_{t-t_1}\left(2t_1+1-t\right)} {\left(t_1+1\right)!\left(t-t_1\right)!} \sum_{i=0}^{\left\lceil\frac{2t_1-t-1}{2}\right\rceil} \mathcal{L}_{\tau,p,i,j}, \end{align*} where \begin{align*} \mathcal{L}_{\tau,p,i,j}{=}\sum_{q=0}^{\min(p,\varepsilon_{t_1,i})} (-1)^{q+i}4^ie_i^\tau\binom{\varepsilon_{t_1,i}}{q} \mathrm{tr}^{\varepsilon_{t_1,i}-q}(\mathbf{Q})& \mathrm{tr}^q(\mathbf{S}) |\mathbf{Q}|^{-\varepsilon_{t_1}-\frac{p-q}{2}}|\mathbf{S}|^{\frac{p-q}{2}}\\ & \times \mathcal{C}^{\varepsilon_{t_1}+\omega_{j,t}}_{p-q}\left(\frac{\mathrm{tr}\left(\mathbf{Q}^{-1}\mathbf{S}\right)} {2\sqrt{\left|\mathbf{Q}^{-1}\mathbf{S}\right|}}\right) \; .
\end{align*} $\kappa=(k_1,k-k_1)$ is a partition of $k$ such that $ \left\lceil\frac{k}{2}\right\rceil\leq k_1\leq \min(k,(\alpha-n))$, $\tau=(t_1,t-t_1)$ is a partition of $t$ such that $ \left\lceil\frac{t}{2}\right\rceil\leq t_1\leq t$, $\omega_{j,t}=j-t+2$ and $\nu_{k_1,l}=n+l+k-k_1-2$. \end{theorem} \begin{proof} Particularizing (\ref{cdfseed}) to $m = 2$, $\alpha>n\geq 2$ and $\alpha \in \mathbb{Z}^+$, and applying the zonal polynomial expansion (\ref{hyptrk}) yields \begin{align} \label{hypozonexp} P\left(\lambda_{{min}}(\mathbf{V})>x\right)=\mathcal{K}_{2,n}x^{2n}\text{etr}\left(-x\mathbf{Q}\right) \sum_{k=0}^{2(\alpha-n)}&\widetilde\sum_{\kappa} \frac{[-(\alpha-n)]_{\kappa}}{[n]_\kappa k!}(-x)^k\nonumber\\ & \hspace*{-4cm} \times \int_{\mathbf{Y}\in\mathcal{H}_2^+} \left|\mathbf{I}_2+\mathbf{Y}\right|^{n-2}\text{etr}\left(-x\mathbf{Q}\mathbf{Y}\right) C_{\kappa}\left(\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right)d\mathbf{Y} \end{align} where $\kappa=(k_1,k_2)$ is a partition of $k$ into not more than two parts such that $k_1+k_2=k$ and $k_1\geq k_2 \geq 0,\;\forall k_1\in\{0,1,\ldots,\alpha-n\}$. Note that the series over $k$ is finite (truncated at $k=2(\alpha-n)$) due to the negative sign of the generalized complex hypergeometric coefficient. Careful inspection reveals that $\kappa$ can be written as $\kappa=\left(k_1,k-k_1\right)$, where $\left\lceil\frac{k}{2}\right\rceil\leq k_1\leq \min\left(k,(\alpha-n)\right)$. This fact, along with the alternative representation of complex zonal polynomial given in \cite{Takemura,Mathai}, and Lemma \ref{lem:factorize}, \begin{align} \label{zonal2exp} C_{\kappa}\left(\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right)=\;& \frac{k!\left(2k_1-k+1\right)}{\left(k_1+1\right)!\left(k-k_1\right)!} \left|\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right|^{k-k_1}\nonumber\\ & \hspace*{-1cm} \times \displaystyle \sum_{l=0}^{\left\lceil\frac{2k_1-k-1}{2}\right\rceil} (-1)^l4^l e_l^\kappa \left|\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right|^{l} \text{tr}^{\varepsilon_{k_1,l}}\left(\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right) \end{align} gives (after some manipulations) \begin{align} \label{cdf2mexpress} P\left(\lambda_{min}(\mathbf{V})>x\right)=\mathcal{K}_{2,n}x^{2n}\text{etr}\left(-x\mathbf{Q}\right) & \sum_{k=0}^{2(\alpha-n)} \sum_{k_1=\left\lceil\frac{k}{2}\right\rceil}^{\min\left(k,(\alpha-n)\right)} d_1^{k_1}\nonumber\\ & \quad \times \sum_{l=0}^{\left\lceil\frac{2k_1-k-1}{2}\right\rceil}d_2^{\kappa,l}\mathcal{I}_{k_1,l}(x)x^k \end{align} where \begin{equation} \mathcal{I}_{k_1,l}(x)=\int_{\mathbf{Y}\in\mathcal{H}_2^+} \text{etr}\left(-x\mathbf{Q}\mathbf{Y}\right)\left|\mathbf{I}_2+\mathbf{Y}\right|^{\nu_{k_1,l}} \text{tr}^{\varepsilon_{k_1,l}}\left(\mathbf{S}\left(\mathbf{I}_2+\mathbf{Y}\right)\right) d\mathbf{Y}. \end{equation} Using $\left|\mathbf{I}_2+\mathbf{Y}\right|=1+\text{tr}(\mathbf{Y})+|\mathbf{Y}|$ and the binomial theorem yields \begin{align} \label{Idef} \mathcal{I}_{k_1,l}(x)= \sum_{p=0}^{\varepsilon_{k_1,l}} \sum_{j=0}^{\nu_{k_1,l}} p!\binom{\varepsilon_{k_1,l}}{p} \binom{\nu_{k_1,l}}{j} \frac{ \text{tr}^{\varepsilon_{k_1,l}-p}(\mathbf{S}) }{ |\mathbf{Q}|^{j+2} x^{2(j+2)+p} } \sum_{t=0}^j \binom{j}{t}|\mathbf{Q}|^t\mathcal{J}_{t,p,j}x^{t} \end{align} where \begin{equation} \mathcal{J}_{t,p,j}=\frac{|x\mathbf{Q}|^{j-t+2}x^{t+p}}{p!} \int_{\mathbf{Y}\in\mathcal{H}_2^+} \text{etr}\left(-x\mathbf{Q}\mathbf{Y}\right)\text{tr}^{p}(\mathbf{SY})\text{tr}^{t}(\mathbf{Y})|\mathbf{Y}|^{j-t}d\mathbf{Y}. 
\end{equation} Finally, solving the remaining integral using Lemma \ref{lem:tracegamma} and recalling (\ref{cdf}) concludes the proof. \end{proof} Note that the minimum eigenvalue c.d.f.\ result given in (\ref{wishgamnby2}) can be easily computed numerically, since it contains only finite summations. Moreover, for specific values of $n$ and $\alpha$, it leads to simplified solutions, as shown in the following corollary. \begin{corollary} Let $\mathbf{V}\sim \Gamma {\cal W}_2 (2, 3, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$. Then the c.d.f. of $\lambda_{min}(\mathbf{V})$ is given by \begin{align} F_{min}(x)=1-\frac{|\boldsymbol{\Omega}|}{|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}|} &\mathrm{etr}\left(-x\mathbf{Q}\right) \left(\left|\mathbf{I}_2+\boldsymbol{\Omega}^{-1}\boldsymbol{\Sigma}^{-1}\right| \right.\nonumber\\ & \qquad \left.+\left(\frac{\mathrm{tr}(\mathbf{S})}{2}+\mathrm{tr}(\mathbf{Q}^{-1})|\mathbf{S}|\right)x+\frac{|\mathbf{S}|}{2}x^2\right). \end{align} \end{corollary} The theorem below gives the exact minimum eigenvalue distribution for $3 \times 3$ gamma-Wishart matrices with $3$ degrees of freedom. \begin{theorem}\label{th:4by3wishgama} Let $\mathbf{V}\sim \Gamma {\cal W}_3 (3, 4, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$. Then the c.d.f. of $\lambda_{min}(\mathbf{V})$ is given by \begin{align} \label{cdf334} F_{min}(x){=}1{-}\frac{|\boldsymbol{\Omega}|\mathrm{etr}\left(-x\mathbf{Q}\right)}{|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}|} &\left(\left|\mathbf{I}_3+\boldsymbol{\Omega}^{-1}\boldsymbol{\Sigma}^{-1}\right|+\mathrm{tr}(\mathbf{F})\frac{x}{6}+ \mathrm{tr}(\mathbf{G})|\mathbf{S}|\frac{x^2}{6} + |\mathbf{S}|\frac{x^3}{6}\right) \end{align} where \begin{equation} \mathbf{F}=2\mathbf{S}-3\mathbf{Q}^{-1}\mathbf{S}-3|\mathbf{S}|\mathbf{Q}^{-1}+6|\mathbf{S}||\mathbf{Q}|^{-1}\mathbf{Q}+3\left|\mathbf{I}_3+\mathbf{S} \right|\mathbf{Q}^{-1}\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S} \end{equation} and \begin{equation} \mathbf{G}=\mathbf{S}^{-1}+3\mathbf{Q}^{-1} \; . \end{equation} \end{theorem} \begin{proof} In this case (\ref{cdfseed}) becomes \begin{align} P\left(\lambda_{{min}}(\mathbf{V})>x\right)=\mathcal{K}_{3,3} x^{9} \text{etr}\left(-x\mathbf{Q}\right) &\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right)\nonumber\\ & \times {}_1\widetilde F_1\left(-1;3;-x\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) d\mathbf{Y} \end{align} which upon applying the zonal polynomial expansion for the hypergeometric function (\ref{hyptrk}) yields \begin{align} \label{intmed} P\left(\lambda_{{min}}(\mathbf{V})>x\right)=\mathcal{K}_{3,3} x^{9} \text{etr}\left(-x\mathbf{Q}\right) \sum_{k=0}^3 \frac{(-x)^k}{k!}& \widetilde\sum_{\kappa}\frac{[-1]_\kappa}{[3]_\kappa} \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right)\nonumber\\ & \times C_{\kappa}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) d\mathbf{Y} \end{align} where $\kappa=\left(k_1,k_2,k_3\right)$ is a partition of $k$. It is not difficult to see that the admissible partitions corresponding to the integers $0$, $1$, $2$, and $3$ are $(0,0,0)$, $(1,0,0)$, $(1,1,0)$, and $(1,1,1)$, respectively. Thus, we can write (\ref{intmed}) as \begin{align} \label{gamma3} P\left(\lambda_{\text{min}}(\mathbf{V})>x\right)&=\mathcal{K}_{3,3} x^{9} \text{etr}\left(-x\mathbf{Q}\right) \left( \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) d\mathbf{Y}\right.\nonumber\\ & \qquad \left.
+ \frac{x}{3} \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) C_{1,0,0}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) d\mathbf{Y}\right.\nonumber\\ & \qquad \left. + \frac{x^2}{6}\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) C_{1,1,0}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) d\mathbf{Y}\right.\nonumber\\ &\quad \left. + \frac{x^3}{6} \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) C_{1,1,1}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) d\mathbf{Y} \right). \end{align} Moreover, we have \begin{align} \label{zonal3iden} C_{1,0,0}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right)&=\text{tr}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right) \nonumber\\ C_{1,1,1}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right)&=|\mathbf{S}||\mathbf{I}_3+\mathbf{Y}|. \end{align} Utilizing (\ref{det3ident}) we can express \begin{equation} \label{zonal32} C_{1,1,0}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right)=\left|\mathbf{I}_3+\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right)\right|-1- \text{tr}\left(\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)\right)-|\mathbf{S}\left(\mathbf{I}_3+\mathbf{Y}\right)|. \end{equation} Now, using (\ref{zonal3iden}) and (\ref{zonal32}) in (\ref{gamma3}) yields \begin{align} \label{h1h2def} P\left(\lambda_{min}(\mathbf{V})>x\right)=\mathcal{K}_{3,3} & x^{9} \text{etr}\left(-x\mathbf{Q}\right) \left( \left(1-\frac{x^2}{6}-\frac{\text{tr}(\mathbf{S})x^2}{6}+\frac{\text{tr}(\mathbf{S})x}{3}\right)\right.\nonumber\\ & \qquad \times \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) d\mathbf{Y}\nonumber\\ & + \left(\frac{x}{3}-\frac{x^2}{6}\right) \int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) C_{1,0,0}\left(\mathbf{SY}\right)d\mathbf{Y} \nonumber\\ & \left. +\frac{|\mathbf{S}|}{6}\left(x^3-x^2\right)\mathcal{G}_1(x)+\left|\mathbf{I}_3+\mathbf{S}\right|\frac{x^2}{6}\mathcal{G}_2(x) \right) \end{align} where \begin{equation} \label{h1} \mathcal{G}_1(x)=\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) \left|\mathbf{I}_3+\mathbf{Y}\right|d\mathbf{Y} \end{equation} and \begin{equation} \label{h2} \mathcal{G}_2(x)=\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) \left|\mathbf{I}_3+\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\mathbf{Y}\right|d\mathbf{Y}. \end{equation} The first and second integrals in (\ref{h1h2def}) can be evaluated using \cite[Eq. 6.1.20]{Mathai2}, thus we concentrate on the evaluation of $\mathcal{G}_1(x)$ and $\mathcal{G}_2(x)$. We provide a detailed solution for the integral $\mathcal{G}_2(x)$ only, since both (\ref{h1}) and (\ref{h2}) share a common structure. Using the relation $\left|\mathbf{I}_3+\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\mathbf{Y}\right|={}_1\widetilde F_0\left(-1;-\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\mathbf{Y}\right)$ in (\ref{h2}) yields \begin{equation} \mathcal{G}_2(x)=\int_{\mathbf{Y}\in\mathcal{H}_3^+} \text{etr}\left(-x\mathbf{QY}\right) {}_1\widetilde F_0\left(-1;-\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\mathbf{Y}\right) d\mathbf{Y}. \end{equation} This integral can be solved using \cite[Eq. 
3.20]{Rathna} as \begin{align*} \mathcal{G}_2(x)& =\tilde \Gamma_3(3)|\mathbf{Q}|^{-3}x^{-9}\; {}_2\widetilde F_{0}\left(-1,3;-x^{-1}\mathbf{Q}^{-1}\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\right)\nonumber\\ & = \tilde \Gamma_3(3)|\mathbf{Q}|^{-3}x^{-9} \sum_{k=0}^3\frac{(-1)^k}{x^k k!} \widetilde \sum_{\kappa} [-1]_\kappa [3]_\kappa C_\kappa\left(\mathbf{Q}^{-1}\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\right). \end{align*} Since the valid partitions corresponding to the summation index $k=0,1,2$ and $3$ are respectively $(0,0,0),(1,0,0),(1,1,0)$ and $(1,1,1)$, we can use equations analogous to (\ref{zonal3iden}) to obtain \begin{align} \label{h2ans} \mathcal{G}_2(x) =\tilde \Gamma_3(3)|\mathbf{Q}|^{-3}& x^{-9} \left(1+3x^{-1}\text{tr}\left(\mathbf{Q}^{-1}\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\right)\right.\nonumber\\ & \quad +\;6x^{-2}C_{1,1,0}\left(\mathbf{Q}^{-1}\left(\mathbf{I}_3+\mathbf{S}\right)^{-1}\mathbf{S}\right)\nonumber\\ & \quad \left.+\;\;6 x^{-3}|\mathbf{Q}|^{-1}\left|\mathbf{I}_3+\mathbf{S}\right|^{-1}|\mathbf{S}|\right). \end{align} Following similar arguments, we can obtain \begin{align} \label{h1ans} \mathcal{G}_1(x) & =\tilde \Gamma_3(3)|\mathbf{Q}|^{-3}x^{-9} \left( 1+3x^{-1}\text{tr}\left(\mathbf{Q}^{-1}\right)+6x^{-2}C_{1,1,0}\left(\mathbf{Q}^{-1}\right)+6x^{-3}|\mathbf{Q}|^{-1} \right). \end{align} Finally, using (\ref{h2ans}), (\ref{h1ans}), and (\ref{110zoanlex}) in (\ref{h1h2def}), recalling (\ref{cdf}), and applying some lengthy algebraic manipulations, we arrive at the result in (\ref{cdf334}). \end{proof} Fig. \ref{fig2} compares our analytical results with simulated data. The analytical curves for the cases $m=2$ and $m=3$ were computed based on Theorems \ref{th:wishgamnby2} and \ref{th:4by3wishgama}, respectively. Here we have used the same $\boldsymbol{\Sigma}$ as defined in (\ref{covmatrix}), whereas $\boldsymbol{\Omega}$ is constructed with the following $(j,k)$th element: \begin{equation} \label{gampara} \boldsymbol{\Omega}_{j,k}=\exp\left(-0.7(j-k)i\pi\right)\exp\left(-\frac{147\pi^3}{4000}(j-k)^2\right),\;\; 1\leq j,k\leq m \end{equation} with $i=\sqrt{-1}$. As expected, the analytical curves match closely with the simulated curves. \begin{figure} \centering \vspace*{-1.0cm} \subfigure[$n=3, m=2$]{ \includegraphics[width=.8\textwidth]{fig21.eps}} \subfigure[$m=2,3$]{ \includegraphics[width=.8\textwidth]{fig22.eps}}\\ \caption{Comparison of the analytical minimum eigenvalue c.d.f.s with simulated data points for correlated gamma-Wishart matrices with various dimensions and parameters.} \label{fig2} \end{figure} \section{New Maximum Eigenvalue Distributions} In this section, we shift attention to the distribution of the \emph{maximum} eigenvalue of correlated non-central Wishart and gamma-Wishart random matrices. As for the minimum eigenvalue distribution considered previously, once again the most direct approach of integrating the joint eigenvalue p.d.f. over a suitable multidimensional region seems intractable. To this end, we write the c.d.f.\ of the maximum eigenvalue $\lambda_{max} (\mathbf{Y})$ of $\mathbf{Y}\in\mathcal{H}_m^+$ as \begin{equation} \label{cdfmax} F_{max}(x)=P\left(\lambda_{max}(\mathbf{Y})< x\right)=P\left(\mathbf{Y} < x \mathbf{I}_m \right) \; \end{equation} which allows one to deal purely with the distribution of $\mathbf{Y}$, rather than the distribution of its eigenvalues.
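Numerically, eq.~(\ref{cdfmax}) is just as convenient as its minimum eigenvalue counterpart: the Monte Carlo sampler sketched earlier for $\lambda_{min}$ carries over verbatim, with only the eigenvalue selection changed. A self-contained variant for the non-central Wishart case (again our own illustrative naming) is:
\begin{verbatim}
import numpy as np

def sample_max_eigs(Upsilon, Sigma, trials=20000):
    # Same sampling scheme as sample_min_eigs above;
    # keep the largest eigenvalue of W = X^H X instead.
    n, m = Upsilon.shape
    L = np.linalg.cholesky(Sigma)
    Z = (np.random.randn(trials, n, m)
         + 1j * np.random.randn(trials, n, m)) / np.sqrt(2)
    X = Upsilon + Z @ L.conj().T
    W = np.swapaxes(X.conj(), -1, -2) @ X
    return np.linalg.eigvalsh(W)[:, -1]   # eigvalsh sorts ascending

# Empirical F_max at a point x0: np.mean(sample_max_eigs(U, S) < x0)
\end{verbatim}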
\subsection{Correlated Non-Central Wishart Case} For the non-central Wishart scenario, we deal with the matrix $\mathbf{W}$ with joint density given in (\ref{wishart}). Thus, with (\ref{cdfmax}), we have \begin{align} \label{maximumeigen} P\left(\lambda_{max}(\mathbf{W})<x\right)&=\int_{\mathbf{W}<x\mathbf{I}_m} f_{\mathbf{W}}(\mathbf{W})d\mathbf{W}\nonumber\\ &= \frac{\exp(-\eta)}{\tilde \Gamma_m(n) |\boldsymbol{\Sigma}|^n} \int_{x\mathbf{I}_m-\mathbf{W}\in\mathcal{H}_m^+}|\mathbf{W}|^{n-m}\text{etr}\left(-\boldsymbol{\Sigma}^{-1}\mathbf{W}\right)\nonumber\\ & \hspace{3cm} \qquad \times {}_0\widetilde F_1\left(n;\boldsymbol{\Theta\Sigma}^{-1}\mathbf{W}\right)d\mathbf{W}. \end{align} Applying the change of variables $\mathbf{W}=x\mathbf{Y}$ with $d\mathbf{W}=x^{m^2}d\mathbf{Y}$ in (\ref{maximumeigen}) gives \begin{align} P\left(\lambda_{max}(\mathbf{W})<x\right)= \frac{x^{mn}\exp(-\eta)}{\tilde \Gamma_m(n) |\boldsymbol{\Sigma}|^n} \int_{\mathbf{0}}^{\mathbf{I}_m} & |\mathbf{Y}|^{n-m} \text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)\nonumber\\ & \times {}_0\widetilde F_1\left(n;x\boldsymbol{\Theta\Sigma}^{-1}\mathbf{Y}\right)d\mathbf{Y}. \end{align} Expanding the hypergeometric function with its equivalent series expansion followed by using the reasoning which led to (\ref{zonaldef}) yields \begin{align} \label{maxwishart} P\left(\lambda_{max}(\mathbf{W})<x\right)= \frac{x^{mn}\exp(-\eta)}{\tilde \Gamma_m(n) |\boldsymbol{\Sigma}|^n} \sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{(n)_k k!} \int_{\mathbf{0}}^{\mathbf{I}_m} & |\mathbf{Y}|^{n-m}\text{etr}\left(-x\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right)\nonumber\\ & \times \text{tr}^k\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right)d\mathbf{Y} \end{align} where we have applied $\left(\boldsymbol{\alpha}^H\mathbf{Y}\boldsymbol{\alpha}\right)^k=\text{tr}^k\left(\boldsymbol{\alpha}\boldsymbol{\alpha}^H\mathbf{Y}\right)$. This matrix integral seems intractable for arbitrary values of $m$ and $n$. In fact, this integral seems even more difficult to tackle than that which arises in the minimum eigenvalue formulation, i.e., Eq.\ (\ref{finalmatintegra}). As the following theorem shows, however, we can obtain a solution for the case of $2 \times 2$ non-central Wishart matrices with arbitrary degrees of freedom. This is significant, because it presents the first tractable result for the maximum eigenvalue c.d.f.\ of correlated complex non-central Wishart matrices. \begin{theorem} Let $\mathbf{X}\sim \mathcal{CN}_{n,2}\left(\boldsymbol{\Upsilon},\mathbf{I}_n\otimes\boldsymbol{\Sigma}\right)$, where $\boldsymbol{\Upsilon}\in\mathbb{C}^{n\times 2}$ has rank one, and $\mathbf{W}=\mathbf{X}^H\mathbf{X}$. Then the c.d.f.\ of $\lambda_{max}(\mathbf{W})$ is given by \begin{equation} \label{eq:nonCentMax} F_{max}(x)=\frac{x^{2n}\exp(-\eta)}{n!(n+1)!} \sum_{k=0}^\infty \frac{\left(x\mu\right)^k}{(n)_k k!} \phi_{-x\boldsymbol{\Sigma}^{-1},\boldsymbol{\alpha}\boldsymbol{\alpha}^H,n}^{(k)}(0) \end{equation} where $\phi_{-x\boldsymbol{\Sigma}^{-1},\boldsymbol{\alpha}\boldsymbol{\alpha}^H,n}^{(k)}(0)$ is calculated recursively via (\ref{sub1})-(\ref{initialcond}). \end{theorem} \begin{proof} Substituting $m=2$ into (\ref{maxwishart}), the proof follows upon application of Lemma \ref{lem:1f1}. \end{proof} \begin{remark} An alternative expression for (\ref{eq:nonCentMax}) can be obtained by employing the moment generating function based power series expansion approach given in \cite{Mathai3}.
However, we have found that by employing that approach the final expression is more complicated, since it includes two infinite summations along with a recursive summation term. \end{remark} \subsection{Correlated Gamma-Wishart Case} We now turn to the maximum eigenvalue distribution of gamma-Wishart random matrices. In this case, we deal with the matrix $\mathbf{V}$ with joint density given in (\ref{Gram}). Thus, with (\ref{cdfmax}), we have \begin{align} \label{grammax} P\left(\lambda_{max}(\mathbf{V})<x\right)= \mathcal{K}_{m,n} x^{mn} \int_0^{\mathbf{I}_m} |\mathbf{Y}|^{n-m}&\text{etr}\left(-x\mathbf{QY}\right)\nonumber\\ &\times {}_1\widetilde F_1\left(n-\alpha;n;-x\mathbf{SY}\right)d\mathbf{Y}. \end{align} In the following theorem, we present a new exact closed form expression for the c.d.f.\ of the maximum eigenvalue of $\mathbf{V}$ for some particularizations of $m$, $n$, and $\alpha$. \begin{theorem}\label{th:maxwishgam} Let $\mathbf{V}\sim \Gamma {\cal W}_2 (n, \alpha, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$, with $\alpha \in \mathbb{Z}^{+}$ such that $\alpha>n\geq 2$. Then the c.d.f.\ of $\lambda_{max}(\mathbf{V})$ is given by \begin{align} \label{anmax2} F_{max}(x)&= \mathcal{K}_{2,n}x^{2n} \sum_{k=0}^{2(\alpha-n)} \sum_{k_1=\left\lceil\frac{k}{2}\right\rceil}^{\min\left(k,(\alpha-n)\right)} d_1^{k_1} \sum_{l=0}^{\left\lceil\frac{2k_1-k-1}{2}\right\rceil} d_2^{\kappa,l}\mathcal{R}_{k_1,l}(x)x^k \end{align} where \begin{equation} \label{defR} \mathcal{R}_{k_1,l}(x)=\frac{\tilde \Gamma_2(2)\tilde \Gamma_2\left(\nu_{k_1,l}+2\right)}{\tilde \Gamma \left(\nu_{k_1,l}+4\right)} \phi_{-x\mathbf{Q},\mathbf{S},\nu_{k_1,l}+2}^{(\varepsilon_{k_1,l})}(0), \end{equation} $\varepsilon_{k_1,l}=2k_1-k-2l$, $\nu_{k_1,l}=n+l+k-k_1-2$, $\kappa=\left(k_1,k-k_1\right)$ is a partition of $k$ such that $k_1\in\left\{0,1,\ldots,(\alpha-n)\right\}$ and $\left\lceil\frac{k}{2}\right\rceil\leq k_1\leq\min\left(k,(\alpha-n)\right)$. The term $\phi_{-x\mathbf{Q},\mathbf{S},\nu_{k_1,l}+2}^{(\varepsilon_{k_1,l})}(0)$ is calculated recursively via (\ref{sub1})-(\ref{initialcond}). \end{theorem} \begin{proof} Particularizing (\ref{grammax}) to $m=2$, $\alpha>n\geq 2$ and $\alpha \in \mathbb{Z}^+$ and applying the zonal polynomial expansion (\ref{hyptrk}) gives \begin{align*} F_{max}(x)= \mathcal{K}_{2,n}x^{2n} \sum_{k=0}^{2(\alpha-n)} \widetilde \sum_{\kappa} \frac{[-(\alpha-n)]_\kappa}{[n]_\kappa}\frac{(-x)^k}{k!} \int_0^{\mathbf{I}_2} & |\mathbf{Y}|^{n-2}\text{etr}\left(-x\mathbf{QY}\right)\nonumber\\ & \qquad \times C_\kappa(\mathbf{SY})d\mathbf{Y}. \end{align*} Following similar reasoning to that which led to (\ref{cdf2mexpress}), with some algebraic manipulations we obtain (\ref{anmax2}), but with \begin{equation*} \mathcal{R}_{\kappa,l}(x)= \int_0^{\mathbf{I}_2} \text{etr}\left(-x\mathbf{QY}\right)|\mathbf{Y}|^{\nu_{k_1,l}}\text{tr}^{\varepsilon_{k_1,l}}(\mathbf{SY})d\mathbf{Y}. \end{equation*} This integral is solved via Lemma \ref{lem:1f1} to yield (\ref{defR}). \end{proof} Note that the c.d.f.\ result in Theorem \ref{th:maxwishgam} can be evaluated numerically for any value of $n$. Moreover, for specific values of $n$ it often gives simplified solutions. Some examples are shown in the following corollaries. \begin{corollary} Let $\mathbf{V}\sim \Gamma {\cal W}_2 (n, n+1, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$.
Then the c.d.f.\ of $\lambda_{max}(\mathbf{V})$ is given by \begin{align} \label{maxgram} F_{max}(x)&= \frac{|\boldsymbol{\Omega}|^{n+1}x^{2n}}{n!(n+1)!|\boldsymbol{\Sigma}|^n|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}|^{n+1}} \left({}_1\widetilde F_1\left(n;n+2;-x\mathbf{Q}\right)+\frac{x}{n}\phi_{-x\mathbf{Q},\mathbf{S},n}^{(1)}(0)\right.\nonumber\\ &\hspace{3cm}\left.+ \frac{|\mathbf{S}|x^2}{(n+1)(n+2)}{}_1\widetilde F_1\left(n+1;n+3;-x\mathbf{Q}\right)\right). \end{align} \end{corollary} \begin{corollary} Let $\mathbf{V}\sim \Gamma {\cal W}_2 (n, n+2, \boldsymbol{\Sigma}, \boldsymbol{\Omega})$. Then the c.d.f.\ of $\lambda_{max}(\mathbf{V})$ is given by \begin{align} F_{max}(x)&= \frac{|\boldsymbol{\Omega}|^{n+2}x^{2n}}{n!(n+1)!|\boldsymbol{\Sigma}|^n|\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Omega}|^{n+2}} \left({}_1\widetilde F_1\left(n;n+2;-x\mathbf{Q}\right){+}\frac{2x}{n}\phi_{-x\mathbf{Q},\mathbf{S},n}^{(1)}(0)\right.\nonumber\\ &\qquad \qquad+\frac{x^2}{n(n+1)}\phi_{-x\mathbf{Q},\mathbf{S},n}^{(2)}(0)+\frac{2|\mathbf{S}|x^2}{(n+1)^2}{}_1\widetilde F_1\left(n+1;n+3;-x\mathbf{Q}\right)\nonumber\\ & \qquad\qquad+ \frac{2|\mathbf{S}|x^3}{(n+1)^2(n+2)}\phi_{-x\mathbf{Q},\mathbf{S},n+1}^{(1)}(0)\nonumber\\ & \qquad\quad\left.+ \frac{|\mathbf{S}|^2x^4}{(n+1)(n+2)^2(n+3)}{}_1\widetilde F_1\left(n+2;n+4;-x\mathbf{Q}\right)\right). \end{align} \end{corollary} \begin{figure}[h] \centering \includegraphics[width=.8\textwidth]{fig3.eps} \caption{Comparison of the analytical maximum eigenvalue c.d.f.s with simulated data points for correlated gamma-Wishart matrices with various dimensions and parameters.} \label{fig3} \end{figure} Fig. \ref{fig3} compares the analytical c.d.f.\ results for the maximum eigenvalue of gamma-Wishart matrices with simulated data. The matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Omega}$ are constructed as in (\ref{covmatrix}) and (\ref{gampara}), respectively. The analytical curves were computed based on Theorem \ref{th:maxwishgam}. The agreement between the analysis and simulation is clearly evident. \section{Conclusions} We have derived new exact closed-form expressions for the c.d.f.\ of the extreme eigenvalues of correlated complex non-central Wishart and gamma-Wishart random matrices. We would like to conclude by emphasizing that these results provide the first tractable exact analytical results pertaining to the eigenvalue distributions of both complex non-central Wishart and gamma-Wishart random matrices with non-trivial correlation structures. Obtaining tractable solutions for extreme eigenvalue densities for generalized parameters (e.g., for arbitrary matrix dimensions) remains an important open problem.
\section{Introduction}\label{sec:intro} Extreme scattering events (ESEs) are fluctuations in the brightness of radio sources \citep{Fiedler_1987}. They typically appear as U-shaped month-long dips in the light curves of extragalactic compact radio sources and possess a strong frequency dependence \citep{Fiedler_1994}. It is generally accepted that ESEs are not related to the intrinsic source variability, but are caused by ionized gas structures in the interstellar medium (ISM) that act as refractive lenses \citep{Fiedler_1987, Romani_1987}. The inferred values of the free-electron number density of the lens ($\sim 10^3-10^4$~cm$^{-3}$) suggest a pressure at least $10^3$ times larger than the typical ISM pressure \cite[e.g.][]{Clegg_1998}. One way to overcome the over-pressure problem is to consider a lens that is elongated along the line of sight, with a length that is much larger than its other dimensions. Alternatively, the lenses could reside in regions of high pressure, such as old supernova remnants \citep{Romani_1987}. Still, the origin and geometry of plasma lenses remain enigmatic. The free-electron column density profile of the lens is related to the lens geometry itself: a sheet of ionized plasma (planar geometry) can be described by a one-dimensional (1D) column density profile, whereas a spherical cloud or a cylindrical tube of plasma with its axis aligned to the line of sight is best described by an axisymmetric two-dimensional (2D) density profile \citep[e.g.][]{Clegg_1998, Walker_1998, Goldreich_2006, Bannister_2016, Tuntsov_2016}. Predictions about the properties of ESEs that are unique to each lens model may shed light on their physical origin. In this paper, we present a detailed investigation of the 2D axisymmetric refractive lens model. We demonstrate that the shape of ESEs produced by axisymmetric lenses depends on the so-called impact parameter, namely the perpendicular distance between the observer's path and the symmetry axis of the lens. For sufficiently large impact parameters, we discover ``atypical'' ESEs (i.e., events whose light curves do not exhibit a dip) and show that these should be more frequent at lower frequencies. We apply the axisymmetric lens model to five well sampled ESEs discovered during the monitoring program of extragalactic radio sources with the Green Bank Interferometer \citep{Fiedler_1987, Fiedler_1994}. We show that a non-zero impact parameter is crucial for describing the light curve of at least four events, including the high-frequency ESE of quasar 0954+658. This paper is structured as follows. In Sect.~\ref{sec:model} we outline the 2D axisymmetric model, which we then apply to five well sampled dual-frequency ESEs (Sect.~\ref{sec:apply}). We discuss the implications of our results in Sect.~\ref{sec:discuss} and conclude with a summary in Sect.~\ref{sec:summary}. \begin{figure} \includegraphics[width=0.49\textwidth, trim = 0 2cm 0 0]{lens_view.pdf} \caption{Illustration of a refractive plasma lens with axisymmetry in the regime of geometric optics. A 3D representation of the 2D Gaussian free-electron column density is shown as a blue coloured region. Light rays from a distant source (not shown here) that are unaffected by the lens are plotted with green lines, while those that are refracted by more than 0.02 rad are shown in violet. A contour map of the intensity of refracted light is also shown on the observer's plane (lower plane). Two caustic rings are formed by the focusing of light rays (red coloured rings).
The defocussing of light rays due to the large gradient in column density results in decreased intensity (dark blue coloured region).} \label{fig:lens} \end{figure} \section{2D axisymmetric lens model}\label{sec:model} In the case of an axisymmetric lens, its free-electron column density $N_{\rm e}$ depends only on the distance $r^\prime$ from the symmetry axis (primed and unprimed quantities correspond to the plane of the lens and the observer, respectively). We model an axisymmetric lens with a smooth 2D Gaussian profile for $N_{\rm e}$, namely: \begin{eqnarray} \label{eq:Ne} N_{\rm e} (r^\prime) = N_0 e^{-r^{\prime 2}/l^2}, \end{eqnarray} where $N_0$ is a normalization factor and $l$ is the characteristic size of the lens. The adopted density profile, which acts as a diverging lens (see Fig.~\ref{fig:lens}), may describe spherical clouds or elongated tubes of plasma overdensities \citep[see also][]{Walker_2007}. The strength of the lens, $\alpha$, depends on frequency $\nu$ as \citep{Clegg_1998}: \begin{eqnarray} \alpha = \frac{q_{\rm e}^2}{\pi m_{\rm e}}\frac{N_0 D}{\nu^2 l^2} = \alpha_0 \left(\frac{\nu_0}{\nu}\right)^2, \label{eq:alpha} \end{eqnarray} where $\nu_0=1$~GHz, unless stated otherwise, $q_{\rm e}$ is the electron charge, $m_{\rm e}$ is the mass of the electron, $c$ is the speed of light, and $D$ is the distance from the observer to the lens\footnote{In the case of sources that are very distant from the lens, which is the regime considered in this work, it is the distance from the observer to the lens that enters eq.~(\ref{eq:alpha}).}. Equation~(\ref{eq:alpha}) can also be written as $\alpha \propto \lambda \left(l_{\rm F}/l \right)^2$, where $l_{\rm F}=\sqrt{\lambda D}$ is the Fresnel scale. Diffractive effects become important if $l < l_{\rm F}\simeq 6\times 10^{11} \text{cm} \, (\lambda/1 \, \text{m})^{1/2} (D/1 \, \text{kpc})^{1/2}$. For a lens size of $\mathcal{O}(\text{au})$ and $\nu \gtrsim 0.3$~MHz, diffraction can be safely neglected. The appearance of an ESE depends not only on the lens strength but also on the angular size of the lensed source, $\theta_{\rm s}$. The majority of ESEs involve background radio-loud active galactic nuclei, which appear more compact at higher radio frequencies. We model the ratio of the angular sizes of the source and the lens as: \begin{eqnarray} \beta_{\rm s}\equiv \frac{\theta_{\rm s}}{\theta_{\rm l}}= \beta_{\rm s0} \left(\frac{\nu_0}{\nu}\right)^{s}, \label{eq:beta} \end{eqnarray} where $\theta_{\rm l}\equiv l/D$. We adopt $s=2$, unless stated otherwise \citep[see][and references therein]{Fiedler_1994, Clegg_1998}. The parameter $\beta_{\rm s}$ is crucial for the appearance of ESEs at different observing frequencies. To compute the refracted light from a plasma lens described by eq.~(\ref{eq:Ne}) one needs to consider the radial and tangential magnification of the lensed images as well as to integrate over the azimuthal angles of incoming rays for extended sources (see Appendix~\ref{app-a} for details). The lensing of a background source can result in up to three images in the observer's plane with an angular separation that depends on the strength of the lens and the angular extent of the source \citep[e.g.][]{Clegg_1998}. For example, the maximum angular separation of images produced by a lens of physical size 1~AU and strength $\alpha=10$ located at 1~kpc from the observer is $\sim4.3$~mas.
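To make eq.~(\ref{eq:alpha}) and the Fresnel-scale criterion easy to evaluate for such examples, the short sketch below implements both in CGS units (a minimal illustration; the constants and function names are our own):
\begin{verbatim}
import numpy as np

R_E = 2.818e-13   # classical electron radius q_e^2/(m_e c^2) [cm]
C   = 2.998e10    # speed of light [cm/s]
PC  = 3.086e18    # parsec [cm]
AU  = 1.496e13    # astronomical unit [cm]

def lens_strength(N0, D_kpc, l_au, nu_GHz):
    # Eq. (2), rewritten as alpha = r_e lambda^2 N0 D / (pi l^2).
    lam = C / (nu_GHz * 1e9)                  # wavelength [cm]
    return R_E * lam**2 * N0 * (D_kpc * 1e3 * PC) / (np.pi * (l_au * AU)**2)

def fresnel_scale(D_kpc, nu_GHz):
    # l_F = sqrt(lambda D) [cm]; diffraction matters only if l < l_F.
    return np.sqrt(C / (nu_GHz * 1e9) * D_kpc * 1e3 * PC)

# alpha = 10 at 1 GHz for l = 1 au and D = 1 kpc requires
# N0 ~ 9e15 cm^-2: lens_strength(9e15, 1.0, 1.0, 1.0) -> ~10.
# fresnel_scale(1.0, 3e-4) ~ 1.8e13 cm ~ 1.2 au, consistent with the
# ~0.3 MHz diffraction threshold quoted above for an au-sized lens.
\end{verbatim}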
Angular separations of this order are resolvable with Very-Long-Baseline Interferometry (VLBI) at $\sim$ GHz frequencies \citep[for application to ESEs, see][]{Lazio_2000, Bannister_2016}. However, not all ESEs should create multiple images resolvable with VLBI and, henceforth, we adopt the simplifying assumption that the individual images cannot be resolved. The refracted intensity profile of a background source with angular extent $\beta_{\rm s}=0.03$ is shown in the top panel of Fig.~\ref{fig:2-1}. The inner circular region of minimum intensity (dip region) is caused by the refraction of light rays passing through regions of the lens with large column density gradients; the ionized plasma acts as a diverging lens. Meanwhile, the refracted intensity of the source increases at certain locations on the observer's plane (caustic rings). The number of caustic rings is a global property of the lens and depends on its strength. For example, only one caustic ring forms for sufficiently weak lenses ($\alpha < \alpha^*=2.241$). Still, the angular extent of the background source (i.e., $\beta_{\rm s}\gtrsim 3$) may conceal one of the caustic rings due to the convolution of the magnification factor with the source's angular profile (see Appendix \ref{app-a}). \begin{figure} \includegraphics[width=0.48\textwidth, trim=0 0 2cm 0]{pro_plane.pdf} \\ \includegraphics[width=0.49\textwidth]{light_curves.pdf} \caption{Top panel: Refracted intensity profile at 1 GHz of an extended radio source ($\beta_{\rm s}=0.03$) on the observer's plane caused by a 2D Gaussian lens with $\alpha=10$. Horizontal black lines illustrate different paths of the lens with respect to a stationary observer. Each path is characterized by its ``impact parameter'' $b$, namely its perpendicular distance from the symmetry axis. Bottom panel: Light curves of ESEs at 1 GHz obtained for the different paths shown in the top panel. Here, $d$ is the distance measured along the observer's path.} \label{fig:2-1} \end{figure} In the case of a point-like source (i.e., $\beta_s = 0$), we numerically determined the radius of the inner ring, $r_{\rm i}$, which depends weakly on $\alpha$ and is well-described by the expression: \begin{eqnarray} r_{\rm i}=2.02+ 0.19\ln{(\alpha - \alpha^*)}. \label{eq:rin} \end{eqnarray} The radius of the outer ring can be derived analytically \citep[see also][eq. (22)]{Clegg_1998} and is written as: \begin{eqnarray} r_{\rm o} = \frac{\sqrt{2}}{2}(1+\alpha e^{-1/2}). \label{eq:rout} \end{eqnarray} Both radii are expressed in units of the physical lens size $l$. \subsection{Light curves of ESEs}\label{sec:lc} Let us consider a lens traveling at a constant distance $D$ from the observer, which can equivalently be thought of as the observer moving on a straight path with respect to the lens. Then, the light curve of an ESE can be obtained by making a horizontal cut on the intensity profile as seen in the observer's plane. Different cuts are indicated on the plot with solid black lines (top panel in Fig.~\ref{fig:2-1}). These correspond to different paths of the observer and $d$ denotes the distance traveled along this path (bottom panel in Fig.~\ref{fig:2-1}). Each path is characterized by its ``impact parameter'' $b$, namely its perpendicular distance from the symmetry axis (here, $b$ is in units of the lens size $l$). The appearance of an ESE depends strongly on $b$, as shown in the bottom panel of Fig.~\ref{fig:2-1}. Although the total flux is conserved on the image plane, it is not conserved on every individual path.
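Since the classification presented next is phrased entirely in terms of the two caustic radii, it is convenient to have eqs.~(\ref{eq:rin}) and (\ref{eq:rout}) available in code. A minimal sketch (point-source limit; radii in units of $l$; names are ours):
\begin{verbatim}
import numpy as np

ALPHA_STAR = 2.241   # below this strength only one caustic ring forms

def r_inner(alpha):
    # Inner caustic ring radius, eq. (4); valid for alpha > ALPHA_STAR.
    return 2.02 + 0.19 * np.log(alpha - ALPHA_STAR)

def r_outer(alpha):
    # Outer caustic ring radius, eq. (5).
    return np.sqrt(2.0) / 2.0 * (1.0 + alpha * np.exp(-0.5))
\end{verbatim}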
Depending on the light curve shape of ESEs produced by strictly axisymmetric 2D lenses, we may classify them into the following types: \begin{itemize} \item Type 1: if $b \ll r_{\rm i}$, the light curve shows a dip and two spikes on each side (path 1 in Fig.~\ref{fig:2-1}). At a fixed frequency, the resulting type 1 ESEs have the same qualitative features as the ESEs produced by 1D Gaussian lenses. Any quantitative differences (e.g., the amplitude of the spikes and dip) between the 1D lens model and the 2D lens model with $b\sim 0$ are small and arise only from the tangential magnification (see eq.~(\ref{eq:Gkt})). \item Type 2: if $b=r_{\rm i}$, the light curve shows three spikes which are separated by equal time intervals (path 2 in Fig.~\ref{fig:2-1}). This should be a rare event, as it can be realized only for a narrow range of values of the impact parameter. \item Type 3: if $r_{\rm i} < b <r_{\rm o}$, the light curve exhibits two spikes but no dip. The lens magnifies the incident flux (path 3 in Fig.~\ref{fig:2-1}). ESEs of this type should occur more frequently than type 1 events, especially at lower frequencies, where the radius of the outer caustic ring is much larger than that of the inner caustic ring, i.e., $r_{\rm o} \gg r_{\rm i}$ (see also eqs.~(\ref{eq:rin}) and (\ref{eq:rout})). \item Type 4: if $b = r_{\rm o}$, the light curve exhibits only one spike (path 4 in Fig.~\ref{fig:2-1}). Similar to type 2 events, this type of ESE should also be rare. Moreover, it should be more difficult to identify, especially if the lensed source is extended ($\beta_{\rm s} \ne 0$). In the specific example shown in Fig.~\ref{fig:2-1}, the ESE appears as a broad long-lasting low-amplitude flare. \end{itemize} \subsection{Impact parameter}\label{sec:impact} The impact parameter, introduced in the previous section, turns out to play a major role in determining the shape of observed ESEs in the 2D axisymmetric lens model. Deviations from the typical ESE (i.e., type 1 event) are expected for non-zero impact parameters. In the following, we explore in more detail the effect of the impact parameter on ESEs. Figure~\ref{fig:lc_b} shows the dependence of the ESE light curves (at $\nu=1$~GHz) on the ratio $b/r_{\rm i}$ (colour bar), for an extended source (top panel) and a point-like source (bottom panel). In both cases, we find that the ESE light curves produced by a 2D Gaussian lens with $b=0$ are very similar to those produced by a 1D lens. However, for $b > 0$ the shapes of ESEs from a 2D lens begin to deviate significantly from those produced by a 1D lens. The flattened shape of the dip is preserved as long as $b$ is smaller than a critical value, $b_{\rm cr}$. The latter is larger for point-like sources; we find $b_{\rm cr} \simeq 0.98\, r_{\rm i}$ for $\beta_{\rm s0}=0.03$ (bottom panel) and $b_{\rm cr}\simeq 0.5 \, r_{\rm i}$ for $\beta_{\rm s0}=0.8$ (top panel). For $b > b_{\rm cr}$, the light curve acquires a rounded shallow minimum and the two spikes get closer while retaining their amplitudes. A 2D axisymmetric lens with $b> b_{\rm cr}$ can, therefore, explain ESEs with a shallow dip and prominent spikes, even for non-point-like sources. On the contrary, a shallow dip in the context of the 1D Gaussian model requires either a weak lens or an extended source \citep[e.g.][]{Clegg_1998}, both of which result in spikes with small amplitude (see next section).
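The classification above translates directly into code. In the sketch below, which reuses the caustic-radius helpers given earlier (the tolerance singling out the measure-zero types 2 and 4 is our own choice), the returned integer is the event type for a point-like source:
\begin{verbatim}
def ese_type(b, alpha, tol=0.01):
    # Point-source ESE type for impact parameter b (units of l).
    ri, ro = r_inner(alpha), r_outer(alpha)
    if abs(b - ri) <= tol * ri:
        return 2          # three equally spaced spikes, no dip
    if abs(b - ro) <= tol * ro:
        return 4          # a single spike
    if b < ri:
        return 1          # dip present (classic shape for b << r_i)
    if b < ro:
        return 3          # two spikes, net magnification, no dip
    return 0              # path misses both caustic rings: no event
\end{verbatim}
The relative rarity of types 2 and 4 is manifest here: they occupy only thin annuli of impact parameters, whereas type 3 covers the whole range $r_{\rm i}<b<r_{\rm o}$, which widens rapidly as $\alpha$ grows.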
\begin{figure} \includegraphics[width=0.48\textwidth, trim=0 0 2cm 0]{lc_b_20_08.pdf} \\ \includegraphics[width=0.49\textwidth, trim=0 0 1.5cm 0]{lc_b_20_003.pdf} \caption{Top panel: Light curves of ESEs at 1 GHz caused by a 2D Gaussian lens with $\alpha= 20$, $\beta_{\rm s} = 0.8$ and different impact parameters $b$ (colour bar). The light curve obtained for $b = b_{\rm cr}$ is overplotted for clarity (black dashed line). The effect of $b$ on the shape of ESEs is negligible when it is small, but it becomes important as $b\rightarrow r_{\rm i}$: the dip of the light curves becomes shorter in duration, the minimum becomes rounded, and the dip-to-spike ratio becomes smaller. In the limit of $b\simeq r_{\rm i}$, the two spikes merge into one and the dip disappears. Bottom panel: Same as in the top panel, but for a point-like source with $\beta_{\rm s} = 0.03$. } \label{fig:lc_b} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth, trim=0 0cm 2cm 0]{ratio_cm.pdf} \caption{Colour map of the ratio $b_{\rm cr}/r_{\rm i}$ as a function of $\alpha$ (in logarithmic scale) and $\beta_{\rm s}$ (in linear scale). The ratio is insensitive to the former, but it is almost inversely proportional to the latter.} \label{fig:bcr} \end{figure} We numerically determined the critical impact parameter for different values of $\alpha$ and $\beta_{\rm s}$ in the ranges $[10, 1000]$ and $[0,2]$, respectively. We find that $b_{\rm cr}$ is only weakly dependent on the strength of the lens, whereas it scales as $b_{\rm cr} \propto 1/\beta_{\rm s}$ -- see Fig.~\ref{fig:bcr}. Fig.~\ref{fig:bcr} shows that for very extended sources (i.e., $\beta_{\rm s} \gtrsim 2$) the critical impact parameter approaches zero. In this regime and for $b\le r_{\rm i}$, ESEs produced by both 1D and 2D Gaussian lenses will have light curves with rounded dips. Nevertheless, atypical ESE light curves (in the context of 1D Gaussian lenses) can still be obtained for $b\ge r_{\rm i}$. \begin{figure} \centering \includegraphics[width=0.48\textwidth, trim=0 0cm 2cm 0]{inten_cm.pdf} \caption{Colour map of the ratio of lensed-to-unlensed brightness $I/I_0$ at the inner caustic as a function of lens strength log$(\alpha)$ and source size $\beta_{\rm s}$ for $b = 0$. Overplotted is the dependence of log$(\alpha)$ and $\beta_{\rm s}$ on frequency (black line). For clarity, seven points with the frequency information are marked along the curve. At low frequencies ($< 1$ GHz) the ratio decreases, although the lens becomes stronger ($\alpha \propto \nu^{-2}$). This is due to the larger extent of the background source at low frequencies, $\beta_{\rm s} \propto \nu^{-2}$, which smooths the light curve.} \label{fig:inten_cm} \end{figure} The detectability of ESEs also depends on the ratio of the lensed-to-unlensed radio brightness (i.e., $I/I_0$). To demonstrate its dependence on frequency, we computed $I/I_0$ at the inner caustic of an ESE produced by a 2D Gaussian lens for different values of $\alpha$ and $\beta_{\rm s}$, and for $b = 0$ -- see Fig.~\ref{fig:inten_cm}. As the ratio of intensities at the inner caustic is not affected by the impact parameter, the results displayed in Fig.~\ref{fig:inten_cm} are applicable to $b > 0$, too. Although the lens becomes stronger at lower frequencies (see eq.~(\ref{eq:alpha})), the relative intensity is typically lower than that at higher frequencies. The main reason for this is that the angular extent of the background source becomes larger at lower frequencies (see eq.~(\ref{eq:beta})).
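Because both $\alpha$ and $\beta_{\rm s}$ scale as $\nu^{-2}$ for $s=2$, these frequency trends are easy to tabulate. For instance, reusing the caustic-radius helpers given earlier (illustrative values; the two-ring formulas only apply while $\alpha>\alpha^*$):
\begin{verbatim}
def caustic_geometry(alpha0, beta_s0, nu_GHz):
    # alpha, beta_s and the caustic radii at frequency nu (s = 2).
    alpha = alpha0 / nu_GHz**2
    beta_s = beta_s0 / nu_GHz**2
    return alpha, beta_s, r_inner(alpha), r_outer(alpha)

for nu in (0.3, 0.5, 1.0, 2.0):        # alpha > ALPHA_STAR throughout
    a, bs, ri, ro = caustic_geometry(10.0, 0.03, nu)
    print(f"{nu:4.1f} GHz: alpha={a:6.1f} beta_s={bs:5.2f} "
          f"r_i={ri:4.2f} r_o={ro:5.1f}")
\end{verbatim}
For these parameters the outer-to-inner radius ratio grows from $\sim2$ at 1~GHz to $\sim17$ at 0.3~GHz, quantifying why type 3 events should dominate at low frequencies.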
In general, large values of $\beta_{\rm s}$ tend to smooth the light curves. Our analysis suggests that a large sample of ESEs observed at multiple frequencies would be ideal for identifying cases explained only by axisymmetric plasma lenses. \section{Application to observations}\label{sec:apply} We apply the 2D Gaussian lens model to five ESEs discovered during the monitoring program of extragalactic radio sources with the Green Bank Interferometer \citep{Fiedler_1987, Fiedler_1994}. The selected ESEs have good temporal coverage (i.e., no gaps) at both observing frequencies (2.25 GHz and 8.3~GHz). Theoretical light curves of ESEs were computed in terms of the dimensionless ratio $d/l$ (e.g., Fig.~\ref{fig:lc_b}), where $d$ is the distance measured along the observer's path and $l$ is the physical size of the lens (i.e., its radius for axisymmetry). Assuming that the relative transverse velocity $v$ is constant, the ratio $d/l$ can be associated with the duration of an ESE, $\Delta t$, as follows: \begin{equation} v= \frac{d}{\Delta t} = \frac{l}{\tau}, \end{equation} where $\tau$ is a scaling factor (in units of time), determined by matching the observed to the theoretical ESE light curve (see Table~\ref{tab:tab2}). While fitting the observed ESEs (Sect.~\ref{sec:apply}), we introduced another free parameter to the model, namely the fraction of the source intensity that is being lensed. In cases where the brightness profile of the background source remained constant in time, the flux of the lensed (or unlensed) component is a constant value; it can also be expressed as a constant fraction of the source intensity, if the latter changes with time (see Table~\ref{tab:tab2}). In all cases, we assumed a lens located at $D=1$~kpc with a size $l=\left(\theta_{\rm s0}/\beta_{\rm s0}\right)D$, where $\theta_{\rm s0}$ is the angular source size at 1 GHz. The latter was estimated using published results of the angular size at other frequencies (typically, at 5 GHz) and assuming a scaling law $\theta_{\rm s}\propto \nu^{-2}$, as in eq.~(\ref{eq:beta}). We discuss each case separately in the following paragraphs. Our fitting results are summarized in Table~\ref{tab:tab2} at the end of this section. \subsection{Interesting cases} \subsubsection{Q0954+658} The ESE with the best coverage, so far, is the one detected in the light curve of quasar 0954+658 at the beginning of 1981. It appears as a typical ESE at low frequencies (2.25 GHz), but it has an irregular shape with several spikes at 8.3 GHz (see Fig.~\ref{fig:0954+658}). The source flux at that frequency was decreasing at a rate of 0.25 Jy/yr during 1981. The dual-frequency light curve of the ESE poses a challenge to the 1D Gaussian lens model, as this fails to reproduce the high-frequency ESE \citep[see also][]{Walker_1998}. We argue that the ESE of 0954+658 at 8.3 GHz can be interpreted as a type 2 event caused by an axisymmetric lens, for the following reasons: \begin{itemize} \item it is a rare event; to the best of our knowledge, no other similar ESE has been detected so far. \item it resembles the 3-spike structure of a type 2 event caused by a smooth 2D Gaussian lens (see bottom panel in Fig.~\ref{fig:2-1}); any inhomogeneities on top of the smooth column density profile could explain the two spikes with very short temporal separation and different peak fluxes that were observed at the beginning of the ESE.
\end{itemize} By interpreting the high-frequency ESE of 0954+658 as a type 2 event we can constrain the strength of the axisymmetric lens as follows. Let $\alpha_{\rm h}$ and $\alpha_{\rm l}$ be the strength of the lens at $\nu_{\rm h}=8.3$~GHz and $\nu_{\rm l}=2.25$~GHz, respectively. The appearance of a type 2 event at $\nu_{\rm h}$ requires that $b=r_{\rm i}(\alpha_{\rm h})$. The ratio of durations of the ESE at the two frequencies can be written as: \begin{eqnarray} \frac{\Delta t_{\rm h}}{\Delta t_{\rm l}} = \sqrt{\frac{r_{\rm o}^2(\alpha_{\rm h})-r_{\rm i}^2(\alpha_{\rm h})}{r_{\rm i}^2(\alpha_{\rm l})-r_{\rm i}^2(\alpha_{\rm h})}}, \label{eq:ratio} \end{eqnarray} where we also used the fact that the ESE appears as a typical type 1 event at $\nu_{\rm l}$. Since $\Delta t_{\rm h}\simeq \Delta t_{\rm l}$, we obtain the constraint $r_{\rm o}(\alpha_{\rm h}) \simeq r_{\rm i}(\alpha_{\rm l})$, which results in $\alpha_0\simeq 230$ when combined with eqs.~(\ref{eq:alpha}), (\ref{eq:rin}), and (\ref{eq:rout}). We can then infer the impact parameter as $b=r_{\rm i}(\alpha_{\rm h})\simeq 2$. The expressions used to infer the lens properties are exact only for point sources. Still, for sufficiently compact sources ($\beta_{\rm s0}<1$), the analytical estimates are close to those inferred from the actual fit to the ESEs. Our best-fit results are presented in Fig.~\ref{fig:0954+658}. A strictly axisymmetric lens model can capture the basic features of the ESE at both frequencies. The mismatch between the model and the data at 8.3~GHz can be attributed to inhomogeneities of the free-electron column density, which in our model was described by a smooth and well-behaved function. Alternatively, a small amount of shear may produce its own double-spiked magnification patterns, which cannot be accounted for in the idealized case of a purely axisymmetric lens. Although the requirement for axisymmetry in the case of 0954+658 has already been pointed out in earlier studies \citep{Walker_1998, Walker_2001, Walker_2007}, our interpretation of the spikes in the 8.3 GHz light curve is different. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{0954+658.pdf} \caption{Light curves of quasar 0954+658 at 8.3 GHz (orange coloured symbols) and 2.25 GHz (blue coloured symbols) focused around the period of the ESE. Best-fit light curves obtained for an axisymmetric lens with $\alpha_0=230$, $\beta_{\rm s0}=2.0$, and $b=2.0$ are overplotted (black dashed lines). The 2.25 GHz light curve is plotted with an offset of 2 Jy. } \label{fig:0954+658} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{1611+343.pdf} \caption{Light curves of quasar 1611+343 at 8.3 GHz (orange coloured symbols) and 2.25 GHz (blue coloured symbols) focused around the period of the ESE. Red dotted lines: The best-fit light curves obtained from a 2D Gaussian lens, assuming $b = 0$. The dip of the ESE is fitted well, but the model fails to explain the spikes before and after the dip. Black solid lines: The best-fit light curves obtained assuming an axisymmetric 2D lens with $b \ne 0$. The parameters inferred from the fit are: $\alpha_0=100$, $\beta_{\rm s0}=1.0$, and $b=2.4$. Here, we assumed that $9\%$ of the total source flux is lensed. } \label{fig:1611+343} \end{figure} \subsubsection{Q1611+343} The ESE of quasar 1611+343 occurred in 1985 and is another example that supports the axisymmetric lens model. The low- and high-frequency light curves of the source are presented in Fig.
\ref{fig:1611+343}. At 2.25 GHz the source flux was increasing at a rate of 0.2 Jy/yr during 1985. What makes this event peculiar is the appearance of the ESE at 2.25 GHz: a shallow dip with two prominent spikes. In other words, the light rays that are refracted out to produce the dip are fewer than those accumulated at the caustics to produce the spikes in the light curve. In the framework of the 1D lens model, or the 2D lens model with $b\rightarrow 0$, a shallow dip in the ESE light curve suggests a weak lens or an extended background source. This is illustrated in Fig.~\ref{fig:1611+343}, where the ESE produced by a 2D lens with $b=0$, $\alpha_0=10$, and $\beta_{\rm s0}=1$ is plotted with red dotted lines. No other combination of parameters can reproduce the low-frequency light curve, unless $b>0$. The data suggest a large impact parameter, i.e. $b > b_{\rm cr}$, as discussed in Sect.~\ref{sec:impact}. The resulting light curve is shown in Fig.~\ref{fig:1611+343} (black solid lines). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{2352+495.pdf} \caption{Light curves of quasar 2352+495 at 8.3 GHz (orange coloured symbols) and 2.25 GHz (blue coloured symbols) focused around the period of the ESE. Red dotted lines: The best-fit light curves obtained without trying to model the small flux increase observed at 8.3 GHz. Black solid lines: The best-fit light curves obtained when the small flux increase at 8.3 GHz is taken into account. Although both parameter sets (see inset legend) have similar non-zero impact parameters and can describe the event at 2.25 GHz equally well, the lens strengths and angular source sizes are very different. } \label{fig:2352+495} \end{figure} \subsubsection{Q2352+495} The 2.25 GHz light curve of quasar 2352+495 during 1984-1986 shows two ESEs \citep[Fig.~1 in][]{Fiedler_1994}. Here, we focus on the one that occurred during 1984, as this was accompanied by a period of increased flux at 8.3 GHz. We find that the ESE at 2.25 GHz can be equally well described by two parameter sets with similar non-zero impact parameters but very different lens strengths and angular source sizes (see inset legend in Fig.~\ref{fig:2352+495}). Interestingly, for $\alpha_0=500$, $\beta_{\rm s0}=5.1$, and $b=2.8$ the ESE at 8.3~GHz resembles that of a type 2 event (see Fig.~\ref{fig:2-1}), where the central spike appears very broad and smooth because of the large angular source size. We refer to this set of parameters as 2352+495(a), and to the other set as 2352+495(b). As it is not possible to tell whether the increased flux at 8.3~GHz is intrinsic to the source or a result of an ESE, we cannot lift the degeneracy between the two models considered here. Future detections of ESEs at more than two frequencies are crucial for constraining the properties of plasma lenses \citep[see also][]{Bannister_2016}. \subsection{Other cases} The ESEs detected in the light curves of quasars 0333+321 and 1821+107 are type 1 events according to our classification scheme (see Sect.~\ref{sec:model}). They can be fitted by an axisymmetric lens model with $b < r_{\rm i}$ (see Figs.~\ref{fig:other1}-\ref{fig:other2}) or equivalently by a 1D lens model \citep[see also][for PKS 1939--315]{Bannister_2016}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{0333+321.pdf} \caption{Same as in Fig.~\ref{fig:2352+495} for $\alpha_0=25$, $\beta_{\rm s0}=7.1$, and $b=1.5$.
} \label{fig:other1} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{1821+107.pdf} \caption{Same as in Fig.~\ref{fig:2352+495} for $\alpha_0=5$, $\beta_{\rm s0}=9.1$, and $b=0$.} \label{fig:other2} \end{figure} \begin{table*} \centering \caption{Parameter values of the 2D Gaussian lens model for five ESEs. Here, $\tau$ represents the scaling factor obtained by comparing theoretical and observed light curves and is inversely proportional to the speed of the lens. ``Lensed'' and ``Unlensed'' denote, respectively, the component of the source that has or has not been lensed by the plasma lens (for more details, see Sect.~\ref{sec:apply}). ``L'' and ``H'' stand for low (2.25 GHz) and high (8.3 GHz) frequencies, respectively.} \label{tab:tab2} \begin{tabular}{ccccccccccc} \hline Source & $\alpha_0$ & $\beta_{\rm s0}$ & $b$ & $s$ & $\tau$(yr) & Date & Lensed (L) & Unlensed (L) & Lensed (H) & Unlensed (H)\\ \hline 0954+658 & 230 & 2.0 & 2.0 & 2 & 0.05 & 1981.1 & 0.35 Jy & 0.3 Jy & 7\% & 93\% \\ 1611+343 & 100 & 1.0 & 2.4 & 2 & 0.11 & 1985.4 & 9\% & 91\% & 0.2 Jy & 1.95 Jy \\ 2352+495 (a) & 500 & 5.1 & 2.8 & 2 & 0.09 & 1984.7 & 0.7 Jy & 1.4 Jy & 0.3 Jy & 0.9 Jy \\ 2352+495 (b) & 150 & 7.1 & 2.6 & 2 & 0.08 & 1984.7 & 0.8 Jy & 1.3 Jy & 0.3 Jy & 0.9 Jy \\ 0333+321 & 25 & 7.1 & 1.5 & 2 & 0.04 & 1986.4 & 0.6 Jy & 2.1 Jy & 0.4 Jy & 1.05 Jy \\ 1821+107 & 5 & 9.1 & 0 & 2 & 0.05 & 1984.2 & 0.85 Jy & 0.5 Jy & 0.4 Jy & 0.6 Jy \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Physical properties of the lenses as obtained from the parameter values shown in Table~\ref{tab:tab2}. In all cases, we assumed a spherical lens geometry, a plasma temperature $T=10^4$~K, and a distance $D=1$~kpc. The angular sizes of the sources are adopted from \citet{Gabuzda_1996} and \citet{Fey_1996}; these are extrapolated to 1 GHz using the scaling $\theta_{\rm s}\propto \nu^{-2}$. } \label{tab:tab3} \begin{tabular}{cccccc} \hline Source & $\theta_{\rm s0}$ [mas] & $l$ [cm] & $v$ [km s$^{-1}$] & $N_0$ [cm$^{-2}$] & $p_{\rm e}$ [K cm$^{-3}$] \\ \hline 0954+658 & $\sim 7$ & $5.2\times 10^{13}$ & $3.3\times 10^2$ & $2.7\times 10^{18}$ & $5.1\times 10^8$ \\ 1611+343 & $\sim 11$ & $1.6\times 10^{14}$ & $4.7\times 10^2$ & $1.2\times 10^{19}$ & $7.0\times 10^8$ \\ 2352+495 (a) & $\sim 6$ & $1.8\times 10^{13}$ & $6.2\times 10^{1}$ & $6.6\times 10^{17}$ & $3.7\times 10^8$ \\ 2352+495 (b) & $\sim 6$ & $1.3\times 10^{13}$ & $4.5\times 10^{1}$ & $1.0\times 10^{17}$ & $8.1\times 10^7$ \\ 0333+321 & $\sim 6$ & $1.3\times 10^{13}$ & $1.0\times 10^2$ & $1.7\times10^{16}$ & $1.3\times 10^7$ \\ 1821+107 & $\sim 3$ & $4.9\times 10^{12}$ & $3.1\times 10^{1}$ & $5.2\times10^{14}$ & $1.0\times 10^6$ \\ \hline \end{tabular} \end{table*} \section{Summary and Discussion}\label{sec:discuss} In this paper, we investigated the observational properties of ESEs caused by lenses with purely axisymmetric 2D column density profiles in the limit of geometric optics. In addition to the refractive strength of the lens ($\alpha$) and the relative angular size of the source to the lens ($\beta_{\rm s}$), we introduced a new parameter, the so-called impact parameter $b$, which denotes the perpendicular distance between the path of the observer and the symmetry axis of the lens. Although the number of caustic rings depends solely on the global properties of the lens, we demonstrated that the appearance of ESE light curves produced by a 2D Gaussian lens depends sensitively on the impact parameter.
We classified the resulting ESEs into four types, depending on the relation of $b$ to the radii of the inner and outer caustic rings. All types of events but one (type 1) are unique products of the 2D axisymmetric lens model. In particular, ESEs with three spikes (type 2) are expected when $b=r_{\rm i}$. The symmetric pair of spikes is more prominent for point-like sources and, therefore, the appearance of type 2 events strongly depends on the observing frequency. Regardless, these should be very rare events. For $b>r_{\rm i}$, any magnification observed in the light curves (type 3 and 4 events) will be a result of crossing the outer caustic ring. The spikes in the type 3 and 4 ESE light curves are more pronounced for stronger lenses and/or background sources with smaller angular extent. In addition, the duration of the spikes is typically short. All of the above suggests that type 3 and 4 ESEs might be harder to detect. Nevertheless, they should be more frequent at lower observing frequencies (see text in Sect.~\ref{sec:lc}). Thus, low-frequency surveys, such as the Murchison Widefield Array \citep{tingay_13} and the Australian Square Kilometre Array Pathfinder \citep[ASKAP,][]{johnston_08}, with good temporal resolution may reveal more of the atypical ESEs. The Canadian Hydrogen Intensity Mapping Experiment \citep[CHIME,][]{CHIME_2018} operating at 400-800 MHz may also discover ESEs while surveying for pulsars and fast radio bursts. However, being a non-imaging instrument, it may not be very efficient at monitoring slowly evolving sources, like radio-loud active galaxies. At GHz frequencies, MeerKAT \citep{meerkat1}, with a smaller field of view but higher sensitivity than ASKAP, will also be efficient at detecting ESEs. The details of the U-shaped dip of an ESE can also be used to probe the lens geometry. We showed that in the 2D Gaussian lens model there is a critical value of the impact parameter, $b_{\rm cr}$, beyond which a dip with a rounded bottom and a smaller dip-to-spike ratio can be produced. We numerically determined the ratio $b_{\rm cr}/r_{\rm i}$ for a wide range of lens strengths $\alpha$ and source sizes $\beta_{\rm s}$ (Fig.~\ref{fig:bcr}). The detection of an ESE of a point-like source (i.e., small $\beta_{\rm s}$) with a rounded bottom cannot be explained by the 1D model, and is indicative of a non-zero impact parameter (see Fig.~\ref{fig:1611+343}). However, for extended sources, ESEs with rounded dips can be explained, in general, by both 2D and 1D lenses. So, we expect more atypical ESEs (e.g., type 2 and those with non-zero impact parameter) when the sources are more compact and the lenses are stronger. We applied the 2D lens model to five well-sampled ESEs and showed that four of them support the scenario of axisymmetric lenses (cylindrical tubes or spheres). The remaining one can be explained by either axisymmetric or planar geometries. The size of the lens can also be determined as $l = D (\theta_{\rm s0}/\beta_{\rm s0})$, where we assumed a distance of $D=1$~kpc and extrapolated published values of $\theta_{\rm s}$ to 1~GHz assuming a $\nu^{-2}$ scaling. The free-electron column density $N_0$ can then be derived using eq.~(\ref{eq:alpha}) and the values of $\alpha_0$ and $l$ (see Tables~\ref{tab:tab2} and \ref{tab:tab3}). If the axisymmetric model describes spherical lenses, we may estimate the free-electron pressure as $p_{\rm e}/k=N_0 T/l$, where $T$ is the electron temperature. Our results are summarized in Table~\ref{tab:tab3}; the numerical sketch below illustrates these conversions.
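As a concrete illustration of the conversions above, the following sketch (Python) reproduces the Table~\ref{tab:tab3} entries for 0954+658 from the fit parameters of Table~\ref{tab:tab2}. The column density $N_0$ is taken as given, since it follows from eq.~(\ref{eq:alpha}), which is not repeated here:

\begin{verbatim}
import math

KPC = 3.086e21                   # cm per kpc
MAS = math.pi / (180 * 3600e3)   # rad per milliarcsecond
YR  = 3.156e7                    # s per yr

# Adopted/fitted values for 0954+658 (Tables 2 and 3)
D        = 1.0 * KPC             # assumed lens distance
theta_s0 = 7.0 * MAS             # angular source size at 1 GHz
beta_s0  = 2.0                   # relative source size
tau      = 0.05 * YR             # light-curve scaling factor
N0       = 2.7e18                # cm^-2, from eq. for alpha (given)
T        = 1.0e4                 # K, assumed plasma temperature

l   = D * theta_s0 / beta_s0     # lens size: l = D (theta_s0/beta_s0)
v   = l / tau                    # transverse speed: v = l/tau
p_e = N0 * T / l                 # pressure: p_e/k = N0 T / l

print(f"l   = {l:.1e} cm")         # ~5.2e13 cm
print(f"v   = {v / 1e5:.0f} km/s") # ~330 km/s
print(f"p_e = {p_e:.1e} K cm^-3")  # ~5e8 K cm^-3, cf. Table 3
\end{verbatim}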
In all cases, the inferred pressure exceeds that of the ISM by many orders of magnitude, in agreement with past studies \citep[e.g.][]{Bannister_2016, Clegg_1998}. The constraints on the free-electron density can be relaxed if the lenses are cylindrical tubes with one dimension being much larger than $l$. The geometry of the lens may also be related to the formation mechanism and, in turn, the formation site. \cite{Fiedler_1994} demonstrated that the sources with ESEs have a small angular separation from the Galactic radio continuum loops (with $\sim 1\%$ chance coincidence probability). They concluded that the Galactic loops may provide the necessary conditions for the formation of plasma lenses. Other authors have also considered how elongated plasma structures along the line of sight could solve the overpressure problem encountered above. For example, \cite{Pen_2014} and \cite{Simard_2018} proposed elongated and corrugated plasma sheets along the line of sight as means of explaining pulsar scintillation and ESEs. Recently, \cite{Tuntsov_2016} presented a technique for fitting ESEs that does not rely on a presumed smooth column density profile. The column density can, instead, be reconstructed from the dynamic spectrum interpolated from observations at multiple frequencies \citep{Bannister_2016}. However, their method can only be applied to point sources, thus limiting its applicability. On the contrary, our method, being an extension of the methodology presented by \cite{Clegg_1998}, can be applied to extended sources and at all frequencies (within the limit of geometric optics). Our method is also computationally fast, because the refracted intensity profile on the observer's plane is also axisymmetric. For point sources, it can be as fast as the calculation for a 1D lens; in the extended-source case, it requires the integration of the azimuthal component of the source (see eq.~(\ref{eq:intensity})), which increases the computational time only by a factor of ten. This makes it ideal for the analysis of large data sets and at frequencies below 1 GHz. Yet, our method is limited to the idealized case of an axisymmetric lens with a specific and well-behaved column density profile, and thus cannot capture any finer structures present in ESEs (see e.g. Fig.~\ref{fig:0954+658}). \section{Conclusion}\label{sec:summary} The unusual U-shaped dips in the luminosity of radio sources have traditionally been used as a means of identifying ESEs. However, if the lenses are predominantly plasma structures with axisymmetric column density profiles, then more ESEs with shapes different from those produced by 1D lenses are expected, especially at frequencies below a few GHz. Our axisymmetric lens model can account for the observed features of five well-sampled ESEs, indicating that a cylindrical tube or sphere may better describe the lens geometry. A systematic search for atypical ESEs at multiple frequencies, from a few hundred MHz to a few GHz, may reveal the geometry of interstellar plasma lenses. \section*{Acknowledgements} We thank David Kaplan for useful discussions and comments. We also thank the anonymous referee and Dr. A. Tuntsov for constructive comments that helped to improve the manuscript. We acknowledge support from the Research Corporation for Science Advancement's Scialog program with award ID \#24247. We also acknowledge the GBI-NASA monitoring program.
The Green Bank Interferometer is a facility of the National Science Foundation operated by the NRAO in support of NASA High Energy Astrophysics programs. MP acknowledges support from the Lyman Spitzer Jr.~Fellowship. \bibliographystyle{mnras}
\section{Introduction} The internal dynamical processes of open clusters (OCs) are mass loss during stellar evolution, mass segregation, and evaporation of their stellar content with time. Tidal interactions with the Galaxy's disc and bulge, as well as collisions with Giant Molecular Clouds (hereafter GMCs), are the main external dynamical effects upon OCs. Because of these dynamical interactions, as clusters age, their structures are subject to considerable changes, and they may even be dissolved into the Galactic field. A massive cluster can be dissolved by central tidal effects in $\approx$ 50 Myr \cite{por02,ber01}. This time is much shorter than the $\sim 1\; Gyr$ found for most OCs within the Solar circle \cite{bon06a}. Interactions with the Galactic disc, the tidal pull of the Galactic bulge and collisions with GMCs destroy the poorly populated OCs more easily, on a time-scale of $10^{8}$ yr, particularly inside the Solar circle \cite{ber01}. A cluster loses low-mass stars from its outer regions into the field by stellar evaporation. As a result of mass segregation, low-mass stars are transferred from the core to the cluster's outskirts, while massive stars accumulate in the core \cite{bon05,sch06}. This results in a flat mass function (hereafter MF) in the core and a steep one in the halo. These external and internal dynamical processes play different roles, depending on the location of an OC with respect to the Solar circle: old OCs with Age $>1$ Gyr tend to be concentrated in the anti-centre, a region with a low density of GMCs \cite{van80, cam09}. Tidal shocks from the Galaxy and from GMCs, together with observational incompleteness or biases, are responsible for the scarcity of OCs in the direction of the Galactic centre \cite{bon07a}. Due to absorption and crowding in regions dominated by disc and bulge stars, the observational completeness of OCs is decreased. Through tidal interactions, an OC heats up and its stars gain kinetic energy, which leads to an increase in the evaporation rate. In this paper we consider 40 OCs with 2MASS JH${K_{s}}$ photometric data, selected with respect to the cluster location and age (Age $\ge100$ Myr) from the WEBDA OC and Dias et al. catalogues \cite{mer92,dia12}. These OCs have been used to study dynamical evolution, particularly as a function of location in the Galaxy. We note that our sample is relatively small, but our work has the advantage of being based on a uniform database, in the sense that we determine the parameters following the same methods, based on the same kind of photometry. The robust structural parameters have been derived from high-contrast stellar radial density profiles following the method of \cite{bon07a}, and the ages were derived from fits of isochrones to decontaminated colour-magnitude diagrams of the 2MASS JH${K_{s}}$ photometric data. As can be seen from the WEBDA database, CCD-based CMDs of these 40 OCs are also available; we stress that the CMDs presented here go deeper than those available there. From our sample of young, intermediate-age and old OCs ($100\; Myr \leq Age \leq 5\; Gyr$), the relations between the dynamical evolution indicators (cluster radius, hereafter R$_{RDP}$; core radius, hereafter R$_{core}$; mass; mass function slope $\chi$; mass density $\rho$; and evolutionary parameter $\tau$) and the parameters $(Age, d, R_{GC}, z)$ have been derived and compared with the values given in the literature.
Here, d, R$_{GC}$, and z denote the heliocentric distance, the galactocentric distance (hereafter R$_{GC}$), and the distance from the Galactic plane, respectively. Such relations have been studied by \cite{lyn82}, \cite{jan94}, \cite{nil02}, \cite{tad02}, \cite{bon05}, \cite{sch06}, \cite{sha06}, \cite{bon07a}, \cite{mn07}, \cite{buk11}, and \cite{cam09}. The value R$_{\odot}=7.2\pm0.3$ kpc, which is based on the updated distances of Galactic globular clusters \cite{bic06b}, is adopted throughout this paper. This paper is organised as follows: the selection of the OCs is presented in Section 2. In Section~3 the 2MASS JH${K_{s}}$ photometry and the field star decontamination algorithm (employed in the CMD analyses) are given. The derivations of astrophysical and structural parameters, mass and mass functions, relaxation time and evolutionary parameter are presented in Sections~4 to 6. Section 7 is devoted to results, with the following subsections: 7.1 the relation between $R_{RDP}$ and $R_{core}$; 7.2 relations of cluster dimensions with distance and age; 7.3 the relations of $R_{RDP}$ and $R_{core}$ with $Age$; 7.4 the relations of $R_{RDP}$ and $R_{core}$ with $R_{GC}$; 7.5 the spatial distribution of the 40 OCs in the Galaxy; 7.6 relations of the overall mass with ($R_{RDP}$, $R_{core}$) and with $(Age, R_{GC})$; 7.7 the relations of the mass density with the $MF$ slopes, $Age$, $R_{RDP}$ and $R_{GC}$; 7.8 the relation between the $MF$ slopes and the evolutionary parameter, and a comparison with Kroupa's IMF. Conclusions are presented in Section~8. \section{Open cluster sample and Spatial distribution} We applied two criteria to select the OCs for our work from the WEBDA OC and Dias et al. catalogues \cite{mer92,dia12}, namely the cluster location in the Galaxy and the cluster age (see Fig.~1). In order to study the dynamical evolution of intermediate- and older-age OCs, 40 OCs with $100\; Myr \leq Age \leq 5\; Gyr$ as a function of the Galactic location (see Fig.~1, slices I-IV) from the 2MASS data base are considered. The location criterion is important because the longevity/survival rate of the OCs is related to the Galactic slices inside/outside the Solar circle. More than 40 OCs in the WEBDA OC \cite{mer92} and \cite{dia12} catalogues were initially considered. The OCs which were not appropriate for the field star decontamination technique were eliminated by examining their decontaminated surface density distributions (see Sect.~3); the final sample thus comprises 40 OCs. We are aware that the sample is not large, but we intended that a sample with robust parameters would be significant for addressing the dynamical problems mentioned earlier. From the 40 OCs, we have also studied the relations between the parameters (Age, d, R$_{GC}$, $z$) and the dynamical indicators (R$_{core}$,~R$_{RDP}$, $m$, $\chi$, $\tau$). \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure1.jpg} \caption {Spatial distribution (X,Y) of the 40 OCs. Open triangles and filled circles represent OCs with ages younger than 1 Gyr and older than 1 Gyr, respectively. The schematic projection of the Galaxy is seen from the North pole. The Sun's distance to the Galactic centre is taken to be 7.2 kpc \cite{bic06b}.} \end{figure} Of the 2000 OCs in the \cite{dia12} catalogue, 1148, which can be considered the most representative for our purpose, have age determinations.
As can be seen from Table 1, there are 13 OCs with $200\; Myr \leq Age < 1\; Gyr$ (3\%), 26 OCs with $1\; Gyr \leq Age < 5\; Gyr$ (15\%) and one with $Age < 200\; Myr$ in our sample. \begin{table} \tiny \centering \caption{Comparison of our cluster sample to the catalogue of \cite{dia12} for the age data.} \begin{tabular}{cccc} \hline Age (Myr) & N~(This~work) & N(Dias) & Percentage (\%) \\ \hline Age $<$ 200 & 1 & 579 & 0.17 \\ 200 $\leq$ Age $<$ 1000 & 13 & 373 & 3 \\ 1000 $\leq$ Age $<$ 5000 & 26 & 179 & 15 \\ 5000 $\leq$ Age $\leq$10000 & - & 17 & - \\ \hline Total & 40 & 1148 & 3 \\ \hline \end{tabular} \end{table} The spatial distribution in the (X, Y) plane, together with the spiral arms\footnote{(X, Y) is a right-handed Cartesian coordinate system centred on the Sun, with the X axis pointing towards the Galactic centre and the Y axis pointing in the direction of disc rotation.}, of the 40 OCs is displayed in Fig.~1. As seen from Fig.~1, our sample comprises OCs from four Galactic slices (I-IV). Note that the number of OCs towards the anticentre in Fig.~1 is larger than that towards the Galactic centre directions. Six out of eight OCs with Age $< 1$ Gyr fall in the Galactic anticentre directions, whereas the remaining two occupy the Galactic centre direction. The majority of OCs with Age $\ge$ 1 Gyr lies outside the Solar circle. From Fig.~1, one readily sees that the number of OCs inside the Solar radius is biased in the direction of the Galactic centre. The reason is that the inner-Galaxy clusters cannot be observed because of strong absorption and crowding, or because they have been systematically dissolved by a combination of tidal effects; in good measure the latter is caused by the expected higher frequency of collisions with GMCs in that direction \cite{gie06,cam09}. From an inspection of Fig.~1, there are more OCs in the anticentre direction than in the opposite direction, in agreement with \cite{van80}, who find that OCs with Age $\ge$ 1.0~Gyr tend to be concentrated in the anticentre, which is a region with a lower density of GMCs. Our sample is too small to draw statistically significant conclusions in that respect; however, working with a representative sub-population of the Galactic OCs minimizes the occurrence of biases in the analyses. Finally, to put the present OC sample in context, in Fig.~2 we compare the observational data, together with fundamental parameters (derived in subsequent sections), with the corresponding ones found in OC databases. This analysis is also important for checking for the presence of systematic biases in our sample. For this analysis we use the parameters derived by \cite{kha13} for 3006 OCs. The advantage of their work is that the parameters follow from a systematic and uniform analysis. Since \cite{kha13} do not provide cluster masses, we take such values from \cite{pis08}, although for a smaller number of OCs, 236. \begin{figure} \centering \includegraphics*[width = 7cm, height = 11cm]{Figure2.eps} \caption {Normalized distribution functions of our OC sample (circles) compared to those of \cite{kha13} and \cite{pis08} (solid line).} \end{figure} Our analysis compares the distribution functions of the several parameters between both sets, as seen in Fig.~2.
Uncertainties in the parameters have been incorporated into the respective distribution functions, and, since the samples differ significantly in the number of OCs, the distribution functions have been scaled to provide the best visual comparison between the two. The top panels of Fig.~2 show how the OCs are distributed with respect to Galactic longitude (left) and latitude (right). Clearly, most of our sample corresponds to clusters directed towards the 2nd and 3rd Galactic quadrants. Regarding Galactic latitude, our sample tends to avoid the plane. In terms of distance from the Sun (middle-left), our sample is somewhat consistent with that of \cite{kha13}, particularly for distances in excess of 2~kpc. The same applies to the core radius (middle-right) for R$_{core}$ $>$ 1 pc; below this threshold, our sample appears to contain a lower fraction of OCs than that of \cite{kha13}. Regarding mass, both distributions have a similar shape, but with a shift of $\approx$ 0.7 dex between the peaks, which suggests that our sample occupies the high-mass wing observed in the \cite{pis08} distribution. The age distributions also have similar shapes, with our sample consisting essentially of clusters older than 100 Myr. Thus, we can conclude that the 40 OCs dealt with here are a representative sub-sample of the Galactic OC population, with no systematic biases. \section{The 2MASS photometry and the field-star decontamination} We have used the JH${K_{s}}$ photometry of 2MASS\footnote{The Two Micron All Sky Survey Catalogue, available at \textit{http://www.ipac.caltech.edu/2mass/releases/allsky/}} to find the probable cluster members of the 40 OCs \citep{skr06}. We used VizieR\footnote{http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=II/246.} to extract the near-infrared (NIR) 2MASS (J, H, and ${K_{s}}$) photometry for a large area centred on each cluster, which is essential to build RDPs with a high contrast relative to the background and for a better field star decontamination. 2MASS provides an all-sky coverage with the spatial and photometric uniformity required for high star-count statistics. As a photometric constraint, the 2MASS magnitude extractions have been restricted to stars with errors smaller than 0.2 mag in the JH${K_{s}}$ magnitudes. The extraction radii of the 40 OCs have been chosen by visual inspection of the DSS-I images\footnote{Extracted from the Canadian Astronomy Data Centre (CADC), at \textit{http://ledas-www.star.le.ac.uk/DSSimage/}}, and taking into account the RDP, in the sense that the profile must become relatively stable in the outer region. As an example we show only the DSS-I image of Pismis~19 in Fig.~3. The technique used here for determining the cluster members of the 40 OCs is the field star decontamination procedure coupled to the 2MASS JH${K_{s}}$ photometry; it was successfully used by \cite{bon07a,bon07b,bon08} and more recently by \cite{gun12}. This decontamination procedure was applied to the 40 OCs discussed here. The technique samples the photometric properties of the stars in a neighbouring comparison field considered free of cluster stars to (statistically) remove the contaminating field stars from the cluster stars with the help of the colour-magnitude diagrams (CMDs).
\begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure3.jpg} \caption {The image of Pismis~19 in DSS-I ($25'\times 25'$).} \end{figure} Firstly, the stellar surface densities $\sigma(stars\,\rm arcmin^{-2})$ and the surface isopleths of both the raw and decontaminated data of the 40 OCs were computed for a mesh size of $3^{\prime}\times3^{\prime}$, centred on the Galactic coordinates of Table~2 (see the Supplementary material section). If necessary, we have re-determined the central coordinates in this work (see below). Here, an isopleth is a star density map. These maps have been used to maximise the contrast of the cluster against the background. In Figs.~4 and 5 we show the result for Pismis~19 as an example. The central stellar density excesses are significant in the decontaminated surface-density distributions, as is seen in Fig.~5 for Pismis~19. \clearpage \begin{table*} \renewcommand\thetable{2} \centering \caption{Literature (left columns) and presently optimised (right columns) Equatorial and Galactic coordinates of 40 OCs.} \begin{tabular}{lcccccccc} \hline {Cluster} & ${\alpha}$(2000) & ${\delta}$(2000) & \textit{l} & \textit{b} & ${\alpha}$(2000) & ${\delta}$(2000) & \textit{l} & \textit{b} \\ & (h m s) & ($^{o}$ $'$ $''$) & ( $^{o}$ ) & ( $^{o}$ ) & (h m s) & ($^{o}$ $'$ $''$) & ( $^{o}$ ) & ( $^{o}$ ) \\ \hline NGC 436 & 01 15 58 & 58 48 42 & 126.11 & -3.91 & 01 15 58 & 58 48 42 & 126.11 & -3.91 \\ King 5 & 03 14 45 & 52 41 12 & 143.78 & -4.29 & 03 14 45 & 52 41 12 & 143.78 & -4.29 \\ NGC 1513 & 04 09 57 & 49 30 54 & 152.59 & -1.57 & 04 09 50 & 49 31 17 & 152.57 & -1.58 \\ Be 15 & 05 02 06 & 44 30 43 & 162.26 & 1.62 & 05 02 06 & 44 30 43 & 162.26 & 1.62 \\ NGC 1798 & 05 11 39 & 47 41 30 & 160.70 & 4.85 & 05 11 39 & 47 41 30 & 160.70 & 4.85 \\ Be 17 & 05 20 36 & 30 36 00 & 175.65 & -3.65 & 05 20 38 & 30 34 28 & 175.67 & -3.66 \\ NGC 1907 & 05 28 05 & 35 19 30 & 172.62 & 0.31 & 05 28 09 & 35 18 20 & 172.64 & 0.31 \\ NGC 2112 & 05 53 45 & 00 24 36 & 205.87 & -12.62 & 05 53 51 & 00 25 44 & 205.87 & -12.58 \\ Koposov 12 & 06 00 56 & 35 16 36 & 176.16 & 6.00 & 06 00 56 & 35 16 36 & 176.16 & 6.00 \\ NGC 2158 & 06 07 25 & 24 05 48 & 186.63 & 1.78 & 06 07 30 & 24 05 50 & 186.64 & 1.80 \\ Koposov 53 & 06 08 56 & 26 15 49 & 184.90 & 3.13 & 06 08 56 & 26 15 49 & 184.90 & 3.13 \\ NGC 2194 & 06 13 45 & 12 48 24 & 197.25 & -2.35 & 06 13 45 & 12 48 24 & 197.25 & -2.35 \\ NGC 2192 & 06 15 17 & 39 51 18 & 173.42 & 10.65 & 06 15 22 & 39 51 06 & 173.42 & 10.67 \\ NGC 2243 & 06 29 34 & -31 17 00 & 239.48 & -18.01 & 06 29 34 & -31 17 00 & 239.48 & -18.01 \\ Trumpler 5 & 06 36 42 & 09 26 00 & 202.86 & 1.05 & 06 36 36 & 09 25 21 & 202.86 & 1.02 \\ Col 110 & 06 38 24 & 02 01 00 & 209.65 & -1.98 & 06 38 35 & 02 01 30 & 209.66 & -1.93 \\ NGC 2262 & 06 39 38 & 01 08 36 & 210.57 & -2.10 & 06 39 38 & 01 08 36 & 210.57 & -2.10 \\ NGC 2286 & 06 47 40 & -03 08 54 & 215.31 & -2.27 & 06 47 43 & -03 10 20 & 215.33 & -2.27 \\ NGC 2309 & 06 56 03 & -07 10 30 & 219.84 & -2.24 & 06 56 02 & -07 11 05 & 219.85 & -2.25 \\ Tombaugh 2 & 07 03 05 & -20 49 00 & 232.83 & -6.88 & 07 03 05 & -20 49 00 & 232.83 & -6.88 \\ Be 36 & 07 16 06 & -13 06 00 & 227.38 & -0.59 & 07 16 24 & -13 11 23 & 227.49 & -0.56 \\ Haffner 8 & 07 23 24 & -12 20 00 & 227.53 & 1.34 & 07 23 09 & -12 16 12 & 227.45 & 1.32 \\ Mel 71 & 07 37 30 & -12 04 00 & 228.95 & 4.50 & 07 37 30 & -12 04 00 & 228.95 & 4.50 \\ NGC 2425 & 07 38 22 & -14 52 54 & 231.52 & 3.31 & 07 38 22 & -14 52 54 & 231.52 & 3.31 \\ NGC 2506 & 08 00 01 & -10 46 12 & 230.56 & 9.93 & 07
59 59 & -10 45 28 & 230.55 & 9.93 \\ Pismis 3 & 08 31 22 & -38 39 00 & 257.86 & 0.50 & 08 31 16 & -38 39 02 & 257.85 & 0.48 \\ NGC 2660 & 08 42 38 & -47 12 00 & 265.93 & -3.01 & 08 42 38 & -47 12 00 & 265.93 & -3.01 \\ NGC 3680 & 11 25 38 & -43 14 36 & 286.76 & 16.92 & 11 25 35 & -43 15 11 & 286.76 & 16.91 \\ Ru 96 & 11 50 38 & -62 08 23 & 295.89 & -0.10 & 11 50 37 & -62 09 04 & 295.89 & -0.11 \\ Ru 105 & 12 34 15 & -61 34 11 & 300.88 & 1.24 & 12 34 12 & -61 33 00 & 300.88 & 1.25 \\ Trumpler 20 & 12 39 34 & -60 37 00 & 301.48 & 2.22 & 12 39 34 & -60 37 00 & 301.48 & 2.22 \\ Pismis 19 & 14 30 40 & -60 53 00 & 314.71 & -0.30 & 14 30 40 & -60 53 00 & 314.71 & -0.30 \\ NGC 6134 & 16 27 46 & -49 09 06 & 334.92 & -0.20 & 16 27 46 & -49 09 06 & 334.92 & -0.20 \\ IC 4651 & 17 24 49 & -49 56 00 & 340.09 & -7.91 & 17 24 46 & -49 55 06 & 340.10 & -7.89 \\ NGC 6802 & 19 30 35 & 20 15 42 & 55.33 & 0.92 & 19 30 33 & 20 15 48 & 55.32 & 0.92 \\ NGC 6819 & 19 41 18 & 40 11 12 & 73.98 & 8.48 & 19 41 18 & 40 11 12 & 73.98 & 8.48 \\ Be 89 & 20 24 36 & 46 03 00 & 83.16 & 4.82 & 20 24 30 & 46 02 53 & 83.15 & 4.84 \\ NGC 6939 & 20 31 30 & 60 39 42 & 95.90 & 12.30 & 20 31 30 & 60 39 42 & 95.90 & 12.30 \\ NGC 7142 & 21 45 09 & 65 46 30 & 105.35 & 9.48 & 21 45 12 & 65 47 43 & 105.36 & 9.50 \\ NGC 7789 & 23 57 24 & 56 42 30 & 115.53 & -5.39 & 23 57 24 & 56 42 30 & 115.53 & -5.39 \\ \hline \end{tabular} \end{table*} The stellar radial density profiles (RDPs) were derived from the isopleth surfaces of each cluster, the coordinates were checked, and the cluster radii were determined (e.g. Table 5). A wide external ring $(\Delta R=13'-70')$ centred on the cluster (Col.~11 of Tables~5 and S4) has been considered to eliminate field stars of the 40 OCs. Stars within the cluster radii have been considered to be probable members. The stellar RDP of each cluster has been computed, based on the JH${K_{s}}$ photometry extracted with the WEBDA\footnote{www.univie.ac.at/WEBDA-Mermilliod \& Paunzen (2003)} coordinates displayed in Table~2, to check the cluster centring. In some cases the RDP built with the original cluster coordinates presented a dip at the centre. Then, new central coordinates were searched for after field star decontamination, to maximise the star counts in the innermost RDP bin. From these RDPs, the cluster radii of the 40 OCs are determined (Table 5). The stellar RDP is the projected number of stars per area around the cluster centre. To avoid oversampling near the centre and undersampling at large radii, the RDPs are built by counting stars in concentric rings whose width increases with distance from the centre. The number and width of the rings are optimised so that the resulting RDPs have adequate spatial resolution with moderate $1\sigma$ Poisson errors. The residual background level of each RDP corresponds to the average number of CM-filtered stars measured in the comparison field. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure4.jpg} \includegraphics*[width = 7cm, height = 7cm]{Figure5.jpg} \caption {For observed (raw) photometry, top panel: stellar surface density $\sigma (stars\,\rm arcmin^{-2}$) of Pismis~19, computed for a mesh size of $3^\prime\times3^\prime$, centred on the coordinates in Table 2.
Bottom panel: The corresponding isopleth surface.} \end{figure} \begin{figure} \centering \includegraphics*[width = 6.5cm, height = 7cm]{Figure6.jpg} \includegraphics*[width = 6.5cm, height = 7cm]{Figure7.jpg} \caption {For decontaminated photometry, top panel: stellar surface density $\sigma (stars\,\rm arcmin^{-2}$) of Pismis~19, computed for a mesh size of $3^\prime\times3^\prime$, centred on the coordinates in Table 2. Bottom panel: The corresponding isopleth surface.} \end{figure} As \cite{cam10} noted, RDPs of OCs built on the WEBDA coordinates usually show a dip in the inner RDP region when a mismatch between the ``true'' and catalogue coordinates exists. For this reason, new central coordinates of these clusters have been searched for, to maximise the star counts in the innermost RDP bin. Then, the 2MASS photometry was extracted again, now centred on the optimised cluster coordinates. As a representative case, the optimised central coordinate of Pismis~19 is displayed in Fig.~3 as a small circle, and given in the right section of Table~2. To recover the intrinsic morphology of the clusters in the CMD, as explained above, the statistical field star decontamination procedure of \cite{bon07a} is used. This procedure is based on the relative densities of stars per sky area in a cluster region and in a neighbouring offset field. It divides the full range of magnitudes and colours of a CMD into cells of dimensions $\Delta{J}=1.0$ mag and $\Delta(J-H)={\Delta(J-K_{s})}=0.15$ mag. These dimensions are adequate to allow for sufficient star counts in individual cells and to preserve the intrinsic morphology of the evolutionary sequences. \cite{bon07a} showed that the field star decontamination procedure with 2MASS JH${K_{s}}$ photometry is efficient in isolating those stars with a high probability of being cluster members. More details on the algorithm can be found in \cite{bon07a, bon07b}, \cite{bon09a, bon09b, bon09c}, and \cite{cam10}. By following the field decontamination technique briefly explained above, the probable cluster members of the 40 OCs have been identified for further analysis; a schematic implementation of the cell-based step is sketched below.
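The cell-based subtraction at the heart of this procedure can be summarised in a few lines. The following sketch (Python/NumPy) is a minimal illustration only, with hypothetical array inputs: it counts stars per $(J, J-H)$ cell in the cluster region and in the offset field, and subtracts, per cell, the expected field contribution scaled by the area ratio. The full membership bookkeeping of \cite{bon07a} (e.g. which individual stars are flagged for removal) is not reproduced here:

\begin{verbatim}
import numpy as np

def decontaminate(j_cl, jh_cl, j_off, jh_off, area_cl, area_off,
                  dj=1.0, djh=0.15):
    """Field-subtracted star counts per CMD cell.

    j_cl, jh_cl   : J and (J-H) of stars in the cluster region
    j_off, jh_off : the same for the offset comparison field
    area_cl/_off  : sky areas of the two regions (same units)
    dj, djh       : cell sizes, Delta J = 1.0, Delta(J-H) = 0.15 mag
    """
    j_edges = np.arange(min(j_cl.min(), j_off.min()),
                        max(j_cl.max(), j_off.max()) + dj, dj)
    jh_edges = np.arange(min(jh_cl.min(), jh_off.min()),
                         max(jh_cl.max(), jh_off.max()) + djh, djh)
    n_cl, _, _ = np.histogram2d(j_cl, jh_cl, bins=[j_edges, jh_edges])
    n_off, _, _ = np.histogram2d(j_off, jh_off, bins=[j_edges, jh_edges])
    n_field = n_off * (area_cl / area_off)   # expected contamination
    return np.clip(n_cl - n_field, 0, None)  # member counts per cell
\end{verbatim}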
\begin{table*} \renewcommand\thetable{3} \centering \caption{Derived fundamental astrophysical parameters from 2MASS JH${K_{s}}$ photometry of 40 OCs.} \renewcommand{\tabcolsep}{1.1mm} \renewcommand{\arraystretch}{1.1} \begin{tabular}{lccccccc} \hline Cluster & Z & {E(J-H)} & {E(B-V)} & {Age(Gyr)} & {(m-M)$_{J}$} & {d(kpc)} & {R$_{GC}$(kpc)} \\ \hline NGC 436 & 0.019 & 0.13$\pm$0.03 & 0.42$\pm$0.10 & 0.4$\pm$0.1 & 12.54$\pm$0.31 & 3.22$\pm$0.46 & 9.48$\pm$0.28 \\ King 5 & 0.0105 & 0.26$\pm$0.05 & 0.83$\pm$0.16 & 1.0$\pm$0.2 & 11.53$\pm$0.24 & 2.03$\pm$0.23 & 8.93$\pm$0.18 \\ NGC 1513 & 0.019 & 0.23$\pm$0.02 & 0.74$\pm$0.06 & 0.1$\pm$0.02 & 10.37$\pm$0.28 & 1.18$\pm$0.15 & 8.29$\pm$0.13 \\ Be 15 & 0.019 & 0.27$\pm$0.03 & 0.86$\pm$0.10 & 0.5$\pm$0.1 & 12.45$\pm$0.31 & 3.10$\pm$0.44 & 10.21$\pm$0.42 \\ NGC 1798 & 0.0105 & 0.16$\pm$0.04 & 0.51$\pm$0.13 & 1.5$\pm$0.3 & 13.51$\pm$0.26 & 5.03$\pm$0.59 & 12.07$\pm$0.55 \\ Be 17 & 0.006 & 0.26$\pm$0.04 & 0.83$\pm$0.13 & 5.0$\pm$0.5 & 11.93$\pm$0.29 & 2.43$\pm$0.33 & 9.65$\pm$0.33 \\ NGC 1907 & 0.019 & 0.18$\pm$0.03 & 0.58$\pm$0.10 & 0.4$\pm$0.1 & 11.45$\pm$0.26 & 1.95$\pm$0.24 & 9.16$\pm$0.23 \\ NGC 2112 & 0.019 & 0.20$\pm$0.04 & 0.64$\pm$0.13 & 2.0$\pm$0.3 & 10.15$\pm$0.23 & 1.07$\pm$0.11 & 8.18$\pm$0.10 \\ Koposov 12 & 0.0105 & 0.07$\pm$0.02 & 0.22$\pm$0.06 & 1.8$\pm$0.2 & 11.56$\pm$0.18 & 2.05$\pm$0.17 & 9.26$\pm$0.17 \\ NGC 2158 & 0.019 & 0.05$\pm$0.01 & 0.16$\pm$0.03 & 2.5$\pm$0.3 & 13.21$\pm$0.10 & 4.39$\pm$0.21 & 11.59$\pm$0.21 \\ Koposov 53 & 0.019 & 0.01$\pm$0.00 & 0.03$\pm$0.02 & 1.0$\pm$0.1 & 13.05$\pm$0.18 & 4.08$\pm$0.34 & 11.28$\pm$0.34 \\ NGC 2194 & 0.019 & 0.13$\pm$0.04 & 0.42$\pm$0.13 & 0.8$\pm$0.2 & 11.87$\pm$0.27 & 2.37$\pm$0.30 & 9.51$\pm$0.28 \\ NGC 2192 & 0.019 & 0.01$\pm$0.00 & 0.03$\pm$0.00 & 1.3$\pm$0.1 & 13.12$\pm$0.15 & 4.21$\pm$0.29 & 11.37$\pm$0.28 \\ NGC 2243 & 0.0105 & 0.01$\pm$0.00 & 0.03$\pm$0.00 & 2.0$\pm$0.2 & 13.37$\pm$0.12 & 4.73$\pm$0.26 & 10.36$\pm$0.14 \\ Trumpler 5 & 0.006 & 0.24$\pm$0.05 & 0.77$\pm$0.16 & 3.0$\pm$0.3 & 12.19$\pm$0.29 & 2.74$\pm$0.36 & 9.80$\pm$0.33 \\ Col 110 & 0.019 & 0.06$\pm$0.01 & 0.19$\pm$0.03 & 3.0$\pm$0.2 & 11.93$\pm$0.15 & 2.44$\pm$0.17 & 9.41$\pm$0.15 \\ NGC 2262 & 0.0105 & 0.11$\pm$0.01 & 0.35$\pm$0.03 & 1.3$\pm$0.1 & 12.36$\pm$0.30 & 2.96$\pm$0.41 & 9.88$\pm$0.35 \\ NGC 2286 & 0.019 & 0.03$\pm$0.00 & 0.10$\pm$0.02 & 1.0$\pm$0.2 & 11.82$\pm$0.30 & 2.31$\pm$0.32 & 9.20$\pm$0.26 \\ NGC 2309 & 0.019 & 0.16$\pm$0.02 & 0.51$\pm$0.06 & 0.5$\pm$0.1 & 12.41$\pm$0.21 & 3.03$\pm$0.29 & 9.74$\pm$0.22 \\ Tombaugh 2 & 0.019 & 0.35$\pm$0.05 & 1.12$\pm$0.16 & 3.0$\pm$0.3 & 10.43$\pm$0.24 & 1.22$\pm$0.14 & 8.01$\pm$0.08 \\ Be 36 & 0.019 & 0.12$\pm$0.02 & 0.38$\pm$0.06 & 3.0$\pm$1.0 & 13.67$\pm$0.16 & 5.42$\pm$0.40 & 11.59$\pm$0.27 \\ Haffner 8 & 0.006 & 0.06$\pm$0.02 & 0.19$\pm$0.06 & 1.0$\pm$0.1 & 11.98$\pm$0.16 & 2.49$\pm$0.18 & 9.09$\pm$0.12 \\ Mel 71 & 0.019 & 0.01$\pm$0.00 & 0.03$\pm$0.02 & 1.5$\pm$0.2 & 11.54$\pm$0.15 & 2.03$\pm$0.14 & 8.69$\pm$0.09 \\ NGC 2425 & 0.019 & 0.10$\pm$0.02 & 0.32$\pm$0.06 & 3.2$\pm$0.5 & 12.27$\pm$0.26 & 2.85$\pm$0.34 & 9.26$\pm$0.21 \\ NGC 2506 & 0.006 & 0.03$\pm$0.01 & 0.10$\pm$0.03 & 2.0$\pm$0.3 & 12.27$\pm$0.20 & 2.84$\pm$0.26 & 9.27$\pm$0.17 \\ Pismis 3 & 0.006 & 0.33$\pm$0.02 & 1.06$\pm$0.06 & 3.2$\pm$0.2 & 11.19$\pm$0.11 & 1.73$\pm$0.09 & 7.77$\pm$0.03 \\ NGC 2660 & 0.019 & 0.13$\pm$0.03 & 0.42$\pm$0.10 & 1.5$\pm$0.3 & 11.89$\pm$0.17 & 2.39$\pm$0.19 & 7.76$\pm$0.06 \\ NGC 3680 & 0.019 & 0.05$\pm$0.01 & 0.16$\pm$0.03 & 1.5$\pm$0.2 & 10.16$\pm$0.10 &
1.08$\pm$0.05 & 7.00$\pm$0.02 \\ Ru 96 & 0.019 & 0.07$\pm$0.01 & 0.22$\pm$0.03 & 1.0$\pm$0.1 & 12.01$\pm$0.25 & 2.52$\pm$0.29 & 6.53$\pm$0.15 \\ Ru 105 & 0.019 & 0.05$\pm$0.01 & 0.16$\pm$0.03 & 1.0$\pm$0.4 & 11.56$\pm$0.20 & 2.05$\pm$0.19 & 6.41$\pm$0.10 \\ Trumpler 20 & 0.019 & 0.10$\pm$0.03 & 0.32$\pm$0.10 & 1.5$\pm$0.5 & 12.52$\pm$0.31 & 3.20$\pm$0.46 & 6.19$\pm$0.27 \\ Pismis 19 & 0.019 & 0.41$\pm$0.03 & 1.31$\pm$0.10 & 0.8$\pm$0.1 & 11.42$\pm$0.38 & 1.92$\pm$0.34 & 6.02$\pm$0.24 \\ NGC 6134 & 0.019 & 0.10$\pm$0.01 & 0.32$\pm$0.03 & 1.5$\pm$0.1 & 10.22$\pm$0.12 & 1.11$\pm$0.06 & 6.23$\pm$0.06 \\ IC 4651 & 0.019 & 0.02$\pm$0.00 & 0.06$\pm$0.02 & 2.5$\pm$0.3 & 9.64$\pm$0.20 & 0.85$\pm$0.08 & 6.44$\pm$0.07 \\ NGC 6802 & 0.019 & 0.23$\pm$0.03 & 0.74$\pm$0.10 & 0.9$\pm$0.1 & 11.77$\pm$0.31 & 2.25$\pm$0.32 & 6.22$\pm$0.19 \\ NGC 6819 & 0.019 & 0.02$\pm$0.00 & 0.06$\pm$0.02 & 2.5$\pm$0.5 & 11.84$\pm$0.15 & 2.34$\pm$0.16 & 6.96$\pm$0.06 \\ Be 89 & 0.019 & 0.23$\pm$0.02 & 0.74$\pm$0.06 & 2.0$\pm$0.5 & 12.37$\pm$0.21 & 2.97$\pm$0.28 & 7.47$\pm$0.11 \\ NGC 6939 & 0.019 & 0.12$\pm$0.03 & 0.38$\pm$0.10 & 2.0$\pm$0.3 & 11.27$\pm$0.31 & 1.79$\pm$0.26 & 7.61$\pm$0.06 \\ NGC 7142 & 0.019 & 0.13$\pm$0.03 & 0.42$\pm$0.10 & 2.5$\pm$0.3 & 12.04$\pm$0.22 & 2.56$\pm$0.25 & 8.27$\pm$0.10 \\ NGC 7789 & 0.0105 & 0.08$\pm$0.02 & 0.26$\pm$0.06 & 1.8$\pm$0.2 & 11.23$\pm$0.21 & 1.76$\pm$0.17 & 8.13$\pm$0.08 \\ \hline \end{tabular} \end{table*} \section{Astrophysical parameters} We have derived the fundamental parameters of the 40 OCs using the decontaminated $(J, J-H)$ CMDs (see Figs.~S5-S9 in the supplementary material) eye-fitted with Padova isochrones \citep[hereafter M08]{mar08}. Since the spectroscopic metal abundances [Fe/H]$_{spec}$ are only available for 21 (Col.~7, Table 4) out of the 40 OCs, we have considered the abundances $Z= +0.019$~([Fe/H]=0), $Z= +0.0105$~([Fe/H]=$-$0.25), and $Z= +0.006$~([Fe/H]=$-$0.50), in the sense that the OCs need to be uniformly and homogeneously analysed. M08 isochrones for the three Z abundances were fitted to the $(J, J-H)$ CMDs of each of the 40 OCs. The most appropriate $Z$ solution was selected by eye from the fits to the CMDs. Accordingly, the M08 isochrones of $Z= +0.019$ for 29 OCs, $Z= +0.0105$ for six OCs, and $Z= +0.006$ for five OCs provided good fits for reddening, age and distance modulus. As an example, such $(J, J-H)$ CMDs are displayed in Figs.~6(a)-(c) for Pismis~19, for the three Z abundances. The shaded areas in the panels are the colour-magnitude filters, which follow the distribution of the decontaminated star sequences in the CMDs; stars comprised in the shaded area are considered probable members. These filters are wide enough to accommodate the colour distributions of main-sequence and evolved stars of the clusters, allowing for 1 $\sigma$ photometric uncertainties. The fitted 0.8 Gyr isochrone of $Z= +0.019$ for Pismis~19 in panel~(a) provides a good solution. As can be seen from Fig.~6(a), the M08 isochrone fits well the main sequence (MS), turn$-$off (TO) and Red Giant/Red Clump (RG/RC) regions in the CMD of Pismis~19. Due to the presence of binaries, the M08 isochrones have been shifted to the left of and below the main sequence in Figs.~6(a)$-$(c); all CMDs of the 40 OCs are presented in Figs.~S5$-$S9 as supplementary material. The reddening, distance modulus (i.e. distance), age and the appropriate $Z$ abundance were derived in this way for all 40 OCs of our sample.
These astrophysical parameters, together with their uncertainties, are presented in Table 3. However, the reddening is degenerate with the metallicity. For this reason, we have determined E(B-V), d~(kpc) and Age~(Gyr) of the 21 OCs (Table 4) for the three Z abundances. The E(B-V) and d~(kpc) values (Table 4) of the three Z abundances are reasonably close to our original ones (Table 3) within the uncertainties. The age values (Col.~6, Table 4) derived from the three Z values are the same. As stated by \cite{bon09a}, any metallicity in the range $+0.006\leq Z \leq+0.019$ would produce acceptable solutions for the astrophysical parameters, due to the filters of 2MASS. Our derived ages are thus robust enough to allow inferences about cluster evolution. As an illustration, NGC 2286 (Fig.~7) is presented as an example. The 0.8 Gyr (blue line), 1 Gyr (solid black line), and 1.2 Gyr (red line) isochrones of $Z= +0.019$ for NGC~2286 are fitted to the CMD of the cluster. As is seen from Fig.~7, the 1$\pm$0.2 Gyr isochrone (solid line) fits well the main sequence (MS), turn$-$off (TO) and Red Giant/Red Clump (RG/RC) regions in the CMD of the cluster. The uncertainties in our derived ages of the 40 OCs are at the level of $\pm$0.02$-$0.5 Gyr (Table 3), except for Be~36 ($\pm$1 Gyr). JHK photometry is insensitive to metallicity, in contrast to optical photometry, where the blue (B) and principally the ultraviolet (U) bands are sensitive to the photospheric metal lines, reaching the maximum blanketing effect by SpT~F5; for types later than SpT~=~G2 it becomes too fuzzy to disentangle the effect from the molecular lines. On the other hand, metallicity significantly affects the distance and the age of a cluster, i.e. the lower the Z, the shorter the distance and the larger the age. \begin{figure} \centering \includegraphics*[width = 4.5cm, height = 5.5cm]{Figure8.jpg} \includegraphics*[width = 4.5cm, height = 5.5cm]{Figure9.jpg} \includegraphics*[width = 4.5cm, height = 5.5cm]{Figure10.jpg} \caption {Observed decontaminated $J\times(J-H)$ CMDs extracted from the region of $R=11'.03$ for Pismis~19. The solid lines in the panels represent the fitted 0.8 Gyr Padova isochrones for Z$=$+0.019 (solar), Z$=$+0.0105, and Z$=$+0.006, respectively. The CMD filter used to isolate cluster MS/evolved stars is shown with the shaded area.} \end{figure} \begin{figure} \centering \includegraphics*[width = 4.5cm, height = 5.5cm]{isochrone.jpg} \caption {Observed decontaminated $J\times(J-H)$ CMD of NGC~2286. The lines represent the fitted 0.8 Gyr (blue), 1 Gyr (solid black), and 1.2 Gyr (red) isochrones of $Z= +0.019$. The CMD filter used to isolate cluster MS/evolved stars is shown with the shaded area.} \end{figure} \begin{table*} \renewcommand\thetable{4} \tiny \centering \caption{Z, E(B-V), d~(kpc), Age~(Gyr) values of 21 OCs with [Fe/H]$_{spec}$. E(B-V) values are listed in Col.~3 for the three Z abundances of the 21 OCs in our sample. [Fe/H]$_{iso}$ values in Col.~4 are converted from the expression $Z = Z_\odot \cdot 10^{[Fe/H]}$. The solar abundance value is taken as $Z_\odot = +0.019$. Ages are given in Col.~6. [Fe/H]$_{spec}$ values, together with their literature sources, are listed in Cols.~7$-$8.
} \renewcommand{\tabcolsep}{1.1mm} \renewcommand{\arraystretch}{1.1} \tiny \begin{tabular}{cccccccc} \hline Cluster & Z & E(B-V) & [Fe/H]$_{iso}$ & d (kpc) & Age(Gyr) & [Fe/H]$_{spec}$ & Reference \\ \hline \multicolumn{ 1}{c}{Trumpler 5} &0.019 &0.54$\pm$0.13 & &3.13$\pm$0.39&3 & & \\ \multicolumn{ 1}{c}{} &0.0105 &0.70$\pm$0.16 & &2.87$\pm$0.38&3 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.77$\pm$0.16 &-0.50&2.74$\pm$0.36&3 &-0.36& Carrera et al. 2007 \\ \multicolumn{ 1}{c}{NGC 2158} &0.019 &0.16$\pm$0.03 & 0 &4.39$\pm$0.21&2.5&-0.28& Jacobson et al. 2011 \\ \multicolumn{ 1}{c}{} &0.0105 &0.29$\pm$0.03 & &3.98$\pm$0.19&2.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.38$\pm$0.03 & &3.75$\pm$0.18&2.5& & \\ \multicolumn{ 1}{c}{Col 110} &0.019 &0.19$\pm$0.03 & 0 &2.44$\pm$0.17&3 &-0.01& Carrera et al. 2007 \\ \multicolumn{ 1}{c}{} &0.0105 &0.29$\pm$0.06 & &2.29$\pm$0.17&3 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.42$\pm$0.06 & &2.03$\pm$0.15&3 & & \\ \multicolumn{ 1}{c}{NGC 6134} &0.019 &0.32$\pm$0.03 & 0 &1.11$\pm$0.06&1.5&0.12 & Smiljanic et al. 2009 \\ \multicolumn{ 1}{c}{} &0.0105 &0.48$\pm$0.06 & &0.97$\pm$0.07&1.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.64$\pm$0.10 & &0.85$\pm$0.07&1.5& & \\ \multicolumn{ 1}{c}{NGC 2425} &0.019 &0.32$\pm$0.06 & 0 &2.85$\pm$0.34&3.2&-0.15& Jacobson et al. 2011 \\ \multicolumn{ 1}{c}{} &0.0105 &0.42$\pm$0.10 & &2.74$\pm$0.33&3.2& & \\ \multicolumn{ 1}{c}{} &0.006 &0.51$\pm$0.16 & &2.64$\pm$0.35&3.2& & \\ \multicolumn{ 1}{c}{Trumpler 20} &0.019 &0.32$\pm$0.10 & 0 &3.20$\pm$0.46&1.5& 0.09& Carraro et al. 2014 \\ \multicolumn{ 1}{c}{} &0.0105 &0.48$\pm$0.10 & &2.80$\pm$0.40&1.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.64$\pm$0.10 & &2.45$\pm$0.35&1.5& & \\ \multicolumn{ 1}{c}{NGC 2112} &0.019 &0.64$\pm$0.13 & 0 &1.07$\pm$0.11&2 &-0.10& Brown et al. 1996 \\ \multicolumn{ 1}{c}{} &0.0105 &0.77$\pm$0.06 & &0.87$\pm$0.12&2 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.90$\pm$0.06 & &0.77$\pm$0.09&2 & & \\ \multicolumn{ 1}{c}{Mel 71} &0.019 &0.03$\pm$0.02 & 0 &2.03$\pm$0.14&1.5&-0.30& Brown et al. 1996 \\ \multicolumn{ 1}{c}{} &0.0105 &0.16$\pm$0.06 & &1.93$\pm$0.14&1.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.29$\pm$0.06 & &1.92$\pm$0.17&1.5& & \\ \multicolumn{ 1}{c}{NGC 7789} &0.019 &0.19$\pm$0.03 & &1.81$\pm$0.17&1.8& & \\ \multicolumn{ 1}{c}{} &0.0105 &0.26$\pm$0.06 &-0.25&1.76$\pm$0.17&1.8& 0.02& Jacobson et al. 2011 \\ \multicolumn{ 1}{c}{} &0.006 &0.32$\pm$0.06 & &1.72$\pm$0.19&1.8& & \\ \multicolumn{ 1}{c}{NGC 3680} &0.019 &0.16$\pm$0.03 & 0 &1.08$\pm$0.05&1.5& 0.04& Smiljanic et al. 2009 \\ \multicolumn{ 1}{c}{} &0.0105 &0.32$\pm$0.06 & &0.95$\pm$0.06&1.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.48$\pm$0.10 & &0.85$\pm$0.07&1.5& & \\ \multicolumn{ 1}{c}{IC 4651} &0.019 &0.06$\pm$0.02 & 0 &0.85$\pm$0.08&2.5& 0.10& Pasquini et al.
2004 \\ \multicolumn{ 1}{c}{} &0.0105 &0.16$\pm$0.03 & &0.78$\pm$0.08&2.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.26$\pm$0.06 & &0.72$\pm$0.08&2.5& & \\ \multicolumn{ 1}{c}{NGC 6819} &0.019 &0.06$\pm$0.02 & 0 &2.34$\pm$0.16&2.5& 0.09& Bragaglia et al. 2001 \\ \multicolumn{ 1}{c}{} &0.0105 &0.16$\pm$0.03 & &2.15$\pm$0.18&2.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.32$\pm$0.10 & &1.84$\pm$0.15&2.5& & \\ \multicolumn{ 1}{c}{NGC 1798} &0.019 &0.32$\pm$0.06 & &5.69$\pm$0.54&1.5& & \\ \multicolumn{ 1}{c}{} &0.0105 &0.51$\pm$0.13 &-0.25&5.03$\pm$0.59&1.5&-0.12& Carrera 2012 \\ \multicolumn{ 1}{c}{} &0.006 &0.7$\pm$0.16 & &4.45$\pm$0.58&1.5& & \\ \multicolumn{ 1}{c}{NGC 2243} &0.019 &0.005$\pm$0.005& &5.01$\pm$0.23&2 & & \\ \multicolumn{ 1}{c}{} &0.0105 &0.03$\pm$0.005 &-0.25&4.73$\pm$0.26&2 &-0.48& Gratton et al. 1994 \\ \multicolumn{ 1}{c}{} &0.006 &0.16$\pm$0.03 & &4.49$\pm$0.25&2 & & \\ \multicolumn{ 1}{c}{NGC 6939} &0.019 &0.38$\pm$0.10 & 0 &1.79$\pm$0.26&2 & 0& Jacobson et al. 2007 \\ \multicolumn{ 1}{c}{} &0.0105 &0.45$\pm$0.13 & &1.64$\pm$0.26&2 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.58$\pm$0.19 & &1.50$\pm$0.26&2 & & \\ \multicolumn{ 1}{c}{NGC 7142} &0.019 &0.42$\pm$0.10 & 0 &2.56$\pm$0.25&2.5& 0.08& Jacobson et al. 2008 \\ \multicolumn{ 1}{c}{} &0.0105 &0.54$\pm$0.13 & &2.32$\pm$0.26&2.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.67$\pm$0.16 & &2.11$\pm$0.25&2.5& & \\ \multicolumn{ 1}{c}{NGC 2194} &0.019 &0.42$\pm$0.13 & 0 &2.37$\pm$0.30&0.8&-0.08& Jacobson et al. 2011 \\ \multicolumn{ 1}{c}{} &0.0105 &0.51$\pm$0.16 & &2.15$\pm$0.26&0.8& & \\ \multicolumn{ 1}{c}{} &0.006 &0.61$\pm$0.16 & &1.97$\pm$0.24&0.8& & \\ \multicolumn{ 1}{c}{NGC 2660} &0.019 &0.42$\pm$0.10 & 0 &2.39$\pm$0.19&1.5& 0.04& Bragaglia et al. 2008 \\ \multicolumn{ 1}{c}{} &0.0105 &0.51$\pm$0.13 & &2.20$\pm$0.19&1.5& & \\ \multicolumn{ 1}{c}{} &0.006 &0.67$\pm$0.16 & &1.97$\pm$0.21&1.5& & \\ \multicolumn{ 1}{c}{Be 17} &0.019 &0.64$\pm$0.13 & &3.02$\pm$0.38&5 & & \\ \multicolumn{ 1}{c}{} &0.0105 &0.74$\pm$0.13 & &2.71$\pm$0.40&5 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.83$\pm$0.13 &-0.50&2.43$\pm$0.33&5 &-0.10& Friel et al. 2005 \\ \multicolumn{ 1}{c}{Tombaugh 2} &0.019 &1.12$\pm$0.16 & 0 &1.22$\pm$0.14&3 &-0.45& Brown et al. 1996 \\ \multicolumn{ 1}{c}{} &0.0105 &1.22$\pm$0.19 & &1.18$\pm$0.14&3 & & \\ \multicolumn{ 1}{c}{} &0.006 &1.31$\pm$0.19 & &1.13$\pm$0.18&3 & & \\ \multicolumn{ 1}{c}{NGC 2506} &0.019 &0.02$\pm$0.005 & &3.07$\pm$0.28&2 & & \\ \multicolumn{ 1}{c}{} &0.0105 &0.06$\pm$0.03 & &2.94$\pm$0.27&2 & & \\ \multicolumn{ 1}{c}{} &0.006 &0.10$\pm$0.03 &-0.50&2.84$\pm$0.26&2 &-0.20& Carretta et al. 2004 \\ \hline \end{tabular} \end{table*} The reddenings $E(J-H)$ (Col.~3 in Table 3) of the 40 OCs were derived from the CMDs. These are converted to $E(B-V)$ (Col.~4 in Table 3) with the extinction law $A_{J}/{A_{V}}=0.276$, $A_{H}/{A_{V}}=0.176$, $A_{K_{s}}/{A_{V}}=0.118$, $A_{J}=2.76\times{E(J-H)}$, and $E(J-H)=0.33 \times{E(B-V)}$ \citep{dut02a}, assuming a constant total-to-selective absorption ratio $R_{V}=3.1$. The distance moduli of the clusters have been derived and are listed in Col.~6 of Table 3. The estimated heliocentric distances $d$~(kpc) and the corresponding galactocentric distances $R_{GC}$~(kpc) are given in Cols.~7$-$8, respectively. When estimating the $R_{GC}$ distances, we adopted the galactocentric distance of the Sun as $R_{\odot}=7.2\pm 0.3$ kpc from \cite{bic06b}.
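These conversions are simple to script. The following sketch (Python) reproduces the Table~3 entries for NGC 436 from its fitted $E(J-H)$, distance modulus and Galactic coordinates. Two points are our own assumptions rather than statements from the text: we treat the tabulated $(m-M)_{J}$ as the absorption-corrected distance modulus (this choice reproduces the tabulated distances), and we compute $E(B-V)$ with the ratio implied by the quoted extinction coefficients, $E(J-H)=(0.276-0.176)\times 3.1\,E(B-V)\simeq 0.31\,E(B-V)$, which matches the tabulated values slightly better than the rounded factor 0.33; the plane-projected law of cosines used for $R_{GC}$ is likewise our reconstruction:

\begin{verbatim}
import math

R_SUN = 7.2  # kpc, adopted solar galactocentric distance

def cluster_distances(e_jh, dm_j, l_deg, b_deg):
    """E(B-V), A_J, heliocentric d and galactocentric R_GC (kpc)."""
    ebv = e_jh / 0.31                 # E(J-H) ~ 0.31 E(B-V) for R_V = 3.1
    a_j = 2.76 * e_jh                 # A_J = 2.76 E(J-H)
    d = 10 ** ((dm_j + 5.0) / 5.0) / 1e3   # kpc; dm_j: true modulus
    l, b = math.radians(l_deg), math.radians(b_deg)
    d_plane = d * math.cos(b)         # distance projected on the disc
    r_gc = math.sqrt(R_SUN**2 + d_plane**2
                     - 2.0 * R_SUN * d_plane * math.cos(l))
    return ebv, a_j, d, r_gc

# NGC 436: E(J-H) = 0.13, (m-M)_J = 12.54, l = 126.11, b = -3.91
print(cluster_distances(0.13, 12.54, 126.11, -3.91))
# -> E(B-V) ~ 0.42, A_J ~ 0.36, d ~ 3.22 kpc, R_GC ~ 9.5 kpc (cf. Table 3)
\end{verbatim}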
The errors in E(J-H), and hence in the colour excess E(B-V), the distance moduli and the ages given in Table~3, have been estimated as follows: \begin{enumerate} \item The uncertainties of E(J-H) were estimated by moving the M08 isochrones up and down, back and forth, and along the direction of the reddening vector in the colour-magnitude diagram $(J,J-H)$ until a good fit with the observed MS, TO, subgiant branch (SG), and RG/RC sequences was achieved. \item The uncertainties of the distance moduli in Table 3 stem to a lesser degree from the photometric errors and from fitting the appropriate isochrone to the observational data points in the CMDs. A larger uncertainty, up to 2 mag in the distance moduli, originates from the assumed metallicity: a larger Z places the OCs at larger distances, whereas metal-poor solutions place them nearer. \item The uncertainties in the age estimates follow those of the distance moduli. Again, metal-rich solutions yield younger ages than metal-poor ones. \end{enumerate} The precision of the parameters depends on the scatter of the data points in the CMDs. The uncertainties of the distance moduli in Table 3 stem from fitting the appropriate isochrone to the observations in the CMDs, taking into account the uncertainties of the photometric data. The uncertainties of the distance moduli of the 40 OCs are at the level of 0.10$-$0.31 mag. The uncertainty of the age is obtained from fitting the M08 isochrone with the appropriate heavy-element abundance to the CMDs; in this regard, it depends on the uncertainties of E(J-H) and of the distance moduli. The uncertainties of the ages of the 40 OCs in Table~3 fall in the range of 0.02$-$1.0 Gyr. The relations of $E(B$--$V)$ versus Galactic longitude $l^{\circ}$ and $E(B$--$V)$ versus Galactic latitude $b^{\circ}$, as a function of the cluster distances, are displayed in Figs.~8(a) and (b), respectively. In Fig.~8, open and filled circles show the $d=[0,~2.1]$ kpc and $d=(2.1,~5.42]$ kpc subsets, respectively. The reddenings of the OCs in the anticentre directions span $0.03 \leq E(B-V) \leq 1.31$. From panel~(b), the bulk of the 40 clusters lies within $|b|\leq 5^{\circ}$. There are two OCs with $E(B-V)>0.50$ in the Galactic centre directions. \begin{figure} \centering \includegraphics*[width = 7cm, height = 10cm]{Figure11.eps} \caption {$E(B$-$V)$ versus $l^{\circ}$ (panel a) and versus $b^{\circ}$ (panel b) for the 40 OCs. Open and filled circles show clusters with $d=[0, 2.1]$ kpc and $d=(2.1, 5.42]$ kpc, respectively.} \end{figure} The reddenings of the 40 OCs have been compared to those of the dust maps of \cite[hereafter SFD]{sch98}, which are based on the COBE/DIRBE and IRAS/ISSA maps. These maps take into account the dust absorption $E(B-V)_{\infty}$ all the way to infinity. The relations of $E(B$--$V)_{\rm SFD,\infty}$ versus $E(B$--$V)$ and $E(B$--$V)_{\rm SFD}$ versus $E(B$--$V)$ for the 40 OCs are displayed in Figs.~9(a) and (b), respectively. As is seen from Fig.~9(a), the values of $E(B$--$V)_{\rm SFD,\infty}$ lie in the range $0.07 \leq E(B$--$V)_{\rm SFD,\infty}\leq 25.81$. For seven clusters, the differences between the two reddenings are $\Delta E(B-V)\leq0.10$ mag, while for the other 33 OCs the differences are larger than 0.10 mag. The equation given by \cite{bon} has been adopted to correct the SFD reddening estimates.
Then the final reddening, $E(B$--$V)_{\rm SFD}$, for a given star is reduced compared to the total reddening $E(B$--$V)(\ell, b)_\infty$ by a factor $\lbrace1-\exp[-d \sin |b|/H]\rbrace$, given by \cite{bs80}, where $b$, $d$, and $H$ are the Galactic latitude (Col.~9 of Table~2), the distance from the observer to the object (Col.~7 of Table~3), and the scale height of the dust layer in the Galaxy, respectively. The value of $H=125$ pc is adopted \citep{bon}. The reduced final reddenings have been compared with those of the 40 OCs in Fig.~9(b). The reduced $E(B$--$V)$ values fall in the range of $0.07 \leq E(B-V) \leq 1.261$. \begin{figure} \centering \includegraphics*[width = 7cm, height = 10cm]{Figure12.eps} \caption {Relations of E(B-V)$_{cluster}$-E(B-V)$_{SFD, \infty}$ (panel a) and E(B-V)$_{cluster}$-E(B-V)$_{SFD, d}$ (panel b), respectively.} \end{figure} There are significant differences for 27 OCs between the two $E(B$--$V)$ colour excess values. For the remaining 13 OCs, the $E(B$--$V)$ values are quite close to those of SFD. Note that the SFD maps are not reliable in regions with $|b|<5^{\circ}$ due to contaminating sources and uncertainties in the dust temperatures \citep{gon12}. Therefore, since the SFD values result from a line-of-sight integral through the entire Milky Way at low spatial resolution, it is quite normal to obtain different reddening values for these relatively close ($\sim 1$~kpc) star clusters. \begin{table*} \renewcommand\thetable{5} \centering \caption{Structural parameters of the 40 OCs. Col.~2 gives the arcmin-to-parsec scale. $\sigma_{0K}$ in Cols.~3 and 7 is the central density of stars. $\sigma_{bg}$ in Cols.~4 and 8 is the residual background density. R$_{core}$ in Cols.~5 and 9 and R$_{RDP}$ in Cols.~6 and 10 are the core and cluster radii, respectively. The symbols $*pc^{-2}$ (Cols.~3$-$4) and $*'^{-2}$ (Cols.~7$-$8) mean $stars~pc^{-2}$ and $stars~arcmin^{-2}$, respectively. $\Delta$ R($'$) in Col.~11 denotes the comparison field ring. Col.~12 gives the correlation coefficient.} \tiny \begin{tabular}{lccccccccccc} \hline Cluster & (1$'$) pc & $\sigma_{0K}$ (*pc$^{-2}$) & $\sigma_{bg}$ (*pc$^{-2}$) & R$_{core}$(pc) & R$_{RDP}$ (pc) & $\sigma_{0K}$ (*'$^{-2}$) & $\sigma_{bg}$ (*'$^{-2}$) & R$_{core}$ ($'$) & R$_{RDP}$ ($'$) & $\Delta$ R($'$) & C.C.
\\ \hline NGC 436 &0.94&10.94$\pm$2.97&0.71$\pm$0.03&1.04$\pm$0.20&6.97$\pm$0.26& 9.60$\pm$2.60& 0.62$\pm$0.03&1.11$\pm$0.22&7.44$\pm$0.27&22-32&0.93\\ King 5 &0.59&24.79$\pm$5.28&2.70$\pm$0.06&0.95$\pm$0.15&5.62$\pm$0.18&8.65$\pm$1.84&0.94$\pm$0.02&1.60$\pm$0.25&9.52$\pm$0.30&20-30&0.94\\ NGC 1513 &0.34&30.60$\pm$4.73&17.44$\pm$0.44&1.65$\pm$0.26&6.51$\pm$0.20&3.61$\pm$0.55&2.05$\pm$0.05&4.80$\pm$0.75&16.99$\pm$0.58&42-57&0.94\\ Be 15 &0.90&25.70$\pm$11.33&0.97$\pm$0.02&0.35$\pm$0.10&5.04$\pm$0.30&20.89$\pm$9.20&0.79$\pm$0.02&0.39$\pm$0.11&5.59$\pm$0.33&30-40&0.86\\ NGC 1798 &1.46&6.64$\pm$1.90&0.58$\pm$0.01&1.10$\pm$0.22&9.11$\pm$0.48&18.20$\pm$5.22&1.59$\pm$0.03&0.67$\pm$0.13&5.51$\pm$0.29&50-60&0.92\\ Be 17 &0.71&7.25$\pm$1.38&3.26$\pm$0.10&2.10$\pm$0.39&5.29$\pm$0.20&3.62$\pm$0.69&1.63$\pm$0.05&2.98$\pm$0.55&7.48$\pm$0.29&42-52&0.94\\ NGC 1907 &0.57&17.03$\pm$4.16&4.32$\pm$0.19&1.28$\pm$0.27&4.26$\pm$0.16&5.47$\pm$1.34&1.39$\pm$0.06&2.26$\pm$0.47&7.50$\pm$0.28&50-60&0.91\\ NGC 2112 &0.31&24.69$\pm$3.51&6.96$\pm$0.21&1.64$\pm$0.21&5.92$\pm$0.19&2.39$\pm$0.34&0.67$\pm$0.02&5.28$\pm$0.68&19.01$\pm$0.61&50-60&0.95\\ Koposov 12 &0.60&8.96$\pm$3.74&1.27$\pm$0.05&0.87$\pm$0.27&3.82$\pm$0.20&3.18$\pm$1.33&0.45$\pm$0.02&1.46$\pm$0.46&6.41$\pm$0.33&15-25&0.83\\ NGC 2158 &1.27&28.85$\pm$4.67&1.45$\pm$0.06&1.74$\pm$0.20&14.03$\pm$0.71&47.05$\pm$7.61&2.37$\pm$0.10&1.36$\pm$0.16&10.99$\pm$0.56&45-60&0.97\\ Koposov 53 &1.18&7.12$\pm$0.23&0.48$\pm$0.05&0.66$\pm$0.04&4.18$\pm$0.33&10.04$\pm$0.33&0.67$\pm$0.08&0.56$\pm$0.03&3.52$\pm$0.28&25-35&0.99\\ NGC 2194 &0.69&21.00$\pm$3.31&3.30$\pm$0.15&1.66$\pm$0.22&6.55$\pm$0.21&9.98$\pm$1.58&1.57$\pm$0.07&2.41$\pm$0.32&9.5$\pm$0.31&40-50&0.96\\ NGC 2192 &1.22&5.26$\pm$1.59&0.41$\pm$0.01&1.11$\pm$0.24&5.47$\pm$0.34&7.87$\pm$2.42&0.62$\pm$0.02&0.91$\pm$0.19&4.47$\pm$0.28&45-55&0.90\\ NGC 2243 &1.38&13.35$\pm$4.18&0.22$\pm$0.01&0.89$\pm$0.18&12.94$\pm$0.37&25.28$\pm$7.95&0.42$\pm$0.02&0.65$\pm$0.13&9.40$\pm$0.27&20-30&0.93\\ Trumpler 5 &0.79&13.62$\pm$1.72&3.37$\pm$0.09&3.86$\pm$0.43&15.18$\pm$0.47&8.65$\pm$1.09&2.14$\pm$0.06&4.85$\pm$0.54&19.05$\pm$0.58&27-37&0.97\\ Col 110 &0.71&5.32$\pm$0.51&2.68$\pm$0.05&6.25$\pm$0.63&12.12$\pm$0.40&2.68$\pm$0.26&1.35$\pm$0.02&8.79$\pm$0.88&17.07$\pm$0.57&40-50&0.97\\ NGC 2262 &0.86&22.03$\pm$5.17&1.93$\pm$0.05&0.85$\pm$0.14&6.37$\pm$0.24&16.32$\pm$3.81&1.43$\pm$0.04&0.99$\pm$0.17&7.40$\pm$0.28&30-45&0.94\\ NGC 2286 &0.67&6.81$\pm$2.29&2.66$\pm$0.12&1.59$\pm$0.48&6.39$\pm$0.19&3.07$\pm$1.03&1.20$\pm$0.05&2.37$\pm$0.72&9.51$\pm$0.29&27-37&0.85\\ NGC 2309 &0.88&12.41$\pm$5.96&0.64$\pm$0.05&0.84$\pm$0.28&7.50$\pm$0.25&9.64$\pm$4.63&0.50$\pm$0.04&0.95$\pm$0.32&8.51$\pm$0.29&50-60&0.84\\ Tombaugh 2 &0.35&134.29$\pm$83.62&3.67$\pm$0.25&0.17$\pm$0.07&1.92$\pm$0.11&16.91$\pm$10.5&0.46$\pm$0.03&0.47$\pm$0.18&5.42$\pm$0.31&20-25&0.98\\ Be 36 &1.57&3.30$\pm$1.76&0.51$\pm$0.02&1.32$\pm$0.51&10.23$\pm$0.40&8.21$\pm$4.39&1.27$\pm$0.04&0.83$\pm$0.32&6.50$\pm$0.25&25-40&0.79\\ Haffner 8 &0.72&5.31$\pm$2.85&3.89$\pm$0.08&1.47$\pm$0.68&6.93$\pm$0.21&2.79$\pm$1.50&2.04$\pm$0.04&2.03$\pm$0.94&9.56$\pm$0.29&45-60&0.69\\ Mel 71 &0.59&22.61$\pm$3.97&4.39$\pm$0.09&1.27$\pm$0.18&5.00$\pm$0.17&7.89$\pm$1.39&1.53$\pm$0.03&2.16$\pm$0.30&8.46$\pm$0.29&45-60&0.95\\ NGC 2425 &0.83&11.00$\pm$2.04&2.28$\pm$0.04&1.16$\pm$0.17&5.43$\pm$0.22&7.55$\pm$1.41&1.57$\pm$0.03&1.40$\pm$0.20&6.54$\pm$0.26&42-47&0.95\\ NGC 2506 &0.82&18.56$\pm$3.13&1.07$\pm$0.04&1.65$\pm$0.20&10.76$\pm$0.48&12.67$\pm$2.14&0.73$\pm$0.03&2.00$\pm$0.24&13.02$\pm$0.58&40-50&0.96\\ Pismis 3 
&0.50&26.09$\pm$2.91&6.53$\pm$0.11&2.10$\pm$0.20&8.58$\pm$0.29&6.61$\pm$0.74&1.65$\pm$0.03&4.17$\pm$0.40&17.05$\pm$0.57&37-47&0.98\\
NGC 2660 &0.69&90.44$\pm$24.32&3.84$\pm$0.09&0.39$\pm$0.07&5.27$\pm$0.17&43.71$\pm$11.74&1.86$\pm$0.04&0.55$\pm$0.10&7.58$\pm$0.25&25-35&0.94\\
NGC 3680 &0.31&19.37$\pm$7.61&1.84$\pm$0.05&0.47$\pm$0.13&2.98$\pm$0.10&1.90$\pm$0.75&0.18$\pm$0.005&1.49$\pm$0.41&9.49$\pm$0.32&40-50&0.85\\
Ru 96 &0.73&6.52$\pm$2.75&5.95$\pm$0.13&1.43$\pm$0.53&2.60$\pm$0.21&3.54$\pm$1.45&3.20$\pm$0.07&1.94$\pm$0.73&3.54$\pm$0.29&50-60&0.77\\
Ru 105 &0.56&2.63$\pm$1.81&1.52$\pm$0.05&1.35$\pm$0.78&3.83$\pm$0.21&0.93$\pm$0.64&0.54$\pm$0.02&2.27$\pm$1.30&6.42$\pm$0.34&47-57&0.64\\
Trumpler 20 &0.93&10.22$\pm$1.22&4.73$\pm$0.12&3.12$\pm$0.38&13.98$\pm$0.55&8.86$\pm$1.06&4.10$\pm$0.10&3.36$\pm$0.41&15.02$\pm$0.59&40-50&0.97\\
Pismis 19 &0.56&104.06$\pm$15.47&13.68$\pm$0.18&0.54$\pm$0.06&6.16$\pm$0.33&32.46$\pm$4.82&4.27$\pm$0.05& 0.96$\pm$0.10&11.03$\pm$0.59&47-57&0.97\\
NGC 6134 &0.32&72.12$\pm$24.74&7.82$\pm$0.18&0.45$\pm$0.11&3.08$\pm$0.10&7.52$\pm$2.59&0.81$\pm$0.02&1.39$\pm$0.34&9.53$\pm$0.30&13-22&0.86\\
IC 4651 &0.25&38.57$\pm$7.88&12.50$\pm$0.99&1.02$\pm$0.22&2.35$\pm$0.08&2.36$\pm$0.48&0.76$\pm$0.06&4.13$\pm$0.91&9.49$\pm$0.31&50-60&0.92\\
NGC 6802 &0.65&34.48$\pm$9.18&7.90$\pm$0.27&1.03$\pm$0.18&4.24$\pm$0.18&14.72$\pm$3.63&3.38$\pm$0.12&1.58$\pm$0.28&6.49$\pm$0.38&45-60&0.92\\
NGC 6819 &0.68&38.22$\pm$4.18&3.97$\pm$0.08&1.50$\pm$0.12&12.92$\pm$0.40&17.15$\pm$1.95&1.84$\pm$0.04&2.20$\pm$0.18&18.98$\pm$0.59&40-50&0.98\\
Be 89 &0.88&7.02$\pm$1.35&3.80$\pm$0.12&2.75$\pm$0.53&7.48$\pm$0.26&5.42$\pm$1.04& 2.90$\pm$0.09&3.10$\pm$0.60&8.50$\pm$0.30&15-20&0.93\\
NGC 6939 &0.52&33.06$\pm$5.19&3.35$\pm$0.17&1.16$\pm$0.15&4.92$\pm$0.18&8.96$\pm$1.41&0.91$\pm$0.05&2.24$\pm$0.28&9.46$\pm$0.28&35-45&0.97\\
NGC 7142 &0.74&10.15$\pm$1.87&1.72$\pm$0.10&1.98$\pm$0.32&11.19$\pm$0.44&5.63$\pm$1.04&0.95$\pm$0.05&2.65$\pm$0.43&15.02$\pm$0.59&50-60&0.94\\
NGC 7789 &0.51&31.23$\pm$2.22&3.88$\pm$0.04&2.32$\pm$0.13&26.88$\pm$0.74&8.18$\pm$0.58&1.02$\pm$0.01&4.52$\pm$0.25&52.5$\pm$1.44&55-70&0.99\\ \hline \end{tabular} \end{table*}
\section{Structural parameters} We derived the structural parameters of the 40 OCs from the stellar radial density profiles (RDPs). Usually, the RDPs of star clusters can be described by an analytical profile, such as the empirical, single-mass, modified isothermal spheres of \cite{kin66} and \cite{wil75}, and the power law with a core of \cite{els87}. These functions are characterized by different sets of parameters that are related to the cluster structure. Here we adopted the two-parameter function $\sigma(R) = \sigma_{bg} + \sigma_0/(1+(R/R_{core})^2)$, where $\sigma_{bg}$ is the residual background density, $\sigma_0$ the central density of stars, and R$_{core}$ the core radius. Applied to star counts, this function is similar to that used by \cite{kin62} to describe the surface brightness profiles in the central parts of globular clusters. To minimize the number of degrees of freedom in the RDP fits with this King-like profile, $\sigma_{bg}$ was kept fixed (at the value measured in the respective comparison field), while $\sigma_{0}$ and $R_{core}$ were determined by the best profile fit to the data. As a representative of the OC sample, the RDP of Pismis~19 fitted with the King profile is shown in Fig.~10, where the solid line shows the best profile fit.
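A minimal fitting sketch of this procedure is given below (Python with NumPy/SciPy); the radial profile here is synthetic placeholder data, not the measured Pismis~19 star counts, and $\sigma_{bg}$ is held fixed as described above:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

sigma_bg = 4.27  # stars/arcmin^2, fixed to the comparison-field value

def king_like(R, sigma0, Rcore):
    # sigma(R) = sigma_bg + sigma0 / (1 + (R/Rcore)^2)
    return sigma_bg + sigma0 / (1.0 + (R / Rcore) ** 2)

# Synthetic stand-in for a CMD-filtered radial density profile.
R = np.linspace(0.2, 12.0, 25)              # bin centres (arcmin)
rng = np.random.default_rng(1)
dens = king_like(R, 32.0, 1.0) + rng.normal(0.0, 0.5, R.size)

popt, pcov = curve_fit(king_like, R, dens, p0=[10.0, 0.5])
sigma0, Rcore = popt                        # central density, core radius
perr = np.sqrt(np.diag(pcov))               # 1-sigma fit uncertainties
\end{verbatim}
Keeping $\sigma_{bg}$ out of the fitted parameters, as above, avoids the degeneracy between the background level and a shallow profile wing.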
In Fig.~10, the horizontal red bar denotes the stellar background level measured in the comparison field, and the shaded domain shows the $1\sigma$ profile-fit uncertainty. The fitted stellar RDPs of the 40 OCs are given in Figs.~S10$-$S13 as supplementary material. The cluster radius (R$_{RDP}$) is obtained as the distance from the cluster centre where the RDP and the residual background become statistically indistinguishable \citep{bon07a}. R$_{RDP}$ can be taken as an observational truncation radius, whose value depends on the radial distribution of member stars and on the stellar field density. $\Delta R$ denotes the wide external ring used as the stellar comparison field (see also Sect.~3). These structural parameters and their meaning are listed in Table~5. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure13.jpg} \caption {Stellar RDP (open circles) of Pismis~19 built with CMD-filtered photometry. Solid line shows the best-fit King profile. Horizontal red bar: stellar background level measured in the comparison field. Shaded region: $1\sigma$ King fit uncertainty.} \end{figure} From the distributions of R$_{core}$ and R$_{RDP}$, given in Figs.~11(a) and (b), there seem to be two groupings, at R$_{RDP}$$=$7 pc and R$_{core}$$=$1.5 pc, respectively, which are close to the values of 10 pc and 1.5 pc of \cite{buk11}. \begin{figure} \centering \includegraphics*[width = 7cm, height = 11cm]{Figure14.jpg} \caption {Distributions of R$_{RDP}$ (panel a) and R$_{core}$ (panel b) of the 40 OCs, respectively.} \end{figure} \section{Mass and Mass functions} The stellar masses stored in the OCs of our sample have been determined by means of their mass functions (MFs), built for the observed MS mass range, according to \cite{bic06a}. Following the algorithm defined by \cite{bon05}, the luminosity functions from the decontaminated $(J, J-H)$ diagrams of the OCs have been transformed into MFs through the corresponding mass-luminosity relations, derived from the M08 isochrones corresponding to the ages in Col.~5 of Table 3. We determined the overall masses of 26 OCs and the core masses of 24 OCs in our sample. The total mass locked up in stars of these OCs was obtained by considering all stars from the turnoff down to the H-burning mass limit. We do this by directly extrapolating the low-mass MFs down to $0.08M_{\odot}$. Our results are based on the CMD-filtered photometry of the open cluster and offset field stars; the filtering process removes most of the background, leaving a residual contamination. Due to the relatively large sizes of the OCs and the brightness limitation of the 2MASS photometry, we do not have access to the whole stellar mass range of the OCs. We therefore stress that the values we derive should be taken as approximations. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure15.jpg} \caption {$\phi(m)(stars ~m_\odot^{-1})$ versus $m_\odot$ of the Pismis~19 cluster, as a function of distance from the core.} \end{figure} The relation of $\phi(m)(stars ~m_{\odot}^{-1})$ versus $m_{\odot}$ for our representative open cluster Pismis~19 is shown in Figs.~12(a)$-$(c) for different cluster regions. The main-sequence mass functions (MFs) in panels~(a)$-$(c) of Fig.~12 are fitted with the function $\phi(m)\propto{m}^{-(1+\chi)}$, and the MF slopes ($\chi$) have been determined for the different cluster regions listed in Col.~1 of Table~6.
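Since $\phi(m)\propto m^{-(1+\chi)}$ is a straight line of slope $-(1+\chi)$ in log-log space, the slope can be recovered with a simple linear fit; the sketch below uses hypothetical bin values, not those of Table~6:
\begin{verbatim}
import numpy as np

m   = np.array([1.1, 1.4, 1.8, 2.3, 2.9])    # bin centres (Msun), placeholder
phi = np.array([210., 130., 70., 35., 16.])  # stars per Msun, placeholder

# log10(phi) = const - (1 + chi) * log10(m)
slope, intercept = np.polyfit(np.log10(m), np.log10(phi), 1)
chi = -slope - 1.0
\end{verbatim}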
More details of this approach are given in Table~6, where we also show the number and mass of the evolved stars (m$_{evol}$). The MF slopes of the core (29 OCs) and overall (31 OCs) regions of the OCs are presented in Cols.~2 and 5 of Table 7. Since the lower MS is not accessible on the $(J, J-H)$ diagrams of the OC sample, we assumed that the low-mass content is still present and used Kroupa's MF\footnote{$\chi=0.3\pm0.5$ \cite{kro01} for $0.08<M_{\odot}<0.5$, $\chi=1.3\pm0.3$ for $0.5 < M_{\odot}<1.0$, and $\chi=1.3\pm 0.7$ for $1.0<M_{\odot}$} to estimate the total stellar mass down to the H-burning mass limit. The results, i.e. the number of stars, the MS and evolved star contents (m$_{obs}$), the MF slope ($\chi$), and the mass extrapolated to 0.08~$M_{\odot}$ (m$_{tot}$), are given for each cluster region in Table 6. The mass densities $\rho$, in units of $M_{\odot}\: pc^{-3}$, are also estimated and given in Cols.~8 and 11 of Table 6 (see also Sect.~7.7). When deriving the mass functions, part of the steepness observed in the core may come from crowding and incompleteness: 2MASS is not very photometrically deep and has only a moderate spatial resolution, so in crowded regions (such as the cores of most clusters) many stars, especially the faint ones, are not detected. This, in turn, may mimic mass segregation. The relaxation time $t_{rlx}$ (Myr) is the characteristic time-scale for a cluster to reach some level of energy equipartition \citep{bin98}. As discussed in \cite{bon05}, \cite{bon06a}, and \cite{bon07a}, the evolutionary parameter ($\tau = Age/t_{rlx}$) appears to be a good indicator of dynamical state. Following \cite{bon06a}, we parameterize $t_{rlx}$ as $t_{rlx}\approx0.04\left(\frac{N}{\ln N}\right)\left(\frac{R}{1\,{\rm pc}}\right)$~Myr, where $N$ is the number of stars located inside the region of radius $R$. The relaxation time and evolutionary parameter for both the core and the overall regions are listed in Table 7. The uncertainties in the evolutionary parameters ($\tau$) of the OCs have been estimated by propagating the errors in Age (Table 3), radii (Table 5) and $N$ (Table 6) into $t_{rlx}$ and $\tau$. When propagated, the latter two errors produce a large uncertainty in $t_{rlx}$ (Table 7) and, consequently, a large uncertainty in the evolutionary parameter. In this sense, both $t_{rlx}$ and $\tau$ should be taken simply as order-of-magnitude estimates. From the overall mass distribution ($m_{overall}$) of the 26 OCs displayed in Fig.~13, a value of 2000~$M_\odot$ is adopted as the criterion for classifying the clusters as less massive or massive. \begin{figure} \centering \includegraphics*[width = 6.5cm, height = 7cm]{Figure16.eps} \caption {The overall mass distribution of 26 OCs.} \end{figure} \clearpage \begin{table*} \renewcommand\thetable{6} \centering \caption{The number of stars, mass information, MF slope and mass density for the cluster regions of the available clusters, for the cases Evolved, Observed+Evolved, and Extrapolated+Evolved.
The full version is available in the online version of this manuscript in the supplementary material section (Table~S6).} \tiny \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{11}{c}{NGC 436}\\\cline {5-7} \\ & \multicolumn{2}{c}{Evolved} & \multicolumn{2}{c} {$\chi$} & \multicolumn{3}{c}{Observed+Evolved} & \multicolumn{3}{c}{Extrapolated+Evolved} \\\cline {2-11} \\ {Region} & {N*} & {m$_{evol}$} & {1.38-2.78} & {-} & {N*} & {m$_{obs}$} & {$\rho$} & {N*}& {m$_{tot}$} & {$\rho$} \\ (pc) & (Stars) & ($10^1 M_{\odot}$)& & & ($10^2 Stars$) & ($10^2 M_{\odot}$) & $M_{\odot} pc^{-3}$ & ($10^2 Stars$) & ($10^2 M_{\odot}$) & $M_{\odot} pc^{-3}$ \\ \hline
0.0-1.04 & 1$\pm$1 & 0.4$\pm$0.4 & -1.46$\pm$0.47 & {-} & 0.25$\pm$0.03 & 0.56$\pm$0.28 & 11.9$\pm$5.97 & 0.4$\pm$0.1 & 0.7$\pm$0.03 & 15.2$\pm$6.11 \\
1.04-6.97 & 12$\pm$6 & 3.5$\pm$1.8 & 1.74$\pm$0.36 & {-} & 1.01$\pm$0.1 & 2$\pm$0.57 & 0.14$\pm$0.04 & 25.6$\pm$19.6 & 9.8$\pm$3.8 & 0.69$\pm$0.27 \\
0.0-6.97 & 14$\pm$6 & 3.9$\pm$1.9 & 0.86$\pm$0.29 & {-} & 1.12$\pm$0.09 & 2.55$\pm$0.63 & 0.18$\pm$0.04 & 17.1$\pm$11.9 & 7.9$\pm$2.4 & 0.56$\pm$0.17 \\
$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$ &$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$&$\cdot \cdot \cdot$ \\ \hline \end{tabular} \\ Col.~1: the distance from the core. Cols.~2, 6 and 9: cluster stars for the regions in Col.~1. Col.~4 gives the MF slopes ($\chi$), derived for the low-mass and high-mass ranges. The masses $m_{evol}$, $m_{obs}$, and $m_{tot}$ are listed in Cols.~3, 7 and 10, respectively. The mass densities are given in Cols.~8 and 11. \\ \end{table*} \begin{table*} \renewcommand\thetable{7} \centering \caption{Mass function slopes ($\chi$), relaxation times (t$_{rlx}$(Myr)) and evolutionary parameters ($\tau$) of the core and overall regions of the available clusters.} \scriptsize \begin{tabular}{lcccccc} \hline & \multicolumn{3}{c}{Core}& \multicolumn{3}{c}{Overall} \\\cline{2-7} Cluster & $\chi$ & t$_{rlx}$(Myr) & $\tau$$_{core}$ & $\chi$& t$_{rlx}$(Myr) & $\tau$$_{overall}$ \\ \hline
NGC 436 &$-$1.46$\pm$0.47 &0.46$\pm$0.12 & 869.57$\pm$314.19 & 0.86$\pm$0.29 & 63.95$\pm$38.60 & 6.25$\pm$4.08 \\
King 5 &$-$3.06$\pm$0.96 &0.58$\pm$0.11 & 1724.14$\pm$475.22 & 1.80$\pm$0.49 & 215.00$\pm$148.09 & 4.65$\pm$3.34 \\
NGC 1513 &1.12$\pm$0.24 &10.21$\pm$6.40 & 9.79$\pm$6.44 & 1.90$\pm$0.12 & 175.88$\pm$118.13 & 0.57$\pm$0.40 \\
Be 15 & - & - & - &$-$1.54$\pm$1.15 & 5.05$\pm$2.78 & 99.01$\pm$57.99 \\
NGC 1907 &$-$0.76$\pm$0.40 &1.71$\pm$0.43 & 233.92$\pm$82.95 & 0.00$\pm$0.23 & - & - \\
NGC 2112 &$-$1.28$\pm$0.51 &3.28$\pm$3.59 & 609.76$\pm$673.63 & 0.50$\pm$0.42 & 126.22$\pm$78.13 & 15.85$\pm$10.10 \\
NGC 2158 &$-$4.24$\pm$1.00 &5.05$\pm$0.16 & 495.05$\pm$61.44 &$-$1.55$\pm$0.71 & - & - \\
Koposov 53&$-$3.96$\pm$3.40 & - & - & 0.93$\pm$0.81 & 15.51$\pm$11.41 & 64.47$\pm$47.86 \\
NGC 2194 & 0.38$\pm$0.42 &13.15$\pm$7.87 & 60.84$\pm$39.46 & 2.52$\pm$0.37 & 456.55$\pm$311.22 & 1.75$\pm$1.27 \\
NGC 2192 &$-$2.78$\pm$0.96 &0.36$\pm$0.10 & 3611.11$\pm$1040.84 &$-$3.12$\pm$0.43 & 7.23$\pm$0.85 & 179.81$\pm$25.26 \\
NGC 2243 & - & - & - & 2.09$\pm$1.01 & 826.05$\pm$654.27 & 2.42$\pm$1.93 \\
Trumpler 5&0.42$\pm$0.75 &1195.27$\pm$869.82 & 2.51$\pm$1.84 & 1.32$\pm$1.18 & 3804.93$\pm$2817.13 & 0.79$\pm$0.59 \\
Col 110 &$-$2.58$\pm$0.21 &29.78$\pm$4.37 & 100.74$\pm$16.24 &$-$2.84$\pm$0.57 & 100.01$\pm$22.83 & 30.00$\pm$7.13 \\
NGC 2262 &$-$1.49$\pm$0.80 &0.43$\pm$0.15 &
3023.26$\pm$1079.96 & 1.01$\pm$0.44 & 186.81$\pm$126.13 & 6.96$\pm$4.73 \\
NGC 2286 &1.30$\pm$0.50 &5.97$\pm$4.26 & 167.50$\pm$124.13 & 1.45$\pm$0.30 & 99.95$\pm$65.65 & 10.01$\pm$6.87 \\
NGC 2309 &$-$1.52$\pm$1.03 &0.35$\pm$0.07 & 1428.57$\pm$404.06 &$-$0.89$\pm$0.60 & - & - \\
Haffner 8&1.28$\pm$0.77 &5.50$\pm$4.79 & 181.82$\pm$159.39 & 1.82$\pm$0.59 & 101.14$\pm$68.47 & 9.89$\pm$6.77 \\
Mel 71 &0.30$\pm$1.04 & - & - & 1.29$\pm$0.40 & 146.55$\pm$99.37 & 10.24$\pm$7.08 \\
NGC 2506 & 4.11$\pm$1.63 &56.00$\pm$49.28 & 35.71$\pm$31.88 & 0.97$\pm$0.63 & 822.94$\pm$594.95 & 2.43$\pm$1.79 \\
Pismis 3 &$-$1.60$\pm$0.84 &10.42$\pm$2.03 & 307.10$\pm$62.83 & 1.71$\pm$0.49 & 1348.20$\pm$939.15 & 2.37$\pm$1.66 \\
Ru 96 &4.58$\pm$0.98 &17.70$\pm$15.06 & 56.50$\pm$48.40 & 4.55$\pm$0.65 & 57.49$\pm$40.92 & 17.39$\pm$12.50 \\
Trumpler 20&$-$1.07$\pm$0.50 &12.85$\pm$17.60 & 116.73$\pm$164.55 & 2.06$\pm$0.68 & 2272.78$\pm$1663.64 & 0.66$\pm$0.53 \\
Pismis 19&$-$2.42$\pm$1.07 &0.45$\pm$0.10 & 1777.78$\pm$453.27 & 1.18$\pm$0.38 & 402.04$\pm$272.33 & 1.99$\pm$1.37 \\
NGC 6134 &$-$0.95$\pm$0.90 & - & - &$-$0.83$\pm$1.11 & - & - \\
IC 4651 &$-$2.78$\pm$0.75 &1.09$\pm$0.06 & 2293.58$\pm$302.81 &$-$0.60$\pm$0.41 & - & - \\
NGC 6802 &$-$0.46$\pm$0.84 & - & - & 1.66$\pm$0.24 & 232.20$\pm$156.40 & 3.88$\pm$2.65 \\
NGC 6819 &$-$1.07$\pm$0.55 & - & - & 0.47$\pm$0.40 & 680.42$\pm$429.71 & 3.67$\pm$2.43 \\
Be 89 & 0.18$\pm$0.66 & - & - & 1.64$\pm$0.93 & 312.83$\pm$220.34 & 6.39$\pm$4.78 \\
NGC 6939 &$-$2.17$\pm$0.66 &1.42$\pm$0.07 & 1408.45$\pm$222.38 & 0.84$\pm$0.46 & - & - \\
NGC 7142 &$-$1.97$\pm$1.44 & - & - &$-$1.06$\pm$0.56 & - & - \\
NGC 7789 &$-$0.42$\pm$0.45 & - & - & 0.79$\pm$0.65 & 5471.49$\pm$3972.27 & 0.33$\pm$0.24 \\ \hline \end{tabular} \end{table*}
\section{Results} \subsection{Relation of R$_{RDP}$--R$_{core}$} The cluster and core radii (R$_{RDP}$, R$_{core}$) of the 40 OCs, shown in Fig.~14, are related by R$_{RDP}=(4.69\pm0.35)R_{core}^{(0.56\pm0.11)}$, with a mild correlation coefficient (CC, hereafter) of 0.61. In Fig.~14, where the axes are on a log-log scale, this relation is almost linear between $\log R_{RDP}$ and $\log R_{core}$. The core and cluster sizes are $0.17\leq R_{core}~(pc) \leq 6.25$ and $1.92\leq R_{RDP}~(pc)\leq 26.88$, respectively. The OCs in our sample which do not follow the relation above are either intrinsically small or have been suffering significant evaporation effects. Our coefficient value (4.69) of Fig.~14 falls in the range of $3.1-8.9$ found in the literature (Table 8); however, the coefficients in Table~8 are affected by the sample size. The relation between R$_{RDP}$ and R$_{core}$ found by us is reasonably similar to that given by \cite{cam10}; analogous relations were also found by other authors \citep{nil02, bic05, sha06, mn07, buk11}. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure17.eps} \caption {Relation of R$_{RDP}$ - R$_{core}$ of the 40 OCs. Empty circles show the 40 OCs. Solid line and shaded area show the best fit and the $1\sigma$ uncertainty, respectively.} \end{figure} \begin{table} \tiny \renewcommand\thetable{8} \centering \tiny \caption{The coefficients of the relation R$_{RDP}$$=$a+bR$_{core}$ between R$_{core}$ and R$_{RDP}$, as given in the literature. The form of the relation of Camargo et al.~(2010) is $R_{RDP}=bR_{core}^{a}$.
CC and N in the last two columns denote the correlation coefficient and the number of data points, respectively.} \begin{tabular}{lcccc} \hline Author & a & b & CC & N \\ \hline \cite{nil02} & - &6 & - & 38 \\ \cite{bic05} &1.05 &7.73 &0.95 & 16 \\ \cite{sha06} & - &3.1 & - & 9 \\ \cite{mn07} & - &3.1 &0.74 & 42 \\ \cite{cam10} & 0.3 &8.9 & - & 50 \\ \cite{buk11} &0.58 &6.98 &0.93 &140 \\ \hline \end{tabular} \end{table} \subsection{Relations of Cluster Dimensions to the Distance and Age} The relations of R$_{RDP}$ and R$_{core}$ with $d$ (kpc) are apparently linear and are displayed in Figs.~15(a)$-$(b). The linear best fits to the data (solid lines) are R$_{RDP}=(2.67\pm0.27)\: d$ (CC$=$0.84) and R$_{core}=(0.50\pm0.07)\: d$ (CC$=$0.76), respectively. Apart from a couple of deviants, the sizes (R$_{RDP}$ and R$_{core}$) increase on average with the distance from the Sun. Similar trends were also obtained by \cite{lyn82}, \cite{van91}, \cite{tad02}, \cite{bon10} and \cite{buk11}. \begin{figure} \centering \includegraphics*[width = 6.5cm, height = 11cm]{Figure18.jpg} \caption {Relations of R$_{RDP}$ - d(kpc) (panel a) and R$_{core}$ - d(kpc) (panel b), respectively. Solid and dashed lines show the best fit and the $1\sigma$ uncertainty, respectively.} \end{figure} The relations of $|z|$ and R$_{GC}$ as a function of Age and R$_{RDP}$, respectively, are presented in Figs.~16(a)$-$(b). Clusters younger/older than 1 Gyr in panels~(a)$-$(b) lie inside/outside the Solar circle. No cluster with R$_{RDP} > 8$~pc is seen in panel~(a). The OCs with Age$\geq$1~Gyr in panel~(b) do not show any dependence between R$_{GC}$ and R$_{RDP}$. The OCs NGC 2243 and NGC 2192, with $|z| > 800$ pc outside the Solar circle in panels~(b)$-$(c), where GMCs are scarce, might have been moved to the outer Galactic radii via tidal interactions with the disc and the Galactic bulge, and collisions with GMCs. Alternatively, they may have been formed from molecular clouds at these distances. Note that \cite{sch06} have also detected large- and small-sized clusters outside the Solar circle. From panel~(b) we note that most of the large/small-sized OCs, inside or outside the Solar circle, are located near the Galactic plane ($|z| < 300$ pc), and the OCs inside the Solar circle seem to survive four or more rotations around the Galactic centre. Their survival can be explained by their resilience against external shocks \citep{jan94}. \begin{figure} \centering \includegraphics*[width = 7cm, height = 13cm]{Figure19.eps} \caption {Relations of $|z|$- R$_{GC}$ in terms of R$_{RDP}$ (panel a) and Age (Myr) (panel b). Filled squares and empty circles show the OCs with R$_{RDP}<8$~pc and R$_{RDP}\ge8$~pc, respectively. Relation of $|z|$-Age as a function of R$_{\odot}$ (panel c).} \end{figure} Old clusters with large dimensions inside the Solar circle in panel~(b) may have a primordial origin, or their sizes may have increased via expansion driven by stellar-mass black hole binaries. In the $|z|$--Age relation as a function of R$_{\odot}$ in Fig.~16(c), the OCs with Age $\geq 1$ Gyr reach larger $|z|$ distances, whereas those with Age $<1$ Gyr have $|z|<300$~pc. \subsection{Relations of R$_{RDP}$-Age and R$_{core}$-Age} The relations of R$_{core}$$-$Age and R$_{RDP}$$-$Age are displayed in Figs.~17(a)$-$(b). In Fig.~17, filled circles and empty triangles show the 16 OCs with $m_{overall} \ge 2000~M_{\odot}$ and the 10 OCs with $m_{overall}<2000~M_{\odot}$, respectively. The 14 OCs without mass determinations are marked by open squares.
The relations in Figs.~17(a)$-$(b) suggest a bifurcation occurring at an age of $\approx$ 1 Gyr: some clusters appear to expand (`A' arrow), while others contract (`B' arrow). \cite{mac08} observed the bifurcation at $\approx$~500-600~Myr (shown with `C' in the panels of Fig.~17). Relations of this kind were also observed by \cite{bon07a}, \cite{mn07}, and \cite{cam10} in their OC samples. \begin{figure} \includegraphics*[width = 7cm, height = 9.5cm]{Figure20.eps} \caption {Relations of Age--R$_{RDP}$ (panel a) and Age--R$_{core}$ (panel b), respectively. Filled circles and empty triangles show the 16 OCs with $m_{overall}\ge2000~M_{\odot}$ and the 10 OCs with $m_{overall}<2000~M_{\odot}$, respectively. The 14 OCs without mass determinations are marked by open squares. R1, R2, R3 and R4 label the regions.} \end{figure} \cite{mac08} argue that some clusters show expanded cores due to stellar-mass black holes (hereafter BHs), while others contract due to dynamical relaxation and core collapse. We label the regions in Fig.~17 as R1, R2, R3 and R4. To assess the effect of BHs on our core radius--age relation, the information for the OCs in regions R2 and R4 of Fig.~17(a) is given in Tables~9$-$10. N$_{bh}$ in Tables~9$-$10 denotes the estimated number of stellar-mass BHs. This value is estimated from the relation N$_{bh}=6 \times 10^{-4}N_{star}$, given by \cite{pm00}, where $N_{star}$ is the extrapolated number of stars in the OCs, given in Col.~9 of Table 6 for the overall regions. Because the extrapolated stellar number for NGC 2158 is not available (Col.~9 of Table 6; supplementary material), its number of BHs was estimated from the relation $N_{bh} \approx 0.002 \: M_{cluster}$, also given by \cite{pm00}. The BH numbers of seven OCs in regions R2 and R4 of Fig.~17(a) cannot be estimated, because their extrapolated star numbers or overall masses are not available (see Col.~9 of Table 6; supplementary material). \begin{table} \tiny \renewcommand\thetable{9} \centering \tiny \caption{Age, dimensions and mass (Cols.~2$-$5) for the OCs that show core expansion in Fig.~17(a). The number of black holes (N$_{bh}$) is listed in the last column.} \begin{tabular}{lccccc} \hline Cluster &Age &R$_{RDP}$&R$_{core}$ &\textit{m}$_{overall}$& N$_{bh}$ \\ & (Myr) & (pc) & (pc)& (100m$_{\odot})$ & \\ \hline NGC 2112 & 2000 & 5.92 & 1.64 & 18.10 & 4 \\ NGC 2158 & 2500 & 14.03 & 1.74 & 33.40 & 7 \\ Trumpler 5 & 3000 & 15.18 & 3.86 & 223.0 & 45 \\ Col 110 & 3000 & 12.12 & 6.25 & 16.50 & 3 \\ NGC 2286 & 1000 & 6.39 & 1.59 & 11.10 & 2 \\ NGC 2506 & 2000 & 10.76 & 1.65 & 65.80 & 13 \\ Pismis 3 & 3200 & 8.58 & 2.10 & 133.00 & 27 \\ Trumpler 20& 1500 & 13.98 & 3.12 & 150.00 & 30 \\ NGC 6819 & 2500 & 12.92 & 1.50 & 49.10 & 10 \\ Be 89 & 2000 & 7.48 & 2.75 & 32.00 & 6 \\ NGC 7789 & 1800 & 26.88 & 2.32 & 194.00 & 39 \\ \hline \end{tabular} \end{table} \begin{table} \tiny \renewcommand\thetable{10} \small \centering \tiny \caption{Age, dimensions and mass (Cols.~2$-$5) for the OCs that show core shrinkage in Fig.~17(a).
The number of black holes (N$_{bh}$) is listed in the last column.} \begin{tabular}{lccccc} \hline Cluster & Age & R$_{RDP}$& R$_{core}$ & \textit{m}$_{overall}$& N$_{bh}$ \\ & (Myr) & (pc) & (pc)& (100m$_{\odot})$ & \\ \hline King 5 & 1000 & 5.62 & 0.95 & 30.00 & 6 \\ Koposov 53 & 1000 & 4.18 & 0.66 & 2.37 & 0 \\ NGC 2192 & 1300 & 5.47 & 1.11 & 2.27 & 0 \\ NGC 2243 & 2000 & 12.94 & 0.89 & 51.70 & 10 \\ NGC 2262 & 1300 & 6.37 & 0.85 & 24.10 & 5 \\ Haffner 8 & 1000 & 6.93 & 1.47 & 9.85 & 2 \\ Mel 71 & 1500 & 5.00 & 1.27 & 21.80 & 4 \\ Ru 96 & 1000 & 2.60 & 1.43 & 15.80 & 3 \\ \hline \end{tabular} \end{table} \subsection{Relations of R$_{RDP}$ and R$_{core}$ with R$_{GC}$} The dependence of the structural parameters (R$_{RDP}$ and R$_{core}$) on the galactocentric distance R$_{GC}$ of the 40 OCs, as a function of age, is plotted in Figs.~18(a)$-$(b). The large- and small-sized clusters in Fig.~18(a) occupy the inner and the outer Galactic radii. Two OCs with R$_{RDP}$ $<$ 7 pc and Age $<$ 1 Gyr in Fig.~18(a) are located at inner Galactic radii. OCs with such sizes and ages are also seen in \citet[their fig.~3(c)]{sch06}. The relation between R$_{RDP}$ and R$_{GC}$ is R$_{RDP}$$=(0.98\pm0.25)R_{GC}-(3.07\pm2.03)$, with a correlation coefficient of 0.53 (see Fig.~18a). Our result shows that there is no strong dependence of R$_{RDP}$ on R$_{GC}$. However, \cite{lyn82}, \cite{tad02} and \cite{cam09, cam10} mention a correlation in their OC samples. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure22.jpg} \includegraphics*[width = 7cm, height = 7cm]{Figure23.eps} \caption {Relations of R$_{GC}$--R$_{RDP}$ (panel a) and R$_{GC}$--R$_{core}$ (panel b), respectively. Filled squares and open circles denote the OCs with Age $<$ 1 Gyr and Age $\ge$ 1 Gyr, respectively.} \end{figure} \subsection{Relations of m$_{overall}$ with R$_{RDP}$, R$_{core}$, Age and R$_{GC}$} Figs.~19(a) and (b) show the relations of m$_{overall}$ versus R$_{RDP}$ and m$_{overall}$ versus R$_{core}$, as a function of Age, for 26 of our 40 OCs. The fitted relations of m$_{overall}$ with R$_{RDP}$ and with R$_{core}$ are $\ln m_{overall} = (1.57\pm0.42)\: \ln R_{RDP}+(0.01\pm0.02)$ (CC$=$0.60) and $\ln m_{overall} = (1.14\pm0.37)\: \ln R_{core}+(2.81\pm0.26)$ (CC$=$0.53), respectively. These correlations between the size and mass of the clusters are in concordance with the mass-radius relation for massive OCs with Age $>$ 100 Myr \citep{por10,cam10}. In Figs.~20(a) and (b) the relations of m$_{overall}$ with R$_{GC}$ and of m$_{overall}$ with Age for 26 of our 40 OCs are shown. As is seen from Fig.~20(a), OCs more and less massive than m$_{overall} =2000\: M_{\odot}$ are located indistinctly inside or outside the Solar circle. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure25.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure26.eps} \caption {Relations of R$_{RDP}$ - m$_{overall}$ (panel a) and R$_{core}$ - m$_{overall}$ (panel b) of 26 OCs. Filled squares and open circles represent the OCs with Age $<$ 1 Gyr and Age $\ge$ 1 Gyr.
Dashed lines denote the best fits.} \end{figure} \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure27.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure28.eps} \caption {Relations of R$_{GC}$ - m$_{overall}$ (panel a) and Age - m$_{overall}$ (panel b) of 26 OCs.} \end{figure} \subsection{Relations between MF slopes, Age, R$_{RDP}$, R$_{GC}$, and the mass density} The relation of $\chi_{overall}$ with $\chi_{core}$ for 29 OCs is presented in Fig.~21. The fit applied to the data is $\chi_{overall}=(0.47\pm0.12)\chi_{core}+(1.10\pm0.26)$, with a moderate CC of 0.60. The OCs with flat/steep positive overall MF slopes for $\chi_{core}<0$ in Fig.~21 show signs of mild to large-scale mass segregation, whereas the OCs with negative overall MF slopes for $\chi_{core} < 0$ indicate an advanced dynamical evolution. These MF slopes of $\chi_{core}<0$ in Fig.~21 can be explained by external dynamical effects such as tidal stripping by tidal interactions (in the form of shocks) due to disc and bulge crossings, as well as encounters with GMCs. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure29.eps} \caption {Relation of $\chi_{core}$ - $\chi_{overall}$ of 29 OCs. Dashed line shows the best fit.} \end{figure} The relations of Age vs. $\chi_{overall}$ for 31 OCs and Age vs. $\chi_{core}$ for 29 OCs of our sample are displayed in Figs.~22(a) and (b). The age dependence of the overall and core MF slopes has been parameterised by the linear-decay function (shown as the dashed curve) $\chi (t) = \chi_\circ - t/t_{f}$, where $\chi_\circ$ represents the MF slope in the early phases and $t_{f}$ is the flattening time scale. For the overall MF we derive $\chi_\circ = 1.68\pm0.30$ and $t_{f}=1569\pm600$~Myr (CC$=$0.44); the core values are $\chi_\circ = 0.74\pm0.39$ and $t_{f}=1006\pm206$~Myr (CC$=$0.68). Within the expected uncertainties, the overall MF values are quite close to $\chi_\circ=1.30\pm0.30$ of \cite{kro01} and $\chi_\circ=1.35$ of \cite{sal55}. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure30.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure31.eps} \caption {Relations of Age - $\chi_{overall}$ (31 OCs,~panel a) and Age - $\chi_{core}$ (29 OCs,~panel b). Dashed and solid lines show the best fit and the $1\sigma$ uncertainty, respectively.} \end{figure} The relations of m$_{overall}$ with the slope $\chi_{overall}$ for 26 OCs and of m$_{core}$ with $\chi_{core}$ for 24 OCs are presented in Figs.~23(a) and (b). In Fig.~23(a), most of the OCs with positive overall slopes are mass-rich and present little or no signs of mass segregation. In the relation of m$_{core}$ with $\chi_{core}$ displayed in Fig.~23(b), most of the OCs with m$_{core}<1000\: m_{\odot}$ have negative core MF slopes, indicating mass-segregation effects on a larger scale. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure32.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure33.eps} \caption {Relations of m$_{overall}$ - $\chi_{overall}$ (panel a) of 26 OCs and m$_{core}$ - $\chi_{core}$ (panel b) of 24 OCs, respectively.} \end{figure} In the relations of R$_{RDP}$ with $\chi_{overall}$ and of R$_{RDP}$ with $\chi_{core}$ for 31 and 29 OCs of our sample, respectively, given in Figs.~24(a) and (b), the OCs with dimensions larger or smaller than R$_{RDP} = 7$~pc have positively or negatively sloped overall and core MFs, respectively.
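As an aside, the linear-decay fit $\chi(t)=\chi_\circ - t/t_{f}$ quoted above can be reproduced with a few lines of code; the sketch below uses placeholder (Age, $\chi$) pairs rather than our measured values:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def linear_decay(t, chi0, tf):
    # chi(t) = chi0 - t/tf, with t in Myr
    return chi0 - t / tf

age = np.array([800., 1500., 2000., 2500., 3000.])  # Myr, placeholder
chi = np.array([1.2, 0.7, 0.4, 0.1, -0.3])          # MF slopes, placeholder

popt, pcov = curve_fit(linear_decay, age, chi, p0=[1.5, 1500.0])
chi0, tf = popt   # early-phase slope and flattening time scale
\end{verbatim}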
\begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure34.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure35.eps} \caption {Relations of $\chi_{overall}$ - R$_{RDP}$ (panel a) of 31 OCs and $\chi_{core}$ - R$_{RDP}$ (panel b) of 29 OCs. Filled squares and open circles denote clusters with Age $<$ 1 Gyr and Age $\ge$ 1 Gyr, respectively.} \end{figure} From the relations between R$_{GC}$ and $\chi_{overall}$ for 31 OCs, and between R$_{GC}$ and $\chi_{core}$ for 29 OCs, shown in Figs.~25(a) and (b), the MF slopes are apparently not correlated with R$_{GC}$ or Age. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure36.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure37.eps} \caption {Relations of $\chi_{overall}$ - R$_{GC}$ (panel a) of 31 OCs and $\chi_{core}$ - R$_{GC}$ (panel b) of 29 OCs, respectively.} \end{figure} The cluster mass density $\rho(m_\odot\: pc^{-3})$ is plotted in Figs.~26(a) and (b) as a function of $\chi_{overall}$ for 26 OCs and of $\chi_{core}$ for 24 OCs, respectively. In panel~(a), the mass densities of the OCs having $\chi_{overall} < 0$ are low compared to those of the OCs with $\chi_{overall} > 0$. This indicates that the low-mass stars of OCs with negative MF slopes have been significantly lost due to external dynamical processes. From panel~(b) one can see that $\chi_{core}$ and $\rho_{core}$ are not correlated. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure38.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure39.eps} \caption {Relations of $\chi_{overall}$ - $\rho_{overall}$ (panel a) of 26 OCs and $\chi_{core}$ - $\rho_{core}$ (panel b) of 24 OCs, respectively.} \end{figure} \subsection{Relation between the MF slope and the evolutionary parameter and a comparison to Kroupa's IMF} From the MF slopes and evolutionary parameters of the overall and core regions of the OCs given in Table~7, the relations of $\tau_{overall}$ with $\chi_{overall}$ for 24 OCs and of $\tau_{core}$ with $\chi_{core}$ for 21 OCs have been plotted in Figs.~27(a)$-$(b). The dashed curve in the figure shows the fit $\chi (\tau) = \chi_\circ - \chi_{1}e^{-(\frac{\tau_{o}}{\tau})}$. As seen in panels~(a) and (b) of Fig.~27, the overall and core MF slopes undergo an exponential decay with $\tau$. Here, $\chi_\circ$ and $\chi_{1}$ denote the MF slopes at birth and in the advanced stage, respectively. For the overall MF slope, we derive $\chi_\circ = 1.67\pm 0.18$ and $\tau_{o}=29.92\pm 12.29$ (CC$=$0.77); the core values are $\chi_\circ = 1.19\pm 0.89$ and $\tau_{o} = 31.62\pm 34.79$ (CC$=$0.64). Similar relations were obtained by \citet[see their Fig.~8(a) and (b)]{bic06a} and \citet[see their Figs.~7(b), (d), (e)]{mn07}. \begin{figure} \centering \includegraphics*[width = 7cm, height = 7cm]{Figure40.eps} \includegraphics*[width = 7cm, height = 7cm]{Figure41.eps} \caption {Relations of $\tau_{overall}$ - $\chi_{overall}$ (panel a) of 24 OCs and $\tau_{core}$ - $\chi_{core}$ (panel b) of 21 OCs. Dashed and solid lines show the best fit and the $1\sigma$ uncertainty, respectively. In panel~(b), the $1\sigma$ uncertainty is not shown due to the large errors of $\tau_{core}$.} \end{figure} \section{Conclusions} Our main conclusions are summarized as follows: \begin{enumerate} \item The astrophysical and structural parameters of the 40 OCs have been derived from the filtered 2MASS $(J, J-H)$ CMDs and the stellar RDPs. The field-star decontamination technique is utilised to separate the cluster members.
The astrophysical parameters (Age,~d,~E(B-V)) of the 40 OCs span ages in the range 0.1 Gyr to 5.0 Gyr, heliocentric distances of 0.85 kpc to 5.42 kpc, and reddenings of $0.03 \leq E(B-V) \leq 1.31$ (Table 3). By combining the derived structural parameters, masses and mass functions, and relaxation and evolutionary parameters with the astrophysical parameters of the 40 OCs, the dynamical evolution of these OCs has been studied. The reduced final reddenings from the SFD dust maps have been compared with those of the 40 OCs (Fig.~9(b)). There are significant differences for 27 OCs between the two $E(B$--$V)$ colour excess values, while for the remaining 13 OCs the values are quite close to those of SFD. The SFD maps are not reliable in regions with $|b|<5^{\circ}$ due to contaminating sources and uncertainties in the dust temperatures \citep{gon12}; since the SFD values result from a line-of-sight integral through the entire Milky Way at low spatial resolution, it is quite normal to obtain different reddening values for these relatively close ($\sim 1$~kpc) star clusters. \item The relation between R$_{RDP}$ and R$_{core}$ in Fig.~14 found by us is reasonably similar to that given by \cite{cam10}. The OCs in our sample which do not follow the relation are either intrinsically small or have been suffering significant evaporation effects. The dimensions (R$_{RDP}$ and R$_{core}$) in Figs.~15(a) and (b) increase on average with the distance from the Sun. \item From Fig.~17(a) and Tables 9$-$10, the core radii of the OCs appear to be related to their respective BH numbers. The BH numbers of the OCs in region R2 of Fig.~17(a) are generally larger than those of the OCs in region R4. Note that the BH numbers of six of the OCs in Table 10 are close to those of the OCs in Table 9; however, with similar BH numbers, these six OCs show core shrinkage, whereas those in Table 9 show expanded cores. For example, if the statement of \cite{mac08} is correct, NGC~2243 with its 10 BHs would develop a large core; however, NGC~2243 has a small core, R$_{core}$ = 0.89 pc. Col~110, with only a few BHs, shows an expanding core with R$_{core}$ = 6.25 pc. The core sizes of NGC~7789 (R$_{core}$ = 2.32 pc) and Be~89 (R$_{core}$ = 2.75 pc) are quite close to each other, but the BH numbers of the two OCs are very different. Therefore, the presence of BHs is not the only possible explanation for the bifurcation seen in Figs.~17(a) and (b). Alternatively, one should also consider the effect of the mass range of the OCs. In other words, for clusters older than $\sim$1 Gyr, Fig.~17 shows that massive OCs (filled circles) can be found in both regions R2 and R4, while there are two low-mass OCs (open triangles) in region R4 (see Fig.~17(a)). In this sense, the distribution of OCs in Fig.~17 can be partly attributed to clusters with large radii retaining larger masses. \cite{mac03, mac08} also argue that expanded cores cause the growth of the limiting radii, while shrinking cores lead to their contraction. There are 32 OCs of our sample in regions R2 and R4 of Fig.~17(a): 16 out of the 19 OCs with R$_{core} < 1.5$ pc in region R4 have R$_{RDP} < 7$ pc, while three of the 19 have R$_{RDP} > 7$ pc. Similarly, 10 out of the 13 OCs in region R2 with R$_{core} > 1.5$ pc have R$_{RDP} > 7$ pc, and three of them have R$_{RDP} < 7$ pc. Here, R$_{RDP} = 7$ pc marks the separation of our sample into two groups (Fig.~11a).
These findings imply that the OCs with expanding cores could have small limiting radii; in a similar manner, the OCs with shrinking cores could have large limiting radii. Note that there are six OCs with incompatible core and limiting radii in regions R2 and R4 of Fig.~17(b); these six OCs are inconsistent with the arguments of \cite{mac03, mac08}. \item It is seen from Figs.~18(a) and (b) that the OCs with R$_{RDP} < 3$ pc and R$_{core} < 0.6$ pc inside the Solar circle are older than 1 Gyr. As they lost their stellar content, they shrank in size and mass with time. Nevertheless, they seem to survive against external shocks for a longer time, according to the simulations of \cite{sc73}. As one can see from Fig.~18(b), there is no strong dependence of R$_{core}$ on R$_{GC}$. \item As can be seen in Figs.~19(a) and (b), OCs with large dimensions are on average more massive. There does not seem to be an age dependence in the relations in panels~(a)$-$(b) of Fig.~19. As can be seen from Fig.~20(a), OCs more and less massive than m$_{overall} =2000\: M_{\odot}$ are located indistinctly inside or outside the Solar circle. Less massive OCs located outside the Solar circle appear to survive because they are subject to fewer external dynamical processes: the OCs inside the Solar radius must survive the combined dynamical effects, such as interactions with GMCs and tidal effects from the spiral arms and the Galactic disc, which are quite efficient in the Galactic centre directions. As is seen in Fig.~20(b), less massive OCs older than 1 Gyr are scarcer, since they have dissolved into the field, i.e. the more massive and older OCs ($ > 1$ Gyr) survive. \item The OCs with flat/steep positive overall MF slopes for $\chi_{core}<0$ in Fig.~21 show signs of mild to large-scale mass segregation, whereas the OCs with negative overall MF slopes for $\chi_{core} < 0$ indicate an advanced dynamical evolution. These MF slopes of $\chi_{core}<0$ in Fig.~21 can be explained by external dynamical effects such as tidal stripping by tidal interactions (in the form of shocks) due to disc and bulge crossings, as well as encounters with GMCs. \item Considering the MF slopes in Figs.~22(a) and (b), OCs are formed with flat core MFs and Kroupa/Salpeter-like overall MFs, as stated by \cite{bon06a}.
As is seen from Fig.~22, at cluster birth the core MF seems to be much flatter than the overall MF. Early core flattening may be partly linked to primordial processes associated with molecular-cloud fragmentation. Within the expected uncertainties, the overall MF values are quite close to $\chi_\circ=1.30\pm0.30$ of \cite{kro01} and $\chi_\circ=1.35$ of \cite{sal55}; however, our core MF value is smaller than those of Kroupa and Salpeter. As is seen from panels~(a)$-$(b), except for a few MF slopes, the overall and core MF slopes tend towards negative values at older ages, because of mild/large-scale mass segregation and external processes such as encounters with GMCs and tidal effects from disc and bulge crossings. \item Most of the OCs with positive overall slopes in Fig.~23(a) are mass-rich and present little or no signs of mass segregation. Apparently they retain their low-mass stars because these are strongly bound to the clusters. The OCs with negative overall MF slopes in Fig.~23(a) seem to be in a phase of more advanced dynamical evolution. In panels~(a)$-$(b) of Fig.~23, the OCs with steep overall and core MF slopes present signs of larger-scale mass segregation in the core or halo region. As expected, there is no indication of an age dependence between the MF slopes and the core or overall masses. \item From Fig.~27(a), one sees that for $\tau > 30$ the overall MF slopes of the OCs are negative, with one exception. For $\tau < 30$, the overall MF slopes of the remaining OCs fall in the range $+0.5< \chi_{overall} < +2.5$. For $\tau>30$, as a result of the loss of low-mass stars, $\chi_{overall}$ tends towards negative values. As can be seen from panel~(b), the core MF slopes for the majority of OCs tend towards negative values after $\tau\approx 32$, with two exceptions; in panel~(b) there are two OCs with flat slopes for $\tau< 32$. For $\tau>32$, it is seen from panel~(b) that the OCs with dynamically evolved cores reveal a sign of strong mass segregation. From eleven OCs, \cite{bon05} detected significant flattening in the MF slopes for $\tau_{core} \leq 100$ and $\tau_{overall} \leq 7$, respectively. From their OCs, \cite{mn07} give for these values $\tau_{core} \le 1000$ and $\tau_{overall}\le 450$, respectively. Here we detect the flattening of the MF slopes at $\tau_{core} \leq 32$ and $\tau_{overall} \leq 30$, respectively; however, these values are affected by the sample's mix of young and old OCs. Note that our sample also contains OCs with intermediate and old ages. The overall MF slopes of the 31 OCs with $m > 0.5\: M_{\odot}$ can be compared with the value given by \citet[$\chi=+1.3 \pm 0.3$]{kro01}. Taking into account the uncertainties of our MF slopes (Col.~5 of Table 7) and that of Kroupa ($\pm0.3$), the overall MF slopes of 14 out of the 31 OCs are consistent with \citet[$\chi=+1.3\pm0.3$]{kro01}, which implies little or no dynamical evolution for these clusters. The remaining 17 OCs, with MF slopes that depart from that of \cite{kro01}, show mild to large-scale mass segregation due to dynamical evolution. \item We did not attempt to determine the binary fractions of our sample OCs. \cite{sol10} give the complete fractions of binaries as 35$\%$ to 70$\%$; they also give a minimum binary fraction, larger than 11$\%$, within the cores of OCs.
However, binaries widen the main sequence of the OCs by as much as 0.75 mag, so theoretical isochrones are fitted to the mid-points of the CMDs of the OCs rather than to the faint or blue sides, as emphasized by \cite{car01}. We have taken this issue into account when fitting isochrones to the CMDs (see Sect.~4). Binaries are indeed an effective way of storing energy in a cluster. Non-primordial binary formation, especially of close binaries, requires many encounters of at least three stars, two of which end up orbiting each other while the third one gets ``ejected''. So, depending on the binary fraction, a cluster can become dynamically swollen. As a consequence of the dynamical evolution in OCs, multiple systems tend to concentrate in the central regions \citep{tak00}. As indicated by \cite{bont05}, the main effect of a significant fraction of binaries in the central parts of OCs is that the number of low-mass stars is underestimated with respect to the higher-mass stars. \end{enumerate} \acknowledgements We thank the anonymous referee for her/his comments and suggestions on the manuscript. We thank C. Chavarria for the correction of the English in the text. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Centre/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna. \clearpage
\section{Appendix} \vspace{-1mm} In the Appendix, we provide additional details and results for our Neural Data Server. \vspace{-0mm} \subsection{Web Interface} Our NDS is running as a web-service at \url{http://aidemos.cs.toronto.edu/nds/}. We invite interested readers to try it and give us feedback. A snapshot of the website is provided in Figure~\ref{fig:web-interface}. \subsection{Additional Results} \vspace{0mm} We visually assess domain confusion in Figures~\ref{fig:modanet-clusters},~\ref{fig:cityscapes-clusters}, and~\ref{fig:voc-clusters}. We randomly select 9 images per cluster and display the top 8 clusters corresponding to the experts with the highest proxy task performance on miniModaNet, Cityscapes, and PASCAL-VOC. We can observe that the images from the top clusters do indeed reflect the types of objects one encounters in autonomous driving, fashion, and general scenes corresponding to the respective target (client) datasets, showcasing the plausibility of our NDS. We further extend Tables 2 and 3 in the main paper by showing detailed instance segmentation results for fine-tuning on the Cityscapes dataset. We report the performance measured by the COCO-style mask AP (averaged over IoU thresholds) for the 8 object categories. Table~\ref{table:maskrcnn-seg-coco} reports the mask AP obtained by sampling 23K, 47K, and 59K images from COCO for pre-training for Cityscapes, and Table~\ref{table:maskrcnn-seg-openimages} reports the mask AP obtained by sampling 118K and 200K images from OpenImages for pre-training. \noindent\textbf{Self-Supervised Pretraining:} We evaluate NDS in a scenario where a client uses self-supervised learning to pretrain on the selected server data. We follow the same setup as described in Section 4.2, except that rather than pretraining using classification labels, clients ignore the availability of the labels and pretrain using two self-supervised learning approaches: MoCo~\cite{he2019momentum} and RotNet~\cite{gidaris2018unsupervised}. In Table~\ref{table:moco}, we pretrain on the selected data subset using MoCo, an approach recently proposed by He \etal, where the model is trained on the pretext task of instance discrimination. In Table~\ref{table:rotnet}, we use~\cite{gidaris2018unsupervised} to pretrain our model on the pretext task of predicting image rotation. We observe that in the case of MoCo, pretraining on the NDS-selected subset does not always yield better performance than pretraining on a randomly sampled subset of the same size. In the case of RotNet, pretraining on the NDS-selected subset shows a slight gain over the uniform-sampling baseline. These results suggest that the optimal dataset for pretraining using self-supervised learning may depend on the pretext task. More formal studies of the relationship connecting training data, pretraining task, and transfer performance are required. \section{Conclusion} \vspace{-1.5mm} In this work, we propose a novel method that aims to optimally select subsets of data from a large dataserver given a particular target client. In particular, we represent the server's data with a mixture of experts trained on a simple self-supervised task. These are then used as a proxy to determine the most important subset of the data that the server should send to the client. We experimentally show that our method is general and can be applied to any pre-training and fine-tuning scheme, and that our approach even handles the case where no labeled data is available (only raw data).
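As a purely schematic illustration of this selection principle (this sketch is ours and not the exact NDS implementation; all array names and sizes are hypothetical), server images can be weighted by the proxy-task performance of the expert responsible for their cluster:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
cluster_of_image = rng.integers(0, 8, size=10_000)  # toy server clustering
proxy_perf = rng.random(8)       # per-expert proxy performance on the client

w = proxy_perf / proxy_perf.sum()  # normalized importance per cluster
p = w[cluster_of_image]
p = p / p.sum()                    # per-image sampling probabilities

budget = 2_000                     # client's download budget (in images)
subset = rng.choice(10_000, size=budget, replace=False, p=p)
\end{verbatim}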
We hope that our work opens a more effective way of performing transfer learning in the era of massive datasets. In the future, we aim to increase the capability of NDS to also support other modalities such as 3D, text and speech. \vspace{-3.5mm} \paragraph{Acknowledgments:} The authors acknowledge partial support by NSERC. SF acknowledges the Canada CIFAR AI Chair award at Vector Institute. We thank Relu Patrascu for his continuous infrastructure support. We also thank Amlan Kar, Huan Ling and Jun Gao for early discussions, and Tianshi Cao and Jonah Philion for feedback on the manuscript. \section{Experiments} \vspace{-1mm} \begin{table*}[t] \centering \addtolength{\tabcolsep}{-2pt} \resizebox{\textwidth}{!}{ \footnotesize \begin{tabular}{c|c|c|c|c|c||c|c|c||c|c|c} \toprule \multicolumn{3}{c|}{Pretrain Server Data (COCO + OpenImages)} & \multicolumn{9}{c}{Client Dataset} \\ \hline \hline \multicolumn{2}{c|}{Sampled Data Size} & \multirow{2}{*}{Method} & \multicolumn{3}{c||}{PASCAL-VOC2007} & \multicolumn{3}{c||}{miniModaNet} & \multicolumn{3}{c}{Cityscapes} \\ \cline{1-2} File Size & \# Images & & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP^{bb}_{75}$ & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP^{bb}_{75}$ & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP^{bb}_{75}$ \\ \hline \hline \multicolumn{3}{c|}{ImageNet Initialization} & 44.30 & 73.66 & 46.44 & 33.40 & 57.98 & 35.00 & 34.94 & 59.86 & 35.69 \\ \hline \hline \multirow{2}{*}{26GB / 538GB} & \multirow{2}{*}{90K (5\%)} & Uniform Sampling & 47.61 & 76.88 & 51.95 & 35.64 & 58.40 & 39.09 & 36.49 & 61.88 & 36.36 \\ & & \bf{NDS} & \bf{48.36} & \bf{76.91} & \bf{52.53} & \bf{38.84} & \bf{61.23} & \bf{43.86} & \bf{38.46} & \bf{63.79} & \bf{39.59} \\ \hline \hline \multirow{2}{*}{54GB / 538GB} & \multirow{2}{*}{180K (10\%)} & Uniform Sampling & 48.05 & 77.17 & 52.04 & 35.78 & 58.50 & 39.71 & 36.41 & 61.22 & 37.17 \\ & & \bf{NDS} & \bf{50.28} & \bf{78.61} & \bf{55.47} & \bf{38.97} & \bf{61.32} & \bf{42.93} & \bf{40.07} & \bf{65.85} & \bf{41.14} \\ \bottomrule \end{tabular} } \vspace{-3mm} \caption{\small{Results for object detection on the 3 client datasets. Scores are measured in \%.}\label{combined-server-bbox}} \vspace{-1mm} \end{table*} \begin{table}[ht] \vspace{-1mm} \small \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-2pt} \begin{tabular}{c|c|cc|cc} \toprule Data (\# Images) & Method & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP$ & $AP_{50}$ \\ \hline \hline 0 & ImageNet Initial. & 36.2 & 62.3 & 32.0 & 57.6 \\ \hline \hline \multirow{2}{*}{23K} & Uniform Sampling & 38.1 & 64.9 & 34.3 & 60.0 \\ & \textbf{NDS} & 40.7 & 66.0 & 36.1 & 61.0 \\ \hline \hline \multirow{2}{*}{47K} & Uniform Sampling & 39.8 & 65.5 & 34.4 & 60.0 \\ & \textbf{NDS} & \bf{42.2} & \bf{68.1} & \bf{36.7} & \bf{62.3} \\ \hline \hline \multirow{2}{*}{59K} & Uniform Sampling & 39.5 & 64.9 & 34.9 & 60.4 \\ & \textbf{NDS} & 41.7 & 66.6 & \bf{36.7} & 61.9 \\ \hline \hline 118K & Full COCO & 41.8 & 66.5 & 36.5 & \bf{62.3} \\ \bottomrule \end{tabular}% } \vspace{-3mm} \caption{\footnotesize{Transfer learning results for instance segmentation with Mask R-CNN on Cityscapes by selecting images from COCO.}} \vspace{-1mm} \label{maskrcnn-seg} \end{table} \begin{table}[ht] \vspace{-1mm} \small \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-2pt} \begin{tabular}{c|c|cc|cc} \toprule Data (\# Images) & Method & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP$ & $AP_{50}$ \\ \hline \hline 0 & ImageNet Initial.
& 36.2 & 62.3 & 32.0 & 57.6 \\ \hline \hline \multirow{2}{*}{118K} & Uniform Sampling & 37.5 & 62.5 & 32.8 & 57.2 \\ & \textbf{NDS} & \bf{39.9} & \bf{65.1} & \bf{35.1} & \bf{59.8} \\ \hline \hline \multirow{2}{*}{200K} & Uniform Sampling & 37.8 & 63.1 & 32.9 & 57.8 \\ & \textbf{NDS} & \bf{40.7} & \bf{65.8} & \bf{36.1} & \bf{61.2} \\ \bottomrule \end{tabular}% } \vspace{-3mm} \caption{\footnotesize{Transfer learning results for instance segmentation with Mask R-CNN on Cityscapes by selecting images from OpenImages.}} \label{maskrcnn-seg-openimages} \vspace{-3mm} \end{table} \begin{figure*}[ht] \vspace{-5mm} \begin{minipage}{.428\linewidth} \begin{table}[H] \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-4pt} \begin{tabular}{|ll|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Pretrain. Sel. Method}}} & \multicolumn{5}{c|}{\textbf{Target Dataset}} \\ \multicolumn{2}{|c|}{} & Stanf. Dogs & Stanf. Cars & Oxford-IIIT Pets & Flowers 102 & CUB200 Birds \\ \hline 0\% & Random Init. & 23.66 & 18.60 & 32.35 & 48.02 & 25.06 \\ \hline 100\% & Entire Dataset & 64.66 & 52.92 & 79.12 & 84.14 & 56.99 \\ \hline \multirow{4}{*}{20\%} & Uniform Sample & 52.84 & 42.26 & 71.11 & 79.87 & 48.62 \\ & NDS (SP+TS) & 72.21 & 44.40 & 81.41 & 81.75 & 54.00 \\ & NDS (SP+SS) & 73.46 & 44.53 & 82.04 & 81.62 & 54.75 \\ & NDS (UP+SS) & 66.97 & 44.15 & 79.20 & 80.74 & 52.66 \\ \hline \multirow{4}{*}{40\%} & Uniform Sample & 59.43 & 47.18 & 75.96 & 82.58 & 52.74 \\ & NDS (SP+TS) & 68.66 & 50.67 & 80.76 & 83.31 & 58.84 \\ & NDS (SP+SS) & 69.97 & 51.40 & 81.52 & 83.27 & 57.25 \\ & NDS (UP+SS) & 67.16 & 49.52 & 79.69 & 83.51 & 57.44 \\ \hline \end{tabular}% } \vspace{-3mm} \caption{ Ablation experiments on gating and expert training. SP=Superclass Partition, UP=Unsupervised Partition, TS=Task-Specific experts (experts trained on classif. labels), and SS=Self-Supervised experts (experts trained to predict image rotation).} \label{classification-result} \end{table} \end{minipage} \hspace{1.5mm} \begin{minipage}{.266\linewidth} \begin{figure}[H] \includegraphics[width=\linewidth]{figs/domain-confusion.png} \vspace{-3mm} \caption{\footnotesize{Relationship between domain classifier and proxy task performance on subsets $\hat{\mathcal{S}}$.}} \label{figure-domain-confusion} \end{figure} \end{minipage} \hspace{1.5mm} \begin{minipage}{.272\linewidth} \begin{table}[H] \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-4pt} \begin{tabular}{|l|l|c|c|} \hline Data & Method & Oxford-IIIT Pet & CUB200 Birds \\ \hline \multirow{4}{*}{20\%} & Uniform Samp. & 71.1 & 48.6 \\ & KNN + ~\cite{Cui2018LargeSF} & 74.4 & 51.6 \\ & ~\cite{ngiam2018domain} & 81.3 & 54.3 \\ & NDS & \bf{82.0} & \bf{54.8} \\ \hline \multirow{4}{*}{40\%} & Uniform Samp. & 76.0 & 52.7 \\ & KNN + ~\cite{Cui2018LargeSF} & 78.1 & 56.1 \\ & ~\cite{ngiam2018domain} & 81.0 & \bf{57.4} \\ & NDS & \bf{81.5} & 57.3 \\ \hline \multicolumn{2}{|c|}{Entire ImageNet} & 79.1 & 57.0 \\ \hline \end{tabular}% } \vspace{-2mm} \caption{\footnotesize{Transfer learning performance on classification datasets comparing data selection methods. 
\label{baseline-comparisons}}} \end{table} \end{minipage} \end{figure*} \begin{figure*}[ht] \vspace{-1mm} \includegraphics[width=\linewidth,height=15.5mm]{figs/bbox-visualize/city-visualize-1.png} \includegraphics[width=\linewidth,height=15.5mm]{figs/bbox-visualize/city-visualize-2.png} \vspace{-7mm} \caption{\footnotesize{Instance segmentation results on Cityscapes using network pre-trained from ImageNet initialization (\textbf{left}), 47K images uniformly sampled (\textbf{middle}), and 47K images from NDS (\textbf{right}). Notice that the output segmentations generally look cleaner when training on NDS-recommended data.}} \label{cityscapes-visualize} \vspace{-5mm} \end{figure*} \begin{figure*}[ht] \includegraphics[width=\linewidth,height=36mm]{figs/bbox-visualize/modanet-bbox.png} \vspace{-6.5mm} \caption{\footnotesize{Object detection results in miniModaNet using network pre-trained from ImageNet initialization (\textbf{left}), 90K images uniformly sampled (\textbf{middle}), and 90K images sampled using NDS (\textbf{right}). A score threshold of 0.6 is used to display these images.} \label{fig:modanet-visualize}} \vspace{-2mm} \end{figure*} We perform experiments on the tasks of classification, detection, and instance segmentation. We experiment with 3 datasets on the server side and 7 on the client side. \subsection{Support for Diverse Clients and Tasks} In this section, we provide an extensive evaluation of our approach on three different client scenarios: autonomous driving, fashion, and general scenes. In each of them, the client's goal is to improve the performance of its downstream task (\ie, object detection or instance segmentation) by pretraining on a budget-constrained amount of data. Here, the {\emph{dataserver}} is the same and indexes the massive OpenImages~\cite{OpenImages} and MS-COCO~\cite{Lin2014MicrosoftCC} datasets. Specifically, our server dataset can be seen as the union of COCO and OpenImages~\cite{OpenImages, Lin2014MicrosoftCC} (approximately $538$ GB), represented in the weights of the self-supervised trained experts ($2$ GB). \noindent \textbf{Autonomous Driving:} Here, we use Cityscapes~\cite{Cordts2016TheCD} as the client's dataset, which contains $5000$ finely annotated images divided into $2975$ training and $500$ validation images. Eight object classes are provided with per-instance annotation. In practice, this simulates the scenario of a client that wants to boost its performance numbers by pretraining on some data. This scenario is ubiquitous among state-of-the-art instance and semantic segmentation approaches on the Cityscapes leaderboard~\cite{Takikawa2019GatedSCNNGS,He2017MaskR,Zhu2018ImprovingSS}. \noindent \textbf{Fashion:} We use the ModaNet dataset~\cite{Zheng2018ModaNetAL} to simulate a client that wants to improve its model's performance on the task of object detection of fashion-related objects. ModaNet is a large-scale street fashion dataset consisting of $13$ classes of objects and $55,176$ annotated images. Since the effectiveness of pre-training diminishes with the size of the dataset~\cite{he2018rethinking}, we create a small version of the dataset for our experiments. It consists of $1000$ training and $1000$ validation images that are randomly selected while keeping the same class distribution as the original dataset. We call it miniModaNet in our experiments. \noindent \textbf{General Scenes:} We use PASCAL VOC object detection~\cite{Everingham2009ThePV} as the client's dataset for this scenario.
The task in this case is object detection on $20$ object classes. We use the \texttt{trainval2007} set containing $5011$ images for training and evaluate on \texttt{test2007} containing $4962$ images. \noindent \textbf{Evaluation:} We use Intersection-over-Union (IoU) to measure the client's performance on its downstream task. Specifically, we follow the MS-COCO evaluation style and compute IoU at three different thresholds: a) $0.50$, b) $0.75$, c) an average of ten thresholds $(.5:.05:.95)$; a minimal sketch of this metric appears below. The same evaluation style is used for both object detection and instance segmentation. Notice however that in the case of instance segmentation, the overlap is based on segmented regions. \noindent \textbf{Baselines:} In this regime, we compare our approach against no pretraining, uniform sampling, and pretraining on the whole server dataset (\ie, MS-COCO). In all cases, we initialize with ImageNet pretrained weights, as they are widely available and this has become a common practice. \noindent \textbf{Implementation Details:} \textbf{Client.} We use Mask-RCNN~\cite{He2017MaskR} with a ResNet50-FPN backbone detection head as the client's network. After obtaining a subset of ${\mathcal{S}}$, the client pre-trains a network on the selected subset and uses the pre-trained model as initialization for fine-tuning on the client (target) dataset. For object detection, we pre-train with a 681-class (80 classes from COCO, 601 classes from OpenImages) detection head using bounding box labels. For instance segmentation, we pre-train with an 80-class (for COCO) or 350-class (for OpenImages) detection head using object mask labels. \textbf{Server.} For all self-supervised experts, we use \textit{ResNet18}~\cite{He2015DeepRL}, and train our models to predict image rotations. MS-COCO and OpenImages are partitioned into $K = 6$ and $K=50$ experts, respectively. \vspace{-4mm} \subsubsection{Qualitative and Quantitative Results} \vspace{-2mm} \noindent \textbf{Object Detection:} Table~\ref{combined-server-bbox} reports the average precision at various IoU thresholds of the client's network pre-trained using data selected with different budgets and methods. First, we see a general trend that pre-training the network on sampled detection data helps performance when fine-tuning on smaller client detection datasets, compared to fine-tuning the network from ImageNet initialization. By pre-training on 90K images from COCO+OpenImages, we observe a 1-5\% gain in AP at 0.5 IoU across all 3 client (target) datasets. This result is consistent with~\cite{Li2019AnAO}, which suggests that a pre-training task other than classification is beneficial for improving transfer performance on localization tasks. Next, we see that under the same budget of 90K/180K images from the server, pre-training with data selected by NDS outperforms the baseline which uses images randomly sampled from ${\mathcal{S}}$ for all client datasets. \noindent \textbf{Instance Segmentation:} Table~\ref{maskrcnn-seg} reports the instance segmentation performance when sampling 23K, 47K, and 59K images from COCO for pre-training for Cityscapes. We can see that pre-training using subsets selected by NDS is 2-3\% better than the uniform sampling baseline. Furthermore, using 40\% (47K/118K) or 50\% (59K/118K) of the images from COCO yields performance comparable to (or better than) using the entire 100\% (118K) of the data. Table~\ref{maskrcnn-seg-openimages} shows the results of sampling 118K and 200K images from the OpenImages dataset as our server dataset.
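For concreteness, the COCO-style metric above can be sketched as follows. This is a minimal illustration in our own notation (the function names are ours), not the actual COCO evaluation toolkit, which additionally handles detection matching, confidence scores, and per-category averaging.

```python
# Minimal sketch of the COCO-style IoU evaluation (our own illustration,
# not the COCO toolkit). Boxes are axis-aligned (x1, y1, x2, y2).
import numpy as np

def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct at threshold t if IoU >= t; the headline AP
# averages over the ten thresholds 0.50, 0.55, ..., 0.95.
thresholds = np.linspace(0.50, 0.95, 10)
iou = box_iou((0, 0, 10, 10), (2, 0, 12, 10))       # toy example: IoU = 2/3
hit_rate = np.mean([iou >= t for t in thresholds])  # 0.4 for this pair
```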
\noindent\textbf{Qualitative Results:} Figure~\ref{fig:modanet-visualize} shows qualitative results on \textsc{miniModaNet} from detectors pre-trained from ImageNet initialization, from uniformly sampled images from ${\mathcal{S}}$, and from images sampled using NDS. In the cases shown, the network pre-trained using the data recommended by NDS shows better localization ability, and is able to make more accurate predictions. \subsection{Support for Diverse Clients Same Task} For completeness, and in order to compare to stronger baselines that are limited to classification tasks, we also quantitatively evaluate the performance of NDS in the same-client-same-task regime. In this case, the task is set to be classification and the server indexes the Downsampled ImageNet~\cite{chrabaszcz2017downsampled} dataset. This is a variant of ImageNet~\cite{imagenet_cvpr09} resized to 32$\times$32. In this case, we use $K=10$ experts. \noindent \textbf{Client's Datasets:} We experiment with several small classification datasets. Specifically, we use Stanford Dogs~\cite{KhoslaYaoJayadevaprakashFeiFei_FGVC2011}, Stanford Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013}, Oxford-IIIT Pets~\cite{parkhi12a}, Flowers 102~\cite{Nilsback08}, and CUB200 Birds~\cite{WahCUB_200_2011} as client datasets. \noindent \textbf{Implementation Details:} We use ResNet18~\cite{He2015DeepRL} as our client's network architecture, and an input size of $32 \times 32$ during training. Once subsets of server data are selected, we pre-train on the selected subset and evaluate the performance by fine-tuning on the client (target) datasets. \noindent\textbf{Comparison to data selection methods:} Cui \etal~\cite{Cui2018LargeSF} and Ngiam \etal~\cite{ngiam2018domain} recently proposed data selection methods for improving transfer learning for classification tasks. In this restricted regime, we can compare to these methods. Specifically, we compare our NDS with~\cite{ngiam2018domain}, where they sample data based on the probability over source dataset classes computed by pseudo-labeling the target dataset with a classifier trained on the source dataset. We also create a baseline KNN by adapting the method of Cui \etal~\cite{Cui2018LargeSF}. Here, we sample from the server categories whose mean image features are most similar to those of the client data. We emphasize that the previous two approaches are limited to the classification task, and cannot handle diverse tasks. Furthermore, they do not scale to datasets beyond classification, and~\cite{ngiam2018domain} does not scale to a growing {\emph{dataserver}}. Our approach achieves comparable results to~\cite{ngiam2018domain}, and can additionally be applied to source datasets with no classification labels, such as MS-COCO, or even datasets which are not labeled. \begin{figure}[t!] \centering \vspace{-5mm} \begin{minipage}{0.49\linewidth} \includegraphics[width=1\linewidth, trim=20 0 20 0,clip]{figs/scale-plot.png} \end{minipage} \begin{minipage}{0.49\linewidth} \vspace{3mm} \caption{\footnotesize Simulating an incrementally growing {\emph{dataserver}}, and the time required to ``train'' a model to represent the server.
We compare NDS to the baseline of~\cite{ngiam2018domain} (which is limited to classification tasks).} \label{fig:scalability} \end{minipage} \vspace{-5mm} \end{figure} \subsection{Ablation Experiments} \vspace{-1mm} \noindent \textbf{Domain Confusion:} To see how well the performance of the proxy task reflects the domain confusion, we perform an experiment comparing the proxy task performance and $\hat{d}_{\mathcal{A}}(\hat{\mathcal{S}}, {\mathcal{T}})$. To estimate $\hat{d}_{\mathcal{A}}$, we follow the same idea from \cite{NIPS2006_2983,chen2015marginalizing,Ganin2015DomainAdversarialTO} and, for each subset $\hat{\mathcal{S}}$, we estimate the domain confusion. Figure~\ref{figure-domain-confusion} shows the domain confusion vs the proxy task performance using several classification datasets as the target (client) domain. In this plot, the highest average loss corresponds to the subset with the highest domain confusion (\ie, the ${\mathcal{S}}_i$ that is most indistinguishable from the target domain). Notice that this correlates with the expert that gives the highest proxy task performance. \noindent\textbf{Ablation on gating and expert training:} In Table~\ref{classification-result}, we compare different instantiations of our approach on five client classification datasets. For all instantiations, pre-training on our selected subset significantly outperforms pre-training on a randomly selected subset of the same size. Our results in Table~\ref{classification-result} show that, under the same superclass partition, the subsets obtained by sampling according to the transferability measured by self-supervised experts (SP+SS) yield downstream performance similar to sampling according to the transferability measured by the task-specific experts (SP+TS). This suggests that self-supervised training of the experts can successfully be used as a proxy to decide which data points from the source dataset are most useful for the target dataset. \noindent\textbf{Scalability:} Figure~\ref{fig:scalability} analyzes the (simulated) training time required by the server as new datasets are incrementally added to it. We simulate a comparison between~\cite{ngiam2018domain} (which needs to retrain the model on all datasets each time a dataset is added, and thus scales linearly) and NDS (where expert training is only run on the additional dataset). \begin{figure*}[htb!] \begin{center} \addtolength{\tabcolsep}{-5pt} \begin{tabular}{cccc} \includegraphics[height=2.62cm,trim=160 0 150 0, clip]{figs/nds_web_1.png} & \includegraphics[height=2.62cm,trim=0 0 00 0, clip]{figs/nds_web_2.png} & \includegraphics[height=2.62cm,trim=60 0 60 0, clip]{figs/nds_web_3a.png}& \includegraphics[height=2.62cm,trim=00 0 10 0, clip]{figs/nds_web_4.png}\\ {\href{http://aidemos.cs.toronto.edu/nds/}{\footnotesize\color{magenta}{aidemos.cs.toronto.edu/nds/}}} & \small{Dataset Registry} & \small{Adapt Experts} & \small{Download Recommended Data} \end{tabular} \end{center} \vspace{-5mm} \caption{\small Our Neural Data Server web-service.
Note that NDS does not host datasets, but rather links to datasets hosted by the original providers.} \label{fig:web-interface} \vspace{-1mm} \end{figure*} \begin{table*}[t] \footnotesize \resizebox*{\linewidth}{!}{ \addtolength{\tabcolsep}{-1pt} \begin{tabular}{c|c|c|c|c} \toprule Dataset & Images & Classes & Task & Evaluation Metric \\ \hline \hline Downsampled ImageNet~\cite{chrabaszcz2017downsampled} & 1281167 & 1000 & classification & - \\ OpenImages~\cite{OpenImages} & 1743042 & 601(bbox) / 300(mask) & detection & - \\ COCO~\cite{Lin2014MicrosoftCC} & 118287 & 80 & detection & - \\ \hline \hline VOC2007~\cite{Everingham2009ThePV} & 5011(trainval) / 4962(test) & 20 & detection & mAP \\ miniModaNet~\cite{Zheng2018ModaNetAL} & 1000(train) / 1000(val) & 13 & detection & mAP \\ Cityscapes~\cite{Cordts2016TheCD} & 2975(train) / 500(val) & 8 & detection & mAP \\ \hline \hline Stanford Dogs~\cite{KhoslaYaoJayadevaprakashFeiFei_FGVC2011} & 12000(train) / 8580(val) & 120 & classification & Top-1 \\ Stanford Cars~\cite{KrauseStarkDengFei-Fei_3DRR2013} & 8144(train) / 8041(val) & 196 & classification & Top-1 \\ Oxford-IIIT Pets~\cite{parkhi12a} & 3680(train) / 3369(val) & 37 & classification & Top-1 \\ Flowers 102~\cite{Nilsback08} & 2040(train) / 6149(val) & 102 & classification & Top-1 \\ CUB200 Birds~\cite{WahCUB_200_2011} & 5994(train) / 5794(val) & 200 & classification & Top-1 \\ \bottomrule \end{tabular}% } \vspace{-3mm} \caption{\small Summary of the number of images, categories, and evaluation metrics for the datasets used in our experiments. We used 10 datasets (3 server datasets and 7 client datasets) to evaluate NDS.} \vspace{-2mm} \label{dataset-stats} \end{table*} \begin{table*}[t] \vspace{0mm} \small \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-1pt} \begin{tabular}{c|c|cc|cc|cccccccc} \toprule Data (\# Images) & Method & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP$ & $AP_{50}$ & car & truck & rider & bicycle & person & bus & mcycle & train \\ \hline \hline 0 & ImageNet Initialization & 36.2 & 62.3 & 32.0 & 57.6 & 49.9 & 30.8 & 23.2 & 17.1 & 30.0 & 52.4 & 17.9 & 35.2 \\ \hline \hline \multirow{2}{*}{23K} & Uniform Sampling & 38.1 & 64.9 & 34.3 & 60.0 & 50.0 & 34.2 & 24.7 & 19.4 & 32.8 & 52.0 & 18.9 & 42.1\\ & \textbf{NDS} & 40.7 & 66.0 & 36.1 & 61.0 & 51.3 & 35.4 & 25.9 & 20.4 & 33.9 & 56.9 & 20.8 & 44.0 \\ \hline \hline \multirow{2}{*}{47K} & Uniform Sampling & 39.8 & 65.5 & 34.4 & 60.0 & 50.7 & 31.8 & 25.4 & 18.3 & 33.3 & 55.2 & 21.2 & 38.9 \\ & \textbf{NDS} & 42.2 & 68.1 & 36.7 & 62.3 & 51.8 & 36.9 & 26.4 & 19.8 & 33.8 & 59.2 & 22.1 & 44.0 \\ \hline \hline \multirow{2}{*}{59K} & Uniform Sampling & 39.5 & 64.9 & 34.9 & 60.4 & 50.8 & 34.8 & 26.3 & 18.9 & 33.2 & 55.5 & 20.8 & 38.7 \\ & \textbf{NDS} & 41.7 & 66.6 & 36.7 & 61.9 & 51.7 & 37.2 & 26.9 & 19.6 & 34.2 & 56.7 & 22.5 & 44.5 \\ \hline \hline 118K & Full COCO & 41.8 & 66.5 & 36.5 & 62.3 & 51.5 & 37.2 & 26.6 & 20.0 & 34.0 & 56.0 & 22.3 & 44.2 \\ \bottomrule \end{tabular}% } \vspace{-3mm} \caption{\small Transfer to instance segmentation with Mask R-CNN~\cite{He2017MaskR} on Cityscapes by selecting images from COCO.} \label{table:maskrcnn-seg-coco} \vspace{2mm} \small \resizebox{\linewidth}{!}{% \addtolength{\tabcolsep}{-1pt} \begin{tabular}{c|c|cc|cc|cccccccc} \toprule Data (\# Images) & Method & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP$ & $AP_{50}$ & car & truck & rider & bicycle & person & bus & mcycle & train \\ \hline \hline 0 & ImageNet Initialization & 36.2 & 62.3 & 32.0 & 57.6 & 49.9 & 30.8 & 23.2 & 17.1 & 30.0 & 52.4 & 17.9 & 35.2 \\
\hline \hline \multirow{2}{*}{118K} & Uniform Sampling & 37.5 & 62.5 & 32.8 & 57.2 & 49.6 & 33.2 & 23.3 & 18.0 & 30.8 & 52.9 & 17.4 & 37.1 \\ & \textbf{NDS} & 39.9 & 65.1 & 35.1 & 59.8 & 51.6 & 36.7 & 24.2 & 18.3 & 32.4 & 56.4 & 18.0 & 42.8 \\ \hline \hline \multirow{2}{*}{200K} & Uniform Sampling & 37.8 & 63.1 & 32.9 & 57.8 & 49.7 & 31.7 & 23.8 & 17.8 & 31.0 & 51.8 & 18.4 & 38.8 \\ & \textbf{NDS} & 40.7 & 65.8 & 36.1 & 61.2 & 51.4 & 38.2 & 24.2 & 17.9 & 32.3 & 57.8 & 19.7 & 47.3 \\ \bottomrule \end{tabular}% } \vspace{-3mm} \caption{\small Transfer to instance segmentation with Mask R-CNN~\cite{He2017MaskR} on Cityscapes by selecting images from OpenImages.} \label{table:maskrcnn-seg-openimages} \end{table*} \noindent \textbf{Limitations and Discussion:} A limitation of our method is that the annotation quality/statistics of the {\emph{dataserver}} datasets are not considered. This is shown in our instance segmentation experiment, where the gains from pre-training on images sampled from OpenImages are smaller than those from pre-training on MS-COCO. This is likely due to the fact that MS-COCO has on average $\sim$7 instance annotations per image, while OpenImages contains many images without mask annotations or with at most $\sim$2 instance annotations per image. OpenImages has further been labeled semi-automatically, and thus in many cases the annotations are noisy. \section{Introduction} \vspace{-1.5mm} In recent years, we have seen an explosive growth of the number and the variety of computer vision applications. These range from generic image classification tasks to surveillance, sports analytics, clothing recommendation, early disease detection, and mapping, among others. Yet, we are only at the beginning of our exploration of what is possible to achieve with Deep Learning. \begin{figure}[t] \vspace{-1mm} \begin{center} \includegraphics[width=\linewidth]{figs/NeuralDataServer2.pdf} \end{center} \vspace{-6mm} \caption{\footnotesize {\bf Neural Data Server}: Search engine for finding relevant transfer learning data for the user's target domain. In NDS, a {\emph{dataserver}} indexes several popular image datasets, represents them with a mixture-of-experts model, and uses the {\emph{client}}'s target data to determine the most relevant samples. Note that NDS {\bf indexes} available public datasets and {\bf does not host} them. Data recommendation is done by providing {\bf links} to relevant examples. } \label{fig:NDS} \vspace{-4mm} \end{figure} One of the critical components of the new age of computer vision applications is the need for labeled data. To achieve high performance, typically a massive amount of data needs to be used to train deep learning models. Transfer learning provides a promising approach to reduce the need for large-scale labeled data for each target application. In transfer learning, a neural network is pretrained~\cite{deeplab, He2017MaskR, Shelhamer2017FCN} on existing large generic datasets and then fine-tuned in the target domain. While transfer learning is a well-studied concept that has been proven successful in many applications~\cite{deeplab, He2017MaskR, Shelhamer2017FCN}, deciding which data to use for pretraining the model is an open research question that has received surprisingly little attention in the literature. We argue that this is a crucial problem to be answered in light of the ever increasing scale of the available datasets.
To emphasize our point, recent efforts on curating computer vision benchmarks\footnote{Websites listing CV datasets: {\scriptsize{\url{https://www.visualdata.io/}}}, {\scriptsize{\url{https://pytorch.org/docs/stable/torchvision/datasets.html}}}, {\scriptsize{\url{https://datasetsearch.research.google.com/}}}} list over 400 public datasets, ranging from generic imagery, faces, and fashion photos, to self-driving data. Furthermore, the dataset sizes are increasing significantly: the recently released OpenImages~\cite{OpenImages} contains 9M labeled images (600GB in size) and is 20 times larger than its predecessor MS-COCO~\cite{Lin2014MicrosoftCC} (330K images, 30GB). The video benchmark YouTube8m~\cite{AbuElHaija2016YouTube8MAL} (1.9B frames, 1.5TB) is 800$\times$ larger than Davis~\cite{Caelles2018The2D} (10k frames, 1.8GB), while the recently released autonomous driving dataset nuScenes~\cite{Caesar2019nuScenesAM} contains 100$\times$ as many frames as KITTI~\cite{Geiger2012CVPR}, which was released in 2012. \begin{figure*}[t] \vspace{-1.5mm} \includegraphics[width=\textwidth,height=60mm]{figs/sample-visualize.pdf} \vspace{-8.5mm} \caption{\footnotesize{Examples of images from the {\emph{dataserver}} (COCO+OpenImages) recommended to each {\emph{client}} dataset by our Neural Data Server.}} \label{example-visualize} \vspace{-5mm} \end{figure*} It is evident that downloading and storing these datasets locally is already cumbersome and expensive. This is further amplified by the computational resources required for training neural networks on this massive amount of data. The latter is an even more pronounced issue in research, where network architectures are continuously being developed and many may need to be tested. Furthermore, for commercial applications, data licensing may be another financial issue to consider. Recent works~\cite{he2018rethinking, ngiam2018domain} have also shown that there is not a ``the more the better'' relationship between the amount of pretraining data and the downstream task performance. Instead, they showed that selecting an appropriate subset of the data is important to achieve good performance on the target dataset. In this paper, we introduce Neural Data Server (NDS), a large-scale search engine for finding the most useful transfer learning data for the target domain. One can imagine NDS as a web-service where a centralized server, referred to as the {\emph{dataserver}}, recommends data to {\emph{client}}s (Fig~\ref{fig:NDS}). A {\emph{client}} is an end-user with an A.I. application in mind, who has a small set of labeled target data. We assume that each client is only interested in downloading a subset of the server-indexed data that is most relevant to the client's target domain, limited to the user-specified budget (maximum desired size). We further require the transaction between the {\emph{dataserver}} and the {\emph{client}} to be both computationally efficient and privacy-preserving. This means the client's data should not be visible to the server. We also aim to minimize the amount of the {\emph{dataserver}}'s online computation per client, as it may possibly serve many clients in parallel. We index several popular image datasets and represent them using a mixture-of-experts (MoE) model, which we store on the {\emph{dataserver}}. The MoE is significantly smaller in size than the data itself, and is used to probe the usefulness of the data in the {\emph{client}}'s target domain.
In particular, we determine the accuracy of each expert on the target dataset, and recommend data to the {\emph{client}} based on these accuracies. We experimentally show significant performance improvements on several downstream tasks and domains compared to baselines. Furthermore, we show that with only 20\% of the pretraining data, our method achieves comparable or better performance than pretraining on the entire {\emph{dataserver}}-indexed datasets. We obtain significant improvements over ImageNet pretraining by downloading only $26$ GB of the server's data in cases where training on the entire {\emph{dataserver}} ($538$ GB) would take weeks. Our Neural Data Server will be made available as a web-service with the aim of improving performance and reducing the development cost of the end-users' A.I. applications. \section{Neural Data Server} \vspace{-1.0mm} Neural Data Server (NDS) is a search engine that aims to recommend transfer learning data. NDS consists of a {\emph{dataserver}} which has access to massive source dataset(s), and aims to suggest the most relevant data samples to a {\emph{client}}. A {\emph{client}} is an end-user who wants a budget-constrained amount of data to improve the performance of her/his model in the target domain in a transfer learning scenario. We note that the {\emph{dataserver}} does not host the data, and thus its recommendations are provided as a list of URLs to data samples hosted by the original datasets' providers. The {\emph{dataserver}}'s indexed datasets may or may not be completely labeled, and the types of labels (\eg, segmentation masks, detection boxes) across data samples may vary. The {\emph{client}}'s target dataset is considered to only have a small set of labeled examples, and the type of labels may or may not be the same as the labels in the {\emph{dataserver}}'s dataset(s). The main challenge lies in requiring the {\emph{dataserver}}-{\emph{client}} transactions to have low computational overhead. As in any search engine that serves information to possibly numerous users, we want the online computation performed by the {\emph{dataserver}} to be minimal. Thus we defer most of the computation to the {\emph{client}}'s side, while still aiming for this process to be fast. Furthermore, the transactions should ideally be privacy-preserving for the {\emph{client}}, \ie, neither the client's data nor the model's architecture should be accessible to the server, since the {\emph{client}} may have sensitive information such as hospital records or proprietary technology. In NDS, we represent the {\emph{dataserver}}'s data using a mixture-of-experts (MoE) trained on a self-supervised task. The MoE naturally partitions the indexed datasets into different subsets and produces classifiers whose weights encode the representation of each of these subsets. The experts are trained offline and hosted on the {\emph{dataserver}} for online transactions with the clients. In particular, the experts are sent to each client and used as a proxy to determine the importance of the {\emph{dataserver}}'s data samples for the {\emph{client}}'s target domain. To compute importance, the experts are fast-adapted on the client's dataset, and their accuracy is computed on a simple self-supervised task. We experimentally validate that the accuracy of each adapted expert indicates the usefulness of the data partition used to train the expert. The {\emph{dataserver}} then uses these accuracies to construct the final list of data samples that are relevant for the {\emph{client}}.
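At an interface level, one transaction can be sketched as follows. This is our own pseudocode-style illustration under the description above; all names (\texttt{download\_experts}, \texttt{proxy\_task\_accuracy}, \texttt{recommend}) are hypothetical and not the actual NDS web-service API.

```python
# High-level sketch of one dataserver-client transaction (all names are
# ours and hypothetical; this is not the actual NDS web-service API).

def client_transaction(server, client_images, budget):
    """Run the NDS protocol from the client's point of view."""
    experts = server.download_experts()      # small models, ~2 GB in total
    # FastAdapt: score each expert on the client's data via the shared
    # self-supervised proxy task; raw client data never leaves the client.
    scores = [expert.proxy_task_accuracy(client_images) for expert in experts]
    # Only these K accuracy scores are uploaded, never images or model code.
    return server.recommend(scores, budget)  # budget-constrained link list
```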
Figure~\ref{method-fig} provides an illustration while Algorithm~\ref{algo-overview} summarizes our NDS. In Section~\ref{problem-def} we formalize our problem. In Section~\ref{server} we describe how we train our mixture-of-experts model and analyze the different choices of representation learning algorithms for the experts ({\emph{dataserver}} side). In Section~\ref{server-client-trans} we propose how to exploit the experts' performance in the {\emph{client}}'s target domain for data selection. \subsection{Problem Definition} \label{problem-def} Let ${\mathbb{X}}$ denote the input space (images in this paper), and ${\mathbb{Y}}_a$ a set of labels for a given task $a$. Generally, we will assume that multiple tasks are available, each associated with a different set of labels, and denote these by ${\mathbb{Y}}$. Consider also two different distributions over ${\mathbb{X}} \times {\mathbb{Y}}$, called the source domain $\mathcal{D}_s$ and target domain $\mathcal{D}_t$. Let ${\mathcal{S}}$ (\emph{dataserver}) and ${\mathcal{T}}$ (\emph{client}) be two sample sets drawn i.i.d.\ from $\mathcal{D}_s$ and $\mathcal{D}_t$, respectively. We assume that $|{\mathcal{S}}|\gg|{\mathcal{T}}|$. Our problem then consists in finding the subset ${\mathcal{S}}_* \in \mathcal{P}({\mathcal{S}})$, where $\mathcal{P}({\mathcal{S}})$ is the power set of ${\mathcal{S}}$, such that ${\mathcal{S}}_* \cup {\mathcal{T}}$ minimizes the risk of a model $h$ on the target domain: \vspace{-2.0mm} \begin{align} \label{eqn_intro} {\mathcal{S}}_* = \argmin_{\hat{\mathcal{S}} \in \mathcal{P}({\mathcal{S}})}\ \mathbb{E}_{({\bm{x}}, \hat {\bm{y}}) \sim \mathcal{D}_t} [ \mathcal{L}(h_{{\hat{\mathcal{S}} \cup {\mathcal{T}}}}({\bm{x}}), \hat {\bm{y}})] \end{align} \vspace{-2.0mm} \noindent Here, $h_{{\hat{\mathcal{S}} \cup {\mathcal{T}}}}$ indicates that $h$ is trained on the union of the data $\hat{\mathcal{S}}$ and ${\mathcal{T}}$. Intuitively, we are trying to find the subset of data from ${\mathcal{S}}$ that helps to improve the performance of the model on the target dataset. However, what makes our problem particularly challenging and unique is that we are restricting the visibility of the data between the {\emph{dataserver}} and the {\emph{client}}. This means that fetching the whole sample set ${\mathcal{S}}$ is prohibitive for the client, as is uploading its own dataset to the server. We tackle this problem by representing the {\emph{dataserver}}'s indexed dataset(s) with a set of classifiers that are agnostic of the client ({Section~\ref{server}}), and use these to optimize~\eqref{eqn_intro} on the \emph{client}'s side ({Section~\ref{server-client-trans}}). \subsection{Dataserver} We now discuss our representation of the {\emph{dataserver}}'s indexed datasets. This representation is pre-computed offline and stored on the {\emph{dataserver}}. \vspace{-3mm} \label{server} \subsubsection{Dataset Representation with Mixture-of-Experts} We represent the {\emph{dataserver}}'s data $\mathcal S$ using the mixture-of-experts model~\cite{Jacobs1991AdaptiveMO}. In MoE, one makes a prediction as: \vspace{-3mm} \begin{align} {\bm{y}} ({\bm{x}})= \sum_{i=1}^K g_{\theta,i}({\bm{x}}) e_{\theta_i}({\bm{x}}) \end{align} \vspace{-4mm} Here, ${\bm{g}}_\theta$ denotes a gating function ($\sum_{i=1}^K g_{\theta,i}(.)=1$), $e_{\theta_i}$ denotes the $i$-th expert model with learnable weights $\theta_i$, ${\bm{x}}$ an input image, and $K$ corresponds to the number of experts.
One can think of the gating function as softly assigning data points to each of the experts, which try to make the best guess on their assigned data points. The MoE model is trained by maximum-likelihood estimation (MLE) on an objective $\mathcal{L}$: \vspace{-3mm} \begin{align} \boldsymbol{\theta} = \argmin_{\boldsymbol{\theta}} \mathbb{E}_{({\bm{x}},\hat {\bm{y}})\sim {\mathcal{S}}} [ \mathcal{L} ({\bm{y}}({\bm{x}}),\hat {\bm{y}}) ] \end{align} \vspace{-2.5mm} We discuss the choices for the objective $\mathcal{L}$ in Sec~\ref{sec:experts}, dealing with the fact that the labels across the source datasets may be defined for different tasks. While the MoE objective allows end-to-end training, the computational cost of doing so on a massive dataset is extremely high, particularly when $K$ is considerably large (we need to backpropagate gradients to every expert on every training example). A straightforward way to alleviate this issue is to associate each expert with a local cluster defined by a hard gating, as in~\cite{Hinton2015DistillingTK, gross2017hard}. In practice, we define a gating function $g$ that partitions the dataset into mutually exclusive subsets $\mathcal S_i$, and train one expert per subset. This makes training easy to parallelize, as each expert is trained independently on its subset of the data. Furthermore, this allows new datasets to be easily added to the {\emph{dataserver}} by training additional experts on them and adding these to the {\emph{dataserver}}. This avoids retraining the MoE over the full indexed set of datasets. In our work, we use two simple partitioning schemes to determine the gating: (1) superclass partition, and (2) unsupervised partition. For superclass partition (1), we represent each class $c$ in the source dataset as the mean $f_c$ of the image features for category $c$, and perform $k$-means clustering over $\{f_c\}$. This gives a partitioning where each cluster is a superclass containing a subset of similar categories. This partitioning scheme only applies to datasets with class supervision. For unsupervised partitioning (2), we partition the source dataset using $k$-means clustering on the image features. In both cases, the image features are obtained from a pretrained neural network (\ie, features extracted from the penultimate layer of a network pre-trained on ImageNet).
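The following is a minimal sketch of these two partitioning schemes. The code and function names are ours, written under the stated assumptions, and are not the authors' implementation.

```python
# Minimal sketch of the two gating/partitioning schemes (our own code, not
# the authors' implementation). Assumes `features` is an (N, d) array of
# penultimate-layer features from an ImageNet-pretrained network, and
# `labels` holds the N class ids of the source dataset.
import numpy as np
from sklearn.cluster import KMeans

def superclass_partition(features, labels, k):
    """Scheme (1): k-means over per-class mean features f_c -> superclasses."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    class_means = np.stack([features[labels == c].mean(axis=0)
                            for c in classes])
    cluster_of_class = KMeans(n_clusters=k).fit_predict(class_means)
    class_to_cluster = dict(zip(classes, cluster_of_class))
    # Each image inherits the cluster of its class: a hard gating g(x).
    return np.array([class_to_cluster[y] for y in labels])

def unsupervised_partition(features, k):
    """Scheme (2): k-means directly on the image features (no labels)."""
    return KMeans(n_clusters=k).fit_predict(features)
```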
\vspace{-3mm} \subsubsection{Training the Experts} \label{sec:experts} We discuss two different scenarios to train the experts. In the simplified scenario, the tasks defined for both the {\emph{dataserver}}'s and {\emph{client}}'s datasets are the same, \eg, classification. In this case, we simply train a classifier for the task for each subset of the data in $\mathcal S$. We next discuss a more challenging case where the tasks across datasets differ. Ideally, we would like to learn a representation that can generalize to a variety of downstream tasks and can therefore be used in a task-agnostic fashion. To this end, we use a self-supervised method to train the MoE. In self-supervision, one leverages a simple surrogate task that can be used to learn a meaningful representation. Furthermore, this does not require any labels to train the experts, which means that the {\emph{dataserver}}'s dataset may or may not be labeled beforehand. This is useful if the client desires to obtain raw data and label the relevant subset on its own. To be specific, we select classifying image rotation as the task for self-supervision, as in~\cite{gidaris2018unsupervised}, which showed this to be a simple yet powerful proxy for representation learning. Formally, given an image ${\bm{x}}$, we define its corresponding self-supervised label ${\bm{y}}$ by performing a set of geometric transformations $\{r({\bm{x}}, j)\}_{j=0}^{3}$ on ${\bm{x}}$, where $r$ is an image rotation operator and $j$ defines a particular rotation by one of the predefined angles $\{0, 90, 180, 270\}$. We then minimize the following learning objective for the experts: \vspace{-3mm} \begin{align} {\mathcal{L}}(\theta_i) = -\sum_{{\bm{x}}\in \mathcal S_i} \sum_{j=0}^{3} \log e_{{\theta_i}}(r({\bm{x}},j))_j \end{align} \vspace{-4.5mm} \noindent Here, index $j$ in $e(.)_j$ denotes the output value for class $j$. \subsection{Dataserver-Client Transactions} In this section, we describe the transactions between the {\emph{dataserver}} and the {\emph{client}} that determine the relevant subset of the server's data. The {\emph{client}} first downloads the experts in order to measure their performance on the {\emph{client}}'s dataset. If the tasks are similar, we perform a quick adaptation of the experts on the {\emph{client}}'s side. Otherwise, we evaluate the performance of the experts on the {\emph{client}}'s data using the surrogate task (\ie, image rotation) (Section~\ref{server-client-trans}). The performance of each expert is sent back to the {\emph{dataserver}}, which uses this information as a proxy to determine which data points are relevant to the {\emph{client}} (Section~\ref{selection}). We describe these steps in more detail in the following subsections. \vspace{-3mm} \subsubsection{\textsc{FastAdapt} to a Target Dataset (on Client)} \label{server-client-trans} \vspace{-1mm} \paragraph{Single Task on Server and Client:} We first discuss the case where the dataset task is the same for both the {\emph{client}} and the {\emph{dataserver}}, \eg, classification. While the task may be the same, the label set may not be (classes may differ across domains). An intuitive way to adapt the experts is to remove their classification head that was trained on the server, and learn a small decoder network on top of the experts' penultimate representations on the client's dataset, as in~\cite{Zamir2018TaskonomyDT}. For classification tasks, we learn a simple linear layer on top of each pre-trained expert's representation for a few epochs. We then evaluate the target task's performance on a held-out validation set using the adapted experts. We denote the accuracy for each adapted expert $\hat e_{\theta_i}$ as $z_i$. \vspace{-3mm} \paragraph{Diverse Tasks on Server and Client:} To generalize to unseen tasks, and to further handle cases where labels are not available on the {\emph{client}}'s side, we propose to evaluate, on the {\emph{client}}'s data, the performance of the common self-supervised task used to train the experts on the {\emph{dataserver}}'s data. Intuitively, if the expert performs well on the self-supervised task on the target dataset, then the data it was trained on is likely relevant for the {\emph{client}}.
Specifically, we use the self-supervised experts trained to predict image rotation, and evaluate the proxy task performance (accuracy) of predicting image rotation angles on the target images: \vspace{-3mm} \begin{align} z_i = \frac{1}{4 |{\mathcal{T}}|} \sum_{{\bm{x}} \in {\mathcal{T}} } \sum_{j=0}^{3} \mathbbm{1}\big(\argmax_{k} [e_{{\theta_i}} (r({\bm{x}},j))_k] = j\big) \end{align} \vspace{-3.0mm} \noindent Here, index $k$ in $e(.)_k$ denotes the output value for class $k$. Note that in this case we do not adapt the experts on the target dataset (we only perform inference). \vspace{-2mm} \subsubsection{Data Selection (on Dataserver)} \label{selection} We now aim to assign a weighting to each of the data points in the source domain ${\mathcal{S}}$ that reflects how much the source data contributes to the transfer learning performance. The accuracies ${\bm{z}}$ from the client's \textsc{FastAdapt} step are normalized to $[0, 1]$ and fed into a \textit{softmax} function with temperature $T=0.1$. These are then used as importance weights $w_i$ estimating how relevant the representation learned by a particular expert is to the target task. We leverage this information to weigh the individual data points ${\bm{x}}$. More specifically, each source data point ${\bm{x}}$ is assigned a probabilistic weighting: \vspace{-3mm} \begin{align} \pi({\bm{x}}) = \sum_{i=1}^{K} \ w_i \ g_{\theta,i}({\bm{x}}) \frac{1}{|S_i|} \end{align} \vspace{-3.5mm} \noindent Here, $|S_i|$ represents the size of the subset that expert $e_{\theta_i}$ was trained on. Intuitively, we are weighting the set of images associated with the $i$-th expert and uniformly sampling from it. We construct our dataset by sampling examples from ${\mathcal{S}}$ at a rate according to $\bm{\pi}={[\pi_{{\bm{x}}_1},\pi_{{\bm{x}}_2},\hdots,\pi_{{\bm{x}}_n}]}^T$.
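These two steps can be summarized in a short sketch. The code below is our own illustration (each expert is assumed to be a callable returning four rotation logits), not the NDS implementation.

```python
# Minimal sketch of the proxy scoring z_i and the selection weights (our own
# illustration, not the NDS implementation). Each expert is assumed to be a
# callable returning 4 rotation logits; `images` iterates over client images.
import numpy as np

def rotation_accuracy(expert, images):
    """z_i: fraction of (image, rotation) pairs whose angle is recovered."""
    correct = total = 0
    for x in images:
        for j in range(4):                     # rotations 0, 90, 180, 270
            logits = expert(np.rot90(x, k=j))  # inference only, no adaptation
            correct += int(np.argmax(logits) == j)
            total += 1
    return correct / total

def per_point_weights(z, subset_sizes, temperature=0.1):
    """Importance weights w_i and sampling weight pi(x) for each x in S_i."""
    z = np.asarray(z, dtype=float)
    z = (z - z.min()) / (z.max() - z.min() + 1e-12)  # normalize to [0, 1]
    w = np.exp(z / temperature)
    w = w / w.sum()                                   # softmax with T = 0.1
    return w / np.asarray(subset_sizes)               # pi(x) = w_i / |S_i|
```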
\subsection{Relation to Domain Adaptation} If we assume that the client and server tasks are the same, then our problem can be interpreted as domain adaptation in each of the subsets $\hat{\mathcal{S}} \in \mathcal{P}({\mathcal{S}})$, and the following generalization bound from~\cite{BenDavid2009ATO} can be used: \vspace{-2.5mm} \begin{align} \label{eqn_bound} \varepsilon_{\mathcal{T}}(h) < \varepsilon_{\hat{\mathcal{S}}}(h) + \frac{1}{2} d_{\mathcal{H} \Delta \mathcal{H}}(\hat{\mathcal{S}}, \mathcal{T}) \end{align} \vspace{-2.5mm} \noindent where $\varepsilon$ represents the risk of a hypothesis function $h \in \mathcal{H}$ and $d_{\mathcal{H} \Delta \mathcal{H}}$ is the ${\mathcal{H} \Delta \mathcal{H}}$ divergence~\cite{BenDavid2009ATO}, which relies on the capacity of $\mathcal{H}$ to distinguish between data points from $\hat{\mathcal{S}}$ and ${\mathcal{T}}$, respectively. Let us further assume that the risk of the hypothesis function $h$ on any subset $\hat{\mathcal{S}}$ is similar, such that $\varepsilon_{\hat{\mathcal{S}}}(h) \approx \varepsilon(h)\ \ \forall \hat{\mathcal{S}} \in \mathcal{P}({\mathcal{S}})$. Under this assumption, minimizing equation~\ref{eqn_intro} is equivalent to finding the subset ${\mathcal{S}}_*$ that minimizes the divergence with respect to ${\mathcal{T}}$. Formally, \vspace{-2.0mm} \begin{align} {\mathcal{S}}_* = \argmin_{\hat{\mathcal{S}}}\ d_{\mathcal{H} \Delta \mathcal{H}}(\hat{\mathcal{S}},{\mathcal{T}}) \end{align} \vspace{-3.5mm} \noindent In practice, it is hard to compute $d_{\mathcal{H} \Delta \mathcal{H}}$, and it is often approximated by a \emph{proxy $\mathcal{A}$-distance}~\cite{NIPS2006_2983,chen2015marginalizing,Ganin2015DomainAdversarialTO}: a classifier is trained to discriminate between the two domains, and its risk $\varepsilon$ is used to approximate the second term of equation~\ref{eqn_bound}: \vspace{-2.0mm} \begin{align} \hat{d}_{\mathcal{H} \Delta \mathcal{H}} \approx \hat{d}_{\mathcal{A}} \approx 2(1-2\varepsilon) \end{align} \vspace{-2.0mm} \noindent Note that doing so would require access to ${\mathcal{S}}$ and ${\mathcal{T}}$ on at least one of the two sides (\ie, to train the new discriminative classifier), which is prohibitive in our scenario. In our case, we compute the domain confusion between $\hat{\mathcal{S}}$ and ${\mathcal{T}}$ by evaluating the performance of expert $e_i$ on the target domain. We argue that this proxy task performance (or error rate) is an appropriate proxy distance that serves the same purpose but does not violate the data visibility condition. Intuitively, if the features learned on the subset cannot be discriminated from features on the target domain, the domain confusion is maximized. We empirically show the correlation between the domain classifier and our proposed proxy task performance in our experiments. \section{Related Work} \vspace{-1.5mm} \noindent\textbf{Transfer Learning.} The success of deep learning and the difficulty of collecting large-scale datasets has recently brought significant attention to the long-standing history of transfer learning, cross-domain annotation and domain adaptation~\cite{pan2009survey,csurka2017domain, acuna2018efficient, sun2017revisiting, Acuna_2019_CVPR,tremblay2018training}. Specifically in the context of neural networks, fine-tuning a pretrained model on a new dataset is the most common strategy for knowledge transfer. Most literature in this domain analyzes the effect of pretraining on large-scale datasets~\cite{sun2017revisiting,mahajan2018exploring,imagenet_cvpr09} with respect to network architectures, network layers, and training tasks~\cite{Yosinski2014HowTA,Zamir2018TaskonomyDT}. Concurrent with our work, Achille \etal~\cite{Achille2019Task2VecTE} propose a framework for selecting the best pre-trained feature extractor for a new task from a collection of classifiers. In contrast, our work aims to identify the optimal set of data points for pre-training. Works most related to ours are~\cite{Cui2018LargeSF,ngiam2018domain}, which show that pretraining on only relevant examples is important to achieve good performance on fine-grained classification tasks. Specifically, in~\cite{Cui2018LargeSF} the authors use a predefined similarity metric between the source and target categories in order to greedily select the most similar categories from the source dataset to be used for pretraining. The authors of~\cite{ngiam2018domain}, on the other hand, exploit a model pretrained on the source domain to obtain pseudolabels of the target images, and use these to re-weight the source examples. Unlike ours,~\cite{Cui2018LargeSF,ngiam2018domain} are limited to classification tasks, and do not easily scale to a constantly growing datacenter (the model needs to be retrained each time a new dataset is added).
Thus, their approach does not naturally handle our scenario, in which the indexed datasets have diverse sets of tasks and labels, and where the number of indexed datasets may grow over time. \noindent \textbf{Federated Learning.} \cite{McMahan2016FederatedLO, Bonawitz2017PracticalSA} introduce a distributed ML approach with the goal of training a centralized model on decentralized data over a large number of client devices (\eg, mobile phones). Our work shares a similar idea of restricting the visibility of data in a client-server model. However, in our case the representation of the data is centralized ({\emph{dataserver}}) and the clients exploit the transfer learning scenario for their own (decentralized) models. \noindent \textbf{Active and Curriculum Learning.} In active learning~\cite{settles2009active} one searches over unlabeled data to find optimal samples to be labeled by an oracle, while in curriculum learning~\cite{bengio2009curriculum} subsets of data of increasing difficulty are sought during training. In both scenarios, data search is performed at each iteration of training a particular model. Search is typically done by running inference on the data samples with the current snapshot of the model and selecting the examples based on uncertainty-based metrics. Our scenario differs in that we do not have the luxury of running inference with the {\emph{client}}'s model on the massive amount of indexed data, as this would induce a prohibitive computational overhead on the {\emph{dataserver}} per {\emph{client}}. Moreover, we do not assume the {\emph{dataserver}} to have access to the {\emph{client}}'s model: this would require the {\emph{client}}s to share their inference code, which many users may not be willing to do. \noindent \textbf{Learning to Generate Synthetic Images.} Related to NDS are also \cite{ruiz2018learning,Metasim19,tripathi2019learning,mehta2019active}. These approaches aim to bridge the synthetic-vs-real imagery gap by optimizing/searching over the set of parameters of a surrogate function that interfaces with a synthesizer. In NDS, the search has to be done over massive (non-parametric) datasets and, further, the target data cannot be sent to the server side. Our method is also significantly more computationally efficient.
\section{Introduction} Ancestral sequence reconstruction (ASR) is a key task in computational evolutionary biology~\cite{liberles_ancestral_2007}. It consists in inferring a molecular sequence at an ancestral species of a known phylogeny, given descendant sequences at the tips of the tree. Numerous approaches are available for this task. Some are based on statistical models of sequence evolution on a tree, while others rely on combinatorial optimization formulations~\cite{semple_phylogenetics_2003, yang_molecular_2014}. In addition to its many biological applications, ASR has played a key role in elucidating the statistical performance of phylogeny estimation methods~\cite{mossel_impossibility_2003,mossel_phase_2004,roch_toward_2010,roch_phase_2017}. Here we establish a formal connection to sequence alignment. Rigorous analyses of the accuracy of ASR methods have been performed mainly in two asymptotic settings. In phylogenies of arbitrarily large depth, an achievable goal is to infer a sequence that is correlated site-by-site with the true ancestral sequence~\cite{steel_five_1995,ioffe_extremality_1996,mossel_recursive_1998,evans_broadcasting_2000}. In the taxon-rich setting, on the other hand, where the depth of the phylogeny is bounded as the number of taxa increases, consistent estimators are known to exist~\cite{gascuel_inferring_2010,roch_sufficient_2021}. That is, under conditions on the branching of the phylogeny around its root, the correct inference of a single site in the ancestral sequence can be guaranteed as the number of leaves goes to infinity. Most theoretical results in this area are derived under models of sequence evolution by single-site substitutions. More complex models allowing for site insertions and deletions (indels) have also been considered~\cite{andoni_global_2012,ganesh_optimal_2019,fan_statistically_2020}. The star case, also known as trace reconstruction, has been the subject of much recent interest~\cite{holenstein_trace_2008,nazarov_trace_2017,holden_lower_2020,davies_approximate_2021,davies_reconstructing_2019}. See also \cite{THATTE200658,mitrophanov_convergence_2007,daskalakis2013alignment,Allman2015StatisticallyCK,fan2020impossibility} for rigorous analyses of indel models in other contexts, e.g., distance-based phylogeny reconstruction. Indel models are closely related to another important bioinformatics problem, multiple sequence alignment (MSA), in which one attempts to best align a collection of molecular sequences under some mismatch penalty score by inserting gaps. In practice, MSA is a hard problem, especially at large evolutionary distances~\cite{rost_twilight_1999,chang_phylogenetic_2008}. While statistical approaches based on indel models have also been developed~\cite{Lunter2005}, commonly used approaches involve progressively aligning the given sequences up a guide tree, in a way reminiscent of ASR procedures~\cite{ranwez:hal-02535389}. In fact, many trace reconstruction and ASR methods under indels involve partial local alignments of sequences. In this paper, we combine insights from ASR in the taxon-rich setting with the probabilistic analysis of indel models to prove the first (as far as we know) rigorous guarantee for sequence alignment under an indel model on a phylogenetic tree. Our result is somewhat counter-intuitive: we show that perfect pairwise sequence alignment with high probability is in principle possible \emph{at arbitrarily large evolutionary distances}, provided the phylogeny is known and dense enough.
While such a condition may not be satisfied in real datasets, our analysis is a step towards a better theoretical understanding of MSA and its connections to ASR. In a nutshell, we take advantage of the density of the phylogeny to estimate ancestral sequences with high probability along the path between the two leaf sequences of interest, and then reconstruct the history of mutations along the way. For the ASR step, we use a standard phylogenetic method known as parsimony, which seeks to use the smallest number of mutations possible to explain the sequences at the leaves of a phylogeny. Rigorous analyses of parsimony are often challenging and have revealed the intricate, often unintuitive, behavior of the method~\cite{li_more_2008,fischer_maximum_2009,zhang_analyzing_2010,herbst_ancestral_2017,herbst_accuracy_2018}. In our taxon-rich setting, branching process results lead to rigorous guarantees on the ancestral reconstruction. The rest of the paper is organized as follows. In Section~\ref{section:main-results}, we state our main result after introducing some background. The alignment algorithm is presented in Section~\ref{section:alignment}. The proof is comprised of two parts: the ancestral estimation step is analyzed in Section~\ref{section:ancestral} and the alignment step is analyzed in Section~\ref{section:one-mutation}. \section{Background and main result} \label{section:main-results} In this section, we state our main result. First, we introduce the model of sequence evolution we use as well as the multiple sequence alignment problem. \subsection{Definitions} We consider the TKF91 insertion-deletion (indel) sequence evolution model. Technically, we use a slight variant of the TKF91 model defined in~\cite{Thorne1991}, where we only allow an alphabet with two letters, $0$ and $1$, to simplify the analysis and its presentation. Our results extend naturally to more general settings. \begin{definition}[TKF91 model: two-state version] \label{Def:BinaryIndel} Consider the following Markov process $\mathcal{I} = \{\mathcal{I}_{t}\}_{t \geq 0}$ on the space $\mathcal{S}$ of binary digit sequences together with an \textbf{immortal link $``\bullet''$}, that is, \begin{equation*}\label{S} \mathcal{S} := ``\bullet'' \otimes \bigcup_{M\geq 1} \{0,1\}^M, \end{equation*} where the notation above indicates that all sequences begin with the immortal link. Positions of a sequence, except for that of the immortal link, are called \textbf{sites} or \textbf{mortal links}. Let $(\eta,\lambda,\mu) \in (0,\infty)^{3}$ and $(\pi_0,\pi_1) \in [0,1]^2$ with $\pi_0 + \pi_1 = 1$ be given parameters. The continuous-time dynamics are as follows: if the current state is the sequence $\vec{x} \in \mathcal{S}$, then the following events occur independently: \begin{itemize} \item \emph{Substitution:} Each site is substituted independently at rate $\eta > 0$. When a substitution occurs, the corresponding digit is replaced by $0$ and $1$ with probabilities $\pi_0$ and $\pi_1$, respectively. \item \emph{Deletion:} Each site is removed independently at rate $\mu$. \item \emph{Insertion:} Each site, as well as the immortal link, gives birth to a new digit independently at rate $\lambda$. When a birth occurs, the new site is added immediately to the right of its parent site. The newborn site has digit $0$ and $1$ with probabilities $\pi_0$ and $\pi_1$, respectively. \end{itemize} \end{definition}
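To make the dynamics concrete, the following is a minimal simulation sketch of Definition~\ref{Def:BinaryIndel} on a single edge; the code is ours (a standard Gillespie-style sampler under the stated rates), not part of the paper.

```python
# Minimal simulation sketch of the single-edge dynamics in the definition
# above (our own code, not the paper's). `seq` is a list of 0/1 digits; the
# immortal link is kept implicit at position 0.
import random

def simulate_tkf91_edge(seq, t, lam, mu, eta, pi0):
    """Run the two-state TKF91 dynamics on `seq` for time t (Gillespie)."""
    clock = 0.0
    while True:
        m = len(seq)
        total = lam * (m + 1) + mu * m + eta * m   # total event rate
        clock += random.expovariate(total)
        if clock > t:
            return seq
        u = random.random() * total
        if u < lam * (m + 1):
            # Insertion: the parent is the immortal link (index 0) or a site;
            # the new digit goes immediately to the right of its parent.
            parent = int(u / lam)
            seq.insert(parent, 0 if random.random() < pi0 else 1)
        elif u < lam * (m + 1) + mu * m:
            seq.pop(random.randrange(m))           # deletion of a mortal link
        else:
            # Substitution: resample the digit; a self-substitution is a
            # no-op, so the rate of visible jumps matches the total rate
            # lambda*(sigma) defined below.
            seq[random.randrange(m)] = 0 if random.random() < pi0 else 1
```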
Consider a \textit{rooted binary tree} $T = (V,E,\rho,\mathbf{t})$ with vertices $V$, edges $E$, root $\rho$, and edge lengths $\mathbf{t} = \{t_e\}_{e \in E}$ (in time units). We restrict ourselves to ultrametric trees, that is, the sum of edge lengths from root to leaf is the same for every leaf. We refer to this common quantity as the \textbf{depth} of the tree and denote it by $h$. The rooted metric tree $T$ is then indexed by all points along the edges of $T$. The root vertex has an initial sequence $\sigma_{\rho} \in \mathcal{S}$. With an initial sequence $\sigma_u \in \mathcal{S}$, the TKF91 process is recursively performed on each descending edge $e = (u,v)$ over the time interval $[0,t_e]$ to obtain another sequence $\sigma_v \in \mathcal{S}$. Processes running along descending edges of $u$ are independent, conditioned on state $\sigma_u$ at $u$. We refer to the full process as the \textbf{(two-state) TKF91 process on tree $T$}. For any sequence $\sigma \in \mathcal{S}$, let $|\sigma|$ be the length of the sequence, and let $|\sigma|_{0}$ and $|\sigma|_{1}$ be the number of $0$'s and $1$'s in the sequence, respectively. The stationary distribution of the sequence length $|\sigma| = M$ is known~\cite{Thorne1991} to be \begin{equation} \label{eq:LengthStationary} \gamma_{M} = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^{M}, \qquad M \in \mathbb{Z}_+, \end{equation} provided $\mu > \lambda$. We assume that the root sequence $\sigma_\rho$ follows its stationary distribution. That is, $|\sigma_{\rho}|$ is distributed according to $\gamma_{M}$ and its sites are i.i.d.~in $\{0,1\}$ with respective probabilities $\pi_0$ and $\pi_1$. Stationarity of $\sigma_{\rho}$ implies stationarity of the TKF91 process throughout the tree. We assume from now on that $\mu > \lambda$ and that stationarity holds. \paragraph{Some notation} Later on, we will need the following notation. For a sequence $\sigma \in \mathcal{S}$, let $\mathcal{S}_{s}(\sigma)$, $\mathcal{S}_{d}(\sigma)$, and $\mathcal{S}_{i}(\sigma)$ be the sequences that differ from $\sigma$ respectively by a single substitution, a single deletion, and a single insertion. Observe that these sets are disjoint as the sequence lengths in each necessarily differ. Further, let $\mathcal{S}_1(\sigma) = \mathcal{S}_{s}(\sigma) \cup \mathcal{S}_{d}(\sigma) \cup \mathcal{S}_{i}(\sigma)$ be the sequences obtained by performing a single mutation on $\sigma$, and define \begin{equation}\label{eq:lambdastar} \lambda^{\ast}(\sigma) = \sum_{\tau \in \mathcal{S}_1(\sigma)}Q(\sigma,\tau) = \lambda (|\sigma| + 1) + \mu |\sigma| + \eta \pi_1 |\sigma|_0 + \eta \pi_0 |\sigma|_1 \end{equation} as the total rate under the TKF91 process of moving away from $\sigma$, where $Q(\sigma,\tau)$ is the rate at which the TKF91 process on an edge jumps from $\sigma$ to $\tau$. Formula~\eqref{eq:lambdastar} is derived formally in the appendix. \subsection{Multiple sequence alignment} To compare sequences descending from a common ancestor through substitutions, insertions and deletions, it is natural to attempt to align them as best as possible, that is, to construct a multiple sequence alignment. 
\begin{definition}[Multiple sequence alignment] For any integer $m \geq 1$ and sequences $\boldsymbol{\sigma} = (\sigma_{v_1},\ldots,\sigma_{v_m}) \in \mathcal{S}^m$ at points $v_1,\ldots,v_m \in T$, a \textbf{multiple sequence alignment} (or pairwise alignment when $m=2$) is a collection of sequences $\mathbf{a}(\boldsymbol{\sigma}) = (a_1(\boldsymbol{\sigma}),\ldots,a_m(\boldsymbol{\sigma}))$ whose entries come from $\{0,1,-\}$ ($-$ is called a \textit{gap}) such that: \begin{itemize} \item the lengths satisfy $$|a_1(\boldsymbol{\sigma})| = |a_2(\boldsymbol{\sigma})| = \cdots = |a_m(\boldsymbol{\sigma})| \geq \max\{|\sigma_{v_1}|,|\sigma_{v_2}|,\ldots,|\sigma_{v_m}|\},$$ \item no corresponding entries of $a_1(\boldsymbol{\sigma}),\ldots,a_m(\boldsymbol{\sigma})$ all equal $-$, and \item removing $-$ from $a_i(\boldsymbol{\sigma})$ yields $\sigma_{v_i}$ for all $i \in \{1,2,\ldots,m\}$. \end{itemize} A multiple sequence alignment can be expressed as an $m \times |a_1(\boldsymbol{\sigma})|$ matrix where the rows are the aligned sequences and where no column consists of all gaps. \end{definition} \noindent More generally, a multiple sequence alignment procedure may take as input further auxiliary information (beyond the sequences to be aligned), such as a tree or sequences at other points of the tree. Our alignment algorithm (see Section~\ref{section:alignment}) will indeed use additional information. Two sites, one from one sequence and the other from another sequence, are said to be \textbf{homologous} provided they descend from a common site in their most recent common ancestral sequence \textit{only through substitutions} under the evolutionary process on the tree. A \textbf{true} multiple sequence alignment is one that places homologous sites in the same column and non-homologous sites in different columns. We note however that certain homology relationships are unknowable a priori: for example, if in the course of evolution a $0$ is inserted in a sequence next to another $0$, which of them descends from the ancestral $0$ is arbitrary. Here we take the convention that a repeated site is always inserted at the beginning of a run; and that similarly a repeated site is always deleted at the beginning of a run. \subsection{Statement of main result} The following theorem states that it is possible to construct with high probability a true pairwise alignment of the sequences at two arbitrary leaves $v$ and $w$ of a phylogeny as long as the maximal branch length is sufficiently small. \begin{theorem}[Main Result] \label{thm:main} Fix $\eta,\mu,\lambda \in (0,\infty)$, the substitution, deletion, and insertion rates under the TKF91 model. There is a polynomial-time alignment procedure $A$ such that for any tree depth $h > 0$ and any failure probability $\varepsilon > 0$, there exists a maximum branch length $t_{\textnormal{max}} := t_{\textnormal{max}}(h,\varepsilon) > 0$ such that the following property holds. For any \textit{rooted binary tree} $T = (V,E,\rho,\mathbf{t})$ with vertices $V$, edges $E$, root $\rho$, and edge lengths $\mathbf{t} = \{t_e\}_{e \in E}$, assume that the leaves $\partial T = \{\ell_i\}_{i=1}^{n}$ are ordered from left to right in a planar realization of $T$, and let $v = \ell_1$ and $w=\ell_n$.
Then the alignment procedure applied to the sequences $\sigma_{\ell_1},\sigma_{\ell_2},\ldots,\sigma_{\ell_n}$ outputs a true pairwise alignment of $\sigma_v$ and $\sigma_w$ with probability at least $1- \varepsilon$, provided that $t_e \leq t_{\textnormal{max}}$ for all edges $e \in E$. \end{theorem} \noindent Note that the tree depth $h$ is arbitrary. The alignment procedure, which is described in Section~\ref{section:alignment}, takes as input the sequences at the leaves of $T$ as well as $T$ itself. \paragraph{Extensions} While we assume above that the rate of substitution is the same throughout the tree, our proof still goes through if the parameter $\eta$ is merely an upper bound on that rate across edges. Similarly, our two-state assumption and the details of the substitution model do not play a critical role in the proof. We make these assumptions to simplify the presentation. \section{Alignment algorithm} \label{section:alignment} In this section, we describe the alignment procedure of Theorem~\ref{thm:main}. We emphasize that this algorithm is not meant to be practical, but rather to serve as the basis for the proof of our main result. \subsection{Overview of full alignment algorithm} We introduce the following alignment algorithm $A$ which takes as input a rooted metric tree $T$, two distinguished leaves $v$ and $w$, all leaf sequences, and a pre-processing parameter $\delta_1$. We take $\delta_1$ to satisfy $t_{\textnormal{max}} \leq \delta_1 \leq h$. The algorithm outputs a pairwise alignment for the sequences at $v$ and $w$. There is a unique path between $v$ and $w$ that we henceforth call the \textit{backbone}. We let $B$ be the number of \textit{non-root} vertices on the backbone. Then $v = x_1$ and $w = x_B$, and the other \textit{non-root} backbone vertices are in order $x_2,...,x_{B-1}$. For some parts of the algorithm and analysis, it will be convenient to use an alternative numbering of the backbone vertices---away from the root, numbering the left side, then numbering the right side. Specifically, let $x_1,\ldots,x_{B^-}$ be the backbone vertices on the same side of the root as $x_1$, with $B^-$ denoting their number. Let $x_{B^-+1},\ldots,x_B$ be the backbone vertices on the same side of the root as $x_B$ and let $B^+$ be their number. Then we set $$ \tilde{x}^-_i := x_{B^- - (i-1)}, \qquad i =1,\ldots, B^- $$ and $$ \tilde{x}^+_i := x_{B^- + i}, \qquad i = 1,\ldots, B^+. $$ Notice in particular that $\tilde{x}^-_1$ and $\tilde{x}^+_1$ are the children of the root and $\tilde{x}^-_{B^-} = x_1$ and $\tilde{x}^+_{B^+} = x_B$. \begin{figure}[t] \centering \includegraphics[scale=1.2]{Figures/AlignmentProc1-12.png} \caption{(a) Tree $T$ with leaf sequences $\sigma_1 = \sigma_{x_1},\sigma_2= \sigma_{\ell_{2,1}},..., \sigma_{\ell_{i_2}},\sigma_B = \sigma_{x_B}$. (b) Tree $T$ with leaf sequences $\sigma_1 = \sigma_{x_1},\sigma_2 = \sigma_{x_{2,1}},..., \sigma_{\ell_{i_2}},\sigma_{3} = \sigma_{x_{3,1}},...,\sigma_{\ell_{i_3}},\sigma_B$.} \label{fig:AlignmentProc1-12} \end{figure} We now describe the main steps of the algorithm. Some details will be given in the following subsections. Figure~\ref{fig:AlignmentProc1-12} illustrates part of this algorithm at a high level. We start with a pre-processing step. \begin{itemize} \item \textbf{Pre-processing: backbone sparsification.} We first construct a subtree $T'$ by pruning some backbone vertices and their descendants. Initialize $T' := T$. Then, for $o = -,+$ and for $k = 1,...,B^o-1$, \begin{enumerate} \item Check whether the vertex $\tilde{x}^o_k$ is a vertex in the tree $T'$.
If not, do nothing. \item\label{item:preproc-ell} If $\tilde{x}^o_k$ is in the tree $T'$, find the minimal $\ell \geq 1$ such that the distance (accounting for edge lengths) between $\tilde{x}^o_k$ and $\tilde{x}^o_{k+\ell}$ is at least $\delta_1$. Observe that, by assumption, the distance between $\tilde{x}^o_k$ and $\tilde{x}^o_{k+\ell}$ is necessarily at most $\delta_1 + t_{\textnormal{max}} \leq 2 \delta_1$. \item Remove the vertices $\tilde{x}^o_{k+1},...,\tilde{x}^o_{k+\ell-1}$, except $\tilde{x}^o_{B^o}$, and all of their off-backbone descendants from the tree $T'$ (if they exist). \end{enumerate} \end{itemize} The result is a tree where the distance between consecutive vertices on the backbone is in $[\delta_1, 2 \delta_1]$ (by the observation in Item~\ref{item:preproc-ell}), with the possible exception of the children of the root and the last pair on each side of the root all of whose distances are in $(0,2\delta_1]$. To simplify the notation, we re-assign $T$ to be this new rooted metric tree and we re-assign $x_1, x_2, \ldots,x_B$ to be the backbone vertices on this tree (with an updated value for $B$ and updated alternative numbering $\tilde{x}^o_k$ for $o = -,+$ and $k=1,\ldots,B^o$). We then proceed with the alignment algorithm, which consists of two main steps both proceeding along the backbone: \begin{enumerate} \item \textbf{Ancestral estimation:} We infer the ancestral sequences at the backbone vertices as follows. For $k = 2,...,B-1$: \begin{enumerate} \item For the child vertex $z_k$ of $x_k$ that is off the backbone, infer the sequence $\hat{\sigma}_{z_k}$ at $z_k$ using the Fitch method~\cite{Fitch71} (described below in Section~\ref{section:fitch}) applied to the subtree rooted at $z_k$. \item Set $\hat{\sigma}_{x_k}$ equal to $\hat{\sigma}_{z_k}$. \end{enumerate} \item \textbf{Recursive alignment:} Now that the sequences at the non-root backbone vertices $\{x_k\}_{k=1}^{B}$ have been estimated, we construct a multiple sequence alignment sequentially, starting from $x_1$, going to $x_2$, and ending at $x_{B-1}$ and $x_B$. This stepwise alignment procedure is described in Section~\ref{section:stepwise} below. If the inferred sequences of successive backbone vertices are not at most one mutation apart, then we terminate the algorithm with no output. Else, a pairwise sequence alignment is produced for vertices $v = x_1$ and $w = x_B$. \end{enumerate} We will show in Proposition~\ref{prop:Fitch2} below that, with high probability, $\hat{\sigma}_{z_k} = \sigma_{z_k}$ for all $k = 2,\ldots, B-1$. We will then show in Proposition~\ref{prop:Intersection} below that the above stepwise alignment outputs a true pairwise alignment with high probability. \subsection{Ancestral sequence reconstruction} \label{section:fitch} We briefly describe below the ancestral sequence reconstruction subroutine. Note that we use the Fitch method for the convenience of its analysis, but other methods could also be used. \begin{definition}[Fitch estimator] Let $T = (V,E)$ be a finite binary rooted tree with root $z$ and leaf set $\partial T \subset V$ with given leaf sequences $(\sigma_{\ell})_{\ell \in \partial T}$. For any leaf vertex $\ell$, define $\hat{S}_\ell \subset \mathcal{S}$ to be the subset $\hat{S}_\ell = \{\sigma_\ell\}$. 
For each non-leaf vertex $v$ with children $v_1$ and $v_2$, define $\hat{S}_v \subset \mathcal{S}$ recursively to be \begin{align*} \hat{S}_v = \begin{cases} \hat{S}_{v_1} \cap \hat{S}_{v_2} & \textnormal{if} \ \hat{S}_{v_1} \cap \hat{S}_{v_2} \ne \emptyset \\ \hat{S}_{v_1} \cup \hat{S}_{v_2} & \textnormal{otherwise}. \end{cases} \end{align*} Then define the \textit{Fitch estimator} $\hat{\sigma}_z$ of $\sigma_z$ to be a uniformly chosen member of $\hat{S}_z$. \end{definition} \noindent An analysis of this method in our setting is provided in Proposition~\ref{prop:Fitch2} below. \subsection{Stepwise alignment} \label{section:stepwise} In this section, we describe the stepwise alignment subroutine. It is based on the assumption that along the backbone (of the pruned tree): \begin{enumerate} \item[(i)] the sequences have been correctly inferred; and \item[(ii)] consecutive ones differ by at most one mutation. \end{enumerate} We establish these facts in Propositions~\ref{prop:Fitch2} and~\ref{prop:Intersection} below. In these circumstances, we show that homologous sites can be traced (up to the convention we described earlier). We will construct a sequence of alignments $\mathbf{a}^2$, $\mathbf{a}^3$, etc. We first describe the alignment of two sequences, then the alignment of alignments, and so on. Given two sequences $\hat\sigma, \hat\tau$ satisfying the assumptions (i) and (ii) above, there are three possible cases: \begin{enumerate}[label=(\Alph*)] \item If $\hat\sigma=\hat\tau$, then a true alignment is obtained by setting $a^{2}_1(\hat\sigma,\hat\tau) = \hat\sigma$ and $a^{2}_2(\hat\sigma, \hat\tau) = \hat\tau$, corresponding to no mutation. \item If $|\hat\sigma| = |\hat\tau|$ but $\hat\sigma$ and $\hat\tau$ agree on all sites except one, then a true alignment is obtained by setting $a^{2}_1(\hat\sigma,\hat\tau) = \hat\sigma$ and $a^{2}_2(\hat\sigma,\hat\tau) = \hat\tau$, corresponding to exactly one substitution between the sequences. \item Suppose $|\hat\sigma| = |\hat\tau| + 1$ (or vice versa) and there exist $j \in \{1,2,\ldots,|\hat\tau|+1\}$ and $\hat\sigma_{\textnormal{ins}} \in \{0,1\}$ such that \begin{align*} \hat\sigma_i = \begin{cases} \hat\tau_i & i < j \\ \hat\sigma_{\textnormal{ins}} & i = j \\ \hat\tau_{i-1} & i > j. \end{cases} \end{align*} As we discussed before, the location of the insertion cannot be determined from the sequences alone. For example, if $\hat\sigma$ and $\hat\tau$ are separated by an insertion so that they are given by \begin{align*} \hat\tau &= (0,1,0,1,0,1,0,0,0,0,0,1,0)\\ \hat\sigma &= (0,1,0,1,0,1,0,0,0,0,0,0,1,0), \end{align*} we cannot tell which site gave birth to the new $0$ to obtain $\hat\sigma$. So we assume by convention that $j$ is the minimal choice possible. Then a true alignment is obtained by setting $a^{2}_1(\hat\sigma,\hat\tau) = \hat\sigma$ and for $i=1,\ldots,|\hat\tau|+1$ \begin{align*} a^{2}_2(\hat\sigma,\hat\tau)_i = \begin{cases} \hat\tau_i & i < j \\ - & i = j \\ \hat\tau_{i-1} & i > j, \end{cases} \end{align*} corresponding to a single site $\hat\sigma_{\textnormal{ins}}$ being inserted into the sequence $\hat\tau$ to the left of the $j$th site to obtain $\hat\sigma$. \end{enumerate} In fact, we will need to align alignments along the backbone, rather than sequences.
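Before turning to the recursive alignment of alignments, here is a minimal sketch of the two-sequence step above in Python (the function name and the encoding of sequences as binary tuples are ours, for illustration only); following the convention above, case (C) picks the leftmost valid insertion position, and \texttt{None} is returned when the input pair violates the one-mutation assumption:
\begin{verbatim}
def align_one_mutation(s, t):
    # s, t: tuples over {0, 1}, assumed at most one mutation apart.
    # Returns two alignment rows over {0, 1, '-'}, or None.
    if len(s) == len(t):
        # Cases (A) and (B): identical sequences, or one substitution.
        if sum(a != b for a, b in zip(s, t)) <= 1:
            return list(s), list(t)
        return None
    if len(s) == len(t) + 1:
        # Case (C): scan for the minimal insertion position j.
        for j in range(len(s)):
            if s[:j] + s[j + 1:] == t:
                return list(s), list(t[:j]) + ['-'] + list(t[j:])
        return None
    if len(t) == len(s) + 1:
        rows = align_one_mutation(t, s)
        return (rows[1], rows[0]) if rows is not None else None
    return None

# Example: an insertion inside a run of 0's is placed at the start of the run.
print(align_one_mutation((0, 1, 0, 0, 0), (0, 1, 0, 0)))
# -> ([0, 1, 0, 0, 0], [0, 1, '-', 0, 0])
\end{verbatim}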
Suppose we have sequences $\hat\sigma_{x_1},\hat\sigma_{x_2},...,\hat\sigma_{x_B}$ such that the successive pairs $\{\hat\sigma_{x_1},\hat\sigma_{x_2}\},\{\hat\sigma_{x_2},\hat\sigma_{x_3}\},...,\{\hat\sigma_{x_{B-1}},\hat\sigma_{x_B}\}$ each satisfy exactly one of the cases (A), (B), or (C). (We terminate without output if the assumptions do not hold.) Then we recursively construct a multiple sequence alignment as follows. To simplify the notation, we let $ \hat\sigma_{1:k} = (\hat\sigma_{x_1},\ldots,\hat\sigma_{x_k}). $ \begin{enumerate} \item Given $\hat\sigma_{x_1}$ and $\hat\sigma_{x_2}$, let $a^{2}_1(\hat\sigma_{1:2})$ and $a^{2}_2(\hat\sigma_{1:2})$ be the pairwise alignment constructed above. \item For $k = 3,...,B$: \begin{enumerate} \item We are given a multiple alignment $a^{k-1}_1(\hat\sigma_{1:k-1}),\ldots,a^{k-1}_{k-1}(\hat\sigma_{1:k-1})$ of the sequences $\hat\sigma_{x_1},\ldots,\hat\sigma_{x_{k-1}}$, and a new sequence $\hat\sigma_{x_k}$ that is at most one mutation away from $\hat\sigma_{x_{k-1}}$. \item The sequences $\hat\sigma_{x_{k-1}}$ and $\hat\sigma_{x_k}$ satisfy one of the three cases (A), (B) or (C) by assumption, so their alignment $a^{k}_{k-1}(\hat\sigma_{1:k})$ and $a^{k}_{k}(\hat\sigma_{1:k})$ (within the larger multiple sequence alignment) will differ by at most one entry similarly to the sequence case above. The full alignment is defined as follows: \begin{itemize} \item If $\hat\sigma_{x_{k-1}} = \hat\sigma_{x_k}$, then set $a^{k}_k(\hat\sigma_{1:k})$ to be equal to $a^{k-1}_{k-1}(\hat\sigma_{1:k-1})$ and $a^{k}_i(\hat\sigma_{1:k})$ to be equal to $a^{k-1}_i(\hat\sigma_{1:k-1})$ for all $i < k$. \item If $\hat\sigma_{x_{k-1}}$ and $\hat\sigma_{x_k}$ have equal length and disagree at a single segregating site, set $a^{k}_i(\hat\sigma_{1:k})$ to $a^{k-1}_i(\hat\sigma_{1:k-1})$ for all $i \leq k-1$. Each entry of $a^{k}_k(\hat\sigma_{1:k})$ is set to the corresponding entry of $a^{k}_{k-1}(\hat\sigma_{1:k})$, except for the segregating site. If the latter occurs at position $i$ within $a^{k}_{k-1}(\hat\sigma_{1:k})$, then we set $a^{k}_k(\hat\sigma_{1:k})_i$ to $a^{k}_{k-1}(\hat\sigma_{1:k})_i + 1 \ (\textnormal{mod}\ 2)$. \item If $\hat\sigma_{x_k}$ has one more site than $\hat\sigma_{x_{k-1}}$, then an insertion has occurred and the inserted site in $\hat\sigma_{x_k}$ cannot be ancestral to any site in $\hat\sigma_{x_1},\ldots,\hat\sigma_{x_{k-1}}$. So the inserted site in $\hat\sigma_{x_{k}}$ must correspond to a gap in all previous sequences. More specifically, if the site $\hat\sigma_{\textnormal{ins}}$ is inserted to the left of position $j^{\ast} \in \{1,\ldots,|a^{k-1}_{k-1}(\hat\sigma_{1:k-1})|\}$ in the $(k-1)$-st sequence \textit{in the previously constructed alignment $a^{k-1}_{k-1}(\hat\sigma_{1:k-1})$} (where $j^{\ast}$ is the minimal such choice) then set \begin{align*} a^k_k(\hat\sigma_{1:k})_{i} = \begin{cases} a^{k-1}_{k-1}(\hat\sigma_{1:k-1})_{i} & 1 \leq i < j^{\ast} \\ \hat\sigma_{\textnormal{ins}} & i = j^{\ast} \\ a^{k-1}_{k-1}(\hat\sigma_{1:k-1})_{i-1} & j^{\ast} < i \leq |a^{k-1}_{k-1}(\hat\sigma_{1:k-1})| + 1 \end{cases} \end{align*} and for all $\ell \leq k-1$ \begin{align*} a^{k}_\ell(\hat\sigma_{1:k})_{i} = \begin{cases} a^{k-1}_\ell(\hat\sigma_{1:k-1})_{i} & 1 \leq i < j^{\ast} \\ - & i = j^{\ast} \\ a^{k-1}_\ell(\hat\sigma_{1:k-1})_{i-1} & j^{\ast} < i \leq |a^{k-1}_{k-1}(\hat\sigma_{1:k-1})| + 1.
\end{cases} \end{align*} \item The case where $\hat\sigma_{x_k}$ has one fewer site than $\hat\sigma_{x_{k-1}}$ is handled symmetrically. \end{itemize} \end{enumerate} \item Output the pairwise alignment $(a^{B}_1(\hat\sigma_{1:B}), a^{B}_B(\hat\sigma_{1:B}))$ \emph{after removing all columns with only gaps}. \end{enumerate} \subsection{Theoretical guarantee} We establish the two claims below in the next sections. \begin{prop}[Correctness of alignment] \label{prop:correct-align} Let $T$ be the output of the pre-processing step and let $x_1,\ldots,x_B$ be the resulting backbone vertices. Then the alignment algorithm produces a true pairwise alignment of $\sigma_{x_1}$ and $\sigma_{x_B}$ provided that: \begin{enumerate} \item (Correctness of ancestral estimation) For $k=2,\ldots,B-1$, $\hat\sigma_{x_k} = \sigma_{x_k}$. \item (One-mutation condition) Successive pairs of true backbone sequences $$ \{\sigma_{x_1},\sigma_{x_2}\},\{\sigma_{x_2},\sigma_{x_3}\},\ldots,\{\sigma_{x_{B-1}},\sigma_{x_B}\}, $$ are at most one mutation away. \end{enumerate} \end{prop} \section{Correctness of ancestral estimation} \label{section:ancestral} In this section, we analyze the ancestral sequence estimation step. The analysis proceeds by coupling the TKF91 process with a percolation process. Roughly, we say that an edge is \textbf{open} if the sequence does not change along it under the TKF91 process. We will show that, provided the edge lengths are short enough, the open cluster of the root forms a fairly ``dense'' subtree with high probability. The latter property will lead to a correct reconstruction by the Fitch method. \subsection{The percolation process} Consider again the backbone vertices $\{x_k\}_{k=1}^{B}$ and the off-backbone child vertices $\{z_k\}_{k=2}^{B-1}$. For each $k=2,\ldots,B-1$ separately, we couple the sequence evolution process on the subtree descending from $z_k$ with a simpler percolation process. We will need some notation. We denote by $T_k$ the subtree of $T$ (after pre-processing) rooted at $z_k$. For two sequences $\sigma, \tau$, we let $P_t(\sigma,\tau)$ be the probability under the TKF91 model on an edge that, started at $\sigma$, the state is $\tau$ after time $t$. Similarly, we let $\tilde{P}_t(\sigma,\tau)$ be the same probability \textit{conditioned on not being at state $\sigma$ at time $t$}. It will be convenient to work on an infinite tree. Specifically, let $\overline{T}_k$ be the completion of $T_k$ into an infinite binary tree where new edges have length $0$. We now describe the coupling: \begin{itemize} \item \textbf{The percolation process on $\overline{T}_k$.} We condition on the sequence $\sigma_{z_k}$ at $z_k$, the root of $T_k$. For an edge $e$ on $\overline{T}_k$, let $t_e$ be its length and set $$ \zeta^{(k)}_{e} = P_{t_e}(\sigma_{z_k}, \sigma_{z_k}), $$ that is, the probability that the sequence does not change along edge $e$ if started at $\sigma_{z_k}$. We then perform percolation on $\overline{T}_k$ with probabilities $\zeta^{(k)}_e$: for each edge $e$, it is open independently with probability $\zeta^{(k)}_e$. Let $\mathcal{C}_k$ be the resulting open cluster including $z_k$ (i.e., all vertices of $\overline{T}_k$ that can be reached from $z_k$ using only open edges). \item \textbf{The joint process on $\overline{T}_k$.} For each vertex $v$ in $\mathcal{C}_k$, set $\sigma_v = \sigma_{z_k}$. 
For each descendant $w$ of a vertex $v \in \mathcal{C}_k$ that is not itself in $\mathcal{C}_k$, assign a sequence to $w$ taken from the conditional distribution $\tilde{P}_{t_e}(\sigma_{z_k},\,\cdot\,)$ where $e = (v,w)$. For each remaining vertex, we run the TKF91 process recursively from the states already assigned. Note that the edges of length $0$ added in the completion of $T_k$ simply entail copying the sequences at the leaves of $T_k$ to all their descendants. \end{itemize} We will be interested in the properties of the cluster $\mathcal{C}_k$. Let $\overline{T}_k^{\mathcal{O}}$ be the subtree of $\overline{T}_{k}$ made of all the vertices in $\mathcal{C}_k$ and the edges connecting them. For any $\ell,b \in \mathbb{Z}_{+}$, a rooted tree $T'$ is said~\cite{mossel2001} to be a \textbf{$(\ell,b)$-diluted tree} if: for all $i \in \mathbb{Z}_{+}$, each of the vertices of $T'$ at graph distance $i \ell$ from the root has at least $b$ descendants at graph distance $i\ell + \ell$ from the root. The following lemma, adapted from \cite{mossel2001}, shows that $\overline{T}_k^{\mathcal{O}}$ is $(2,3)$-diluted with arbitrarily high probability, provided the edge lengths are short enough. \begin{figure} \centering \includegraphics[scale=0.75]{Figures/DilutedTree.png} \caption{An open $2$-diluted $3$-regular subtree of the infinite binary rooted tree. Solid lines not descending from any dashed line indicate no mutation. Dashed lines indicate a mutation may have occurred. For every vertex in the even generations, at least three of its grandchildren share the same trait.} \label{fig:DilutedTree} \end{figure} Figure~\ref{fig:DilutedTree} depicts an open $2$-diluted $3$-regular subtree of the infinite binary rooted tree. Later in the proof, we will need to condition on the length of $\sigma_{z_k}$ being at most a threshold $\bar{L}$. Let $\PP^{\bar{L}}$ be the probability measure of the joint process where $\sigma_{z_k}$ is drawn from the stationary distribution of the TKF91 process conditioned on $|\sigma_{z_k}| \leq \bar{L}$. We first record a simple observation. \begin{lemma}[Staying probability] \label{lemma:staying} Fix $\bar{L} < +\infty$. For any sequence $\sigma$ such that $|\sigma| \leq \bar{L}$ and any $t > 0$, we have $$ P_t(\sigma, \sigma) \geq 1 - t (\bar{L}+1)(\mu + \lambda + \eta). $$ \end{lemma} \begin{proof} Indeed, $P_t(\sigma, \sigma)$ is lower bounded by the probability that no mutation occurs up to time $t$, which, by~\eqref{eq:lambdastar}, gives $$ P_t(\sigma, \sigma) \geq \exp\left( -(\bar{L}+1)[\mu + \lambda + \eta] t \right) \geq 1 - (\bar{L}+1)[\mu + \lambda + \eta] t, $$ as claimed. \end{proof} \begin{lemma}[Existence of an open diluted tree] \label{lem:Diluted} For any $\bar{L} \in \mathbb{Z}_+$ and $\delta_a > 0$, there is $t_{\textnormal{max}} > 0$ small enough that, if $t_e \leq t_{\textnormal{max}}$ for all $e$, $$ \PP^{\bar{L}}[\overline{T}_k^{\mathcal{O}}\ \textnormal{is $(2,3)$-diluted}] \geq 1 - \delta_a. $$ \end{lemma} \begin{proof} By Lemma~\ref{lemma:staying}, for any sequence $\sigma_{z_k}$ such that $|\sigma_{z_k}|\leq \bar{L}$, we have that $$ \zeta^{(k)}_{e} = P_{t_e}(\sigma_{z_k}, \sigma_{z_k}) \geq 1 -(\bar{L}+1)[\mu + \lambda + \eta] t_{\textnormal{max}}, $$ where we recall that $t_e \leq t_{\textnormal{max}}$ by assumption (and that of course includes the added edges of length $0$). Hence $\zeta^{(k)}_{e}$ can be made arbitrarily close to $1$ (uniformly in $e$) by taking $t_{\textnormal{max}}$ small enough (as a function of $\bar{L}$).
The result then follows directly from \cite[Lemma 8]{mossel2001} (which can be extended in a straightforward manner to the case where percolation probabilities vary across edges but are uniformly bounded from below). \end{proof} \subsection{Analyzing the Fitch estimator} Next, we analyze the Fitch estimator in the event that $\overline{T}_k$ contains an open $(2,3)$-diluted subtree. For any $D \in \mathbb{Z}_+$, let $\overline{T}_{k,D}$ be the truncation of $\overline{T}_{k}$ at level $D$, that is, the finite tree obtained by removing all vertices of $\overline{T}_{k}$ at graph distance greater than $D$ from its root. Let $\beta_k$ be the smallest positive integer such that $\overline{\overline{T}}_k := \overline{T}_{k,2 \beta_k}$ contains all of $T_k$. Importantly, we make the following observation about the Fitch estimator. \begin{lemma}[Fitch estimator on the completion] \label{lemma:fitch-completion} The Fitch estimator applied to the leaves of $\overline{\overline{T}}_k$ produces the same ancestral sequence estimate as the Fitch estimator applied to the leaves of $T_k$. \end{lemma} \begin{proof} All leaves $\bar{\bar{\ell}}$ of $\overline{\overline{T}}_k$ descending from a leaf $\ell$ of $T_k$ satisfy $\sigma_{\bar{\bar{\ell}}} = \sigma_\ell$, so by definition of the Fitch estimator the set computed at $\ell$ in $\overline{\overline{T}}_k$ is $\hat{S}_\ell = \{\sigma_\ell\}$. The claim follows. \end{proof} Let $\overline{\overline{T}}_k^{\mathcal{O}}$ be the truncation of $\overline{T}_k^{\mathcal{O}}$ at level $2 \beta_k$. \begin{lemma}[Fitch estimator in the presence of an open diluted tree] \label{lemma:fitch-diluted} If $\overline{\overline{T}}_k^{\mathcal{O}}$ is $(2,3)$-diluted, then the Fitch estimator $\hat{\sigma}_{z_k}$ over the tree $\overline{\overline{T}}_k$ equals the true sequence $\sigma_{z_k}$ at $z_k$. \end{lemma} \begin{proof} We prove this claim by induction on $\beta_k$. We start with the $\beta_k = 1$ case. Then $\overline{\overline{T}}_k$ consists of $z_k$, two children $z_k^1$ and $z_k^2$, and the grandchildren $z_k^{1,1},z_k^{1,2},z_k^{2,1},z_k^{2,2}$. If all four grandchildren belong to $\overline{\overline{T}}_k^{\mathcal{O}}$, then all four leaf sequences equal $\sigma_{z_k}$ and the Fitch method gives $\hat{S}_{z_k} = \{\sigma_{z_k}\}$. The other case, without loss of generality, is $z_k^{2,1} \notin \overline{\overline{T}}_k^{\mathcal{O}}$. Then $\sigma_{z_k^{2,1}} \ne \sigma_{z_k^{2,2}} = \sigma_{z_k}$, so the Fitch method gives $\hat{S}_{z_k^{2}} = \hat{S}_{z_k^{2,1}} \cup \hat{S}_{z_k^{2,2}} = \{\sigma_{z_k^{2,1}},\sigma_{z_k}\}$. Since $\sigma_{z_k^{1,1}} = \sigma_{z_k^{1,2}} = \sigma_{z_k}$, we have $\hat{S}_{z_k^{1}} = \{\sigma_{z_k}\}$. Continuing on, we have $\hat{S}_{z_k} = \hat{S}_{z_k^{1}} \cap \hat{S}_{z_k^2} = \{\sigma_{z_k}\}$. Since $\hat{S}_{z_k}$ contains only the state $\sigma_{z_k}$, the Fitch method is guaranteed to return $\sigma_{z_k}$. Now, we assume the claim holds for $\beta_k = r$ with $r \geq 1$ and show that it holds for $\beta_k = r+1$ as well. As before, consider the four grandchildren of $z_k$ and the same cases. If all four grandchildren belong to $\overline{\overline{T}}_k^{\mathcal{O}}$, then they are each the root of a subtree of $2r$ levels with root state equal to $\sigma_{z_k}$. The induction hypothesis (whose proof, as above, in fact yields a singleton Fitch set) implies that $\hat{S}_{z_k^{i,j}} = \{\sigma_{z_k}\}$ for $i,j \in \{1,2\}$. The Fitch method then returns $\hat{S}_{z_k} = \{\sigma_{z_k}\}$, as required. For the other case when $z_k^{2,1} \notin \overline{\overline{T}}_k^{\mathcal{O}}$, we know only that $\hat{S}_{z_k^{2,1}}$ is an arbitrary set of sequences.
If $\sigma_{z_k} \in \hat{S}_{z_k^{2,1}}$, then $\hat{S}_{z_k^{2}} = \hat{S}_{z_k^{2,1}} \cap \hat{S}_{z_k^{2,2}} = \{\sigma_{z_k}\}$, and we are done since then $\hat{S}_{z_k} = \hat{S}_{z_k^{1}} \cap \hat{S}_{z_k^{2}} = \{\sigma_{z_k}\}$. Else, we have $\hat{S}_{z_k^2} = \hat{S}_{z_k^{2,1}} \cup \hat{S}_{z_k^{2,2}}$, where $\sigma_{z_k} \in \hat{S}_{z_k^{2,2}}$ so that $\hat{S}_{z_k} = \hat{S}_{z_k^1} \cap \hat{S}_{z_k^2} = \{\sigma_{z_k}\}$, as required. This completes the induction step, and hence the proof of the lemma. \end{proof} Combining Lemmas~\ref{lem:Diluted}, \ref{lemma:fitch-completion}, and \ref{lemma:fitch-diluted}, we get the following. \begin{prop}[Correctness of ancestral estimation off the backbone] \label{prop:Fitch2} For any $\bar{L} \in \mathbb{Z}_+$ and $\delta_a > 0$, there is $t_{\textnormal{max}} > 0$ small enough that, under $\PP^{\bar{L}}$, the Fitch estimator on $T_k$ returns the correct ancestral state $\hat\sigma_{z_k} = \sigma_{z_k}$ with probability at least $1-\delta_a$. \end{prop} \section{One-mutation condition} \label{section:one-mutation} In this section, we establish the one-mutation condition required by Proposition~\ref{prop:correct-align} and use it to finish the proof of the main result. \subsection{A bound on the transition probabilities} We will need a bound on the probability that at most one mutation occurs on an edge along the backbone. Because the state space of the sequence process is infinite, the rates are unbounded and we state the next bound explicitly in terms of the length of the sequence at the start of the edge. Later on, we will use the fact that the length is stationary to control it. \begin{lemma}[At most one mutation] \label{lemma:atmostone} Fix $\bar{L} < +\infty$. For any sequence $\sigma$ such that $|\sigma| \leq \bar{L}$ and any $t > 0$, we have $$ P_t(\sigma, Y_\sigma) \geq 1 - \left\{t (\bar{L}+2)[\mu+\lambda+\eta]\right\}^2, $$ where $Y_{\sigma} = \{\sigma\} \cup \mathcal{S}_1(\sigma)$ are the sequences at most one mutation away from $\sigma$. \end{lemma} \begin{proof} For a TKF91 process on an edge started at $\sigma$, let $X_s \in \mathcal{S}$ be the sequence observed at time $s \in [0,t]$ and $T_i$ be the time of the $i$th jump from one state to another state. Then $$ P_t(\sigma, Y_\sigma) \geq \PP_\sigma[T_2 > t], $$ as the event on the right-hand side guarantees at most one jump, which in turn guarantees that $X_t \in Y_\sigma$. Here $\PP_\sigma$ indicates that the edge process is started at $\sigma$. Letting $$ f_{T_1|\sigma}(s) = \lambda^{\ast}(\sigma) \exp\left(-s \lambda^{\ast}(\sigma)\right), \qquad F_{T_1|\sigma}(s) = 1 - \exp\left(-s \lambda^{\ast}(\sigma)\right), $$ be the probability density function and cumulative distribution function of the time of the first jump started at $\sigma$, we get by the strong Markov property \begin{align*} \PP_\sigma[T_2 \leq t] &= \int_{0}^t f_{T_1|\sigma}(s) \sum_{\tau \in \mathcal{S}_1(\sigma)} \frac{Q(\sigma,\tau)}{\lambda^{\ast}(\sigma)} F_{T_1|\tau}(t-s) \,\mathrm{d} s\\ &\leq \int_{0}^t \lambda^{\ast}(\sigma) \exp\left(-s \lambda^{\ast}(\sigma)\right) \max_{\tau \in \mathcal{S}_1(\sigma)} \left\{1- \exp\left(-(t-s) \lambda^{\ast}(\tau)\right)\right\} \,\mathrm{d} s. \end{align*} Under the assumption that $|\sigma| \leq \bar{L}$, it holds that $|\tau| \leq \bar{L}+1$ for any $\tau \in \mathcal{S}_1(\sigma)$, and hence $\max\{\lambda^{\ast}(\sigma), \lambda^{\ast}(\tau)\} \leq (\bar{L}+2)[\mu+\lambda+\eta]$.
Continuing on, the last line in the previous display is \begin{align*} &\leq \left\{1- \exp\left(-t (\bar{L}+2)[\mu+\lambda+\eta]\right)\right\} \int_{0}^t \lambda^{\ast}(\sigma) \exp\left(-s \lambda^{\ast}(\sigma)\right) \,\mathrm{d} s\\ &= \left\{1- \exp\left(-t (\bar{L}+2)[\mu+\lambda+\eta]\right)\right\} \left\{1 - \exp\left(-t \lambda^{\ast}(\sigma)\right)\right\}\\ &\leq \left\{t (\bar{L}+2)[\mu+\lambda+\eta]\right\}^2, \end{align*} establishing the claim. \end{proof} \subsection{Union bound over the backbone} We define a number of events whose joint occurrence guarantees the success of our alignment procedure: \begin{itemize} \item \textit{(One-mutation condition)} For $o = -,+$ and $k = 1,\ldots,B^o-1$, let $F^o_k$ be the event that the sequences at $\tilde{x}^o_k$ and the backbone child vertex of $\tilde{x}^o_k$ (i.e., $\tilde{x}^o_{k+1}$) satisfy constraints (A), (B), or (C) from Section~\ref{section:stepwise}. \item \textit{(Ancestral reconstruction)} For $o = -,+$ and $k = 1,\ldots,B^o-1$, let $G^o_k$ be the event that there is no mutation between the sequences at $\tilde{x}^o_k$ and its off-backbone child vertex $\tilde{z}^o_k$ \textit{and} that $\sigma_{\tilde{z}^o_k}$ is correctly reconstructed by applying the Fitch method on the subtree rooted at $\tilde{z}^o_k$. \item \textit{(Root segment)} For $o = -,+$, let $H^o$ be the event that the sequences at the root and at $\tilde{x}^o_1$ are identical. \end{itemize} The following proposition provides a requirement on the maximum branch length $t_{\textnormal{max}}$ for all the above events to occur simultaneously. Define the bad event $$ \mathcal{B} = (H^-)^c \cup (H^+)^c \cup \left\{\bigcup_{o=-,+} \bigcup_{k=1}^{B^o-1} (F^o_k)^c \cup (G^o_k)^c\right\}. $$ Recall that the pre-processing procedure has a parameter $\delta_1$. \begin{prop}[Union bound over the backbone] \label{prop:Intersection} Fix a tree height $h > 0$. For any $0 < \delta_1 < h$, there is a $t_{\textnormal{max}}$ small enough that $$ \PP\left[ \mathcal{B} \right] \leq C h \delta_1 \log^2(\delta_1^{-1}), $$ where $C$ is a constant depending only on $\lambda,\mu,\eta$. \end{prop} \begin{proof} We take a union bound over the events making up $\mathcal{B}$. \paragraph{Controlling the lengths} For each event, we first apply the law of total probability to control for the length of the starting sequence as follows. Suppose that sequence $\tau$ is stationary, which we denote by $\tau \sim \Pi$. Using the stationary distribution for the length (i.e.,~\eqref{eq:LengthStationary}), we have $$ \PP_{\tau \sim \Pi} \left[|\tau| > \bar{L}\right] = \sum_{M=\bar{L}+1}^{\infty} \left(1 - \frac{\lambda}{\mu}\right) \left(\frac{\lambda}{\mu}\right)^{M} = \left(\frac{\lambda}{\mu}\right)^{\bar{L} + 1}. $$ The expression on the right is made less than $\delta_1^2$ by choosing \begin{equation}\label{eq:barLdef} \bar{L} = \bigg\lceil \frac{\log(\delta_1^{2})}{\log(\lambda/\mu)}\bigg\rceil \leq C' \log(\delta_1^{-1}), \end{equation} for a constant $C' > 0$ depending only on $\mu, \lambda$, where recall that $\mu > \lambda$. Then for any event $\mathcal{E}$ which depends on $\tau$, we can write \begin{align} \PP[\mathcal{E}] &= \PP[\mathcal{E}\,|\,|\tau| \leq \bar{L}] \,\PP[|\tau| \leq \bar{L}] + \PP[\mathcal{E}\,|\,|\tau| > \bar{L}] \,\PP[|\tau| > \bar{L}]\nonumber\\ &\leq \PP[\mathcal{E}\,|\,|\tau| \leq \bar{L}] + \PP[|\tau| > \bar{L}]\nonumber\\ &\leq \PP[\mathcal{E}\,|\,|\tau| \leq \bar{L}] + \delta_1^2,\label{eq:controllingLength} \end{align} for the choice of $\bar{L}$ above. 
\paragraph{Events $H^o$} For $o = -,+$, we use Lemma~\ref{lemma:staying} to bound the probability of $(H^o)^c$. By construction, $\tilde{x}^o_1$ is a child of the root, so the edge length between the root and $\tilde{x}^o_1$ is at most $t_{\textnormal{max}}$. Hence, using Lemma~\ref{lemma:staying} and~\eqref{eq:controllingLength} with $\tau := \sigma_\rho$ and $\mathcal{E} := (H^o)^c$, we get \begin{equation} \label{eq:boundH} \PP[(H^o)^c] \leq t_{\textnormal{max}} (\bar{L}+1)(\mu + \lambda + \eta) + \delta_1^2. \end{equation} \paragraph{Events $G^o_k$} For $o = -,+$ and $k = 1,\ldots,B^o-1$, we use Lemma~\ref{lemma:staying} together with Proposition~\ref{prop:Fitch2} to bound the probability of $(G^o_k)^c$. Here we take $\tau := \sigma_{\tilde{x}^o_k}$ and $\mathcal{E} := (G^o_k)^c$. By assumption, the edge length between $\tilde{x}^o_k$ and its off-backbone child $\tilde{z}^o_k$ is at most $t_{\textnormal{max}}$. Further, for any fixed failure probability $\delta_a > 0$ and length threshold $\bar{L}$, the maximum branch length $t_{\textnormal{max}}$ can be taken small enough for Proposition~\ref{prop:Fitch2} to hold. By~\eqref{eq:controllingLength}, we get \begin{align*} \PP[(G^o_k)^c] &\leq \PP[(G^o_k)^c\,|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}] + \delta_1^2\\ &\leq \PP\left[\{\sigma_{\tilde{x}^o_k} \neq \sigma_{\tilde{z}^o_k}\} \bigcup \left\{\{\sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k}\} \cap \{\hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k}\}\right\} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right] + \delta_1^2\\ &\leq \PP\left[\{\sigma_{\tilde{x}^o_k} \neq \sigma_{\tilde{z}^o_k}\} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right]\\ &\qquad + \PP\left[ \left\{\{\sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k}\} \cap \{\hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k}\}\right\} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right] + \delta_1^2. \end{align*} We use the Markov property to bound the second term as follows: \begin{align*} &\PP\left[ \left\{\{\sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k}\} \cap \{\hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k}\}\right\} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right]\\ &= \PP\left[ \sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right] \,\PP\left[ \hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k} \,\middle|\,\sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k}, |\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right]\\ &= \PP\left[ \sigma_{\tilde{x}^o_k} = \sigma_{\tilde{z}^o_k} \,\middle|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}\right] \,\PP\left[ \hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k} \,\middle|\,|\sigma_{\tilde{z}^o_k}| \leq \bar{L}\right]\\ &\leq \PP\left[ \hat\sigma_{\tilde{z}^o_k} \neq \sigma_{\tilde{z}^o_k} \,\middle|\,|\sigma_{\tilde{z}^o_k}| \leq \bar{L}\right]. \end{align*} Plugging this back above and using Lemma~\ref{lemma:staying} and Proposition~\ref{prop:Fitch2} gives \begin{equation} \label{eq:boundG} \PP[(G^o_k)^c] \leq t_{\textnormal{max}} (\bar{L}+1)(\mu + \lambda + \eta) + \delta_a + \delta_1^2. \end{equation} \paragraph{Events $F^o_k$} For $o = -,+$ and $k = 1,\ldots,B^o-1$, we use Lemma~\ref{lemma:atmostone} to bound the probability of $(F^o_k)^c$. Here we take $\tau := \sigma_{\tilde{x}^o_k}$ and $\mathcal{E} := (F^o_k)^c$. By construction (i.e., by the backbone sparsification pre-processing step), the edge length between $\tilde{x}^o_k$ and its backbone child $\tilde{x}^o_{k+1}$ is at most $2 \delta_1$.
By~\eqref{eq:controllingLength} and Lemma~\ref{lemma:atmostone}, we get \begin{align} \PP[(F^o_k)^c] &\leq \PP[(F^o_k)^c\,|\,|\sigma_{\tilde{x}^o_k}| \leq \bar{L}] + \delta_1^2\nonumber\\ &\leq \left\{2 \delta_1 (\bar{L}+2)[\mu+\lambda+\eta]\right\}^2 + \delta_1^2.\label{eq:boundF} \end{align} \paragraph{Union bound} Taking a union bound over all events above gives \begin{align*} \PP[\mathcal{B}] &\leq 2 \left[t_{\textnormal{max}} (\bar{L}+1)(\mu + \lambda + \eta) + \delta_1^2\right]\\ &\qquad + \sum_{o=-,+}\sum_{k=1}^{B^o-1} \left[ t_{\textnormal{max}} (\bar{L}+1)(\mu + \lambda + \eta) + \delta_a + \delta_1^2 \right]\\ &\qquad + \sum_{o=-,+}\sum_{k=1}^{B^o-1} \left[ \left\{2 \delta_1 (\bar{L}+2)[\mu+\lambda+\eta]\right\}^2 + \delta_1^2 \right] \end{align*} by \eqref{eq:boundH},~\eqref{eq:boundG} and \eqref{eq:boundF}. We make all terms in square brackets of order at most $\delta_1^2 \log^2 (\delta_1^{-1})$ by choosing $\delta_a := \delta_1^2$ and then choosing $0 < t_{\textnormal{max}} \leq \delta_1^2$ small enough for Proposition~\ref{prop:Fitch2} to hold. Then we get, using~\eqref{eq:barLdef}, \begin{align*} \PP[\mathcal{B}] &\leq 2 \left[\delta_1^2 ( C' \log(\delta_1^{-1})+1)(\mu + \lambda + \eta) + \delta_1^2\right]\\ &\qquad + (B^- + B^+ - 2) \left[ \delta_1^2 ( C' \log(\delta_1^{-1})+1)(\mu + \lambda + \eta) + 2\delta_1^2 \right]\\ &\qquad + (B^- + B^+ - 2) \left[ \left\{2 \delta_1 ( C' \log(\delta_1^{-1})+2)[\mu+\lambda+\eta]\right\}^2 + \delta_1^2 \right]. \end{align*} Because the tree has height $h$ and each backbone edge has length at least $\delta_1$ (after pre-processing), with the exception of the first and last one on each side of the root, we must have $(B^o-2) \delta_1 \leq h$, or after rearranging $B^o \leq h/\delta_1 + 2$. Employing this bound and simplifying gives finally \begin{equation*} \PP[\mathcal{B}] \leq C h \delta_1 \log^2(\delta_1^{-1}), \end{equation*} for a constant $C$ depending only on $\mu, \lambda, \eta$, as claimed. \end{proof} \subsection{Proof of the theorem} We are now ready to finish the proof of the main result. \begin{proof}[Proof of Theorem~\ref{thm:main}] For a fixed failure probability $\varepsilon$, we first choose $\delta_1$ small enough (as a function of $h, \mu, \lambda, \eta$) such that $C h \delta_1 \log^2(\delta_1^{-1}) \leq \varepsilon$. We then choose $t_{\textnormal{max}}$ small enough (again as a function of $h, \mu, \lambda, \eta$) that Proposition~\ref{prop:Intersection} implies $\PP[\mathcal{B}] \leq \varepsilon$. Proposition~\ref{prop:correct-align} then completes the proof of the theorem. \end{proof} \section*{Acknowledgments} SR is grateful to Alexandre Bouchard-C\^ot\'e (UBC) for insightful discussions in the early stages of this project. SR was supported by NSF grants DMS-1614242, DMS-1902892, DMS-1916378 and DMS-2023239 (TRIPODS Phase II), as well as a Simons Fellowship and a Vilas Associates Award. BL was supported by NSF grants DMS-1614242, CCF-1740707 (TRIPODS), DMS-1902892 and a Vilas Associates Award (to SR). BL was also supported by NSF grant DMS-1646108 (University of Michigan-Ann Arbor, Department of Statistics, RTG). \newpage \bibliographystyle{alpha}
\section{Supplemental Material} In this Supplemental Material, we provide the mathematical details of the results presented in the Letter. \subsection{Zeno dynamics. Error bound} Consider the spectral resolution of $H=H^\dagger$: \begin{equation} H=\sum_{k=1}^d h_k P_k, \end{equation} where \begin{equation} P_k P_\ell =\delta_{k\ell}P_\ell = \delta_{k\ell}P_\ell^\dagger, \qquad \sum_k P_k = \openone, \end{equation} $d\leq \dim \mathcal{H}$ is the number of distinct eigenvalues of $H$, and $h_k\in \mathbb{R}$, with $h_k\neq h_{\ell}$ for $k\neq \ell$. Given a perturbation $V=V^\dagger$, its block-diagonal part (the Zeno Hamiltonian) is given by \begin{equation} V_{\mathrm{Z}}= \sum_k P_k V P_k. \end{equation} We want to bound the divergence \begin{equation} \delta_{\mathrm{Z}}(t) = \|\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \| \end{equation} between the dynamics generated by $H+\varepsilon V$ and the dynamics generated by its block-diagonal part $H+\varepsilon V_{\mathrm{Z}}$. We will use a trick elaborated in Ref.~\cite{ref:unity1}, which is based on Kato's seminal proof of the adiabatic theorem~\cite{ref:KatoAdiabatic}. Fix a spectral projection $P_\ell$ and consider the reduced resolvent at $h_\ell$, $\lim_{z\to h_\ell} (H-z\openone)^{-1} (\openone - P_\ell)$, that is \begin{equation} S_\ell = \sum_{k \,:\, k\neq \ell} \frac{1}{h_k - h_\ell} P_k. \end{equation} In the following, we will use $1$ for the identity operator $\openone$ and simply write $H-z$ instead of $H-z\openone$. We get $P_\ell S_\ell = S_\ell P_\ell = 0$ and \begin{equation} (H-h_\ell) S_\ell = S_\ell (H-h_\ell) = \sum_{k \,:\, k\neq \ell} \frac{h_k - h_\ell}{h_k-h_\ell} P_k = \sum_{k \,:\, k\neq \ell} P_k = 1- P_\ell, \label{eq:invres} \end{equation} that is, $S_\ell$ is the inverse of $H-h_\ell$ on the range of $1-P_\ell$. We now write \begin{equation} \mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} = -\int_0^t \mathrm{d} s\,\frac{\partial}{\partial s} \bigl( \mathrm{e}^{\mathrm{i} (t-s)(H+\varepsilon V)} \mathrm{e}^{\mathrm{i} s(H+\varepsilon V_{\mathrm{Z}}) }\bigr) = \mathrm{i}\varepsilon \int_0^t \mathrm{d} s \, \mathrm{e}^{\mathrm{i} (t-s)(H+\varepsilon V)} (V-V_{\mathrm{Z}})\mathrm{e}^{\mathrm{i} s(H+\varepsilon V_{\mathrm{Z}}) }, \end{equation} whence \begin{equation} \bigl(\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \bigr) P_\ell = \mathrm{i}\varepsilon \int_0^t \mathrm{d} s\, \mathrm{e}^{\mathrm{i} (t-s)(H+\varepsilon V)} (1 - P_\ell) V P_\ell \mathrm{e}^{\mathrm{i} s(h_\ell+\varepsilon V_{\mathrm{Z}}) }. \end{equation} By~\eqref{eq:invres} we have \begin{equation} \bigl(\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \bigr) P_\ell = \mathrm{i}\varepsilon \int_0^t \mathrm{d} s \, \mathrm{e}^{\mathrm{i} (t-s)(H+\varepsilon V)} (H- h_\ell) S_\ell V P_\ell \mathrm{e}^{\mathrm{i} s(h_\ell+\varepsilon V_{\mathrm{Z}}) }.
\end{equation} Now notice that \begin{equation} \mathrm{i} \mathrm{e}^{\mathrm{i} (t-s)(H+\varepsilon V)} (H- h_\ell) = - \frac{\partial}{\partial s} \bigl( \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} \mathrm{e}^{\mathrm{i} s(h_\ell + \varepsilon V)} \bigr) \mathrm{e}^{-\mathrm{i} s(h_\ell + \varepsilon V)}, \end{equation} and thus \begin{equation} \bigl(\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \bigr) P_\ell = - \varepsilon \int_0^t \mathrm{d} s \, \frac{\partial}{\partial s} \bigl( \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} \mathrm{e}^{\mathrm{i} s(h_\ell + \varepsilon V)} \bigr) \mathrm{e}^{-\mathrm{i} s \varepsilon V} S_\ell V P_\ell \mathrm{e}^{\mathrm{i} s \varepsilon V_{\mathrm{Z}} }. \end{equation} By integrating by parts \begin{align} \bigl(\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \bigr) P_\ell ={} & {-\varepsilon} \int_0^t \mathrm{d} s \, \frac{\partial}{\partial s} \bigl( \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} \mathrm{e}^{\mathrm{i} s(h_\ell + \varepsilon V)} \mathrm{e}^{-\mathrm{i} s \varepsilon V} S_\ell V P_\ell \mathrm{e}^{\mathrm{i} s \varepsilon V_{\mathrm{Z}} } \bigr) \nonumber\\ & {} + \varepsilon \int_0^t \mathrm{d} s \, \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} \mathrm{e}^{\mathrm{i} s(h_\ell + \varepsilon V)} \frac{\partial}{\partial s} \bigl( \mathrm{e}^{-\mathrm{i} s \varepsilon V} S_\ell V P_\ell \mathrm{e}^{\mathrm{i} s \varepsilon V_{\mathrm{Z}} } \bigr) \nonumber\\ ={} & \varepsilon \bigl( \mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} S_\ell V P_\ell - S_\ell V P_\ell \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}}) } \bigr) \nonumber\\ & {}-\mathrm{i} \varepsilon^2 \int_0^t \mathrm{d} s \, \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} (V S_\ell V P_\ell - S_\ell V P_\ell V_{\mathrm{Z}}) \mathrm{e}^{\mathrm{i} s (H+ \varepsilon V_{\mathrm{Z}} )} . \end{align} Finally, by summing over $\ell$ we have \begin{equation} \mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} = \varepsilon \bigl( \mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} X - X \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}}) } \bigr) -\mathrm{i} \varepsilon^2 \int_0^t \mathrm{d} s \, \mathrm{e}^{\mathrm{i} (t-s) (H+\varepsilon V)} (V X - X V_{\mathrm{Z}}) \mathrm{e}^{\mathrm{i} s (H+ \varepsilon V_{\mathrm{Z}} )} , \end{equation} where \begin{equation} X= \sum_\ell S_\ell V P_\ell. \label{eq:Xdef} \end{equation} By taking the operator norm, one gets \begin{equation} \delta_{\mathrm{Z}}(t) = \|\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \| \leq 2 \varepsilon \|X\| + \varepsilon^2 \int_0^t \mathrm{d} s \, (\|V\| \|X\| +\| X \| \|V_{\mathrm{Z}}\| ) = 2 \varepsilon \|X\| + \varepsilon^2 \|X\| (\|V\| + \| V_{\mathrm{Z}}\| ) t. 
\label{eq:bounddelta} \end{equation} Now, we get \begin{equation} \| X \|^2 = \| X X^\dag\| = \left\| \sum_\ell S_\ell V P_\ell V S_\ell \right\| \leq \sum_\ell \| S_\ell V P_\ell V S_\ell\| \leq \sum_\ell \|S_\ell\|^2 \|V\|^2, \end{equation} while \begin{equation} \|S_\ell\| =\left\| \sum_{k \,:\, k\neq \ell} \frac{P_k}{h_k-h_\ell} \right\| = \max_{k \,:\, k\neq \ell} \left| \frac{1}{h_k-h_\ell} \right| \leq \frac{1}{\eta}, \end{equation} where \begin{equation} \eta = \min_{k,\ell \,:\, k\neq\ell} |h_k - h_\ell | \end{equation} is the minimum spectral gap of $H$, and thus \begin{equation} \| X \| \leq \frac{\sqrt{d}}{\eta} \|V\|. \label{eq:boundX} \end{equation} Moreover, in the operator norm, \begin{equation} \|V_{\mathrm{Z}}\| = \left\| \sum_k P_k V P_k \right\| = \max_k \|P_k V P_k \| \leq \|V\|. \label{eq:boundXVZ} \end{equation} Therefore, by plugging~\eqref{eq:boundX} and~\eqref{eq:boundXVZ} into~\eqref{eq:bounddelta}, we finally get \begin{equation} \delta_{\mathrm{Z}}(t) = \|\mathrm{e}^{\mathrm{i} t (H+\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H+\varepsilon V_{\mathrm{Z}})} \| \leq\frac{2 \sqrt{d}}{\eta}\varepsilon\|V\|(1+\varepsilon\|V\|t), \label{eq:bounddelta1} \end{equation} which for $\|V\|=1$ reduces to Eq.~(\ref{eq:deltaZ}) of the Letter. \subsection{Robust symmetries} Consider now a robust symmetry \begin{equation} M = \sum_km_kP_k, \end{equation} with $m_k\in\mathbb{R}$. This is a conserved observable, $M=M^\dagger$, $[M, H]=0$, that acts uniformly within each eigenspace of $H$. We have $M_t = \mathrm{e}^{\mathrm{i} t H } M \mathrm{e}^{-\mathrm{i} t H } = M$, and for every perturbation $\varepsilon V$, \begin{align} \|M_t^{\varepsilon} - M\| &= \| \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} M \mathrm{e}^{-\mathrm{i} t (H +\varepsilon V)} -M \| \nonumber\\ &=\| \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} M -M \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} \| \nonumber\\ & = \bigl\| \bigl(\mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}})} \bigr) M + \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}})} M -M \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} \bigr\|. \end{align} By making use of the commutativity $[M, V_{\mathrm{Z}}] =0$, one gets \begin{equation} \|M_t^{\varepsilon} - M\| = \bigl\| \bigl(\mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}})} \bigr) M -M \bigl(\mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}})} \bigr) \bigr\| \leq 2 \| M\| \| \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t (H +\varepsilon V_{\mathrm{Z}})} \| , \end{equation} that is \begin{equation} \|M_t^{\varepsilon} - M\| \leq 2 \| M\| \delta_{\mathrm{Z}}(t), \end{equation} which is the inequality~(4) of the Letter. 
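As a numerical sanity check of the bound~\eqref{eq:bounddelta1} (this snippet is ours, not part of the derivation; it assumes Python with NumPy and SciPy), one can draw a random $H$ with $d$ distinct eigenvalues, a random Hermitian $V$ normalized to $\|V\|=1$, build $V_{\mathrm{Z}}$ from the spectral projections, and compare $\delta_{\mathrm{Z}}(t)$ with the right-hand side:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, dim = 3, 6                                  # d distinct eigenvalues, dim(H) = 6

# Random Hermitian H = sum_k h_k P_k with orthogonal projections P_k.
h = np.sort(rng.uniform(0.0, 1.0, d))
labels = rng.integers(0, d, dim)
labels[:d] = np.arange(d)                      # ensure every eigenvalue occurs
U = np.linalg.qr(rng.normal(size=(dim, dim))
                 + 1j * rng.normal(size=(dim, dim)))[0]
P = [U[:, labels == k] @ U[:, labels == k].conj().T for k in range(d)]
H = sum(h[k] * P[k] for k in range(d))
eta = min(abs(h[k] - h[l]) for k in range(d) for l in range(d) if k != l)

# Random Hermitian V with ||V|| = 1, and its Zeno (block-diagonal) part.
V = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
V = V + V.conj().T
V = V / np.linalg.norm(V, 2)
VZ = sum(Pk @ V @ Pk for Pk in P)

eps = 0.01
for t in (1.0, 10.0, 100.0):
    delta = np.linalg.norm(expm(1j * t * (H + eps * V))
                           - expm(1j * t * (H + eps * VZ)), 2)
    bound = 2 * np.sqrt(d) / eta * eps * (1 + eps * t)
    print(t, delta <= bound)                   # True for every t
\end{verbatim}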
Analogously, by replacing $V_{\mathrm{Z}}$ in the previous derivation with $V_H(\varepsilon)$, which still commutes with the robust conserved observable $M$, i.e.\ $[M, V_{H}(\varepsilon)] =0$, one has the bound \begin{equation} \|M_t^{\varepsilon} - M\| = \bigl\| \bigl(\mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t[H +\varepsilon V_{H}(\varepsilon)]}\bigr) M -M \bigl(\mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t[H +\varepsilon V_{H}(\varepsilon)]} \bigr) \bigr\| \leq 2 \| M\| \delta_\infty, \label{eq:Minf} \end{equation} where \begin{equation} \delta_\infty = \sup_t \| \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t[H +\varepsilon V_{H}(\varepsilon)]} \| \label{eq:25} \end{equation} is the uniform bound on the divergence of the two dynamics. This is the first inequality in Eq.~(9) of the Letter. The block-diagonal perturbation $V_{H}(\varepsilon)$ can be chosen such that $\delta_\infty = O(\varepsilon)$. The crucial ingredient is to choose a block-diagonal perturbation $V_H(\varepsilon)$ such that $H+\varepsilon V_H(\varepsilon)$ is \emph{isospectral} with $H+\varepsilon V$, and thus unitarily equivalent to it: \begin{equation} H+ \varepsilon V_H(\varepsilon) = W_\varepsilon^\dagger (H+\varepsilon V) W_\varepsilon, \label{eq:isospec0} \end{equation} with a unitary $W_\varepsilon = 1 + O(\varepsilon)$. Such a block-diagonal $V_H(\varepsilon)$ and a unitary $W_\varepsilon$ actually exist \cite{Cloizeaux,Klein,eternal}. By plugging~\eqref{eq:isospec0} into~\eqref{eq:25}, we get \begin{align} \delta_\infty & = \sup_t\| \mathrm{e}^{\mathrm{i} t(H+\varepsilon V)} - W_\varepsilon^\dagger \mathrm{e}^{\mathrm{i} t (H+ \varepsilon V)} W_\varepsilon \| \nonumber\\ & = \sup_t\| (1 - W_\varepsilon^\dagger) \mathrm{e}^{\mathrm{i} t(H+\varepsilon V)} + W_\varepsilon^\dagger \mathrm{e}^{\mathrm{i} t (H+ \varepsilon V)} (1 - W_\varepsilon) \| \nonumber\\ & \leq \sup_t\| 1 - W_\varepsilon^\dagger \| \| \mathrm{e}^{\mathrm{i} t(H+\varepsilon V)} \| + \| W_\varepsilon^\dagger \mathrm{e}^{\mathrm{i} t (H+ \varepsilon V)}\| \| 1 - W_\varepsilon \| \nonumber\\ & = \| 1 - W_\varepsilon^\dagger \| + \| 1 - W_\varepsilon \| \vphantom{\sup_t} \nonumber\\ &= O(\varepsilon). \label{eq:28} \end{align} The existence, and the explicit construction, of a unitary $W_\varepsilon$ that carries the perturbed Hamiltonian into a block-diagonal form is proved and discussed in detail in Ref.~\cite{eternal}. Here, in the next subsections, we will show the \emph{necessity} of an isospectral perturbation, and then discuss its construction and prove that \begin{equation} V_H(\varepsilon) = V_{\mathrm{Z}} + O(\varepsilon), \end{equation} by exploiting the connection with quantum KAM theory. \subsection{Isospectral perturbations} Consider a Hamiltonian $H=H^\dagger$ and a perturbed Hamiltonian $\tilde{H} = H +O(\varepsilon)$, with small $\varepsilon$. We want to compare the two dynamics by looking at their divergence: \begin{equation} \delta_{H, \tilde{H}} (t) = \|\mathrm{e}^{\mathrm{i} t \tilde{H}}- \mathrm{e}^{\mathrm{i} t {H}}\| . \end{equation} Consider the spectral decompositions \begin{equation} H= \sum_{k=1}^d h_k P_k , \qquad \tilde{H} = \sum_{k=1}^d \tilde{h}_k \tilde{P}_k, \end{equation} where $d$ is the number of distinct eigenvalues of $\tilde{H}$, i.e.\ $\tilde{h}_k\neq\tilde{h}_\ell$ for $k\neq\ell$. It may happen that $h_k=h_\ell$ for some $k\neq\ell$, if the degeneracy is lifted by the perturbation.
However in such a case we choose the orthogonal projections $P_k$ and $P_\ell$ such that they are adapted to the perturbation, that is $\tilde{P}_k = P_k + O(\varepsilon)$ and $\tilde{P}_\ell = P_\ell + O(\varepsilon)$~\cite{ref:KatoBook}. As for the eigenvalues, $\tilde{h}_k = h_k + O(\varepsilon)$. We get \begin{equation} \mathrm{e}^{\mathrm{i} t \tilde{H}}- \mathrm{e}^{\mathrm{i} t {H}} = \sum_k \bigl( \mathrm{e}^{\mathrm{i} t \tilde{h}_k } \tilde{P}_k - \mathrm{e}^{\mathrm{i} t h_k} P_k \bigr) = \sum_k \mathrm{e}^{\mathrm{i} t \tilde{h}_k } (\tilde{P}_k - P_k) - \sum_k (\mathrm{e}^{\mathrm{i} t \tilde{h}_k } -\mathrm{e}^{\mathrm{i} t h_k}) P_k . \end{equation} The first sum on the right-hand side is $O(\varepsilon)$ uniformly in time, as \begin{equation} \left\| \sum_k \mathrm{e}^{\mathrm{i} t \tilde{h}_k } (\tilde{P}_k - P_k) \right\| \leq \sum_k \| \tilde{P}_k - P_k \| = O(\varepsilon). \end{equation} On the other hand, the last term reads \begin{equation} \left\| \sum_k (\mathrm{e}^{\mathrm{i} t \tilde{h}_k } -\mathrm{e}^{\mathrm{i} t h_k}) P_k \right\| = \max_k |\mathrm{e}^{\mathrm{i} t \tilde{h}_k} -\mathrm{e}^{\mathrm{i} t h_k}| = 2 \max_k \left| \sin\!\left(t \frac{\tilde{h}_k - h_k}{2}\right)\right|, \end{equation} so that \begin{equation} \delta_{H,\tilde{H}} (t) = 2 \max_k \left| \sin \left(t \frac{\tilde{h}_k - h_k}{2}\right)\right| + O(\varepsilon). \label{eq:deltaHH} \end{equation} Therefore, since $\tilde{h}_k - h_k = O(\varepsilon)$, we get \begin{equation} \delta_{H,\tilde{H}}(t) = O(\varepsilon), \qquad \text{for} \quad t=O(1). \end{equation} However, the divergence has a slow drift (secular term) and becomes $O(1)$ for sufficiently large times $O(1/\varepsilon)$. Indeed, \begin{equation} \delta_{H, \tilde{H}} (t) = 2 + O(\varepsilon), \qquad \text{for} \quad t=\frac{\pi} {\tilde{h}_k - h_k} = O(1/\varepsilon), \end{equation} that is, the maximal divergence \begin{equation} \delta_\infty = \sup_t \delta_{H,\tilde{H}} (t) = 2 + O(\varepsilon). \end{equation} Geometrically, the evolution of a Hamiltonian with $d$ distinct eigenvalues yields a (quasi-)periodic motion of a point on a torus. Two motions with different frequencies, however small the differences may be, will eventually accumulate a divergence of $O(1)$. The only way to avoid this slow drift is that the two motions be isochronous, that is the first term in~\eqref{eq:deltaHH} should be identically zero. This means that \begin{equation} \delta_\infty = O(\varepsilon) \qquad \text{iff} \qquad \tilde{h}_k = h_k, \qquad \text{for all } k , \end{equation} i.e., the Hamiltonian $H$ and its perturbation $\tilde{H}$ must be isospectral. \subsection{Quantum KAM iteration. Homological equation} We are looking for a unitary transformation $W_\varepsilon$ close to the identity, such that the transformed total Hamiltonian is isospectral to $H+\varepsilon V$, \begin{equation} H+\varepsilon V_H(\varepsilon) = W_\varepsilon^\dagger (H+\varepsilon V) W_\varepsilon, \label{eq:isospec} \end{equation} with the constraint that $V_H(\varepsilon)$ be block-diagonal, \begin{equation} V_H = \langle V_H \rangle := \sum_k P_k V_H P_k. 
\label{eq:diag} \end{equation} By writing \begin{equation} W_\varepsilon = \mathrm{e}^{\mathrm{i} K(\varepsilon)}, \qquad K(\varepsilon) = \varepsilon K_1 + O(\varepsilon^2), \end{equation} with $K_1 = K_1^\dagger$, and \begin{equation} V_H(\varepsilon) = V_0 + O(\varepsilon), \end{equation} with $V_0=V_0^\dagger$, Eq.~\eqref{eq:isospec} reads \begin{equation} H+ \varepsilon V_H(\varepsilon) = (1 - \mathrm{i} \varepsilon K_1) (H+\varepsilon V) (1 + \mathrm{i} \varepsilon K_1) + O(\varepsilon^2), \end{equation} whence \begin{equation} V_0 = \mathrm{i} [H, K_1] + V. \label{eq:step1} \end{equation} Notice that \begin{equation} \langle [H, K_1] \rangle = \sum_k P_k ( H K_1- K_1 H) P_k = \sum_k P_k (h_k K_1 - K_1 h_k ) P_k = 0. \end{equation} Therefore, the constraint~\eqref{eq:diag}, which implies $V_0 = \langle V_0 \rangle$, gives \begin{equation} V_0 = \langle V \rangle = \sum_k P_k V P_k = V_{\mathrm{Z}}, \end{equation} and \begin{equation} \mathrm{i} [H, K_1] =- \{V\}, \label{eq:homological} \end{equation} where \begin{equation} \{V\}:= V- \langle V \rangle = \sum_{k,\ell \,:\, k\neq \ell} P_k V P_\ell = \sum_k P_k V (1 - P_k) = \frac{1}{2} \sum_k [P_k,[P_k,V]] \end{equation} is the off-diagonal part of $V$. The expression~\eqref{eq:homological} should be understood as an equation for $K_1$, the first-order term of the generator $K(\varepsilon)$ of the unitary $W_\varepsilon$. It is known as the \emph{homological} equation and is the fundamental building block of quantum KAM theory~\cite{ref:Sinai,ref:Russman,ref:Craig,ref:Poschel,ref:Bellissard}. It is the quantum analog of the homological equation of KAM theory in classical mechanics, where the commutator is replaced by ($-\mathrm{i}$ times) the Poisson bracket, while $\langle{}\cdot{}\rangle$ and $\{{}\cdot{}\}$ are replaced by the averaged and the oscillating part of the perturbation, respectively~\cite{ref:ThirringClassicalPhysics,ref:ArnoldBook}. One can prove that the homological equation~\eqref{eq:homological} has a unique solution with $\langle K_1 \rangle=0$, for every $H$ and $V$. Indeed, by sandwiching~\eqref{eq:homological} between $P_k$ and $P_\ell$ with $k\neq \ell$ we get \begin{equation} (h_k - h_\ell) P_k K_1 P_\ell = \mathrm{i} P_k V P_\ell, \end{equation} that is \begin{equation} \{ K_1\} = \mathrm{i} \sum_{k,\ell \,:\, k\neq \ell} \frac{P_k V P_\ell}{h_k-h_\ell} = \mathrm{i} \sum_\ell S_\ell V P_\ell. \label{eq:solutionh} \end{equation} Notice that $\{K_1\} = \{K_1\}^\dagger$, as it should be, and in fact one has \begin{equation} \{ K_1 \} = \mathrm{i} \sum_\ell S_\ell V P_\ell = -\mathrm{i} \sum_\ell P_\ell V S_\ell = \frac{\mathrm{i}}{2} \sum_\ell (S_\ell V P_\ell - P_\ell V S_\ell). \end{equation} Moreover, notice that we have complete freedom in the choice of the block-diagonal part $\langle K_1 \rangle$ of $K_1$, since it commutes with $H$ and thus is immaterial in equation~\eqref{eq:homological}, so that \begin{equation} K_1 = \mathrm{i} \sum_\ell S_\ell V P_\ell + \sum_\ell P_\ell Z P_\ell, \end{equation} with an arbitrary $Z=Z^\dagger$. In the following, for simplicity, we will \emph{fix the gauge} $Z=0$, i.e.\ $\langle K_1 \rangle=0$, thus making the solution of~\eqref{eq:homological} unique. From the explicit expression of the generator $K_1$, we can now easily evaluate a uniform bound on the divergence~\eqref{eq:25}. 
From the inequality~\eqref{eq:28}, we get \begin{equation} \delta_\infty = \sup_t \| \mathrm{e}^{\mathrm{i} t (H +\varepsilon V)} - \mathrm{e}^{\mathrm{i} t [H +\varepsilon V_{H}(\varepsilon)]} \| \leq 2 \| 1 - W_\varepsilon \| \leq 2\varepsilon \|K_1\| +O(\varepsilon^2) \leq \frac{2 \sqrt{d}}{\eta}\varepsilon\|V\| + O(\varepsilon^2), \label{eq:firstorderbound} \end{equation} where the last inequality is a consequence of the bound~\eqref{eq:boundX}, since $K_1=\mathrm{i} X$. In fact, an explicit bound on the divergence $\delta_\infty$ is obtained in Ref.~\cite[Appendix~E]{eternal} as \begin{equation} \delta_\infty \le \hat{\delta}_\infty, \qquad \text{where } \quad \hat{\delta}_\infty = 2\sqrt{d}\left( \frac{1}{\sqrt[4]{1-4\varepsilon/\eta}}-1 \right) = \frac{2 \sqrt{d}}{\eta}\varepsilon + O(\varepsilon^2), \end{equation} for $\|V\|=1$, which is easily seen to be always larger than the first order term in~\eqref{eq:firstorderbound}, $\hat{\delta}_\infty \geq 2 \sqrt{d} \,\varepsilon/ \eta$. This bound becomes trivial once it exceeds $\delta_\infty=2$ as $\varepsilon$ increases. Since $d\ge2$, let us care only about the values of $\varepsilon$ where $2\sqrt{2}\,(1/\sqrt[4]{1-4\varepsilon/\eta}-1)\le2$, namely, for $4\varepsilon/\eta\le(13 + 12\sqrt{2})/(17 + 12 \sqrt{2})= x_0$. Within this range, one gets the linear bound $2(1/\sqrt[4]{1-4\varepsilon/\eta}-1)\le (\sqrt{2}/ x_0) 4\varepsilon/\eta < 7 \varepsilon/\eta$. Therefore, we have \begin{equation} \delta_\infty < \frac{7\sqrt{d}}{\eta} \varepsilon. \end{equation} This yields Eq.~(7) of the Letter. \subsubsection{Higher-order terms} One can also show that all the following steps of the KAM iteration, giving higher-order terms $V_n$ in $V_H(\varepsilon)$ and $K_{n+1}$ in $K(\varepsilon)$, with $n\geq 1$, have the same structure as the first step and involve homological equations. For example, by considering the next-order terms, \begin{equation} K(\varepsilon) = \varepsilon K_1 + \varepsilon^2 K_2 + O(\varepsilon^3) , \qquad V_H(\varepsilon) = V_0 + \varepsilon V_1 + O(\varepsilon^2), \end{equation} one gets \begin{equation} H+ \varepsilon V_0 + \varepsilon^2 V_1 = (H+\varepsilon V) +\mathrm{i} \varepsilon [H+\varepsilon V, K_1] - \frac{1}{2} \varepsilon^2 [[H, K_1] , K_1] +\mathrm{i} \varepsilon^2 [H, K_2] + O(\varepsilon^3). \end{equation} The second-order terms give \begin{equation} V_1 = \mathrm{i} [H,K_2] -\frac{1}{2} [[H, K_1], K_1] + \mathrm{i} [V,K_1] , \label{eq:step2} \end{equation} that is \begin{equation} V_1 = \mathrm{i} [H,K_2] + \mathrm{i} \left[V- \frac{1}{2}\{ V\}, K_1\right]. \end{equation} This has the same structure as~\eqref{eq:step1}, and gives \begin{equation} V_1 = \langle V_1\rangle = \left\langle \mathrm{i} \left[V- \frac{1}{2}\{ V\}, K_1\right] \right\rangle = - \sum_\ell P_\ell V S_\ell V P_\ell, \end{equation} and a homological equation for $K_2$: \begin{equation} \mathrm{i} [H,K_2] = - \left\{ \mathrm{i} \left[V- \frac{1}{2}\{ V\}, K_1\right] \right\}. \end{equation} In general, at order $\varepsilon^{n+1}$ one gets an equation of the form \begin{equation} V_n = \mathrm{i} [H, K_{n+1}] + P_n(\mathcal{K}_1, \dots, \mathcal{K}_n)(H) + Q_n(\mathcal{K}_1, \dots, \mathcal{K}_n)(V), \end{equation} where $P_n$ and $Q_n$ are polynomials of order $n$ and $\mathcal{K}_j$ are the superoperators $\mathcal{K}_j(Y)= \mathrm{i} [Y, K_j]$. This has the same structure as~\eqref{eq:step1} or~\eqref{eq:step2}. 
$V_n$ will be given by the block-diagonal part of the right-hand side, while $K_{n+1}$ will be the solution of the homological equation given by the off-diagonal part. This is the algebraic structure of the KAM iteration scheme. For our purposes, this is enough. See for example~\cite{ref:Scherer95,ref:Scherer97}. However, most of the difficulties, and the hardest part of this scheme, arise for infinite-dimensional systems with a vanishing minimal spectral gap $\eta$ because of an accumulation point of the discrete spectrum. Interesting cases are systems with dense point spectrum~\cite{ref:Sinai,ref:Russman,ref:Craig,ref:Poschel,ref:Bellissard}. In such a situation, at each iteration step, the solution of the homological equation~\eqref{eq:solutionh} suffers from the plague of \emph{small denominators}, the same problem that besets celestial mechanics. The reduced resolvent $S_\ell$ becomes unbounded, and the formal expression~\eqref{eq:solutionh} is a bounded operator only for a particular class of perturbations $V$ which are adapted to the Hamiltonian $H$: the closer are the eigenvalues $h_k$ and $h_\ell$ of $H$ at the denominator of~\eqref{eq:solutionh}, the smaller must be the numerator $P_k V P_\ell$. In such a case, the proof of the existence and the convergence of the series makes use of classical techniques of KAM perturbation theory with a careful control of small denominators through a Diophantine condition, and a super-convergent iteration scheme~\cite{ref:ThirringClassicalPhysics,ref:ArnoldBook}. \subsection{Robustness of monotones} In Ref.~\cite{Zanardi}, it is shown that for a symmetry $\mathcal{M}$ of a Lindbladian $\mathcal{L}$ satisfying $[\mathcal{M},\mathcal{L}]=0$ one can define a monotone \begin{equation} f_{\mathcal{M}}(\rho)=\mathop{\mathrm{tr}}\nolimits[\mathcal{M}(\rho)^\dagger(\mathbf{L}_\rho +\lambda \mathbf{R}_\rho )^{-1} (\mathcal{M}(\rho))], \end{equation} which decreases under the evolution $\rho_t=\mathrm{e}^{t\mathcal{L}}\rho$, \begin{equation} f_{\mathcal{M}}(\rho_t) \leq f_{\mathcal{M}}(\rho), \quad \text{for all } t\ge0, \end{equation} where $\mathbf{L}_\rho (X) = \rho X$ and $\mathbf{R}_\rho (X) = X\rho$ are the superoperators of left and right multiplication by $\rho$, respectively, and the inverse with $\lambda \ge 0$ is well defined for strictly positive $\rho$. Here, we prove that a monotone defined with respect to a symmetry of the form \begin{equation} \mathcal{M}=\sum_km_k\mathcal{P}_k, \label{eqn:RobustMopen} \end{equation} where $\{\mathcal{P}_k\}$ are the spectral projections of the Lindbladian $\mathcal{L}$, remains a monotone up to an error $O(\varepsilon)$ eternally, even in the presence of a perturbation $\varepsilon\mathcal{V}$, namely, \begin{equation} f_{\mathcal{M}}(\rho_t^\varepsilon) \leq f_{\mathcal{M}}(\rho)+O(\varepsilon), \quad \text{for all } t\ge0, \label{eqn:PerturbedMonotonicity} \end{equation} where $\rho_t^\varepsilon=\mathrm{e}^{t(\mathcal{L}+\varepsilon\mathcal{V})}\rho$. In this sense, $\mathcal{M}$ in (\ref{eqn:RobustMopen}) is a robust symmetry of the evolution $\mathcal{L}$. 
To show this, we first note that even in the case of open-system evolution one can find a block-diagonal approximation $\mathcal{V}_\mathcal{L}(\varepsilon)$ of the perturbation $\mathcal{V}$ such that $\mathcal{L}+\varepsilon\mathcal{V}_\mathcal{L}(\varepsilon)$ is similar to $\mathcal{L}+\varepsilon\mathcal{V}$ \cite{eternal}, \begin{equation} \mathcal{L}+\varepsilon\mathcal{V}_\mathcal{L}(\varepsilon) =\mathcal{W}_\varepsilon^{-1}(\mathcal{L}+\varepsilon\mathcal{V})\mathcal{W}_\varepsilon. \end{equation} Then, let us consider \begin{equation} \tilde{\mathcal{M}}=\mathcal{W}_\varepsilon\mathcal{M}\mathcal{W}_\varepsilon^{-1}. \end{equation} This is a symmetry of the perturbed system $\mathcal{L}+\varepsilon\mathcal{V}$, corresponding to the symmetry $\mathcal{M}$ of the unperturbed system $\mathcal{L}$, since $[\mathcal{M},\mathcal{V}_\mathcal{L}(\varepsilon)]=0$. Since this similarity transformation is small, $\mathcal{W}_\varepsilon=1+O(\varepsilon)$, we have \begin{equation} \tilde{\mathcal{M}}=\mathcal{M}+O(\varepsilon). \end{equation} Notice that the monotone $f_{\tilde{\mathcal{M}}}(\rho)$ defined with respect to $\tilde{\mathcal{M}}$ is decreasing under the perturbed evolution $\rho_t^\varepsilon=\mathrm{e}^{t(\mathcal{L}+\varepsilon\mathcal{V})}\rho$. Therefore, \begin{equation} f_{\mathcal{M}}(\rho_t^\varepsilon) =f_{\tilde{\mathcal{M}}}(\rho_t^\varepsilon)+O(\varepsilon) \le f_{\tilde{\mathcal{M}}}(\rho)+O(\varepsilon) =f_\mathcal{M}(\rho)+O(\varepsilon), \quad \text{for all } t\ge0. \end{equation} This proves the approximate monotonicity (\ref{eqn:PerturbedMonotonicity}). \end{widetext}
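As a closing numerical illustration (ours, not contained in the Letter or in Ref.~\cite{eternal}), the first step of the KAM iteration is easy to verify directly: the Python snippet below builds a nondegenerate Hermitian $H$ and a Hermitian perturbation $V$, solves the homological equation in the eigenbasis of $H$ with the gauge $\langle K_1\rangle=0$, and checks that $W_\varepsilon=\mathrm{e}^{\mathrm{i}\varepsilon K_1}$ removes the off-diagonal part of the perturbation up to $O(\varepsilon^2)$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, eps = 5, 1e-3

# Nondegenerate H (we work directly in its eigenbasis, where the spectral
# projections P_k select single entries) and a Hermitian perturbation V.
h = np.sort(rng.uniform(0.0, 1.0, d))
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V = (A + A.conj().T) / 2
H = np.diag(h)

offV = V - np.diag(np.diag(V))        # {V}, the off-diagonal part of V

# Homological equation i[H, K1] = -{V}:
# (K1)_{kl} = i V_{kl} / (h_k - h_l) for k != l, gauge <K1> = 0.
denom = h[:, None] - h[None, :]
np.fill_diagonal(denom, 1.0)          # dummy value; diagonal is zeroed below
K1 = 1j * offV / denom
np.fill_diagonal(K1, 0.0)

print(np.allclose(1j * (H @ K1 - K1 @ H), -offV))   # True

# W = exp(i eps K1) block-diagonalizes H + eps V to first order:
# the off-diagonal part of W^dagger (H + eps V) W is O(eps^2).
W = expm(1j * eps * K1)
Ht = W.conj().T @ (H + eps * V) @ W
print(np.max(np.abs(Ht - np.diag(np.diag(Ht)))))    # O(eps^2)
\end{verbatim}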
\section{Introduction} Vision and language are two fundamental capabilities of human intelligence. Humans routinely perform cross-modal analytics through the interactions between vision and language, supporting uniquely human capacities such as describing what they see with a natural sentence (image captioning \cite{anderson2017bottom,yao2018exploring,yao2019hierarchy} and video captioning \cite{pan2016jointly,li2018jointly,Yao:ICCV15}) and answering open-ended questions w.r.t.\ the given image (visual question answering \cite{antol2015vqa,kim2018bilinear}). The question of how language interacts with vision motivates researchers to expand the horizons of the multimedia area by exploiting cross-modal analytics in different scenarios. In the past five years, vision-to-language has been one of the ``hottest'' and fastest-developing topics for cross-modal analytics, with significant growth in both the volume of publications and the range of applications, e.g., image/video captioning and the emerging research task of vision-language pre-training. Although numerous existing vision-to-language works have released open-source implementations, the source codes are implemented in different deep learning platforms (e.g., Caffe, TensorFlow, and PyTorch) and most of them are not organized in a standardized and user-friendly manner. Thus, researchers and engineers have to make intensive efforts to deploy their own ideas/applications for vision-to-language based on existing open-source implementations, which severely hinders the rapid development of cross-modal analytics. To alleviate this issue, we propose the \texttt{X-modaler} codebase, a versatile, user-friendly and high-performance PyTorch-based library that enables a flexible implementation of state-of-the-art vision-to-language techniques by organizing all components in a modular fashion. To our best knowledge, \texttt{X-modaler} is the first open-source codebase for cross-modal analytics that accommodates numerous kinds of vision-to-language tasks. \begin{figure*} \centering \vspace{-0.12in} \includegraphics[width=0.88\linewidth]{architecture.pdf} \vspace{-0.2in} \caption{An overview of the architecture in \texttt{X-modaler}, which is composed of seven major stages: \texttt{pre-processing}, \texttt{encoder}, \texttt{cross-modal interaction}, \texttt{decoder}, \texttt{decode strategy}, \texttt{training strategy}, and \texttt{pre-training}. Each stage is empowered with functionality that covers a series of widely adopted modules in state-of-the-art methods.} \vspace{-0.12in} \label{fig:architecture} \end{figure*} Specifically, taking inspiration from neural machine translation in the NLP field, the typical architecture of vision-to-language models is essentially an encoder-decoder structure. An image/video is first represented as a set of visual tokens (regions or frames), a CNN representation, or high-level attributes via \texttt{pre-processing}, which are further transformed into intermediate states via the \texttt{encoder} (e.g., LSTM, Convolution, or Transformer-based encoder). Next, conditioned on the intermediate states, the \texttt{decoder} is utilized to decode each word at each time step, followed by the \texttt{decode strategy} module (e.g., greedy decoding or beam search) to compose the final output sentence. 
More importantly, recent progress on cross-modal analytics has featured visual attention mechanisms that trigger the cross-modal interaction between the visual content (transformed by the encoder) and the textual sentence (generated by the decoder) to boost vision-to-language. Therefore, an additional stage of \texttt{cross-modal interaction} is commonly adopted in state-of-the-art vision-to-language techniques. The whole encoder-decoder structure can be optimized with different \texttt{training strategies} (e.g., cross-entropy loss or reinforcement learning). In addition, vision-language pre-training approaches (e.g., \cite{zhou2019unified,li2021scheduled,pan2020auto}) go beyond the typical encoder-decoder structure by including additional \texttt{pre-training} objectives (e.g., masked language modeling and masked sentence generation). In this way, the state-of-the-art cross-modal analytics techniques can be encapsulated into seven general-purpose stages: \texttt{pre-processing}, \texttt{encoder}, \texttt{cross-modal interaction}, \texttt{decoder}, \texttt{decode strategy}, \texttt{training strategy}, and \texttt{pre-training}. Following this philosophy, \texttt{X-modaler} is composed of these seven general-purpose stages, and each stage is empowered with functionality that covers a series of commonly adopted modules in state-of-the-art methods. Such a modular design in \texttt{X-modaler} enables a flexible implementation of state-of-the-art algorithms for vision-to-language, and meanwhile allows the easy plug-in of user-defined modules to facilitate the deployment of novel ideas/applications for cross-modal analytics. In summary, we have made the following contributions: \textbf{(I).} To our best knowledge, \texttt{X-modaler} is the first open-source codebase that unifies comprehensive high-quality modules for cross-modal analytics. \textbf{(II).} \texttt{X-modaler} provides easy implementations of state-of-the-art models for image captioning, video captioning, and vision-language pre-training, in a standardized and user-friendly manner. Moreover, \texttt{X-modaler} can be simply extended to support other vision-language tasks, e.g., visual question answering, visual commonsense reasoning, and cross-modal retrieval. \textbf{(III).} In \texttt{X-modaler}, we release all the reference codes, pre-trained models, and tools for each vision-language task, which will offer a fertile ground for deploying cross-modal analytics in industry and designing novel architectures in academia. \begin{figure} \centering \vspace{-0.16in} \includegraphics[width=0.86\linewidth]{uml.pdf} \vspace{-0.22in} \caption{Class diagram of our \texttt{X-modaler} codebase.} \vspace{-0.22in} \label{fig:uml} \end{figure} \vspace{-0.1in} \section{Architecture} In this section, we present the detailed architecture of our \texttt{X-modaler}, consisting of seven major stages, as shown in Figure \ref{fig:architecture}. Figure \ref{fig:uml} depicts the detailed class diagram of our \texttt{X-modaler} codebase. \vspace{-0.1in} \subsection{Pre-processing} The \texttt{pre-processing} stage is utilized to transform the primary inputs of image/video and textual sentence into visual and textual tokens. 
Besides the typical \textbf{tokenizing}, we include numerous modules to represent each input image/video in different ways: (1) directly taking the output \textbf{CNN Representation} of fully-connected layers in 2D/3D CNNs as image/video features; (2) detecting a set of \textbf{regions} as bottom-up signals via Faster R-CNN as in \cite{anderson2017bottom}; (3) recognizing \textbf{objects} from visual content via a pre-trained object classifier \cite{yao2017novel}; (4) extracting high-level semantic \textbf{attributes} through Multiple Instance Learning \cite{yao2017boosting,pan2017video}; (5) exploring the semantic or spatial object \textbf{relations} between every pair of regions \cite{yao2018exploring}. \vspace{-0.06in} \subsection{Encoder} The \texttt{encoder} stage takes the visual/textual tokens as input and produces intermediate states that encode the semantic content. After transforming each visual/textual token via \textbf{visual/textual embedding}, the most typical way to construct the encoder is to directly adopt an \textbf{LSTM} to sequentially encode the sequence of tokens. The module of Graph Convolutional Networks (\textbf{GCN}) \cite{yao2018exploring} can be further utilized to strengthen each region-level encoded feature by exploiting the graph structure among regions. Instead of the sequential modeling in the LSTM module, the \textbf{Convolution} module \cite{aneja2018convolutional,chen2019temporal} fully employs convolutions in the encoder to enable parallelization within a sequence during training. Inspired by the recent success of Transformer-style encoder-decoders in the NLP field \cite{vaswani2017attention}, we include the \textbf{self-attention} module that leverages the self-attention mechanism to enhance each local (region/frame) feature by exploring the intra-modal feature interactions. \vspace{-0.06in} \subsection{Cross-modal Interaction} The \texttt{cross-modal interaction} stage aims to boost vision-language tasks by encouraging more interactions between the two different modalities. \textbf{Attention} module denotes the conventional attention mechanism that dynamically measures the contribution of each local image region \cite{Xu:ICML15} or frame \cite{Yao:ICCV15} based on the hidden state of the decoder. \textbf{Top-down attention} module \cite{anderson2017bottom} exploits visual attention at the object level. \textbf{Co-attention} module \cite{lu2019vilbert} enables bi-directional interaction between visual and textual tokens. \textbf{Meshed memory attention} module \cite{cornia2020meshed} utilizes memory-augmented attention that boosts inter-modal interaction with a priori knowledge. \textbf{X-Linear attention} module \cite{pan2020x} models higher-order interactions with both spatial and channel-wise bilinear attention. \vspace{-0.06in} \subsection{Decoder} The \texttt{decoder} stage targets the decoding of each word at each time step, conditioned on the intermediate states of the inputs induced by the encoder. Similar to the modules of the encoder stage, the decoder can also be constructed in different forms: \textbf{LSTM/GRU} that autoregressively produces each word, \textbf{Convolution} which fully leverages convolutions for decoding the sentence, or \textbf{Transformer} that first exploits the word dependency via the self-attention mechanism and further captures the co-attention across vision \& language via the cross-attention mechanism to facilitate word generation. 
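To make this modular design concrete, the following minimal PyTorch sketch wires a \texttt{pre-processing} output (region features) through an \texttt{encoder}, a \texttt{cross-modal interaction} stage (cross-attention), and a \texttt{decoder} for captioning. All class names, hyper-parameters, and shapes are illustrative assumptions of ours and do not reflect the actual \texttt{X-modaler} API.

\begin{verbatim}
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=2048,
                 d_model=512, nhead=8, num_layers=3):
        super().__init__()
        # pre-processing output (e.g., Faster R-CNN region features)
        # is projected by a visual embedding
        self.visual_embed = nn.Linear(feat_dim, d_model)
        self.word_embed = nn.Embedding(vocab_size, d_model)
        # encoder: self-attention over visual tokens
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers)
        # decoder: masked self-attention over words plus cross-attention
        # to the encoded visual tokens (the cross-modal interaction stage)
        dec = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers)
        self.logit = nn.Linear(d_model, vocab_size)

    def forward(self, regions, words):
        memory = self.encoder(self.visual_embed(regions))
        causal = nn.Transformer.generate_square_subsequent_mask(words.size(1))
        out = self.decoder(self.word_embed(words), memory, tgt_mask=causal)
        return self.logit(out)  # per-step vocabulary logits

# toy forward pass: 2 images x 36 regions, 12-word input sequences
model = CaptionModel()
logits = model(torch.randn(2, 36, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
\end{verbatim}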
\vspace{-0.06in} \subsection{Decode Strategy} The \texttt{decode strategy} stage includes two common modules to generate the final output sentence at inference: (1) \textbf{greedy decoding} that samples the word with the maximum probability at each time step until we select the special end-of-sentence token or reach the maximum length; (2) \textbf{beam search}, i.e., a heuristic search algorithm that maintains a beam of the most likely partial sentences at each decoding time step. \vspace{-0.06in} \subsection{Training Strategy} In the \texttt{training strategy} stage, we assemble a series of practical training strategies adopted in state-of-the-art vision-to-language models: (1) \textbf{cross entropy} module that penalizes the prediction of each word with the cross-entropy loss; (2) \textbf{label smoothing} module \cite{szegedy2016rethinking} further regularizes the classifier for word prediction by estimating the marginalized effect of label-dropout; (3) \textbf{scheduled sampling} module \cite{bengio2015scheduled} capitalizes on a curriculum learning strategy to gently change the input token at each time step from the ground-truth word token to the estimated one from the previous step; (4) \textbf{reinforcement learning} module \cite{rennie2017self} enables the direct optimization of the whole encoder-decoder structure with an expected sentence-level reward loss (a schematic sketch of this training step is given below). \vspace{-0.06in} \subsection{Pre-training} To endow the base encoder-decoder structure with the capabilities of multi-modal reasoning for vision-language pre-training, we involve the \texttt{pre-training} stage that pre-trains the base structure with several vision-language proxy tasks, e.g., \textbf{masked language modeling}, \textbf{masked sentence generation}, and \textbf{visual-sentence matching} as in \cite{zhou2019unified,li2021scheduled}. \begin{figure} \centering \vspace{-0.16in} \includegraphics[width=0.86\linewidth]{task.pdf} \vspace{-0.2in} \caption{Exemplary implementations of cross-modal analytics for numerous vision-language tasks in our \texttt{X-modaler}.} \vspace{-0.2in} \label{fig:task} \end{figure} \section{Applications and Evaluations} This section details the exemplary implementations of each vision-language task (as shown in Figure \ref{fig:task}) in our \texttt{X-modaler}, coupled with the experimental results over several vision-language benchmarks (e.g., COCO \cite{chen2015microsoft}, MSVD \cite{Chen:ACL11}, and MSR-VTT \cite{Xu:CVPR16}) for the image/video captioning task. \noindent\textbf{Image/Video Captioning.} This task aims to auto-regressively generate the natural sentence that depicts the visual content of the input image/video. We re-implement several state-of-the-art image captioning approaches (e.g., $M^2$ Transformer \cite{cornia2020meshed} and X-LAN \cite{pan2020x}) and video captioning methods (e.g., TDConvED \cite{chen2019temporal} and Transformer \cite{sharma2018conceptual}) by allocating different modules in the unified encoder-decoder paradigm (see Figure \ref{fig:task} (a)). Tables \ref{tab:COCO} and \ref{tab:video} show the performance comparison of our re-implemented methods through \texttt{X-modaler} over COCO for image captioning and MSVD \& MSR-VTT for video captioning, which manage to achieve state-of-the-art performances on each benchmark. 
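To make the \texttt{training strategy} stage concrete, below is a schematic sketch of the self-critical training step used for CIDEr score optimization \cite{rennie2017self}. The helpers \texttt{sample}, \texttt{greedy\_decode}, and \texttt{cider\_reward} are hypothetical stand-ins (for a stochastic rollout, the greedy baseline, and a CIDEr scorer, respectively) and are not the actual \texttt{X-modaler} API.

\begin{verbatim}
import torch

def self_critical_loss(model, regions, refs):
    # stochastic rollout: sampled caption ids and per-step log-probs
    sample_ids, log_probs = model.sample(regions)
    with torch.no_grad():
        baseline_ids = model.greedy_decode(regions)   # greedy baseline
    # advantage = sampled CIDEr reward minus greedy-baseline reward
    advantage = (cider_reward(sample_ids, refs)
                 - cider_reward(baseline_ids, refs))
    # REINFORCE with baseline: raise log-probs of above-baseline samples
    return -(advantage.detach() * log_probs.sum(dim=1)).mean()
\end{verbatim}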
\begin{table}[t]\small \centering \vspace{-0.13in} \setlength\tabcolsep{1.2pt} \caption{\small Performance comparison on COCO for image captioning.} \vspace{-0.16in} \begin{tabular}{l | c c c c c | c c c c c} \Xhline{2\arrayrulewidth} & \multicolumn{5}{c|}{\textbf{Cross-Entropy Loss}} & \multicolumn{5}{c}{\textbf{CIDEr Score Optimization}} \\ & B@4 & M & R & C & S & B@4 & M & R & C & S \\ \hline \hline Attention \cite{rennie2017self} & 36.1 & 27.6 & 56.6 & 113.0 & 20.4 & 37.1 & 27.9 & 57.6 & 123.1 & 21.3 \\ Up-Down \cite{anderson2017bottom} & 36.0 & 27.6 & 56.6 & 113.1 & 20.7 & 37.7 & 28.0 & 58.0 & 124.7 & 21.5 \\ Transformer \cite{sharma2018conceptual} & 35.8 & 28.2 & 56.7 & 116.6 & 21.3 & 39.2 & 29.1 & 58.7 & 130.0 & 23.0 \\ $M^2$ Transformer \cite{cornia2020meshed} & 35.7 & 27.8 & 56.3 & 114.5 & 20.7 & 38.8 & 28.8 & 58.0 & 130.0 & 22.3 \\ X-LAN \cite{pan2020x} & 37.5 & 28.6 & 57.6 & 120.7 & 21.9 & 39.2 & 29.4 & 59.0 & 131.0 & 23.2 \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.22in} \label{tab:COCO} \end{table} \noindent\textbf{Vision-language Pre-training (VLP).} VLP is to pre-train a unified encoder-decoder structure over large-scale vision-language benchmarks, which can be easily adapted to vision-language downstream tasks. The state-of-the-art VLP models (e.g., TDEN \cite{li2021scheduled} and ViLBERT \cite{lu2019vilbert}) can be implemented as a two-stream Transformer structure (see Figure \ref{fig:task} (b)): object and sentence encoders first separately learn the representations of each modality, and the cross-modal interaction module further performs multi-modal reasoning, followed by the decoder for sentence generation. \noindent\textbf{Visual Question Answering (VQA).} In VQA, the model predicts an answer to the given natural language question with regard to an image. Here we show the implementation of a base model for this task in Figure \ref{fig:task} (c). This base model first encodes the input question and image via object and sentence encoders, and further utilizes the cross-modal interaction module (e.g., the attention mechanism in \citep{yu2019deep}) to achieve the holistic image-question representation. Finally, a single-layer MLP is leveraged as a classifier to predict the answer based on the holistic image-question representation. \noindent\textbf{Cross-modal Retrieval.} It aims to search an image/caption from a pool given its caption/image. It is natural to formulate this task as a ranking problem that sorts images/captions according to the learnt image-sentence matching scores. Here the image-sentence matching score can be directly measured as the dot product between the encoded features of image and caption (see Figure \ref{fig:task} (d)). \noindent\textbf{Visual Commonsense Reasoning (VCR).} VCR tackles two problems: visual question answering and answer justification, which require the model to predict an answer or judge the correctness of the chosen rationale, respectively. Each problem is framed as a multiple-choice task. Similar to VQA, we can measure the holistic image-sentence feature via the cross-modal interaction module based on the multi-modal outputs of object and sentence encoders, which will be further leveraged to predict the score for each possible response via a classifier (see Figure \ref{fig:task} (e)). \section{Conclusions} We presented \texttt{X-modaler}, a versatile and high-performance codebase for cross-modal analytics. 
This codebase unifies comprehensive high-quality modules in state-of-the-art vision-language techniques, which are organized in a standardized and user-friendly fashion. Through an extensive set of experiments on several vision-language benchmarks (e.g., COCO, MSVD, and MSR-VTT), we demonstrate that our \texttt{X-modaler} provides state-of-the-art solutions for the image/video captioning task. For ease of use, all the reference codes, pre-trained models, and tools for each vision-language task are published on GitHub. \begin{table}[t]\small \centering \vspace{-0.13in} \setlength\tabcolsep{4.1pt} \caption{\small Performance comparison for video captioning.} \vspace{-0.16in} \begin{tabular}{l|cccc|cccc} \Xhline{2\arrayrulewidth} \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{4}{c|}{MSVD} & \multicolumn{4}{c}{MSR-VTT} \\ \multicolumn{1}{c|}{} & B@4 & M & R & C & B@4 & M & R & C \\ \hline \hline MP-LSTM \cite{Venugopalan14} & 48.1 & 32.4 & 68.1 & 73.1 & 38.6 & 26.0 & 58.3 & 41.1 \\ TA \cite{Yao:ICCV15} & 51.0 & 33.5 & 70.0 & 77.2 & 39.9 & 26.4 & 59.4 & 42.9 \\ Transformer \cite{sharma2018conceptual} & 49.4 & 33.3 & 68.7 & 80.3 & 39.2 & 26.5 & 58.7 & 44.0 \\ TDConvED \cite{chen2019temporal} & 51.7 & 34.1 & 70.4 & 77.8 & 38.9 & 26.3 & 59.0 & 40.7 \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.26in} \label{tab:video} \end{table}
\section{Introduction} The dielectronic satellite lines are commonly seen in the spectra of highly ionized ions in high-temperature sources, such as active and flaring regions of the solar corona or fusion devices (Kato {\it et\thinspace al}\ 1998, Bely-Dubau {\it et\thinspace al}\ 1982). They have been shown to be of particular relevance as diagnostics of both steady-state and transient stages of a plasma source (Oelgoetz and Pradhan 2001, 2004). The physical process entails the collision of an electron with an N-electron ion core, forming an (N+1)-electron quasi-bound, doubly-excited resonant state, which may decay radiatively or via autoionization (AI). Radiative decay leads to recombination of the free electron into a stabilised bound state of the (N+1)-electron ion --- the process known as dielectronic recombination (DR). For highly charged ions of relatively heavy elements, the large radiative decay rates in the parent ion core compete effectively with the autoionization rates for the {(e~+~ion)}\ system. Schematically, this branching process may be written as \begin{equation} e + X^{++} \leftrightarrow (X^+)^{**} \leftrightarrow \left\{ \begin{array}{c} \ e + X^{++}~(AI) \\ \ h\nu + X^+~(DR)\;. \end{array} \right.\label{3stepproc} \end{equation} The doubly-excited, autoionizing state (denoted by a double asterisk) appears as a resonance that breaks up either via autoionization, a radiation-less transition back to the N-electron ion and a free electron carrying away the excess energy into the continuum, or via radiative stabilization to a recombined (N+1)-electron bound state. In the latter case the excited ion core decays radiatively via a strong dipole transition, usually into a low-lying state, and the incident electron is thereby captured into an (N+1)-electron state; the resulting photon emission manifests itself as a `dielectronic satellite' (DES) line. The radiative transition within the N-electron ion core is called a {\it principal} transition. The DES lines are somewhat lower in energy than, and at longer wavelengths or `redward' of, the principal line. There is an infinite set of these DES lines corresponding to the Rydberg series of autoionizing levels as $n \longrightarrow \infty$, converging onto the threshold for the principal core transition. Only the lowest $n$-complexes are usually observed and resolved, however, as they have larger electron-capture rates (the inverse of AI rates) and are sufficiently removed energetically from the principal transition. The higher-$n$ DES lines blend with the principal line. The strongest DES lines stem from the $n$=2 complex in He-like ions, as `satellites' to the principal transition $1s2p$~$^1P^o_1$~$\rightarrow$~$1s^2$~$^1S_0$, designated as the line $w$ (also referred to as the `resonance transition'). The $w$-line wavelengths for Fe~XXV and Ni~XXVII are 1.8504 and 1.5884~\AA\ respectively. The recombining electron forms a three-electron Li-like system in a quasi-bound state that may decay radiatively. The doubly-excited autoionizing states corresponding to the $n = 2$ complex (1s2$l$2$l'$) are designated as KLL resonances. Radiative decay of the KLL resonances to Li-like bound states ($1s^22s$~$^2S_{1/2}$, $1s^22p$~$^2P^o_{1/2}$, and $1s^22p$~$^2P^o_{3/2}$) gives rise to 22 DES lines. The KLL DES lines from several elements are seen in high-temperature plasma sources above 1 MK (Kato {\it et\thinspace al}\ 1998, Beiersdorfer {\it et\thinspace al}\ 1992). 
In order for the DES lines to form, the plasma temperature must be sufficiently high such that energetic electrons are available to recombine with the target ion via these quasi-bound states. A DES line arises from electrons in a very narrow energy range corresponding to a particular AI level and its width. But the principal line can be excited by any electron with an energy greater than the threshold energy of excitation of the core transition. In other words, the DES line is basically insensitive to the overall free-electron distribution, generally a Maxwellian, whereas the excitation of the principal line depends largely on the electron temperature. Therefore, the ratio of a DES line to the principal line in the core (the aforementioned $w$-line in the case of KLL resonances) is very sensitive to the plasma temperature. It follows that DES lines weaken much faster than the principal line near and beyond the temperature of maximum abundance (assuming coronal equilibrium). The formation of dielectronic satellites may also be viewed as a subset of the electron-ion recombination process. As Eq. (1) implies, by taking detailed balance into account, {(e~+~ion)}\ recombination is the inverse of photoionization. Recombination is usually divided into multiple components: radiative recombination (RR), which refers to non-resonant or background photo-recombination, dielectronic recombination (DR) via autoionizing resonances, and, if appropriate to the plasma, the stimulated analogues of these processes. Previous works (e.g. Nahar and Pradhan 1994, 2004) have developed a method that unifies both the RR and DR processes, accounting for quantum mechanical interference between the two processes. Nahar and Pradhan (2006) further extended the unified method for total electron-ion recombination to dielectronic satellite lines and showed that the profiles and intensities of the unified recombination cross sections ($\sigma_{RC}$) directly correspond to, and compare well with, the measured strengths and observed recombination spectra. We refer to the unified {(e~+~ion)}\ recombination cross section as $\sigma_{RC}$, which is the detailed balance inverse of photoionization from all bound states of the {(e~+~ion)}\ system and subsumes both the RR and DR processes. The unified {(e~+~ion)}\ recombination method yields a representation of dielectronic satellite spectra that includes the interference between RR and DR in an {\it ab initio} manner. This is in contrast to existing isolated resonance approximation (IRA) treatments (such as Bely-Dubau {\it et\thinspace al}\ 1982) that do not include this coupling. In this report we extend and complete the new approach for DES lines described in Nahar and Pradhan (2006), and demonstrate its use with two important practical examples. We also compute the resonance strengths for KLL DES lines of (e+Fe~XXV) and (e+Ni~XXVII) to higher precision than in the earlier work. They are presented in two different ways: the theoretical recombination resonance strength, and the DES line strength for comparison with experimentally measured values. The theoretical resonance strength is computed from the unified recombination cross sections without approximation and exhibits the full spread of the resonance profile with energy. But the experimentally measured value is obtained at a single (peak) energy; therefore, the DES line strength is recomputed at that energy. 
In general, this uncertainty in the measured values is not too critical to accuracy for DES resonances where the peak is sharply defined. Finally, the rates for all DES lines are tabulated and should be more accurate than previous works for spectral analysis, potentially applicable to the analysis of X-ray observations from a variety of plasma sources. \section{Theory} As mentioned above, the unified recombination approach for the strengths and rates of dielectronic satellite lines is fundamentally different from existing approaches based on the IRA. In the IRA a DES line is formed when a target ion captures a free electron into an autoionizing state, and this autoionizing state undergoes radiative stabilization via decay to a bound (e~+~ion) state to complete the dielectronic recombination process. The emitted photon is treated just as if it were a bound-bound emission line; thus it is calculated using a standard line profile, centered about a single energy. The unified method, on the other hand, yields the entire line profile in an {\it ab initio} manner by calculating recombination cross sections from photoionization cross sections that account for the coupling between all of the quasi-bound states and the continuum, and the interference effects between the two, as summarized below. \subsection{The unified method for total recombination cross sections} The basic outline of the calculation starts with the bound N-electron wave functions for a number of states of the target ion ($\chi_i(ion)$). In the coupled channel R-matrix methodology, these N-electron wave functions are then used to calculate wave functions for the (N+1)-electron system ($\Psi(ion+e;E,J,\pi)$, which are defined by an energy $E$, angular momentum $J$, and parity $\pi$) by using the close coupling expansion \begin{equation} \Psi(ion+e;E,J,\pi) = \hat{A}\left( \sum_{i} \chi_{i}(ion)\theta_{i} + \sum_{j} c_{j} \Phi_{j}(ion+e)\right)\;,\label{ccexp} \end{equation} where $\chi_{i}$ is the target wave function in a specific level $J_i\pi_i$ and $\theta_{i}$ is the wave function for the ($N$+1)-th electron in a channel labeled as $S_iL_i(J_i)\pi_i\,k_{i}^{2}\ell_i\,(J\pi)$, where $k_{i}^{2}$ is its incident kinetic energy. A channel is open or closed depending on whether the energy of the electron is positive or negative. The $\Phi_j$'s are correlation functions of the ($N$+1)-electron system that are built from the one-electron wave functions of the N-electron system and account for short range correlations and orthogonality between continuum and bound orbitals. It should be noted that in this treatment, autoionizing states are not discrete states, but instead are embedded in the continuum and result from the coupling of open and closed channels. As a consequence, the terms autoionizing or quasi-bound states appear naturally together with the continuum and are treated on the same footing in the unified framework; thus we will use these terms only when referring to the IRA calculations. As the coupled channel expansion (\ref{ccexp}) allows for interaction between the autoionizing states and the continuum, it accounts for quantum mechanical interference between them. Resonances are a consequence of this coupling. The unified method thereby includes both recombination processes, via the continuum (RR), and via autoionizing resonances (DR). In addition, it should also be noted that the same expansion (\ref{ccexp}) is used for both the bound states ($E<0$) and the continuum wave functions ($E>0$). 
The photoionization cross section can be obtained as \begin{eqnarray} \sigma_{PI}(B,E) = \sum_{(J,\pi)} \sigma_{PI}(B\rightarrow J,\pi,E)\;, \label{levspecific}\\ \sigma_{PI}(B\rightarrow J,\pi,E) = {1\over g_i}{4\pi^2\over 3c}\omega{\bf S}(J,\pi,E)\;,\label{sigPI}\\ {\bf S}=\left|\left<\Psi_B \left|\left| {\bf D} \right|\right| \Psi_F(J,\pi,E) \right>\right|^2\;,\label{smat} \end{eqnarray} where $g_i$ is the statistical weight factor of the initial bound state, $\Psi_B$ and $\Psi_{F}$ respectively are the bound and free electron wave functions calculated using the expansion presented in (\ref{ccexp}), and ${\bf D}$ is the dipole operator (${\bf D}= \sum\limits_{i=1}^{N+1}{\bf r}_i$). The radiative decay rates of highly charged H- and He-like recombining ions are often comparable to the Auger rates of autoionizing states, typically $10^{12}$--$10^{14}$~sec$^{-1}$ (e.g. Nahar {\it et\thinspace al}\ 2000), with strong dipole allowed $2p$~($^2P^o$)~$\rightarrow$~$1s$~($^2S$) and $1s2p$~($^1P^o_1$)~$\rightarrow$~$1s^2$~($^1S_0$) transitions. Because of this, it is essential to use a radiative damping treatment that will account for the probability of the absorbed photon being re-emitted via the radiative decay of the final {(e~+~ion)}\ state. These calculations include radiative damping by using the resonance fitting procedure outlined in Sakimoto {\it et\thinspace al}\ (1990), Pradhan and Zhang (1997), and Zhang {\it et\thinspace al}\ (1999). The photo-recombination cross section, $\sigma_{\rm RC}$, is related to the photoionization cross section, $\sigma_{\rm PI}$, through the principle of detailed balance as \begin{equation} \sigma_{\rm RC}(B,\epsilon) = {\alpha^2 \over 4} {g_i\over g_j}{(\epsilon + I)^2\over \epsilon} \sigma_{\rm PI}(B,\epsilon+I)\;,\label{sigRC} \end{equation} \noindent in Rydberg units. Here, $\alpha$ is the fine structure constant, $\epsilon$ is the photoelectron energy, $g_j$ is the statistical weight of the recombined ion, and $I$ is the ionization potential ($E=\epsilon+I$). The recombination cross section, $\sigma_{\rm RC}$, is calculated at a sufficiently large number of energies to delineate the non-resonant background and the resonances (which, when taken together, subsume both radiative and dielectronic recombination processes). It should be noted that (\ref{sigRC}) is valid for both $\sigma_{PI}(B,E)$ and $\sigma_{PI}(B\rightarrow J,\pi,E)$. We assume that the recombining ion is in the ground state and that recombination can take place into the ground or any of the excited recombined (e~+~ion) states. Thus we can calculate a total unified $\sigma_{\rm RC}$ by summing over the final recombined states. It should be noted that a single resonance in the total $\sigma_{\rm RC}$ might appear in multiple level specific $\sigma_{\rm RC}$'s. This is because one free wave function (or autoionizing state in the IRA framework) can often recombine into multiple bound states. (The selection rules for this recombination are the same as for the dipole operator, namely: $\Delta\pi=\pm 1$, $\Delta J=\pm1,0$ except $\Delta J\ne0$ for $J=0$.) As these final states have different energies, different DES emission lines are formed from each pathway. There is no interference between lines arising from the multiple recombination pathways, although the lines do blend in the total recombination cross section because the total cross section is simply a sum of the level specific cross sections. 
This should not be confused with the interference between RR and DR, or with the interference between different resonances within the same symmetry ($J\pi$); these two interference effects are included as a consequence of the close coupling expansion employed in the unified method. \subsection{Resonance strengths, rate coefficients, and intensities of DES lines} The resonances in unified recombination cross sections ($\sigma_{RC}$) are observed in the emission spectra as DES lines. As explained in Nahar and Pradhan (2006), integration or averaging over the resonance profiles provides (a) the recombination resonance strengths, which are {\it intrinsic} quantities independent of external plasma conditions, and (b) the DES intensities, which depend mainly on the electron temperature. Following the notation of Nahar and Pradhan (2006), the recombination rate through a satellite line, for an electron temperature $T$, ($\alpha_s(T)$) can be expressed as the product of a temperature dependent term ($f(T)$) and a term that is intrinsic to the satellite line ($S_{\rm RC}(s)$). Nahar and Pradhan (2006) define these as \begin{eqnarray} \alpha_s(T) = f(T)S_{\rm RC}(s)\;,\label{NP2006a}\\ f(T)={4\over \sqrt{2\pi m_e}}{e^{-{\epsilon_s\over kT}}\over{(kT)^{3/2}}}\;,\label{NP2006b}\\ S_{\rm RC}(s)=\int_{E_i}^{E_f}{\epsilon~\sigma_{RC}(\epsilon) d\epsilon}\;,\label{NP2006c} \end{eqnarray} where $E_i$ and $E_f$ are the lower and upper bounds for the line $s$, $m_e$ is the mass of the electron, $k$ is the Boltzmann constant, and $\epsilon_s$ is the resonance's peak energy. The temperature independent part, $S_{\rm RC}$, is referred to as the {\it recombination resonance strength}. As the lines are narrow, and the resonance profiles are fairly symmetric, the computed DES resonance strengths can be compared directly with those derived from measurements of the {\it satellite line strength} ($S(s)$), which is related to the recombination resonance strength ($S_{\rm RC}(s)$) approximately as \begin{equation} S(s) \approx S_{\rm RC}(s)/\epsilon_s\;.\label{S} \end{equation} A more convenient expression relating the basic quantities above is \begin{equation} \alpha_s(T)=S(s)\epsilon_sf(T)=0.015484~{\epsilon_s e^{-{\epsilon_s\over kT}}\over {T}^{3/2}} S(s)\;,\label{convenient} \end{equation} which is valid for any satellite line with a narrow energy width. The units employed are: $\epsilon$ in Rydbergs (Ry) and $\sigma_{RC}$ in Megabarns (Mb). The DES strength $S_{\rm RC}(s)$ can be expressed in CGS units via the following conversion: Ry$^2$Mb$=4.75109\times 10^{-40}$ erg$^2$~cm$^2$. One disadvantage inherent in the recombination cross section $\sigma_{\rm RC}$ is that it may not be used over the entire energy range, since it diverges at zero photoelectron energy. Therefore, we also compute the recombination collision strength $\Omega_{RC}$, which shows no such divergence, since \begin{equation} \sigma_{RC} (Mb) = {\pi\over g_ik_i^2} \Omega_{RC} (a_0^2/10^{-18})\;,\label{omega} \end{equation} where $k_i^2$ is the energy of the incident electron (equivalent to the photoelectron energy $\epsilon$). When calculating DES rate coefficients, $\alpha_s(T)$, we employ both $\sigma_{RC}$ and $\Omega_{RC}$ as a numerical consistency check. Intensities of the individual DES lines can be obtained as \begin{equation} I_s(i\rightarrow j,T)=\alpha_s(T) n_i n_e\;,\label{intensity} \end{equation} where $n_i$ is the density of the target ion and $n_e$ is the electron density. 
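As a practical illustration (our sketch, not one of the production codes described in section 3), the short Python snippet below evaluates $\alpha_s(T)$ for a single satellite line directly from its tabulated strength and peak energy, using Eq.~(\ref{convenient}) rewritten in CGS units; the sample values are those of the $j$ satellite of Fe~XXV from table 1 below.

\begin{verbatim}
import numpy as np

K_BOLTZ = 1.3807e-16   # Boltzmann constant (erg/K)
EV = 1.6022e-12        # erg per eV
PREFAC = 3.2590e37     # 4/(sqrt(2*pi*m_e)*k^{3/2}) in CGS units

def alpha_s(T, S_eV, eps_eV):
    """T in K, S(s) in cm^2 eV, eps_s in eV; returns alpha_s in cm^3/s."""
    S_rc = S_eV * eps_eV * EV**2      # S_RC = S(s)*eps_s, in cm^2 erg^2
    x = eps_eV * EV / (K_BOLTZ * T)   # eps_s/kT, dimensionless
    return PREFAC * S_rc * np.exp(-x) / T**1.5

# j satellite of Fe XXV: S(s) = 26.82e-20 cm^2 eV at eps_s = 4672.05 eV
print(alpha_s(2.0e7, 26.82e-20, 4672.05))  # ~7.8e-14 cm^3/s at 2x10^7 K
\end{verbatim}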
However, it is common, and often more useful when comparing to experiment, to consider the intensity ratio of the satellite line to the dipole core excitation line. In the case of KLL lines, this is the $w$-line due to the transition $1s2p$~$^1P^o_1$~$\rightarrow$~$1s^2$~$^1S_0$. The intensity ratio of a KLL satellite line to the $w$-line is obtained as \begin{equation} \frac{I_s}{I_w}=\frac{\alpha_s}{q_w}\;,\label{Iratio} \end{equation} where $q_w$ is the rate coefficient for collisional excitation from the ground state into the $1s2p$~($^1P^o_1$) state, which gives rise to the $w$ line. \subsection{Autoionization rates from unified satellite strengths} Most of the modeling codes are based on the IRA. They calculate dielectronic capture and recombination rates from autoionization rates $A_a$ and radiative rates $A_r$. Hence, it is desirable to establish a correspondence between the present and the earlier approaches by formulating the autoionization rates ($A_a$) from unified resonance strengths. This will facilitate the use of the present data in modeling codes that are based on the IRA framework. Dielectronic recombination, in the IRA, is a multi-step process, the first step of which is to calculate a dielectronic capture rate ($D$) which is related to the autoionization rate by \begin{equation} D_{m \rightarrow i} = \frac{g_i}{g_m}\frac{h^3e^{\frac{-\epsilon_s}{kT}}}{2(2\pi m_ekT)^{ \frac{3}{2}}}A_{a}(i \rightarrow m)\;, \label{eqnDC} \end{equation} where $m$ is the state of the target ion, and $i$ refers to the autoionizing state. It should be noted that this result assumes that dielectronic capture is the detailed balance inverse of autoionization, but does not assume a specific line (resonance) shape, other than that the shape is centered about $\epsilon_s$. The total DR rate coefficient for the satellite line is then simply the product of the dielectronic capture rate and the branching ratio, \begin{equation} \alpha^{DR}_s(T)=D_{m \rightarrow i}\frac{A_r(i\rightarrow j)} {\sum\limits_lA_r(i \rightarrow l)+\sum\limits_kA_a(i\rightarrow k)}\;, \label{eqndefira-s} \end{equation} where an ion in state $m$ captures an electron to form the autoionizing state $i$, which then completes the DR process by radiatively stabilizing (decaying) to state $j$. The $A_r$'s are radiative decay rates for this last step. The present approach equates this two-step process to a resonance in the level specific recombination cross section ($\sigma_{\rm RC}$). The resonances in $\sigma_{\rm RC}$ correspond to complete dielectronic satellite line profiles. Substituting (\ref{eqnDC}) into (\ref{eqndefira-s}) and approximating $\alpha^{DR}_s(T)$ by $\alpha_s(T)$, one gets \begin{equation} 4S_{\rm RC}=\frac{g_i}{g_m}\frac{h^3}{4\pi m_e}A_a(i \rightarrow m)\frac{A_r(i \rightarrow j)}{\sum\limits_lA_r(i \rightarrow l)+\sum\limits_kA_a(i \rightarrow k)}\;, \label{eqnsub} \end{equation} which gives the autoionization rate $A_a(i\rightarrow m)$, \begin{equation} \fl A_a(i \rightarrow m) = \frac{S_{\rm RC}}{\frac{g_i}{g_m}\frac{h^3}{16 \pi m_e}A_r(i \rightarrow j) - S_{\rm RC}} \times \left( \sum\limits_lA_r(i \rightarrow l) + \sum\limits_{k\ne m} A_a(i \rightarrow k)\right)\;.\label{bigAasat} \end{equation} The above equation shows that a given autoionization rate depends on the degeneracy factors, some constants, $S_{\rm RC}(s)$, the radiative decay rates out of the state, and the other autoionization rates. 
Hence, if the radiative decay rates are available from another source, $A_a$ for other continuum states can be obtained by solving the set of coupled linear equations that arise from (\ref{bigAasat}), provided the $S_{\rm RC}$ values for the resonances are known. In the case of KLL lines, however, $\sum\limits_{k\ne m} A_a(i \rightarrow k) = 0$, and the expression simplifies to \begin{equation} A_a(i \rightarrow m)=\frac{S_{\rm RC}}{\frac{g_i}{g_m}\frac{h^3}{16\pi m_e}A_r(i \rightarrow j) - S_{\rm RC}} \sum\limits_lA_r(i \rightarrow l)\;.\label{Aasimple} \end{equation} It should also be noted that this approximation assumes that the RR background in the unified recombination cross section ($\sigma_{\rm RC}$) has a negligible contribution to the value of $S_{\rm RC}(s)$ for each and every $s$. In general, this is a reasonable approximation for the systems under consideration in this work. \subsection{Identification and resonance strengths of DES lines} Proper and complete spectroscopic identification of the DES lines with respect to the energy positions is not straightforward from the unified recombination cross sections. The Breit-Pauli R-matrix (BPRM) calculations do not identify resonances spectroscopically a priori, as the resonances are formed through coupling or interference of open and closed channels. They can be identified easily from the energy positions of the resonances, which are known for ions such as Fe~XXV from existing experimental values. They can also be determined theoretically from relevant spectroscopic transitions in atomic structure calculations. In our approach, we sort out the DES lines from the level-specific recombination cross sections $\sigma_{RC}(nSLJ)$ by matching the resonances in the cross sections with both the resonances in the total cross section (see Nahar and Pradhan 2006) and the values given in Gabriel (1972). Again it should be noted that a single resonance in the total recombination cross section may be the sum of individual resonances that appear at the same energy in multiple level specific cross sections. This is because the same free wave function (or autoionizing state in the IRA framework) can recombine into multiple final states. These resonances, while they do not quantum mechanically interfere, blend together in the total cross section. In such cases $S_{\rm RC}(s)$ must be calculated from the level specific cross sections. The fractional contribution of each DES line to the resonance in the total cross section can be determined by examining the contributions of all the constituent lines. The fractions $x_s$ are determined from the ratio of the line strengths, $S_{\rm RC}(s)$, calculated from the level specific recombination cross sections, to the summed strengths $\sum\limits_jS_{\rm RC}(j)$, that is, $x_s=S_{\rm RC}(s)/ \sum\limits_jS_{\rm RC}(j)$. \section{Computation} Computation for the present approach involved extending existing codes and developing a new code to compute satellite resonance strengths using unified recombination cross sections. The recombination rates of all satellite lines were obtained in four separate computations; the purpose of the redundancy is to test for numerical problems and consistency. Calculations for photoionization and electron-ion recombination are done in several stages, starting with atomic structure calculations using the code SUPERSTRUCTURE (Eissner {\it et\thinspace al}\ 1974) for the target ion wave function. 
The one-electron orbitals of the target are the initial input to the BPRM suites of codes (Berrington {\it et\thinspace al}\ 1987, 1995), which were developed under the Iron Project (IP; see Hummer {\it et\thinspace al}\ 1993). The total wave function expansion for the resonance states of Li-like Fe~XXIV consists of a 13-level expansion of the target He-like ion, Fe~XXV (Nahar {\it et\thinspace al}\ 2001), and a 17-level expansion for Ni~XXVII (Nahar 2005). Radiation damping of the resonances is included as described in Zhang {\it et\thinspace al}\ (1999). The present computations for $\sigma_{\rm PI}$ are confined only to photoelectron energy regions of the first resonance complex, which is the KLL complex, and only to partial photoionization cross sections for leaving the core ion in the ground $1s^2$~($^1S_0$) level. The level specific recombination cross sections ($\sigma_{\rm RC}(i)$), collision strengths ($\Omega_{\rm RC}(E,i)$), and rates ($\alpha_R(E,i)$) are computed for all coupled symmetries and levels, and summed to obtain the total using the program PBPRRC (Nahar {\it et\thinspace al}\ 2000). The computation for $\sigma_{\rm RC}(i)$ has been carried out in specified energy ranges ($\epsilon_i-\epsilon_f$) corresponding to each satellite line $s$ using the extended program RECXS. The total recombination rate coefficients, $\alpha_R$, are also obtained from the total recombination collision strength, $\Omega_{\rm RC}$. The difference between the two numerical values is a few percent. A new code, SATLN, was written to process the unified $\sigma_{\rm RC}(i)$ and obtain various quantities, such as the location and identification of the satellite lines, the dielectronic satellite strengths $S_{\rm RC}(s)$ and $S(s)$, and the recombination rate coefficients of the individual and integrated satellite strengths, as well as to carry out internal consistency checks. The program also determines the individual satellite line contributions to blended resonances in the recombination spectra. We perform several checks on the numerical accuracy of the calculations, and also ascertain whether background contributions to DES intensities are negligible. The recombination rate coefficients were obtained in four different ways. The first set of rates was obtained by summing the contributions of the level specific recombination rate coefficients for each of the three final J$\pi$ symmetries responsible for the DES spectra. The second set of rates was obtained from integration of the total recombination collision strengths calculated directly from photoionization cross sections, including background cross sections for recombination into high-n states (comprising the RR contribution). The third set of results is obtained via integration of the total $\sigma_{\rm RC}$ over the entire energy range of the KLL complex, multiplied by the Maxwellian temperature factor $f(T)$ (see (\ref{NP2006b})). The fourth set of calculations is to compute the DES rates individually. The rate coefficients from all four separate calculations are in good agreement with each other, providing a numerical consistency check as well as validation that the background contributions due to RR are not significant. \section{Results and Discussions} Results for Fe~XXV and Ni~XXVII from the new approach of using the unified method for positions, strengths, recombination rate coefficients, and intensities of the DES lines are discussed in the subsections below. 
\subsection{Recombination Spectrum of DES lines: Profiles and Energies}
The unified method for total electron-ion recombination can generate the recombination spectrum of dielectronic satellite lines, including the background, as shown in figure \ref{rc-fe} for Fe~XXV and figure \ref{rc-ni} for Ni~XXVII.
\begin{figure}
\begin{center}
\resizebox{120mm}{!}{\includegraphics{tsatln.fe24.kll.eps}}
\caption{\label{rc-fe}Theoretical profiles of the 22 dielectronic satellite lines of the K${\alpha}$ complex seen in unified recombination cross sections of Fe~XXV. The interference effects in the {\it ab initio} results manifest themselves in the natural overlap of line profiles as described in the text.}
\end{center}
\end{figure}
The autoionizing resonances in the KLL complex of a He-like ion that appear in the total unified $\sigma_{\rm RC}$ are the DES lines and provide the relevant physical quantities of the satellite lines. As mentioned previously, the resonances are formed via a process that is analogous to the radiative decay of the autoionizing states arising from the set of configurations 1s2$l$2$l'$ into the three n=2 levels: 1s$^2$2s~$^2S_{1/2}$, 1s$^2$2p~$^2P^o_{1/2}$, and 1s$^2$2p~$^2P^o_{3/2}$. The 22 KLL resonances or satellite lines for the two ions are listed in tables 1 and 2.
\begin{figure}
\begin{center}
\resizebox{120mm}{!}{\includegraphics{tsatln.ni26.kll.ps}}
\caption{\label{rc-ni}Theoretical profiles of the 22 dielectronic satellite lines of the K${\alpha}$ complex seen in unified recombination cross sections of Ni~XXVII, as in figure 1.}
\end{center}
\end{figure}
\begin{table}
\fl
\caption{\label{arttype} The 22 dielectronic satellite lines of the KLL complex in recombination of (e~+~Fe~XXV): the alphabetic notation (following Gabriel 1972), corresponding resonant transition, experimentally measured energy position E$_{ex}$, computed energy position in the unified method E$_P$, satellite strengths S(s) in $10^{-20}{\rm cm}^2$eV, recombination resonance strength $S_{RC}$ in $10^{-40}{\rm cm}^2$erg$^2$, and fractional contribution x$_s$ to each resolved DES line.}
\footnotesize\rm
\begin{tabular}{rllllllr}
\br
\multicolumn{2}{r}{Index} & \multicolumn{1}{c}{Transition} & \multicolumn{1}{c}{E$_{ex}$} & \multicolumn{1}{c}{E$_P$} & \multicolumn{1}{c}{S(s)} & \multicolumn{1}{c}{S$_{RC}$} & \multicolumn{1}{c}{x$_s$}\\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{eV} & \multicolumn{1}{c}{eV} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}\\
\mr
1& o& $1s2s^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4553.4& 4561.53& 0.8832& 1.034& 0.50\\
2& p& $1s2s^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4553.4& 4561.54& 0.8820& 1.033& 0.50\\
3& v& $1s2p\;^3P^o\;2s(^4P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4573.9& 4572.19& 0.05994& 0.07033& 1.00\\
4& u& $1s2p\;^3P^o\;2s(^4P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4578.9& 4577.52& 0.1628& 0.1913& 1.00\\
5& r& $1s2p\;^1P^o\;2s(^2P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4615.1& 4612.63& 3.798& 4.495& 1.00\\
6& q& $1s2p\;^1P^o\;2s(^2P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4625.3& 4622.54& 0.08194& 0.09721& 1.00\\
7& i& $1s2p^2(^4P_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4624.6& 4629.91& 0.07853& 0.09331& 0.93\\
8& h& $1s2p^2(^4P_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4624.6& 4629.92& 0.00554& 0.00659& 0.07\\
9& t& $1s2p\;^3P^o\;2s(^2P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4632.9& 4635.60& 5.517& 6.564& 1.00\\
10& g& $1s2p^2(^4P_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ &
4639.4& 4638.77& 0.01199& 0.01427& 0.03\\
11& f& $1s2p^2(^4P_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4632.9& 4638.77& 0.3721& 0.4430& 0.97\\
12& s& $1s2p\;^3P^o\;2s(^2P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 4642.5& 4639.79& 1.291& 1.538& 1.00\\
13& e& $1s2p^2(^4P_{5/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4639.0& 4646.57& 4.849& 5.783& 1.00\\
14& k& $1s2p^2(^2D_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4658.1& 4664.43& 17.43& 20.85& 0.92\\
15& l& $1s2p^2(^2D_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4658.1& 4664.43& 1.445& 1.729& 0.08\\
16& d& $1s2p^2(^2P_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4658.6& 4667.21& 0.7809& 0.9354& 0.82\\
17& c& $1s2p^2(^2P_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4658.6& 4666.78& 0.1703& 0.2040& 0.18\\
18& j& $1s2p^2(^2D_{5/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4664.1& 4672.05& 26.82& 32.18& 1.00\\
19& a& $1s2p^2(^2P_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4677.0& 4685.28& 6.118& 7.359& 0.97\\
20& b& $1s2p^2(^2P_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4677.0& 4685.30& 0.2116& 0.2546& 0.03\\
21& m& $1s2p^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 4697.7& 4705.47& 2.753& 3.326& 0.95\\
22& n& $1s2p^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 4697.7& 4704.73& 0.1301& 0.1571& 0.05\\
\br
\end{tabular}
\end{table}
\begin{table}
\caption{The 22 dielectronic satellite lines of the KLL complex in recombination of (e~+~Ni~XXVII): the alphabetic notation (following Gabriel 1972), corresponding resonant transition, computed energy in the unified method E$_P$, satellite strengths S(s) in $10^{-20}{\rm cm}^2$eV, recombination resonance strength $S_{\rm RC}$ in $10^{-40}{\rm cm}^2$erg$^2$, and fractional contribution x$_s$ to each resolved DES line.}
\footnotesize\rm
\begin{tabular}{rlllllr}
\br
\multicolumn{2}{r}{Index} & \multicolumn{1}{c}{Transition} & \multicolumn{1}{c}{E$_P$} & \multicolumn{1}{c}{S(s)} & \multicolumn{1}{c}{S$_{RC}$} & \multicolumn{1}{c}{x$_s$}\\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{eV} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}\\
\mr
1& o& $1s2s^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5299.32& 0.9864& 1.341& 0.47\\
2& p& $1s2s^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5299.32& 1.115& 1.516& 0.53\\
3& v& $1s2p\;^3P^o\;2s(^4P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5312.25& 0.07304& 0.09958& 1.00\\
4& u& $1s2p\;^3P^o\;2s(^4P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5319.17& 0.1765& 0.2410& 1.00\\
5& r& $1s2p\;^1P^o\;2s(^2P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5358.17& 4.256& 5.852& 1.00\\
6& q& $1s2p\;^1P^o\;2s(^2P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5372.30& 0.1059& 0.1461& 1.00\\
7& i& $1s2p^2(^4P_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5379.37& 0.2769& 0.3823& 0.98\\
8& h& $1s2p^2(^4P_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5379.37& 0.00427& 0.005885& 0.02\\
9& t& $1s2p\;^3P^o\;2s(^2P^o_{1/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5387.31& 5.364& 7.415& 1.00\\
10& s& $1s2p\;^3P^o\;2s(^2P^o_{3/2})\rightarrow 1s^22s(^2S_{1/2})$ & 5392.30& 0.1284& 0.1777& 0.81\\
11& g& $1s2p^2(^4P_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5392.41& 0.00379& 0.005241& 0.02\\
12& f& $1s2p^2(^4P_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5392.41& 0.02548& 0.03527& 0.16\\
13& e& $1s2p^2(^4P_{5/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5402.16& 5.401& 7.489& 1.00\\
14& k& $1s2p^2(^2D_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5419.96& 6.380& 8.866& 0.51\\
15& l& $1s2p^2(^2D_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5419.96& 6.100& 8.477& 0.49\\
16& d&
$1s2p^2(^2P_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5421.21& 1.161& 1.617& 0.50\\
17& c& $1s2p^2(^2P_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5421.21& 1.151& 1.602& 0.50\\
18& j& $1s2p^2(^2D_{5/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5432.44& 25.04& 34.92& 1.00\\
19& a& $1s2p^2(^2P_{3/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5446.84& 5.993& 8.380& 0.96\\
20& b& $1s2p^2(^2P_{3/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5446.84& 0.2348& 0.3283& 0.04\\
21& m& $1s2p^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{3/2})$ & 5467.02& 2.328& 3.267& 0.78\\
22& n& $1s2p^2(^2S_{1/2})\rightarrow 1s^22p(^2P^o_{1/2})$ & 5467.02& 0.6634& 0.9310& 0.22\\
\br
\end{tabular}
\end{table}
The interference effects of resonant lines, not considered in IRA methods, manifest themselves in overlapping satellite profiles (the most prominent examples are the interactions of d and k, and of c and l). Also included in the total unified recombination cross sections (shown in figures \ref{rc-fe} and \ref{rc-ni}) is the multitude of non-interfering, yet blending, lines (o and p, d and c, l and k, etc.). The resolved total recombination spectra show 15 DES lines of Fe~XXV and 14 DES lines of Ni~XXVII. The satellite resonances vary over orders of magnitude in cross section, with often overlapping profiles {\it within each symmetry}; it is these overlapping profiles that give rise to interference. The pairs (k+d) and (l+c) in the Fe~XXV spectrum not only have overlapping profiles, but one of the resonances in each pair is extremely weak ($<$ 1\%) and lies in the wings of the other. We emphasize that the overlap and the resulting uncertainty are natural features, which are commonly neglected by IRA methods. Owing to the natural widths of autoionizing resonances, satellite line profiles in the unified spectrum overlap. These overlaps can be significant in determining satellite intensities, particularly those of the weaker satellites. This also implies that the satellite lines may be blended and need to be identified carefully. This is in contrast to IRA methods, based on atomic structure calculations, where level designations are pre-specified according to the configuration, term, and level structure. As explained above, the lines are identified from the individual $J\pi$ contributions to the level-specific recombination cross sections that contain the resonances. Thus the KLL lines are identified from the individual $J\pi$ contributions to the recombination cross sections into the three Li-like levels $1s^22s$~$^2S_{1/2}$, $1s^22p$~$^2P^o_{1/2}$, and $1s^22p$~$^2P^o_{3/2}$. All 22 satellites in the $n$=2 K${\alpha}$ complex have been isolated and identified for both Fe~XXV and Ni~XXVII. The DES lines are labeled following the notation of Gabriel (1972). The energies of the satellite lines are taken to be the peaks of the line profiles and are given in tables 1 and 2 for Fe~XXV and Ni~XXVII, respectively. The present theoretical energies for Fe~XXV agree well with previous calculations. Comparison with EBIT measurements (Beiersdorfer {\it et\thinspace al}\ 1992) shows that the experimental energies are systematically lower than the theoretical energies by up to 8 eV ($\sim$ 0.1\%). This may be taken as the uncertainty in the present calculations. Comparisons of energies and resonance strengths are presented in Nahar and Pradhan (2006). We know of no experimental data on the Ni~XXVII DES lines.
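Since the line energies are read off as the peaks of the resonance profiles, this identification step is easy to automate. The short Python sketch below illustrates the idea on a synthetic cross section; the profile parameters are invented stand-ins for the BPRM output, not fits to the present data.
\begin{verbatim}
# Identify satellite-line energies as the peaks of resonance profiles in a
# recombination cross section sampled on an energy grid.
import numpy as np
from scipy.signal import find_peaks

E = np.linspace(4550.0, 4710.0, 20000)        # photoelectron energy (eV)

def lorentz(E0, gamma, amp):                  # model resonance profile
    return amp * (gamma / 2)**2 / ((E - E0)**2 + (gamma / 2)**2)

sigma_RC = lorentz(4561.5, 0.3, 5.0) + lorentz(4612.6, 0.4, 20.0)

peaks, _ = find_peaks(sigma_RC, height=0.1)
print("line energies (eV):", E[peaks])        # -> ~4561.5, ~4612.6
\end{verbatim}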
\subsection{Strengths and recombination rates of DES Lines}
The unified recombination cross sections ($\sigma_{\rm RC}$) provide the absolute resonance strengths, $S(s)$ and $S_{\rm RC}(s)$, of the DES lines by direct integration. Evaluation of individual resonance strengths may have some uncertainties due to the natural overlap of resonances. However, the overlap is not expected to be significant, since the dominant contribution to the integrated value arises from a narrow energy range around the peak energy. The integrated resonance strengths $S(s)$ and $S_{\rm RC}(s)$ for each of the DES lines are given in tables 1 and 2 for Fe~XXV and Ni~XXVII respectively. Only the strong lines in the total spectra in figures \ref{rc-fe} and \ref{rc-ni} provide significant rates. We have arranged the DES lines in order of increasing energy, in the hope that this is more convenient for comparison with observations or experiments than the commonly used alphabetical order, especially where blends of DES lines are concerned. Following the satellite identifications, we obtain the fractional contribution, $x_s$, of each resolved line to the blended recombination feature (e.g.\ o and p). The $x_s$ values are given in tables 1 and 2. For a few weaker satellites, the interference and overlapping of resonances pose significant uncertainty, such as for the weak resonances of the pairs (k+d) and (l+c) in the Fe~XXV spectrum. The recombination rate coefficients ($\alpha_s(T)$) for the DES lines are given for the recombined ion Fe~XXIV in table 3 and for Ni~XXVI in table 4. These are obtained from direct integration of the total unified recombination collision strengths $\Omega_{\rm RC}$ as described in the theory section. These rate coefficients are more accurate than those from the approximate formula given in Nahar and Pradhan (2006), since the full energy variation of the exponential factor $e^{-\epsilon_s/kT}$ is considered in Eq. (9), rather than only its value at the resonance peak. These rates were checked against those obtained from the sum of the individual contributions of the level-specific rates for numerical accuracy. The fractions $x_s$ are used to obtain $\alpha_s(T)$ for the blended lines.
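To make the $x_s$ bookkeeping explicit, a minimal Python sketch is given below; the $S_{\rm RC}$ values for the (o, p) blend of Fe~XXV are taken from table 1, while the blended rate coefficient is an illustrative number only.
\begin{verbatim}
# Apportion a blended resonance feature among its component DES lines using
# x_s = S_RC(s) / sum_j S_RC(j).
S_RC = {"o": 1.034, "p": 1.033}          # 10^-40 cm^2 erg^2 (table 1)
total = sum(S_RC.values())
x = {s: v / total for s, v in S_RC.items()}

alpha_blend = 1.07e-15                   # cm^3 s^-1, hypothetical blend rate
alpha = {s: x[s] * alpha_blend for s in S_RC}
print(x)       # -> {'o': ~0.50, 'p': ~0.50}, as in table 1
print(alpha)
\end{verbatim}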
\begin{table} \noindent \caption{Recombination rate coefficients for dielectronic satellite lines of Fe~XXV forming Fe~XXIV (notation: x-y $\rightarrow x\times 10^{-y}$).} \scriptsize \begin{tabular}{llllllllllll} \br \multicolumn{1}{c}{T(K)} & \multicolumn{11}{c}{$\alpha_R(cm^3/s)$} \\ & \multicolumn{1}{c}{o} & \multicolumn{1}{c}{p} & \multicolumn{1}{c}{v} & \multicolumn{1}{c}{u} & \multicolumn{1}{c}{r} & \multicolumn{1}{c}{q} & \multicolumn{1}{c}{i} & \multicolumn{1}{c}{h} & \multicolumn{1}{c}{t} & \multicolumn{1}{c}{g} & \multicolumn{1}{c}{f}\\ \mr 6.0&3.46-35&3.46-35&2.09-36&5.30-36&8.33-35&1.59-36&1.41-36&9.96-38&9.12-35&1.94-37&6.03-36\\ 6.1&1.31-30&1.31-30&8.10-32&2.09-31&3.56-30&6.99-32&6.29-32&4.44-33&4.14-30&8.85-33&2.75-31\\ 6.2&5.28-27&5.28-27&3.33-28&8.68-28&1.58-26&3.17-28&2.89-28&2.04-29&1.93-26&4.13-29&1.28-27\\ 6.3&3.60-24&3.60-24&2.30-25&6.06-25&1.16-23&2.37-25&2.18-25&1.54-26&1.47-23&3.16-26&9.81-25\\ 6.4&5.97-22&5.96-22&3.87-23&1.02-22&2.05-21&4.22-23&3.92-23&2.77-24&2.67-21&5.75-24&1.79-22\\ 6.5&3.22-20&3.22-20&2.11-21&5.62-21&1.16-19&2.42-21&2.26-21&1.59-22&1.55-19&3.34-22&1.04-20\\ 6.6&7.14-19&7.12-19&4.70-20&1.26-19&2.67-18&5.61-20&5.26-20&3.72-21&3.63-18&7.84-21&2.43-19\\ 6.7&7.78-18&7.77-18&5.16-19&1.39-18&3.01-17&6.35-19&5.98-19&4.22-20&4.15-17&8.96-20&2.78-18\\ 6.8&4.84-17&4.83-17&3.22-18&8.68-18&1.91-16&4.06-18&3.84-18&2.71-19&2.67-16&5.77-19&1.79-17\\ 6.9&1.92-16&1.92-16&1.29-17&3.47-17&7.76-16&1.65-17&1.57-17&1.11-18&1.09-15&2.36-18&7.34-17\\ 7.0&5.36-16&5.35-16&3.59-17&9.73-17&2.20-15&4.69-17&4.46-17&3.15-18&3.12-15&6.74-18&2.09-16\\ 7.1&1.13-15&1.13-15&7.57-17&2.05-16&4.68-15&1.00-16&9.53-17&6.73-18&6.67-15&1.44-17&4.48-16\\ 7.2&1.89-15&1.89-15&1.28-16&3.46-16&7.93-15&1.70-16&1.62-16&1.15-17&1.14-14&2.46-17&7.65-16\\ 7.3&2.67-15&2.66-15&1.80-16&4.89-16&1.13-14&2.42-16&2.31-16&1.63-17&1.62-14&3.51-17&1.09-15\\ 7.4&3.26-15&3.25-15&2.20-16&5.98-16&1.38-14&2.98-16&2.84-16&2.01-17&2.00-14&4.33-17&1.34-15\\ 7.5&3.56-15&3.55-15&2.40-16&6.54-16&1.52-14&3.27-16&3.12-16&2.20-17&2.19-14&4.76-17&1.48-15\\ 7.6&3.55-15&3.55-15&2.40-16&6.54-16&1.52-14&3.28-16&3.13-16&2.21-17&2.21-14&4.78-17&1.48-15\\ 7.7&3.31-15&3.30-15&2.24-16&6.09-16&1.42-14&3.06-16&2.93-16&2.07-17&2.06-14&4.47-17&1.39-15\\ 7.8&2.91-15&2.90-15&1.97-16&5.36-16&1.25-14&2.70-16&2.59-16&1.83-17&1.82-14&3.95-17&1.22-15\\ 7.9&2.45-15&2.44-15&1.66-16&4.52-16&1.06-14&2.28-16&2.18-16&1.54-17&1.54-14&3.33-17&1.03-15\\ 8.0&1.99-15&1.98-15&1.35-16&3.67-16&8.58-15&1.85-16&1.77-16&1.25-17&1.25-14&2.71-17&8.41-16\\ 8.1&1.57-15&1.57-15&1.06-16&2.90-16&6.78-15&1.47-16&1.40-16&9.91-18&9.88-15&2.14-17&6.65-16\\ 8.2&1.21-15&1.21-15&8.20-17&2.24-16&5.24-15&1.13-16&1.08-16&7.66-18&7.64-15&1.66-17&5.14-16\\ 8.3&9.18-16&9.16-16&6.22-17&1.70-16&3.98-15&8.60-17&8.23-17&5.81-18&5.80-15&1.26-17&3.90-16\\ 8.4&6.86-16&6.85-16&4.65-17&1.27-16&2.98-15&6.43-17&6.16-17&4.35-18&4.34-15&9.41-18&2.92-16\\ 8.5&5.07-16&5.07-16&3.44-17&9.38-17&2.20-15&4.76-17&4.56-17&3.22-18&3.21-15&6.96-18&2.16-16\\ 8.6&3.72-16&3.71-16&2.52-17&6.87-17&1.61-15&3.49-17&3.34-17&2.36-18&2.35-15&5.10-18&1.58-16\\ 8.7&2.70-16&2.70-16&1.83-17&5.00-17&1.17-15&2.54-17&2.43-17&1.72-18&1.71-15&3.71-18&1.15-16\\ 8.8&1.96-16&1.95-16&1.33-17&3.62-17&8.50-16&1.84-17&1.76-17&1.24-18&1.24-15&2.69-18&8.35-17\\ 8.9&1.41-16&1.41-16&9.56-18&2.61-17&6.12-16&1.32-17&1.27-17&8.95-19&8.93-16&1.94-18&6.01-17\\ 9.0&1.01-16&1.01-16&6.86-18&1.87-17&4.40-16&9.50-18&9.10-18&6.42-19&6.41-16&1.39-18&4.32-17\\ \mr & \multicolumn{1}{c}{s}& \multicolumn{1}{c}{e}& \multicolumn{1}{c}{k}& 
\multicolumn{1}{c}{l}& \multicolumn{1}{c}{d}& \multicolumn{1}{c}{c}& \multicolumn{1}{c}{j}& \multicolumn{1}{c}{a}& \multicolumn{1}{c}{b}& \multicolumn{1}{c}{m}& \multicolumn{1}{c}{n} \\
\mr
6.0&2.07-35&7.22-35&2.12-34&1.75-35&9.24-36&2.02-36&2.99-34&5.86-35&2.03-36&2.11-35&9.98-37\\
6.1&9.46-31&3.35-30&1.02-29&8.49-31&4.50-31&9.81-32&1.47-29&2.98-30&1.03-31&1.13-30&5.32-32\\
6.2&4.43-27&1.59-26&5.02-26&4.16-27&2.21-27&4.83-28&7.32-26&1.52-26&5.26-28&5.96-27&2.81-28\\
6.3&3.39-24&1.23-23&3.99-23&3.31-24&1.77-24&3.85-25&5.89-23&1.25-23&4.31-25&5.03-24&2.38-25\\
6.4&6.18-22&2.25-21&7.48-21&6.21-22&3.32-22&7.24-23&1.11-20&2.40-21&8.30-23&9.91-22&4.68-23\\
6.5&3.59-20&1.32-19&4.46-19&3.69-20&1.98-20&4.32-21&6.69-19&1.46-19&5.04-21&6.13-20&2.90-21\\
6.6&8.44-19&3.12-18&1.07-17&8.84-19&4.75-19&1.04-19&1.61-17&3.54-18&1.22-19&1.51-18&7.14-20\\
6.7&9.65-18&3.58-17&1.24-16&1.03-17&5.51-18&1.20-18&1.87-16&4.16-17&1.44-18&1.80-17&8.49-19\\
6.8&6.23-17&2.31-16&8.07-16&6.69-17&3.60-17&7.85-18&1.23-15&2.74-16&9.48-18&1.20-16&5.65-18\\
6.9&2.55-16&9.50-16&3.33-15&2.76-16&1.49-16&3.25-17&5.09-15&1.14-15&3.95-17&5.02-16&2.37-17\\
7.0&7.28-16&2.72-15&9.59-15&7.95-16&4.28-16&9.34-17&1.47-14&3.30-15&1.14-16&1.46-15&6.90-17\\
7.1&1.56-15&5.83-15&2.07-14&1.71-15&9.24-16&2.01-16&3.17-14&7.16-15&2.48-16&3.18-15&1.50-16\\
7.2&2.66-15&9.96-15&3.54-14&2.94-15&1.58-15&3.46-16&5.44-14&1.23-14&4.26-16&5.49-15&2.59-16\\
7.3&3.79-15&1.42-14&5.06-14&4.20-15&2.26-15&4.94-16&7.78-14&1.77-14&6.11-16&7.89-15&3.73-16\\
7.4&4.67-15&1.75-14&6.26-14&5.19-15&2.80-15&6.11-16&9.63-14&2.19-14&7.57-16&9.81-15&4.63-16\\
7.5&5.14-15&1.93-14&6.90-14&5.72-15&3.09-15&6.74-16&1.06-13&2.42-14&8.37-16&1.09-14&5.13-16\\
7.6&5.16-15&1.94-14&6.95-14&5.76-15&3.11-15&6.79-16&1.07-13&2.44-14&8.44-16&1.10-14&5.18-16\\
7.7&4.83-15&1.81-14&6.51-14&5.40-15&2.91-15&6.35-16&1.00-13&2.29-14&7.91-16&1.03-14&4.86-16\\
7.8&4.26-15&1.60-14&5.75-14&4.77-15&2.58-15&5.62-16&8.87-14&2.02-14&7.00-16&9.11-15&4.30-16\\
7.9&3.60-15&1.35-14&4.86-14&4.03-15&2.18-15&4.74-16&7.49-14&1.71-14&5.91-16&7.71-15&3.64-16\\
8.0&2.93-15&1.10-14&3.96-14&3.28-15&1.77-15&3.87-16&6.10-14&1.39-14&4.82-16&6.28-15&2.97-16\\
8.1&2.31-15&8.70-15&3.13-14&2.60-15&1.40-15&3.06-16&4.83-14&1.10-14&3.82-16&4.98-15&2.35-16\\
8.2&1.79-15&6.73-15&2.42-14&2.01-15&1.08-15&2.37-16&3.74-14&8.54-15&2.95-16&3.85-15&1.82-16\\
8.3&1.36-15&5.11-15&1.84-14&1.53-15&8.23-16&1.80-16&2.84-14&6.49-15&2.24-16&2.93-15&1.38-16\\
8.4&1.02-15&3.82-15&1.38-14&1.14-15&6.17-16&1.34-16&2.13-14&4.86-15&1.68-16&2.19-15&1.04-16\\
8.5&7.52-16&2.83-15&1.02-14&8.44-16&4.56-16&9.95-17&1.57-14&3.59-15&1.24-16&1.62-15&7.67-17\\
8.6&5.51-16&2.07-15&7.47-15&6.19-16&3.35-16&7.30-17&1.15-14&2.64-15&9.12-17&1.19-15&5.63-17\\
8.7&4.01-16&1.51-15&5.44-15&4.51-16&2.44-16&5.31-17&8.40-15&1.92-15&6.64-17&8.67-16&4.10-17\\
8.8&2.91-16&1.09-15&3.94-15&3.26-16&1.76-16&3.85-17&6.08-15&1.39-15&4.81-17&6.28-16&2.97-17\\
8.9&2.09-16&7.87-16&2.84-15&2.35-16&1.27-16&2.77-17&4.38-15&1.00-15&3.46-17&4.52-16&2.14-17\\
9.0&1.50-16&5.65-16&2.04-15&1.69-16&9.12-17&1.99-17&3.14-15&7.19-16&2.49-17&3.25-16&1.53-17\\
\br
\end{tabular}
\end{table}
\begin{table}
\caption{Recombination rate coefficients for dielectronic satellite lines of Ni~XXVII forming Ni~XXVI (notation: x-y $\rightarrow x\times 10^{-y}$).}
\scriptsize
\begin{tabular}{llllllllllll}
\br
\multicolumn{1}{c}{T(K)} & \multicolumn{11}{c}{$\alpha_R(cm^3/s)$} \\
& \multicolumn{1}{c}{o} & \multicolumn{1}{c}{p} & \multicolumn{1}{c}{v} & \multicolumn{1}{c}{u} & \multicolumn{1}{c}{r} &
\multicolumn{1}{c}{q} & \multicolumn{1}{c}{i} & \multicolumn{1}{c}{h} & \multicolumn{1}{c}{t} & \multicolumn{1}{c}{s} & \multicolumn{1}{c}{g}\\ \br 6.0&8.63-39&9.75-39&5.48-40&1.20-39&1.90-38&4.00-40&9.66-40&1.49-41&1.71-38&3.84-40&1.13-41\\ 6.1&1.90-33&2.15-33&1.25-34&2.79-34&4.80-33&1.05-34&2.58-34&3.97-36&4.64-33&1.06-34&3.12-36\\ 6.2&3.10-29&3.50-29&2.08-30&4.75-30&8.77-29&1.97-30&4.90-30&7.55-32&8.96-29&2.06-30&6.08-32\\ 6.3&6.40-26&7.24-26&4.40-27&1.01-26&1.98-25&4.54-27&1.14-26&1.76-28&2.12-25&4.91-27&1.45-28\\ 6.4&2.57-23&2.90-23&1.79-24&4.17-24&8.52-23&1.99-24&5.04-24&7.77-26&9.43-23&2.20-24&6.49-26\\ 6.5&2.79-21&3.15-21&1.97-22&4.63-22&9.80-21&2.32-22&5.92-22&9.12-24&1.11-20&2.62-22&7.73-24\\ 6.6&1.08-19&1.22-19&7.70-21&1.82-20&3.96-19&9.47-21&2.43-20&3.74-22&4.60-19&1.09-20&3.20-22\\ 6.7&1.83-18&2.07-18&1.32-19&3.13-19&6.96-18&1.68-19&4.33-19&6.67-21&8.24-18&1.95-19&5.75-21\\ 6.8&1.61-17&1.83-17&1.17-18&2.79-18&6.32-17&1.54-18&3.97-18&6.12-20&7.59-17&1.80-18&5.31-20\\ 6.9&8.48-17&9.59-17&6.18-18&1.48-17&3.40-16&8.30-18&2.15-17&3.31-19&4.12-16&9.80-18&2.89-19\\ 7.0&2.95-16&3.34-16&2.16-17&5.17-17&1.20-15&2.95-17&7.66-17&1.18-18&1.47-15&3.51-17&1.03-18\\ 7.1&7.40-16&8.36-16&5.43-17&1.30-16&3.06-15&7.53-17&1.96-16&3.02-18&3.77-15&8.99-17&2.65-18\\ 7.2&1.43-15&1.62-15&1.05-16&2.53-16&5.98-15&1.48-16&3.85-16&5.92-18&7.42-15&1.77-16&5.22-18\\ 7.3&2.25-15&2.54-15&1.66-16&3.99-16&9.49-15&2.35-16&6.12-16&9.43-18&1.18-14&2.82-16&8.32-18\\ 7.4&3.00-15&3.39-15&2.22-16&5.34-16&1.27-14&3.16-16&8.25-16&1.27-17&1.59-14&3.81-16&1.12-17\\ 7.5&3.52-15&3.97-15&2.60-16&6.27-16&1.50-14&3.73-16&9.73-16&1.50-17&1.88-14&4.50-16&1.33-17\\ 7.6&3.71-15&4.20-15&2.75-16&6.63-16&1.59-14&3.96-16&1.03-15&1.59-17&2.00-14&4.79-16&1.41-17\\ 7.7&3.61-15&4.08-15&2.67-16&6.46-16&1.56-14&3.87-16&1.01-15&1.56-17&1.96-14&4.68-16&1.38-17\\ 7.8&3.29-15&3.72-15&2.44-16&5.89-16&1.42-14&3.54-16&9.24-16&1.42-17&1.79-14&4.29-16&1.26-17\\ 7.9&2.85-15&3.22-15&2.11-16&5.10-16&1.23-14&3.07-16&8.02-16&1.24-17&1.55-14&3.72-16&1.10-17\\ 8.0&2.36-15&2.67-15&1.75-16&4.24-16&1.02-14&2.55-16&6.68-16&1.03-17&1.29-14&3.10-16&9.14-18\\ 8.1&1.90-15&2.15-15&1.41-16&3.41-16&8.24-15&2.05-16&5.37-16&8.28-18&1.04-14&2.49-16&7.36-18\\ 8.2&1.49-15&1.68-15&1.10-16&2.67-16&6.46-15&1.61-16&4.21-16&6.49-18&8.16-15&1.96-16&5.77-18\\ 8.3&1.14-15&1.29-15&8.46-17&2.05-16&4.96-15&1.24-16&3.23-16&4.98-18&6.27-15&1.50-16&4.43-18\\ 8.4&8.59-16&9.72-16&6.38-17&1.54-16&3.74-15&9.33-17&2.44-16&3.76-18&4.73-15&1.13-16&3.34-18\\ 8.5&6.40-16&7.23-16&4.75-17&1.15-16&2.79-15&6.95-17&1.82-16&2.80-18&3.53-15&8.45-17&2.49-18\\ 8.6&4.71-16&5.33-16&3.50-17&8.47-17&2.05-15&5.12-17&1.34-16&2.07-18&2.60-15&6.23-17&1.84-18\\ 8.7&3.44-16&3.89-16&2.56-17&6.19-17&1.50-15&3.75-17&9.80-17&1.51-18&1.90-15&4.56-17&1.34-18\\ 8.8&2.50-16&2.83-16&1.86-17&4.50-17&1.09-15&2.72-17&7.12-17&1.10-18&1.38-15&3.31-17&9.76-19\\ 8.9&1.81-16&2.04-16&1.34-17&3.25-17&7.88-16&1.97-17&5.15-17&7.93-19&9.98-16&2.39-17&7.05-19\\ 9.0&1.30-16&1.47-16&9.65-18&2.34-17&5.67-16&1.41-17&3.70-17&5.70-19&7.18-16&1.72-17&5.07-19\\ \mr & \multicolumn{1}{c}{f}& \multicolumn{1}{c}{e}& \multicolumn{1}{c}{k}& \multicolumn{1}{c}{l}& \multicolumn{1}{c}{d}& \multicolumn{1}{c}{c}& \multicolumn{1}{c}{j}& \multicolumn{1}{c}{a}& \multicolumn{1}{c}{b}& \multicolumn{1}{c}{m}& \multicolumn{1}{c}{n} \\ \mr 6.0&7.63-41&1.45-38&1.41-38&1.34-38&2.48-39&2.46-39&4.76-38&9.70-39&3.80-40&2.98-39&8.51-40\\ 6.1&2.10-35&4.09-33&4.12-33&3.94-33&7.33-34&7.26-34&1.44-32&3.04-33&1.19-34&9.81-34&2.79-34\\ 
6.2&4.09-31&8.12-29&8.46-29&8.09-29&1.51-29&1.50-29&3.03-28&6.56-29&2.57-30&2.20-29&6.28-30\\
6.3&9.75-28&1.96-25&2.10-25&2.01-25&3.76-26&3.73-26&7.67-25&1.69-25&6.64-27&5.87-26&1.67-26\\
6.4&4.37-25&8.89-23&9.71-23&9.29-23&1.75-23&1.73-23&3.60-22&8.10-23&3.17-24&2.87-23&8.19-24\\
6.5&5.20-23&1.07-20&1.18-20&1.13-20&2.14-21&2.12-21&4.45-20&1.01-20&3.97-22&3.67-21&1.05-21\\
6.6&2.16-21&4.45-19&5.01-19&4.79-19&9.07-20&8.98-20&1.90-18&4.38-19&1.72-20&1.61-19&4.58-20\\
6.7&3.87-20&8.04-18&9.15-18&8.74-18&1.66-18&1.64-18&3.50-17&8.12-18&3.18-19&3.02-18&8.60-19\\
6.8&3.57-19&7.46-17&8.55-17&8.18-17&1.55-17&1.54-17&3.29-16&7.69-17&3.01-18&2.89-17&8.23-18\\
6.9&1.94-18&4.07-16&4.70-16&4.50-16&8.54-17&8.46-17&1.82-15&4.27-16&1.67-17&1.62-16&4.61-17\\
7.0&6.96-18&1.46-15&1.70-15&1.62-15&3.08-16&3.06-16&6.58-15&1.55-15&6.09-17&5.92-16&1.69-16\\
7.1&1.78-17&3.76-15&4.38-15&4.19-15&7.96-16&7.89-16&1.70-14&4.04-15&1.58-16&1.54-15&4.40-16\\
7.2&3.51-17&7.41-15&8.66-15&8.28-15&1.58-15&1.56-15&3.38-14&8.02-15&3.14-16&3.08-15&8.78-16\\
7.3&5.60-17&1.18-14&1.39-14&1.33-14&2.52-15&2.50-15&5.42-14&1.29-14&5.06-16&4.97-15&1.42-15\\
7.4&7.56-17&1.60-14&1.88-14&1.79-14&3.42-15&3.39-15&7.35-14&1.75-14&6.87-16&6.77-15&1.93-15\\
7.5&8.93-17&1.89-14&2.22-14&2.13-14&4.05-15&4.01-15&8.72-14&2.08-14&8.16-16&8.05-15&2.30-15\\
7.6&9.50-17&2.01-14&2.37-14&2.27-14&4.32-15&4.28-15&9.30-14&2.22-14&8.71-16&8.61-15&2.46-15\\
7.7&9.30-17&1.97-14&2.32-14&2.22-14&4.23-15&4.19-15&9.12-14&2.18-14&8.54-16&8.46-15&2.41-15\\
7.8&8.51-17&1.80-14&2.13-14&2.03-14&3.88-15&3.84-15&8.36-14&2.00-14&7.84-16&7.77-15&2.22-15\\
7.9&7.39-17&1.57-14&1.85-14&1.77-14&3.37-15&3.34-15&7.27-14&1.74-14&6.82-16&6.77-15&1.93-15\\
8.0&6.15-17&1.30-14&1.54-14&1.47-14&2.81-15&2.78-15&6.06-14&1.45-14&5.69-16&5.65-15&1.61-15\\
8.1&4.95-17&1.05-14&1.24-14&1.19-14&2.26-15&2.24-15&4.88-14&1.17-14&4.58-16&4.55-15&1.30-15\\
8.2&3.88-17&8.24-15&9.74-15&9.31-15&1.78-15&1.76-15&3.83-14&9.19-15&3.60-16&3.58-15&1.02-15\\
8.3&2.98-17&6.32-15&7.48-15&7.15-15&1.36-15&1.35-15&2.95-14&7.06-15&2.77-16&2.75-15&7.83-16\\
8.4&2.25-17&4.78-15&5.65-15&5.40-15&1.03-15&1.02-15&2.23-14&5.33-15&2.09-16&2.08-15&5.92-16\\
8.5&1.68-17&3.56-15&4.21-15&4.03-15&7.68-16&7.61-16&1.66-14&3.98-15&1.56-16&1.55-15&4.42-16\\
8.6&1.24-17&2.62-15&3.11-15&2.97-15&5.67-16&5.61-16&1.22-14&2.93-15&1.15-16&1.14-15&3.26-16\\
8.7&9.04-18&1.92-15&2.27-15&2.17-15&4.14-16&4.10-16&8.95-15&2.15-15&8.41-17&8.36-16&2.38-16\\
8.8&6.57-18&1.39-15&1.65-15&1.58-15&3.01-16&2.98-16&6.50-15&1.56-15&6.11-17&6.08-16&1.73-16\\
8.9&4.75-18&1.01-15&1.19-15&1.14-15&2.17-16&2.15-16&4.70-15&1.13-15&4.42-17&4.39-16&1.25-16\\
9.0&3.41-18&7.25-16&8.58-16&8.20-16&1.56-16&1.55-16&3.38-15&8.11-16&3.18-17&3.16-16&9.01-17\\
\br
\end{tabular}
\end{table}
The DES rate coefficients have been tabulated over a wide temperature range to enable interpolation of $\alpha_s$ at practically any temperature. However, as described above, $\alpha_s$ can also be obtained at any temperature of interest directly from the recombination strength $S_{\rm RC}(s)$ or $S(s)$ and the temperature factor $f(T)$. Although (\ref{NP2006a})--(\ref{NP2006c}) assume relatively narrow satellite lines, using them, or (\ref{convenient}), is simpler than interpolating the tables and potentially more accurate.
\subsection{Comparison of DES line intensities}
Given the high temperature sensitivity of individual DES lines, it is natural to attempt observations that could provide good temperature diagnostics.
The intensity ratios of the KLL lines to the $w$-line are computed and presented in figure \ref{Iratfig}. We calculated $q_w$ from the electron impact excitation collision strengths for the $w$-line computed by Pradhan (1985). There is excellent agreement with the $q_w$ values from previous calculations by Bely-Dubau {\it et\thinspace al}\ (1982); for example, $3.22\times 10^{-15}$ and $3.10\times 10^{-15}$ cm$^{3}$~s$^{-1}$ respectively at 10$^7$ K, and $1.23\times 10^{-13}$ and $1.24\times 10^{-13}$ cm$^{3}$~s$^{-1}$ at $2\times 10^7$ K. In more recent R-matrix calculations, Whiteford {\it et\thinspace al}\ (2001) also report very good agreement with Pradhan (1985); although their electronically available data lack the finer temperature resolution of Pradhan (1985), the graphical comparisons of Whiteford {\it et\thinspace al}\ (2001) show negligible differences for dipole-allowed transitions (within computational uncertainties, usually estimated at $<$ 10\% for strong dipole transitions). We have therefore used the better resolved (in temperature) calculations of Pradhan (1985) for the $w$-line. (We note, however, that there may be significant differences for other collisional excitation rates of Fe~XXV.) Except for a few astrophysical observations of the solar corona and laboratory experiments such as EBITs, the resolution of individual satellite intensities is rare. On the other hand it is easier, and not uncommon, to observe the combined intensity of the total KLL DES complex (redward of the $w$-line at one extreme). We therefore compute the intensity ratio $I(KLL)/I(w)$ as a function of temperature.
\begin{figure}
\begin{center}
\resizebox{110mm}{!}{\includegraphics{kll.w.fe25.eps}}
\caption{\label{Iratfig}Comparison of intensity ratios $I(KLL)/I(w)$ of Fe~XXV with previous calculations: solid line -- present, dashed line -- Bely-Dubau {\it et\thinspace al}\ (1982), dotted line -- Vainshtein and Safronova (1978).}
\end{center}
\end{figure}
We compare the present intensity ratios for Fe~XXV (solid curve) in figure \ref{Iratfig} with the previous results of Bely-Dubau {\it et\thinspace al}\ (1982) (dashed curve) and Vainshtein and Safronova (1978) (dotted curve). Figure \ref{Iratfig} shows considerable differences in the most sensitive low-temperature range, T $< 2\times 10^7$ K ($\log(T) = 7.3$); all three sets of results appear to converge to good agreement as they approach the temperature of maximum abundance of Fe~XXV at $\log(T)\sim 7.4$ in coronal equilibrium. But the present values, as well as those of Bely-Dubau {\it et\thinspace al}\ (1982), differ considerably from Vainshtein and Safronova (1978) throughout, by 20\% up to a factor of two or more. Above $\log(T)\sim 7.2$ our values are in very good agreement with Bely-Dubau {\it et\thinspace al}\ (1982), but show significant differences in the most sensitive region at about $\log(T)\sim 7.1$. In part, the differences as well as the agreement are due to the behavior of the rate coefficient for the resonance $w$-line, which increases with T, while the DES resonance strengths are independent of T. Therefore, the satellite-to-resonance line ratio at low temperatures is more reflective of the actual differences between the various approximations employed to compute the DES resonance strengths.
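The mechanics of this ratio are easy to reproduce from the tabulated data. In the minimal Python sketch below we assume that both emissivities scale with $n_e\,n({\rm He}$-${\rm like})$ and that the photon energies of the satellites and the $w$-line are nearly equal, so that $I(KLL)/I(w)$ reduces to $\sum_s\alpha_s(T)/q_w(T)$; the $\alpha_s$ values are read off table 3 at $\log T=7.0$ (strongest satellites only, for brevity) and $q_w$ is the Pradhan (1985) value quoted above.
\begin{verbatim}
# Illustrative evaluation of I(KLL)/I(w) at a single temperature.
alpha = {"j": 1.47e-14, "k": 9.59e-15, "a": 3.30e-15, "t": 3.12e-15,
         "e": 2.72e-15, "r": 2.20e-15, "m": 1.46e-15}   # cm^3 s^-1 (table 3)
q_w = 3.22e-15                                          # cm^3 s^-1 at 10^7 K

ratio = sum(alpha.values()) / q_w
print(f"I(KLL)/I(w) ~ {ratio:.1f} at T = 1e7 K (truncated sum)")
\end{verbatim}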
\subsection{DES spectra in plasma models}
Recently, we have modeled the 6.7 keV complex of Fe~XXV and the 7.8 keV complex of Ni~XXVII in stationary and transient plasma sources using the new General Spectral Modeling (GSM) code (Oelgoetz 2006; see also Oelgoetz {\it et\thinspace al}\ 2007a,b). GSM is an IRA code that contains the capability outlined in (\ref{Aasimple}), such that it can calculate autoionization rates ($A_a$) from the unified recombination rates of DES lines. The DES spectra for both ions are generated at T = $10^7$ K and are presented in figures \ref{Fe} and \ref{Ni}. To benchmark the present approach, we compare the calculated Fe~XXV and Ni~XXVII DES spectra using the present unified satellite rates with spectra generated using autoionization rates computed from the Los Alamos National Laboratory (LANL) suite of atomic physics codes (see: Abdallah {\it et\thinspace al}\ 1994, 2001). We find very good agreement between the DES spectra computed using the two sets of data, as shown in figures \ref{Fe} and \ref{Ni}. The calculations for both ions generally follow the approach outlined in Oelgoetz {\it et\thinspace al}\ (2007a) for the RM data set, as to data sources and the complete statistical models. The major exception is that both models presented here use level energies calculated by the LANL code CATS (see: Abdallah {\it et\thinspace al}\ 1994, 2001) for all energy levels except those that give rise to the KLL satellite lines. The energies for these autoionizing states are taken from the data presented in this work. Both models use identical energies to facilitate an easy comparison of their spectra. The two models differ from each other in that the R-matrix curve presented here uses autoionization rates calculated from the KLL DES strengths as computed herein, while the DW data set uses distorted-wave autoionization rates calculated using the LANL code GIPPER (again, see: Abdallah {\it et\thinspace al}\ 1994, 2001).
\begin{figure}
\begin{center}
\resizebox{0.8\textwidth}{!}{\includegraphics{Fe-physicascripta.eps}}
\caption{\label{Fe}Comparison of theoretical dielectronic satellite spectra of He-like Fe~XXV at T = $10^7$ K, computed with the present DES strengths from the unified recombination method (solid, red line) and with the distorted-wave (DW) data from the Los Alamos codes (dashed, blue line).}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.8\textwidth}{!}{\includegraphics{Ni-physicascripta.eps}}
\caption{\label{Ni}Comparison of theoretical dielectronic satellite spectra of He-like Ni~XXVII at T = $10^7$ K, as in figure \ref{Fe}.}
\end{center}
\end{figure}
\section{Conclusion}
The main conclusions of this work are:
$\bullet$ The theoretical and computational framework for the self-consistent and unified treatment of photoionization and radiative and dielectronic recombination has now been extended to the calculation of dielectronic satellite strengths. Computed DES spectra demonstrate the effect of channel interference and overlapping profiles, though the practical effect on rate coefficients is small.
$\bullet$ A theoretical formulation for autoionization rates derived from the unified satellite line strengths is given for applications in astrophysical models based on IRA methodologies. The first comparisons, for the Fe~XXV and Ni~XXVII spectra, show excellent agreement with data from the Los Alamos National Laboratory suite of atomic physics codes.
$\bullet$ Recombination rate coefficients are presented for all 22 KLL DES lines of Fe~XXV and Ni~XXVII for a wide temperature range.
We find considerable differences with earlier works, of more than 20\% at low temperatures ($<$ 10$^7$ K), but the agreement improves to $<$ 10\% towards the temperature of maximum abundance for both ions. While we expect the present rate coefficients to be more accurate, a conservative estimate of the uncertainties in the temperature range of practical applications, T $>$ 10$^7$ K, is about 10-20\%.
$\bullet$ Intensity ratios $I(s)/I(w)$ of the DES lines to the $w$-line of He-like Fe and Ni are computed theoretically for practical applications in high-temperature X-ray plasmas.
$\bullet$ Although the present unified dielectronic resonance strengths are in good agreement with those computed in previous works using the IRA, some significant differences are found in the temperature ranges where the DES are most temperature sensitive, below the temperature of maximum abundance of He-like ions in coronal equilibrium.
\subsection{Acknowledgments}
This work was supported partially by the NASA Astrophysical Theory Program and the Space Astrophysical Research and Analysis programs, and was partially conducted under the auspices of the United States Department of Energy at Los Alamos National Laboratory. Much of the computational work was carried out at the Ohio Supercomputer Center in Columbus, Ohio.
\section*{References}
\def\amp{{\it Adv. At. Molec. Phys.}\ }
\def\apj{{\it Astrophys. J.}\ }
\def\apjs{{\it Astrophys. J. Suppl. Ser.}\ }
\def\apjl{{\it Astrophys. J. (Letters)}\ }
\def\aj{{\it Astron. J.}\ }
\def\aas{{\it Astron. Astrophys. Suppl.}\ }
\def\aasup{{\it Astron. Astrophys. Suppl.}\ }
\def\adndt{{\it At. Data Nucl. Data Tables}\ }
\def\cpc{{\it Comput. Phys. Commun.}\ }
\def\jqsrt{{\it J. Quant. Spectrosc. Radiat. Transfer}\ }
\def\jpb{{\it Journal Of Physics B}\ }
\def\pasp{{\it Pub. Astron. Soc. Pacific}\ }
\def\mn{{\it Mon. Not. R. Astr. Soc.}\ }
\def\pra{{\it Physical Review A}\ }
\def\prl{{\it Physical Review Letters}\ }
\def\zpds{{\it Z. Phys. D Suppl.}\ }
\begin{harvard}
\item{} Abdallah J, Clark R~E~H, Peek J~M and Fontes C~J 1994 \jqsrt {\bf 51} 1
\item{} Abdallah J, Zhang H~L, Fontes C~J, Kilcrease D~P and Archer B~J 2001 \jqsrt {\bf 71} 107
\item{} Beiersdorfer P, Philips T~W, Wong K~L, Marrs R~E and Vogel D~A 1992 \pra {\bf 46} 3812
\item{} Berrington K~A, Burke P~G, Butler K, Seaton M~J, Storey P~J, Taylor K~T and Yan Y 1987 \jpb {\bf 20} 6379
\item{} Berrington K~A, Eissner W and Norrington P~H 1995 \cpc {\bf 92} 290
\item{} Bely-Dubau F, Dubau J, Faucher P and Gabriel A~H 1982 \mn {\bf 198} 239
\item{} Eissner W, Jones M and Nussbaumer H 1974 \cpc {\bf 8} 270
\item{} Gabriel A~H 1972 \mn {\bf 160} 99
\item{} Hummer D~G, Berrington K~A, Eissner W, Pradhan A~K, Saraph H~E and Tully J~A 1993 {\it Astron. Astrophys.}\ {\bf 279} 298
\item{} Kato T, Fujiwara T and Hanaoka Y 1998 \apj {\bf 492} 822
\item{} Nahar S~N 1996 \pra {\bf 53} 2417
\item{} Nahar S~N 2005 \apjs {\bf 158} 80
\item{} Nahar S~N and Pradhan A~K 1994 \pra {\bf 49} 1816
\item{} Nahar S~N and Pradhan A~K 2004 Review in {\it Radiation Processes In Physics and Chemistry} {\bf 70} 323
\item{} Nahar S~N and Pradhan A~K 2006 \pra {\bf 73} 062718-1
\item{} Nahar S~N, Pradhan A~K and Zhang H~L 2000 \apjs {\bf 131} 375
\item{} Nahar S~N, Pradhan A~K and Zhang H~L 2001 \apjs {\bf 133} 255
\item{} Oelgoetz J 2006 Ph.D.
thesis, The Ohio State University
\item{} Oelgoetz J, Fontes C~J, Zhang H~L, Montenegro M, Nahar S~N and Pradhan A~K 2007a \mn {\bf 382} 761
\item{} Oelgoetz J, Fontes C~J, Zhang H~L and Pradhan A~K 2007b \pra {\bf 76} 062504-1
\item{} Oelgoetz J and Pradhan A~K 2001 \mn {\bf 327} L42
\item{} Oelgoetz J and Pradhan A~K 2004 \mn {\bf 354} 1093
\item{} Pradhan A~K 1985 \apjs {\bf 59} 183
\item{} Pradhan A~K and Zhang H~L 1997 \jpb {\bf 30} L571
\item{} Sakimoto K, Terao M and Berrington K~A 1990 \pra {\bf 42} 291
\item{} Vainshtein L~A and Safronova U~I 1978 \adndt {\bf 25} 49
\item{} Whiteford A~D, Badnell N~R, Ballance C~P, O'Mullane M~G, Summers H~P and Thomas A~L 2001 \jpb {\bf 34} 3179
\item{} Zhang H~L, Nahar S~N and Pradhan A~K 1999 \jpb {\bf 32} 1459
\end{harvard}
\end{document}
\section{Introduction}
The recent discoveries of surface and interface superconductivity with exceptionally high superconducting (SC) transition temperatures in several material structures \cite{BosovicNphys14},\cite{FengGeNmat15},\cite{MannaArXive16} have drawn much attention to the phenomenon of strong type-II superconductivity in two-dimensional (2D) electron systems, in which the application of high magnetic fields can lead to exotic phenomena both in the normal and SC states \cite{Maniv01}. Of special interest is the unique situation of 2D superconductivity realized in surface states of topological insulators, e.g. Sb$_{2}$Te$_{3}$ \cite{Zhao15}, where the chemical potential $E_{F}$ is close to a Dirac point \cite{Zhang09} (with Fermi velocity $v$) and the cyclotron effective mass, $m^{\ast }=E_{F}/v^{2}$ \cite{Katsnelson12}, is a small fraction (e.g. 0.065 in Sb$_{2}$Te$_{3}$, see also \cite{Arnold16}) of the free electron mass $m_{e}$, resulting in a dramatic enhancement of the cyclotron frequency, $\omega _{c}=eH/m^{\ast }c$, and the corresponding Landau level (LL) energy spacing. In a recent paper \cite{ZDMPRB17} we exploited a standard electron gas model, with a quadratic energy-momentum dispersion and an effective band mass $m^{\ast }=0.065m_{e}$, in a systematic investigation of the quasi-particle states and the SC pair-potential in the vortex lattice state of this system under high perpendicular magnetic fields, by solving the corresponding Bogoliubov-de Gennes (BdG) equations self-consistently. The results account reasonably well for the 2D SC state observed on the surface of Sb$_{2}$Te$_{3}$ under magnetic fields of up to 3 T \cite{Zhao15}, revealing strong type-II superconductivity at unusually low carrier density and small cyclotron effective mass, which can be realized only in the strong-coupling ($\lambda \sim 1$) superconductor limit. This unique situation is due to the proximity of the Fermi energy to a Dirac point, which implies that other materials in the emerging field of surface superconductivity, with metallic surface states and a Dirac dispersion law around the Fermi energy, can show similar features.

It should be noted, however, that the use of the standard LL spectrum, arising from a parabolic band structure, in the self-consistent BdG theory presented in Ref.~\cite{ZDMPRB17} was done heuristically, without actual derivation from the effective 2D Weyl Hamiltonian describing the helical surface states observed in these topological insulators \cite{Tkachov16},\cite{Zhao15}. Such a derivation is particularly necessary for the spin-momentum locked model under study here, since SC pairing involves certain spin-orbital correlations. Our purpose in the present paper is therefore two-fold: first, to develop the formal framework for solving the self-consistency equation for the SC order parameter in the 2D Weyl model Hamiltonian under a strong perpendicular magnetic field, and then to exploit the developed formalism in a study of the transition to superconductivity in comparison with the well-known results of the standard model \cite{Maniv01}. The SC transition in helical surface states of topological insulators, such as those reported, e.g., in Ref.~\cite{Zhao15}, is then comparatively studied with respect to both models.
It is found that, similar to the well-known solution of the linearized self-consistency equation derived by Helfand and Werthamer for the standard model \cite{HW64},\cite{ZDMPRB17}, the desired solution of the corresponding integral equation in the 2D Weyl model is greatly facilitated by initially finding analytical solutions of the eigenvalue equation for the SC order parameter. Furthermore, the calculated $H-T$ phase diagram for the Weyl model in the semiclassical limit (i.e. for LL filling factors $n_{F}>1$) can be directly mapped onto that found for the standard model, having the same Fermi surface parameters $E_{F}$ and $v$, and a cyclotron effective mass equal to $m^{\ast }=E_{F}/2v^{2}$. Significant deviations from the predicted mapping are found only for very small carrier densities, when the cyclotron energy becomes very large, the LL filling factors are smaller than unity, and the Fermi energy shrinks below the cutoff energy.
\section{The 2D spin-momentum locked (Weyl) fermion gas model}
To describe the underlying normal surface-state electron, with charge $-e$, in a topological insulator under a magnetic field $\mathbf{H}=\left( 0,0,H\right)$ (vector potential in the Landau gauge, $\mathbf{A}=\left( -Hy,0,0\right)$), we exploit the Weyl Hamiltonian \cite{Tkachov16}:
\begin{equation}
\widehat{h}\left( \mathbf{r}\right) =\hbar v\left( \widehat{\sigma }_{x}\widetilde{p}_{x}+\widehat{\sigma }_{y}\widetilde{p}_{y}\right) -E_{F}\widehat{\sigma }_{0}  \label{WeylH}
\end{equation}
with the Pauli matrices
\begin{equation*}
\widehat{\sigma }_{x}=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) ;\quad \widehat{\sigma }_{y}=\left(
\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right) ;\quad \widehat{\sigma }_{0}=\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right)
\end{equation*}
and the gauge-invariant momentum $\widetilde{\mathbf{p}}\equiv \left( -i\mathbf{\nabla }+\left( e/\hbar c\right) \mathbf{A}\right)$, such that:
\begin{widetext}
\begin{equation}
\widehat{h}\left( \mathbf{r}\right) =\left(
\begin{array}{cc}
0 & -\hbar v\frac{\partial }{\partial y}-\hbar v\left( i\frac{\partial }{\partial x}+\frac{y}{a_{H}^{2}}\right) \\
\hbar v\frac{\partial }{\partial y}-\hbar v\left( i\frac{\partial }{\partial x}+\frac{y}{a_{H}^{2}}\right) & 0
\end{array}
\right) .
\end{equation}
In these equations $v$ is the Fermi velocity and $a_{H}\equiv \sqrt{\frac{\hbar c}{eH}}$ is the magnetic length. Note that Zeeman spin-splitting is neglected with respect to the cyclotron energy in Eq.~\ref{WeylH} due to the very small cyclotron effective mass considered here.
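For later reference, we recall the standard relativistic Landau spectrum of the Hamiltonian (\ref{WeylH}): measured from the Dirac point, the eigenvalues are
\begin{equation*}
E_{n}=\mathrm{sgn}(n)\,\frac{\hbar v}{a_{H}}\sqrt{2\left\vert n\right\vert }=\mathrm{sgn}(n)\sqrt{\left\vert n\right\vert }\,\hbar \omega _{c},\qquad n=0,\pm 1,\pm 2,\ldots ,
\end{equation*}
with $\hbar \omega _{c}\equiv \sqrt{2}\hbar v/a_{H}$ as defined below. This well-known result is the origin of the poles at $\left( \omega +\mu \right) ^{2}=n$ appearing in the normal-state Green's functions derived in the following.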
The corresponding Weyl equation for the spinor $\left( \begin{array}{c} \psi _{\uparrow }\left( \mathbf{r}\right) \\ \psi _{\downarrow }\left( \mathbf{r}\right) \end{array} \right)$ takes the form:
\begin{equation}
\left(
\begin{array}{cc}
-E_{F} & -\hbar v\frac{\partial }{\partial y}-\hbar v\left( i\frac{\partial }{\partial x}+\frac{y}{a_{H}^{2}}\right) \\
\hbar v\frac{\partial }{\partial y}-\hbar v\left( i\frac{\partial }{\partial x}+\frac{y}{a_{H}^{2}}\right) & -E_{F}
\end{array}
\right) \left(
\begin{array}{c}
\psi _{\uparrow }\left( \mathbf{r}\right) \\
\psi _{\downarrow }\left( \mathbf{r}\right)
\end{array}
\right) =E\left(
\begin{array}{c}
\psi _{\uparrow }\left( \mathbf{r}\right) \\
\psi _{\downarrow }\left( \mathbf{r}\right)
\end{array}
\right)  \label{WeylEq}
\end{equation}
Expressing all length variables in units of the magnetic length $a_{H}$, and introducing the dimensionless energy variable $\mu \equiv \frac{E_{F}a_{H}}{\sqrt{2}\hbar v}$, where all other energy symbols refer in what follows to quantities measured in units of the cyclotron energy $\hbar \omega _{c}\equiv \frac{\sqrt{2}\hbar v}{a_{H}}$, we write the corresponding mean-field Hamiltonian for singlet pairing in the Nambu representation:
\begin{equation}
\widehat{H}=\left(
\begin{array}{cc}
\frac{1}{\sqrt{2}}\widehat{\boldsymbol{\mathbf{\sigma }}}\cdot \widetilde{\mathbf{p}}-\mu & i\widehat{\sigma }_{y}\Delta \left( \mathbf{r}\right) \\
-i\widehat{\sigma }_{y}\Delta ^{\ast }\left( \mathbf{r}\right) & -\frac{1}{\sqrt{2}}\widehat{\boldsymbol{\mathbf{\sigma }}}^{\ast }\cdot \widetilde{\mathbf{p}}^{\ast }+\mu
\end{array}
\right)  \label{NambuH}
\end{equation}
where the spin-singlet order parameter is defined by $\Delta ^{\ast }\left( \mathbf{r}\right) \equiv -\left( \left\vert V\right\vert /\hbar \omega _{c}\right) \left\langle \psi _{\downarrow }^{\dagger }\left( \mathbf{r}\right) \psi _{\uparrow }^{\dagger }\left( \mathbf{r}\right) \right\rangle \equiv \Delta _{\uparrow \downarrow }^{\ast }\left( \mathbf{r}\right) =-\Delta _{\downarrow \uparrow }^{\ast }\left( \mathbf{r}\right)$ \cite{InducedTriplet} and $\widehat{\boldsymbol{\mathbf{\sigma }}}\equiv \left( \sigma _{x},\sigma _{y}\right)$.
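Written out, the pairing block in Eq. (\ref{NambuH}) is simply
\begin{equation*}
i\widehat{\sigma }_{y}\,\Delta \left( \mathbf{r}\right) =\left(
\begin{array}{cc}
0 & \Delta \left( \mathbf{r}\right) \\
-\Delta \left( \mathbf{r}\right) & 0
\end{array}
\right) ,
\end{equation*}
i.e. the order parameter couples only opposite spin projections, with the antisymmetry $\Delta _{\uparrow \downarrow }=-\Delta _{\downarrow \uparrow }$ required of a spin singlet.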
The corresponding Nambu field operators
\begin{equation*}
\widehat{\Psi }\left( \mathbf{r};t\right) \equiv \left[
\begin{array}{c}
\psi _{\uparrow }\left( \mathbf{r};t\right) \\
\psi _{\downarrow }\left( \mathbf{r};t\right) \\
\psi _{\uparrow }^{\dagger }\left( \mathbf{r};t\right) \\
\psi _{\downarrow }^{\dagger }\left( \mathbf{r};t\right)
\end{array}
\right] ,\quad \widehat{\Psi }^{\dagger }\left( \mathbf{r};t\right) \equiv \left[
\begin{array}{cccc}
\psi _{\uparrow }^{\dagger }\left( \mathbf{r};t\right) & \psi _{\downarrow }^{\dagger }\left( \mathbf{r};t\right) & \psi _{\uparrow }\left( \mathbf{r};t\right) & \psi _{\downarrow }\left( \mathbf{r};t\right)
\end{array}
\right]
\end{equation*}
satisfy the equation of motion $i\partial _{t}\widehat{\Psi }\left( \mathbf{r};t\right) =\widehat{H}\widehat{\Psi }\left( \mathbf{r};t\right)$, resulting in the corresponding equation for the $4\times 4$ matrix of Nambu-Gorkov time-ordered Green's functions, $\widehat{G}\left( \mathbf{r,r}^{\prime };t-t^{\prime }\right) \equiv -i\left\langle T\widehat{\Psi }\left( \mathbf{r};t\right) \widehat{\Psi }^{\dagger }\left( \mathbf{r}^{\prime };t^{\prime }\right) \right\rangle$:
\begin{equation}
\left[ i\partial _{t}-\left(
\begin{array}{cc}
\frac{1}{\sqrt{2}}\widehat{\mathbf{\sigma }}\cdot \widetilde{\mathbf{p}}-\mu & i\widehat{\sigma }_{y}\Delta \left( \mathbf{r}\right) \\
-i\widehat{\sigma }_{y}\Delta ^{\ast }\left( \mathbf{r}\right) & -\frac{1}{\sqrt{2}}\widehat{\mathbf{\sigma }}^{\ast }\cdot \widetilde{\mathbf{p}}^{\ast }+\mu
\end{array}
\right) \right] \left(
\begin{array}{cc}
\widehat{G}_{11}\left( \mathbf{r,r}^{\prime };t-t^{\prime }\right) & \widehat{G}_{12}\left( \mathbf{r,r}^{\prime };t-t^{\prime }\right) \\
\widehat{G}_{21}\left( \mathbf{r,r}^{\prime };t-t^{\prime }\right) & \widehat{G}_{22}\left( \mathbf{r,r}^{\prime };t-t^{\prime }\right)
\end{array}
\right) =\delta \left( t-t^{\prime }\right) \delta \left( \mathbf{r}-\mathbf{r}^{\prime }\right)  \label{DiffG}
\end{equation}
Time-Fourier transforming with frequency $\omega$ and rewriting Eq.~\ref{DiffG} in its integral form, the parts of these equations relevant for our purpose here are written in the form:
\begin{eqnarray}
\widehat{G}_{11}\left( \mathbf{r,r}^{\prime };\omega \right) &=&\widehat{G}_{11}^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) +\int d\mathbf{r}^{\prime \prime }\widehat{G}_{11}^{\left( 0\right) }\left( \mathbf{r,r}^{\prime \prime };\omega \right) i\widehat{\sigma }_{y}\Delta \left( \mathbf{r}^{\prime \prime }\right) \widehat{G}_{21}\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right) ,  \label{IntG11G21} \\
\widehat{G}_{21}\left( \mathbf{r,r}^{\prime };\omega \right) &=&\int d\mathbf{r}^{\prime \prime }\widehat{G}_{11}^{\left( 0\right) T}\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) i\widehat{\sigma }_{y}\Delta ^{\ast }\left( \mathbf{r}^{\prime \prime }\right) \widehat{G}_{11}\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right)  \label{IntG21G11}
\end{eqnarray}
where the upper-left block of the normal-state $2\times 2$ Green's function matrix, $\widehat{G}_{11}^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) \equiv \left(
\begin{array}{cc}
G_{\uparrow \uparrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) & G_{\uparrow \downarrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) \\
G_{\downarrow \uparrow }^{\left( 0\right) }\left(
\mathbf{r,r}^{\prime };\omega \right) & G_{\downarrow \downarrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right)
\end{array}
\right)$, satisfies the equation:
\begin{equation}
\left( \omega +\mu -\frac{1}{\sqrt{2}}\widehat{\mathbf{\sigma }}\cdot \widetilde{\mathbf{p}}\right) \widehat{G}_{11}^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) =\delta \left( \mathbf{r}-\mathbf{r}^{\prime }\right)  \label{DiffGN11}
\end{equation}
and its transpose with frequency $-\omega$, $\widehat{G}_{11}^{\left( 0\right) T}\left( \mathbf{r}^{\prime }\mathbf{,r};-\omega \right)$, satisfies the dual equation:
\begin{equation}
\left( \omega -\mu +\frac{1}{\sqrt{2}}\mathbf{\sigma }^{\ast }\cdot \widetilde{\mathbf{p}}^{\ast }\right) \widehat{G}_{11}^{\left( 0\right) T}\left( \mathbf{r}^{\prime }\mathbf{,r};-\omega \right) =-\delta \left( \mathbf{r}-\mathbf{r}^{\prime }\right)  \label{DiffGN11T}
\end{equation}
Expanding the above normal-state Green's functions in terms of the complete set of solutions, $\varphi _{n}\left( y-k_{x}\right) =\frac{1}{\pi ^{1/4}\sqrt{2^{n}n!}}e^{-\frac{1}{2}\left( y-k_{x}\right) ^{2}}H_{n}\left( y-k_{x}\right)$, of the eigenstate equation $\frac{1}{2}\left[ -\partial _{y}^{2}+\left( y-k_{x}\right) ^{2}-1\right] \varphi _{n}\left( y-k_{x}\right) =n\varphi _{n}\left( y-k_{x}\right)$, where $H_{n}\left( y\right)$ is the Hermite polynomial of order $n=0,1,2,...$, we find:
\begin{equation}
G_{\uparrow \uparrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) =\frac{1}{L_{x}}\sum\limits_{k_{x}}e^{ik_{x}\left( x-x^{\prime }\right) }\sum\limits_{n=0}^{\infty }\frac{\left( \omega +\mu \right) \varphi _{n}\left( y-k_{x}\right) \varphi _{n}\left( y^{\prime }-k_{x}\right) }{\left( \omega +\mu \right) ^{2}-\left( n+1\right) }  \label{gUU}
\end{equation}
\begin{equation}
G_{\uparrow \downarrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) =\frac{1}{L_{x}}\sum\limits_{k_{x}}e^{ik_{x}\left( x-x^{\prime }\right) }\sum\limits_{n=0}^{\infty }\frac{\left( -\sqrt{n}\right) \varphi _{n-1}\left( y-k_{x}\right) \varphi _{n}\left( y^{\prime }-k_{x}\right) }{\left( \omega +\mu \right) ^{2}-n}  \label{gUD}
\end{equation}
\begin{equation}
G_{\downarrow \uparrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) =\frac{1}{L_{x}}\sum\limits_{k_{x}}e^{ik_{x}\left( x-x^{\prime }\right) }\sum\limits_{n=0}^{\infty }\frac{\left( -\sqrt{n+1}\right) \varphi _{n+1}\left( y-k_{x}\right) \varphi _{n}\left( y^{\prime }-k_{x}\right) }{\left( \omega +\mu \right) ^{2}-\left( n+1\right) }  \label{gDU}
\end{equation}
\begin{equation}
G_{\downarrow \downarrow }^{\left( 0\right) }\left( \mathbf{r,r}^{\prime };\omega \right) =\frac{1}{L_{x}}\sum\limits_{k_{x}}e^{ik_{x}\left( x-x^{\prime }\right) }\sum\limits_{n=0}^{\infty }\frac{\left( \omega +\mu \right) \varphi _{n}\left( y-k_{x}\right) \varphi _{n}\left( y^{\prime }-k_{x}\right) }{\left( \omega +\mu \right) ^{2}-n}  \label{gDD}
\end{equation}
where $L_{x}$ is the surface size along the x-axis (measured in units of $a_{H}$).
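The pole structure of Eqs. (\ref{gUU})--(\ref{gDD}), with denominators vanishing at $\left( \omega +\mu \right) ^{2}=n$, can be checked numerically by diagonalizing the dimensionless kinetic term $\frac{1}{\sqrt{2}}\widehat{\boldsymbol{\mathbf{\sigma }}}\cdot \widetilde{\mathbf{p}}$ in a truncated Landau basis, where its off-diagonal blocks are (up to sign conventions) the ladder operators. A minimal Python sketch:
\begin{verbatim}
# Verify that the Weyl Landau levels are E = +/- sqrt(n) in units of
# hbar*omega_c: build H = [[0, -a], [-a^dag, 0]] with the ladder operator a
# truncated to N basis states and diagonalize.
import numpy as np

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # a|n> = sqrt(n)|n-1>
H = np.block([[np.zeros((N, N)), -a],
              [-a.T,             np.zeros((N, N))]])

E = np.linalg.eigvalsh(H)
E_pos = np.sort(E[E > -1e-12])[:6]
print(np.round(E_pos**2, 6))   # -> [0, 1, 2, 3, 4, 5] up to truncation
\end{verbatim}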
A leading-order expansion of the integral equation Eq.~\ref{IntG21G11} in the order parameter $\Delta \left( \mathbf{r}\right)$ yields:
\begin{equation*}
\widehat{G}_{21}\left( \mathbf{r,r}^{\prime };\omega \right) \equiv \left(
\begin{array}{cc}
F_{\uparrow \uparrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right) & F_{\uparrow \downarrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right) \\
F_{\downarrow \uparrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right) & F_{\downarrow \downarrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right)
\end{array}
\right) =\int d\mathbf{r}^{\prime \prime }\widehat{G}_{11}^{\left( 0\right) T}\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) i\sigma _{y}\Delta ^{\ast }\left( \mathbf{r}^{\prime \prime }\right) \widehat{G}_{11}^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right)
\end{equation*}
so that for the anomalous Green's functions $F_{\downarrow \uparrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right)$ and $F_{\uparrow \downarrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right)$ we find, respectively:
\begin{equation}
F_{\downarrow \uparrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right) =\int d\mathbf{r}^{\prime \prime }\Delta ^{\ast }\left( \mathbf{r}^{\prime \prime }\right) \left[ G_{\uparrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) G_{\downarrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right) -G_{\downarrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) G_{\uparrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right) \right]  \label{FDU1CC}
\end{equation}
\begin{equation}
F_{\uparrow \downarrow }^{+}\left( \mathbf{r,r}^{\prime };\omega \right) =\int d\mathbf{r}^{\prime \prime }\Delta ^{\ast }\left( \mathbf{r}^{\prime \prime }\right) \left[ G_{\uparrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) G_{\downarrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right) -G_{\downarrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r};-\omega \right) G_{\uparrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime \prime }\mathbf{,r}^{\prime };\omega \right) \right]  \label{FUD1CC}
\end{equation}
The self-consistency condition for the singlet SC order parameter, in the imaginary (Matsubara) frequency representation, $\omega _{\nu }=\left( 2\nu +1\right) \pi \tau$, with $\tau \equiv k_{B}T/\hbar \omega _{c}$ and $\nu =0,\pm 1,\pm 2,...$, reads:
\begin{equation}
\Delta ^{\ast }\left( \mathbf{r}\right) =-\left( \left\vert V\right\vert /\hbar \omega _{c}\right) \left\langle \widehat{\psi }_{\downarrow }^{\dagger }\left( \mathbf{r};\tau \right) \widehat{\psi }_{\uparrow }^{\dagger }\left( \mathbf{r};\tau \right) \right\rangle =\left( \left\vert V\right\vert /\hbar \omega _{c}\right) \tau \sum\limits_{\nu =-\infty }^{\infty }F_{\downarrow \uparrow }^{+}\left( \mathbf{r,r};\omega _{\nu }\right) ,\quad \Delta ^{\ast }\left( \mathbf{r}\right) \equiv \Delta _{\uparrow \downarrow }^{\ast }\left( \mathbf{r}\right) =-\Delta _{\downarrow \uparrow }^{\ast }\left( \mathbf{r}\right)  \label{SCF}
\end{equation}
where $V$ is the effective electron-electron interaction potential responsible for the pairing instability.
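Since the fermionic Matsubara frequencies satisfy $\omega _{-\nu -1}=-\omega _{\nu }$, the frequency sum in Eq. (\ref{SCF}) pairs each frequency with its negative,
\begin{equation*}
\tau \sum\limits_{\nu =-\infty }^{\infty }g\left( \omega _{\nu }\right) =\tau \sum\limits_{\nu =0}^{\infty }\left[ g\left( \omega _{\nu }\right) +g\left( -\omega _{\nu }\right) \right] ,
\end{equation*}
a standard bookkeeping identity recalled here because, for the kernels constructed below, the two partners are complex conjugates of one another, so that combinations written as a term plus its complex conjugate (``$+CC$'') are manifestly real.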
Note that, unlike the dimensionless quantity $\Delta ^{\ast }\left( \mathbf{r}\right)$, both $V$ and $k_{B}T$ are expressed here in their absolute energy dimensions. Thus, to leading order in $\Delta ^{\ast }\left( \mathbf{r}\right)$, Eq.~\ref{SCF} takes the form:
\begin{equation}
\Delta ^{\ast }\left( \mathbf{r}\right) =\left( \left\vert V\right\vert /\hbar \omega _{c}\right) \int d\mathbf{r}^{\prime }\Delta ^{\ast }\left( \mathbf{r}^{\prime }\right) Q\left( \mathbf{r}^{\prime }\mathbf{,r}\right)  \label{SCDel}
\end{equation}
where the kernel $Q\left( \mathbf{r}^{\prime }\mathbf{,r}\right)$ is given by:
\begin{eqnarray}
Q\left( \mathbf{r}^{\prime }\mathbf{,r}\right) &=&\tau \sum\limits_{\nu =-\infty }^{\infty }Q\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) ,  \notag \\
Q\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) &=&Q_{\downarrow \uparrow }^{\uparrow \downarrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) -Q_{\downarrow \downarrow }^{\uparrow \uparrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right)  \label{QUDDU-QUUDD}
\end{eqnarray}
with:
\begin{eqnarray*}
Q_{\downarrow \uparrow }^{\uparrow \downarrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) &=&G_{\downarrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime }\mathbf{,r};-\omega _{\nu }\right) G_{\uparrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) , \\
Q_{\downarrow \downarrow }^{\uparrow \uparrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) &=&G_{\downarrow \downarrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime }\mathbf{,r};-\omega _{\nu }\right) G_{\uparrow \uparrow }^{\left( 0\right) }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right)
\end{eqnarray*}
Exploiting Eqs.~\ref{gUU}-\ref{gDD} for the normal-state Green's functions we derive the following explicit expressions for the kernels:
\begin{eqnarray}
Q_{\downarrow \downarrow }^{\uparrow \uparrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) &=&\left( \frac{1}{2\pi }\right) ^{2}\sum\limits_{n,n^{\prime }=0}^{\infty }\frac{\left( -i\omega _{\nu }+\mu \right) \left( i\omega _{\nu }+\mu \right) }{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] \left[ \left( i\omega _{\nu }+\mu \right) ^{2}-\left( n^{\prime }+1\right) \right] }\times  \notag \\
&&e^{i\left( x^{\prime }-x\right) \left( y+y^{\prime }\right) }e^{-\frac{1}{2}\rho ^{2}}L_{n^{\prime }}\left( \frac{1}{2}\rho ^{2}\right) L_{n}\left( \frac{1}{2}\rho ^{2}\right)  \label{QUUDD}
\end{eqnarray}
\begin{eqnarray}
Q_{\downarrow \uparrow }^{\uparrow \downarrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right) &=&-\frac{1}{2}\left( \frac{1}{2\pi }\right) ^{2}\sum\limits_{n,n^{\prime }=1}^{\infty }\frac{1}{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] }\frac{1}{\left[ \left( i\omega _{\nu }+\mu \right) ^{2}-n^{\prime }\right] }\times  \notag \\
&&e^{i\left( x^{\prime }-x\right) \left( y+y^{\prime }\right) }e^{-\frac{1}{2}\rho ^{2}}\rho ^{2}L_{n}^{\prime }\left( \frac{1}{2}\rho ^{2}\right) L_{n^{\prime }}^{\prime }\left( \frac{1}{2}\rho ^{2}\right) ,  \label{QUDDU}
\end{eqnarray}
where $\rho =\left\vert \mathbf{r}-\mathbf{r}^{\prime }\right\vert$.
Similar to the situation in the standard, single band (with quadratic energy dispersion) 2D electron system, the order parameter of the Landau orbital form $\Delta ^{\ast }\left( \mathbf{r}\right) \propto e^{-2iq_{x}x}\varphi _{m}\left[ \sqrt{2}% \left( y-q_{x}\right) \right] ,m=0,1,2,...$, is an eigenfunction of the integral operator $\int d\mathbf{r}^{\prime }Q\left( \mathbf{r}^{\prime }% \mathbf{,r}\right) ...,$ that is: \begin{equation} \int d\mathbf{r}^{\prime }Q\left( \mathbf{r}^{\prime }\mathbf{,r}\right) \Delta ^{\ast }\left( \mathbf{r}^{\prime }\right) =A\Delta ^{\ast }\left( \mathbf{r}\right) \label{EigenDel} \end{equation} This important result is obtained by showing that the above Landau orbital is also an eigenfunction of the integral operators $k_{B}T\sum \limits_{\nu =-\infty }^{\infty }\int d\mathbf{r}^{\prime }Q_{\downarrow \downarrow }^{\uparrow \uparrow }\left( \mathbf{r}^{\prime }\mathbf{,r};\omega _{\nu }\right)... , k_{B}T\sum \limits_{\nu =-\infty }^{\infty }\int d\mathbf{r}% ^{\prime }Q_{\downarrow \uparrow }^{\uparrow \downarrow }\left( \mathbf{r}% ^{\prime }\mathbf{,r};\omega _{\nu }\right)... $, so that, by exploiting Eqs.% \ref{QUUDD},\ref{QUDDU}, the eigenvalue $A$ in Eq.\ref{EigenDel} can be written in terms of the respective eigenvalues, $A_{\downarrow \uparrow }^{\uparrow \downarrow },A_{\downarrow \downarrow }^{\uparrow \uparrow }$ , as: \begin{equation} A=A_{\downarrow \uparrow }^{\uparrow \downarrow }-A_{\downarrow \downarrow }^{\uparrow \uparrow } \label{A} \end{equation}% where: \bigskip \begin{eqnarray} A_{\downarrow \uparrow }^{\uparrow \downarrow } &=&-\tau \sum \limits_{\nu =-\infty }^{\infty }\sum \limits_{n,n^{\prime }=1}^{\infty }\frac{1}{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] }\frac{1}{\left[ \left( i\omega _{\nu }+\mu \right) ^{2}-n^{\prime }\right] }\times \label{AUDDU} \\ &&\frac{1}{2}\left( \frac{1}{2\pi }\right) \int_{0}^{\infty }\rho ^{3}d\rho e^{-\rho ^{2}}L_{n^{\prime }}^{\prime }\left( \frac{1}{2}\rho ^{2}\right) L_{n}^{\prime }\left( \frac{1}{2}\rho ^{2}\right) \notag \end{eqnarray} \bigskip \begin{eqnarray} A_{\downarrow \downarrow }^{\uparrow \uparrow } &=&\tau \sum \limits_{\nu =-\infty }^{\infty }\sum \limits_{n,n^{\prime }=0}^{\infty }\frac{\left( -i\omega _{\nu }+\mu \right) \left( i\omega _{\nu }+\mu \right) }{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] \left[ \left( i\omega _{\nu }+\mu \right) ^{2}-\left( n^{\prime }+1\right) \right] }\times \label{AUUDD} \\ &&\left( \frac{1}{2\pi }\right) \int_{0}^{\infty }\rho d\rho e^{-\rho ^{2}}L_{n^{\prime }}\left( \frac{1}{2}\rho ^{2}\right) L_{n}\left( \frac{1}{2% }\rho ^{2}\right) \notag \end{eqnarray} and $\ \tau \equiv \frac{k_{B}T}{\hbar \omega _{c}}$.\bigskip Performing the integration over $\rho $ in both Eqs. 
\ref{AUDDU} and \ref% {AUUDD} our results for the eigenvalues $A_{\downarrow \uparrow }^{\uparrow \downarrow },A_{\downarrow \downarrow }^{\uparrow \uparrow }$ take the forms: \bigskip \begin{equation} A_{\downarrow \uparrow }^{\uparrow \downarrow }\equiv A_{0}=-\left( \frac{1}{% 2\pi }\right) \tau \sum \limits_{\nu =-\infty }^{\infty }\sum \limits_{n,n^{\prime }=1}^{\infty }\frac{\left( n+n^{\prime }\right) !}{% 2^{n+n^{\prime }}n^{\prime }!n!}\frac{\left( \frac{n^{\prime }n}{n+n^{\prime }}\right) }{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] \left[ \left( i\omega _{\nu }+\mu \right) ^{2}-n^{\prime }\right] } \label{AUDDUf} \end{equation} \bigskip \begin{eqnarray} A_{\downarrow \downarrow }^{\uparrow \uparrow } &=&A_{1}+\Delta A_{1}, \label{AUUDDf} \\ A_{1} &=&\left( \frac{1}{4\pi }\right) \tau \sum \limits_{\nu =-\infty }^{\infty }\sum \limits_{n,n^{\prime }=1}^{\infty }\frac{\left( n+n^{\prime }\right) !}{2^{n+n^{\prime }}n!n^{\prime }!}\frac{\left( -i\omega _{\nu }+\mu \right) \left( i\omega _{\nu }+\mu \right) }{\left[ \left( -i\omega _{\nu }+\mu \right) ^{2}-n\right] \left[ \left( i\omega _{\nu }+\mu \right) ^{2}-n^{\prime }\right] }, \label{A1} \\ \Delta A_{1} &=&\left( \frac{1}{4\pi }\right) \tau \sum \limits_{\nu =-\infty }^{\infty }\sum \limits_{n=1}^{\infty }\frac{1}{2^{n}}\left \{ \frac{\left( i\omega _{\nu }+\mu \right) }{\left( -i\omega _{\nu }+\mu \right) \left[ \left( i\omega _{\nu }+\mu \right) ^{2}-n\right] }+CC\right \} \label{DA1} \end{eqnarray} \end{widetext} Note that the pairing correlations of electrons, preserving their spin-up projection, with electrons preserving their spin-down projection, as expressed by $A_{\downarrow \downarrow }^{\uparrow \uparrow }$ in Eq.\ref% {AUUDDf}, include contributions of correlations of the zero LL with nonzero LLs, as reflected by $\Delta A_{1}$ in Eq.\ref{DA1}. On the other hand, the pairing correlations of electrons, flipping their spin-up to spin-down projections, due to spin orbit coupling, with electrons flipping their spin-down to spin-up projections, as expressed by $A_{\downarrow \uparrow }^{\uparrow \downarrow }$ in Eq.\ref{AUDDUf}, do not include any contributions involving zero LL states. Combining the self-consistency integral equation, Eq.\ref{SCF}, with the eigenvalue equation (for $A=A_{\downarrow \uparrow }^{\uparrow \downarrow }-A_{\downarrow \downarrow }^{\uparrow \uparrow }$), Eq.\ref{EigenDel}, the former reduces to the simple algebraic equation, $1=\left \vert V\right \vert A $. 
Performing the summation over the Matsubara frequencies $\omega _{\nu }$ we arrive at the following expression for $\left \vert V\right \vert A$:
\begin{widetext}
\begin{eqnarray}
\left \vert V\right \vert A &=&\frac{1}{32}\frac{\lambda }{\sqrt{n_{F}}}\sum \limits_{i,j=1}^{2}\sum \limits_{n=N_{l}^{\left( i\right) }}^{N_{u}^{\left( i\right) }}\sum \limits_{m=N_{l}^{\left( j\right) }}^{N_{u}^{\left( j\right) }}\frac{\left( m+n\right) !}{2^{m+n}n!m!}I_{nm}^{\left( ij\right) },  \label{Af} \\
I_{nm}^{\left( ij\right) } &=&\frac{\left[ \left( -1\right) ^{j}\sqrt{m}+\left( -1\right) ^{i}\sqrt{n}\right] ^{2}}{n+m}\frac{\tanh \left( \frac{\mu +\left( -1\right) ^{j}\sqrt{m}}{2\tau }\right) +\tanh \left( \frac{\mu +\left( -1\right) ^{i}\sqrt{n}}{2\tau }\right) }{2\mu +\left( -1\right) ^{j}\sqrt{m}+\left( -1\right) ^{i}\sqrt{n}},\ \ I_{00}^{\left( ij\right) }=0  \notag
\end{eqnarray}
\end{widetext}
where:
\begin{equation*}
\lambda =\frac{\sqrt{n_{F}}\left \vert V\right \vert }{\pi a_{H}^{2}\hbar \omega _{c}}=\left \vert V\right \vert N\left( E_{F}\right) =\left \vert V\right \vert \left( \frac{m^{\ast }}{2\pi \hbar ^{2}}\right) ,\quad n_{F}\equiv \mu ^{2}
\end{equation*}
with $N\left( E_{F}\right) =\frac{E_{F}}{2\pi \left( \hbar v\right) ^{2}}$ being the single-electron density of states per spin projection per unit area and $m^{\ast }=\frac{E_{F}}{v^{2}}$ the effective cyclotron mass at the Fermi energy.

The different cutoff LL indices $N_{u}^{\left( i\right) }$ ($N_{l}^{\left( i\right) }$), indicated in Eq.\ref{Af}, refer to the different branches, i.e. the conduction (positive) or valence (negative) energy bands of the Weyl model contributing to the pairing correlation. The different values arise because the cutoff is introduced to the electron energy, by the mediating electron-phonon interaction, relative to the Fermi energy, rather than relative to the branching-point (zero) energy of the Weyl band structure. Thus, assuming a (Debye) cutoff energy $\hbar \omega _{D}$, we should distinguish between two different situations. In the usual situation, where $\hbar \omega _{D}<E_{F}$, pairing takes place only in a single band, so that, e.g. for a positive chemical potential, we find: $N_{u}^{\left( 1\right) }=\left[ n_{F}\left( 1+\gamma \right) ^{2}\right] $, $N_{l}^{\left( 1\right) }=\left[ n_{F}\left( 1-\gamma \right) ^{2}\right] $, $N_{u}^{\left( 2\right) }=N_{l}^{\left( 2\right) }=0$, where $\gamma =\hbar \omega _{D}/E_{F}$. In the unusual situation, where the cutoff energy $\hbar \omega _{D}>E_{F}$, both inter- and intra-band pairing take place, so that the cutoff LL indices are different for energies in the valence (V) and conduction (C) bands. Thus, for CB pairing (corresponding to the energy denominator $2\mu -\sqrt{m}-\sqrt{n}$ in Eq.\ref{Af}), we have: $N_{u}^{\left( 1\right) }=\left[ n_{F}\left( 1+\gamma \right) ^{2}\right] ,N_{l}^{\left( 1\right) }=0$. For the interband pairing (energy denominators $2\mu -\sqrt{m}+\sqrt{n}$, or $2\mu +\sqrt{m}-\sqrt{n}$ in Eq.\ref{Af}) the cutoff indices are: $N_{u}^{\left( 1\right) }=\left[ n_{F}\left( 1+\gamma \right) ^{2}\right] ,N_{l}^{\left( 1\right) }=0$, or: $N_{u}^{\left( 2\right) }=\left[ n_{F}\left( \gamma -1\right) ^{2}\right] ,N_{l}^{\left( 2\right) }=0$, respectively.
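For orientation, the double sum in Eq.\ref{Af} can be evaluated directly. A minimal numerical sketch for the usual single-band situation $\hbar \omega _{D}<E_{F}$, where only the $i=j=1$ (CB) branch contributes, is given below; the parameter values are illustrative assumptions, the factorial weight is computed with log-gamma functions for numerical stability, and the removable singularity of $I_{nm}^{\left( 11\right) }$ at $2\mu =\sqrt{m}+\sqrt{n}$ is handled explicitly:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def VA_weyl(lam, mu, tau, gamma):
    """|V|A of Eq. (Af) for hbar*omega_D < E_F: i = j = 1 (CB) branch only."""
    nF = mu**2
    n_lo = max(int(nF * (1 - gamma)**2), 1)   # N_l^(1)
    n_hi = int(nF * (1 + gamma)**2)           # N_u^(1)
    total = 0.0
    for n in range(n_lo, n_hi + 1):
        for m in range(n_lo, n_hi + 1):
            # weight (m+n)!/(2^(m+n) n! m!), via log-gamma to avoid overflow
            w = np.exp(gammaln(m + n + 1) - gammaln(n + 1) - gammaln(m + 1)
                       - (m + n)*np.log(2.0))
            num = (np.sqrt(m) + np.sqrt(n))**2 / (n + m)
            den = 2*mu - np.sqrt(m) - np.sqrt(n)
            if abs(den) > 1e-10:
                th = (np.tanh((mu - np.sqrt(m))/(2*tau))
                      + np.tanh((mu - np.sqrt(n))/(2*tau))) / den
            else:
                # removable singularity: degenerate pair at the Fermi level
                th = 1.0/(2*tau*np.cosh((mu - np.sqrt(m))/(2*tau))**2)
            total += w * num * th
    return lam/(32*np.sqrt(nF)) * total

# illustrative values: n_F = mu^2 = 10, gamma = hbar*omega_D/E_F = 0.5
print(VA_weyl(lam=1.0, mu=np.sqrt(10.0), tau=0.02, gamma=0.5))
\end{verbatim}
Sweeping $\mu $ and $\tau $ with the magnetic field, through $n_{F}\propto 1/H$ and $\tau \propto t/\sqrt{h}$, is, in essence, the computation behind the Weyl-model curves of Fig.2 below.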
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\textwidth]{fig1.png}
\caption{Schematic illustration of the Weyl model band structure for a positive chemical potential, smaller than the cutoff energy, showing a pair of Landau levels in both subbands at the cutoff energy measured from the Fermi energy.}
\end{figure}
\bigskip
\section{Comparison with the standard (nonrelativistic) electron gas model: The semiclassical approximation}
\bigskip
A useful reference model, for comparison with the 2D Weyl model developed above, starts with a nonrelativistic electron gas, characterized by a quadratic single-electron energy-momentum dispersion, $E=\frac{\hbar ^{2}k^{2}}{2m_{S}^{\ast }}$, with band effective mass, $m_{S}^{\ast }$, set equal to $\frac{1}{2}m_{0}^{\ast }=\frac{E_{F0}}{2v^{2}}$, where $E_{F0}$ is the Fermi energy in the Weyl model at a certain doping level, to be determined in reference to a concrete experiment. Under these assumptions both the Fermi energy, $E_{F0}$, and the Fermi wave number, $k_{F0}$, are the same in both models:
\begin{equation}
E_{F0}=\frac{\hbar ^{2}k_{F0}^{2}}{2m_{S}^{\ast }}=\hbar vk_{F0}  \label{EF}
\end{equation}
In a perpendicular magnetic field $\mathbf{H=}\left( 0,0,H\right) $ the cyclotron frequency, $\omega _{c}^{S}\equiv \left( \frac{eH}{m_{S}^{\ast }c}\right) $, is related to the Weyl cyclotron frequency, $\omega _{c}^{W}\equiv \frac{\sqrt{2}v}{a_{H}}$, via:
\begin{equation}
\omega _{c}^{W}=2\sqrt{n_{F0}}\left( \frac{eH}{m_{0}^{\ast }c}\right) =\sqrt{n_{F0}}\omega _{c}^{S}  \label{wc}
\end{equation}
where in both models
\begin{equation}
n_{F0}\equiv \frac{E_{F0}}{\hbar \omega _{c}^{S}}\equiv \left( \frac{E_{F0}}{\hbar \omega _{c}^{W}}\right) ^{2}=\frac{\left( a_{H}k_{F0}\right) ^{2}}{2}  \label{nF0}
\end{equation}
\bigskip
Using the set of parameters defined above, the well-known expression for the pairing energy eigenvalue obtained in the standard model takes the form:
\begin{widetext}
\begin{equation}
\left \vert V\right \vert A_{S}=\frac{1}{4}\lambda _{S}\sum \limits_{m,n=n_{F0}\left( 1-\gamma _{0}\right) }^{n_{F0}\left( 1+\gamma _{0}\right) }\frac{\left( m+n\right) !}{2^{m+n}n!m!}\frac{\tanh \left( \frac{\mu _{0}-n-1/2}{2\tau _{S}}\right) +\tanh \left( \frac{\mu _{0}-m-1/2}{2\tau _{S}}\right) }{2\mu _{0}-n-m-1}  \label{AS0}
\end{equation}
\end{widetext}
where $\mu _{0}=n_{F0},\tau _{S}=\frac{k_{B}T}{\hbar \omega _{c}^{S}},\lambda _{S}\equiv \left \vert V\right \vert \left( \frac{m_{S}^{\ast }}{2\pi \hbar ^{2}}\right) $, and $\gamma _{0}=\hbar \omega _{D}/E_{F0}$. The semiclassical limit of our theory in the Weyl model is basically established at sufficiently small magnetic fields, for which the LL index at the Fermi energy, $n_{F}$, is sufficiently large compared to unity. Thus, assuming that $n_{F}\gg 1$, we may expand the CB energy appearing in the dominant contribution to $A$ (i.e.
$I_{nm}^{\left( 11\right) }$) in Eq.\ref% {Af} around $m=n_{F}$ , or $n=n_{F}$, e.g.: $\ \sqrt{m}\approx \mu +\frac{1}{% 2\sqrt{n_{F}}}\left( m-n_{F}\right) $, $\sqrt{n}\approx \mu +\frac{1}{2\sqrt{% n_{F}}}\left( n-n_{F}\right) $, \ such that to leading order: $% I_{nm}^{\left( 11\right) }\approx 4\sqrt{n_{F}}\frac{\tanh \left( \frac{% n_{F}-m}{2\left( 2\tau \sqrt{n_{F}}\right) }\right) +\tanh \left( \frac{% n_{F}-n}{2\left( 2\tau \sqrt{n_{F}}\right) }\right) }{2n_{F}-m-n}$, and: \begin{widetext} \begin{equation} \left \vert V\right \vert A_{W}\approx \left \vert V\right \vert A_{W}^{SC}=% \frac{1}{8}\lambda \sum \limits_{m,n=N_{l}^{\left( 1\right) }}^{N_{u}^{\left( 1\right) }}\frac{\left( m+n\right) !}{2^{m+n}n!m!}\frac{% \tanh \left( \frac{n_{F}-m}{2\left( 2\sqrt{n_{F}}\tau _{W}\right) }\right) +\tanh \left( \frac{n_{F}-n}{2\left( 2\sqrt{n_{F}}\tau _{W}\right) }\right) }{2n_{F}-m-n} \label{ASC} \end{equation} \end{widetext} Note that, for $E_{F}=E_{F0}$, $A_{W}$ in Eq.\ref{ASC} is seen to be close to $\frac{1}{2}A_{S}$ in Eq.\ref{AS0}, provided the dimensionless temperature scale $\tau _{W}\equiv \frac{k_{B}T}{\hbar \omega _{c}^{W}}$ is rescaled by the factor $2\sqrt{n_{F0}}$. \ In fact, the rescaled value, $2% \sqrt{n_{F0}}\tau _{W}=\frac{k_{B}T}{\hbar eH/m_{0}^{\ast }c}=2\left( \frac{% k_{B}T}{\hbar \omega _{c}^{S}}\right) =2\tau _{S}$ , is consistent with Eq.% \ref{wc}. It should be noted that the dimensionless zero-point energy, $1/2$ in Eq.\ref{AS0}, characterizing the standard model, does not make any difference since it can always be absorbed into the chemical potential $\mu $% . \ The factor of $\frac{1}{2}$ between the expressions \ref{ASC} and \ref% {AS0} is due to the spin-momentum locking, inherent to the Weyl model, and the consequent splitting of its spectrum into positive and negative energy subbands, as compared to the single band of the standard spectrum. There is, however, an essential difference between the two models, and that is the cyclotron effective mass in the Weyl model is a function of the Fermi energy, whereas in the standard model it is a constant. The above comparison is, therefore, drastically modified in the ultimate quantum limit, when together with the doping level, the Fermi energy tends to zero, and the prefactor $\frac{\lambda }{\sqrt{n_{F}}}$ in Eq.\ref{Af} nominally diverges as $n_{F}\rightarrow 0$. \ The vanishing of $\lambda $ in the Weyl model with $E_{F}$ through the cyclotron effective mass, evidently removes this divergency, yielding: $\frac{\lambda }{\sqrt{n_{F}}}\rightarrow \frac{1}{% \sqrt{2}\pi }\frac{\left \vert V\right \vert /\hbar v}{a_{H}}% ,E_{F}\rightarrow 0$ . It will be, therefore, helpful to extend the reference model expressed in Eq.% \ref{AS0} for varying values of $E_{F}$, to account for the dependence of the parameters $n_{F}$ and $m^{\ast }$ in the Weyl model on $E_{F}$. 
This can be done by replacing $n_{F0}=\mu _{0}$ in Eq.\ref{AS0} with $n_{F}$, defined in the Weyl model by:
\begin{equation}
n_{F}\equiv \left( \frac{E_{F}}{\hbar \omega _{c}^{W}}\right) ^{2}=\frac{1}{2}\left( a_{H}k_{F}\right) ^{2}  \label{nF}
\end{equation}
so that:
\begin{widetext}
\begin{equation}
\left \vert V\right \vert A_{S}\rightarrow \frac{1}{4}\lambda _{S}\sum \limits_{m,n=n_{F}\left( 1-\gamma \right) }^{n_{F}\left( 1+\gamma \right) }\frac{\left( m+n\right) !}{2^{m+n}n!m!}\frac{\tanh \left( \frac{n_{F}-n-1/2}{2\tau _{S}}\right) +\tanh \left( \frac{n_{F}-m-1/2}{2\tau _{S}}\right) }{2n_{F}-n-m-1}  \label{AS}
\end{equation}
\end{widetext}
where $\tau _{S}=\sqrt{n_{F0}}\tau _{W}$ and $\gamma =\hbar \omega _{D}/E_{F}$. The standard coupling constant, $\lambda _{S}$, is defined by fixing the value of the cyclotron mass at $m_{S}^{\ast }$ (i.e. at a certain value of the Fermi energy $E_{F0}$): $\lambda _{S}\equiv \left \vert V\right \vert \left( m_{S}^{\ast }/2\pi \hbar ^{2}\right) =\frac{1}{2}\lambda _{0}$, so that the prefactor in Eq.\ref{Af} is rewritten in a form showing its independence of $E_{F}$:
\begin{equation}
\frac{\lambda }{\sqrt{n_{F}}}=\frac{\lambda _{0}}{\sqrt{n_{F0}}}=2\frac{\lambda _{S}}{\sqrt{n_{F0}}}  \label{pref}
\end{equation}
Using this expression in Eq.\ref{Af}, together with the semiclassical approximation that yields Expression \ref{ASC}, the pre-factor, $\lambda /8$, in the latter becomes: $\left( \lambda _{S}/4\right) \left( \frac{k_{F}}{k_{F0}}\right) $, in full agreement with Eq.\ref{AS} at the reference point $k_{F}=k_{F0}$. For doping levels away from the reference point, i.e. for $k_{F}\neq k_{F0}$, one finds the simple relation:
\begin{equation}
A_{W}^{SC}=\left( \frac{k_{F}}{k_{F0}}\right) A_{S}  \label{ASCvsAS}
\end{equation}
\begin{widetext}
\onecolumngrid
\begin{figure*}[t]
\centering
{\label{fig:a}}{\includegraphics[width=0.45\textwidth]{fig2a.png}} {\label{fig:b}}{\includegraphics[width=0.45\textwidth]{fig2b.png}} {\label{fig:c}}{\includegraphics[width=0.45\textwidth]{fig2c.png}} {\label{fig:d}}{\includegraphics[width=0.45\textwidth]{fig2d.png}}
\caption{Pairing condensation energy eigenvalue, $A$, as a function of field, $h\equiv H/H_{0}$, at temperature $t\equiv T/T_{0}=0.01$, calculated for the Weyl model, Eq.\protect \ref{Af} (red curves), and for the extended standard model, Eq.\protect \ref{AS} (blue curves), at various values of $\widetilde{n}_{F}$ ($=15$ (a), $10$ (b), $5$ (c), $0.5$ (d)). The reference parameters, $H_{0},T_{0}$, were selected in accord with the experiment, as discussed in the text. The cutoff was selected at: $\hbar \protect \omega _{D}/E_{F0}=0.5$. Note that for $\widetilde{n}_{F}<5$ the cutoff energy $\hbar \protect \omega _{D}>E_{F}$.}
\label{gene.fig1}
\end{figure*}
\twocolumngrid
\end{widetext}
\section{Mapping between the two models and their comparison with experiment}
Experimental evidence for the existence of strong type-II superconductivity in a surface state of a topological insulator under a strong magnetic field can be found in results of transport, magnetic susceptibility, de Haas van Alphen (dHvA) oscillations and scanning tunnelling spectroscopy measurements, reported recently on Sb$_{2}$Te$_{3}$ \cite{Zhao15}.
Using a simple s-wave BCS model, similar to the standard model described in Sec.3, with the experimentally observed dHvA frequency, $F_{0}=36.5$ T (implying $n_{F0}\left( H\right) =\frac{F_{0}}{H}$), and cyclotron mass $m_{0}^{\ast }=0.065m_{e}$, it was shown in \cite{ZDMPRB17} that such an unusual SC state can exist only in the strong coupling superconductor limit. In particular, the zero field limit of the self-consistent order parameter amplitude, $\Delta _{SC}\left( n_{F0}\rightarrow \infty \right) \rightarrow \hbar \omega _{D}/\sinh \left( 1/\lambda _{0}\right) $, calculated in \cite{ZDMPRB17}, was found, for $\lambda _{0}=1$ and $\hbar \omega _{D}=0.25E_{F0}$, to basically agree with the spatially averaged SC energy gap derived from the STS measurements (i.e. $\simeq 13$ meV) \cite{Zhao15}, whereas the LL filling factor, calculated at the semiclassical $H_{c2}$ ($n_{F0}\approx 14$), was found to agree with the experimentally determined field of the resistivity onset downshift $H_{R}$ ($\sim 2.5$ T, $n_{F0}\sim 14$) \cite{Zhao15}. Such an agreement between the standard model, outlined in Sec.3, and the experiment reported in \cite{Zhao15} seems to imply that the peculiar features of the helical surface state band structure distinguishing the 2D Weyl Fermion gas model from the standard model are irrelevant in constructing its high-field SC state, except for a single parameter: its unusual cyclotron effective mass, which can be dramatically modified upon variation of the chemical potential (e.g. by doping or by changing the gate voltage). The analysis presented in Sec.3 supports this conclusion for carrier densities and magnetic fields in the semiclassical limit. Here we study the relationships between the Weyl model and the extended standard model, described above, in the general parameter range, finding conditions for a complete mapping between the two models, and searching for physical situations in which they are qualitatively distinguishable. In Fig.2 we plot results for the pairing eigenvalue $A$, calculated within both the Weyl and the extended standard models, as a function of the reduced magnetic field, $h\equiv H/H_{0}$, for various values of the Fermi energy $E_{F}$. The temperature was selected sufficiently small to unfold the quantum oscillations associated with the Landau quantization.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{fig3.png}
\caption{H-T phase diagram, obtained by solving the self-consistency equation for both models at the reference point $\widetilde{n}_{F}=\widetilde{n}_{F0}=10$, on the basis of the reference parameters $H_{0},T_{0}$, as discussed in the text. The cutoff was selected at: $\hbar \protect \omega _{D}/E_{F0}=0.5$. Deviations are seen only around the dark blue area, where the Weyl phase boundary is slightly above the standard one. Mutual reentrances of the SC and N phases, due to the strong magneto-oscillation effect, are seen around the upper-left corner of the phase diagram.
}
\label{Fig3}
\end{figure}
Selecting for the reference parameters the values extracted from the transport and magneto-oscillations measurements \cite{Zhao15}, as described above: $F_{0}=36.5$ T, $H_{0}=2.5$ T, $m_{0}^{\ast }=0.065m_{e}$, and from the magnetic susceptibility measurements \cite{Zhao15} the value $T_{0}=100$ K, we define the dimensionless reference parameters $\widetilde{\tau }_{S}\equiv \left( k_{B}T_{0}/\hbar \widetilde{\omega }_{c}^{S}\right) $ and $\widetilde{\tau }_{W}\equiv \left( k_{B}T_{0}/\hbar \widetilde{\omega }_{c}^{W}\right) $, where $\widetilde{\omega }_{c}^{S}\equiv \left( eH_{0}/m_{S}^{\ast }c\right) $ and $\widetilde{\omega }_{c}^{W}\equiv \left( \sqrt{2}v/a_{H_{0}}\right) $, so that: $\tau _{S}=\widetilde{\tau }_{S}\left( t/h\right) $ and $\tau _{W}=\widetilde{\tau }_{W}\left( t/\sqrt{h}\right) $. The two scales are therefore related via: $\widetilde{\tau }_{S}=\left( \widetilde{n}_{F0}\right) ^{1/2}\widetilde{\tau }_{W}$, where $\widetilde{n}_{F0}\equiv \left( a_{H_{0}}k_{F0}\right) ^{2}/2\approx 10$. The eigenvalues $A_{W},A_{S}$, plotted in Fig.2 as functions of $h$, for various values of $\widetilde{n}_{F}\equiv \left( a_{H_{0}}k_{F}\right) ^{2}/2$, show at $\widetilde{n}_{F}=\widetilde{n}_{F0}$ complete agreement between the two models, including the fine structure of the quantum oscillations, provided $\widetilde{\tau }_{S}$ is re-scaled to $2\times \left( \widetilde{n}_{F0}\right) ^{1/2}\widetilde{\tau }_{W}$, as found in the semiclassical approximation, Eq.\ref{ASC}. Under these conditions, solutions of the self-consistency equation, $1=\left \vert V\right \vert A$, for both models yield nearly identical results for the H-T phase diagrams, as shown in Fig.3, except for a small deviation in the low-field region, due to the different ultraviolet divergency predicted by the two models. The two intersection points of the phase boundary with the axes, shown in Fig.3, are seen to be close to $t=1$ and $h=1$, thus indicating that the calculated $H_{c2}\left( T\rightarrow 0\right) $ and $T_{c}\left( H\rightarrow 0\right) $ values are close to the values of $H_{0}$ and $T_{0}$, respectively. For values of $\widetilde{n}_{F}$ away from $\widetilde{n}_{F0}$ the baseline of $A_{W}$ is shifted with respect to that of $A_{S}$, depending on whether $\widetilde{n}_{F}>\widetilde{n}_{F0}$ (shift up), or $\widetilde{n}_{F}<\widetilde{n}_{F0}$ (shift down), thus reflecting the dependence of the pairing correlation in the Weyl model on the carrier density. This behavior is consistent with the relation \ref{ASCvsAS} derived in the semiclassical limit. The oscillatory patterns remain nearly the same, except for a slight relative narrowing of the Weyl peaks upon decreasing $\widetilde{n}_{F}$, which becomes quite significant in the quantum limit, e.g. at $\widetilde{n}_{F}=0.5$ in Fig.2d. It is also remarkable that in the ultimate quantum limit, i.e. when $\widetilde{n}_{F}\rightarrow 0$, the pairing correlation in the Weyl model, despite its vanishing normal electron density of states at the Fermi energy, does not vanish.
\section{Conclusion}
In this paper we have developed a Nambu-Gorkov Green's function approach to strongly type-II superconductivity in a 2D spin-momentum locked (Weyl) Fermi gas model at high perpendicular magnetic fields, in order to study the transition to high field surface superconductivity observed recently on the topological insulator Sb$_{2}$Te$_{3}$ \cite{Zhao15}.
We have found that, for LL filling factors larger than unity, superconductivity in such a 2D Weyl Fermion gas can be mapped onto the standard 2D electron (or hole) gas model, having the same Fermi surface parameters, but with a cyclotron effective mass, $m^{\ast }=E_{F}/2v^{2}$ , which could be dramatically reduced below the free electron mass, $m_{e}$, by manipulating the doping level, or the gate voltage. \ Our calculations for Sb$_{2}$Te$_{3}$ show that the SC helical surface state reported in \cite{Zhao15} was in the moderate semiclassical range ($n_{F}\geq 10$), so justifying the mapping with the standard model. They reveal a very unusual, strong type-II superconductivity at low carrier density and small cyclotron effective mass, $m^{\ast }={0.065}% m_{e}$, which can be realized only in the strong coupling ($\lambda \sim 1$) superconductor limit\cite{ZDMPRB17}. Further reduction of the carrier density in such a system could yield an effective cyclotron energy comparable to or larger than the Fermi energy, LL filling factors smaller than unity, and cutoff energy larger than the chemical potential, resulting in significant deviations from the predictions of the standard model. Note, however, that for such a dilute fermion gas system the simple mean field BCS theoretical framework of superconductivity, exploited in this paper, should be drastically revised, particularly due to the neglect of both phase and amplitude fluctuations of the SC order parameter \cite% {Emery-Kivel-Nat95}, and to the breakdown of the adiabatic approximation in the electron phonon system \cite{GorkovPRB16}. Several recent reports on superconductivity in very dilute fermion gas systems, such as that found in compensated semimetallic FeSe \cite{KasaharaNCom16}, or in the large-gap semiconductor SrTiO$_{3}$ \cite{EdgePRL15}, have drawn much attention to fluctuation superconductivity beyond the Gaussian approximation, which could lead to crossover between weak-coupling BCS and strong-coupling Bose-Einstein condensate limits \cite{RanderiaBCS-BEC14}. In the presence of strong magnetic fields the situation is further complicated due to complex interplay between vortex and SC amplitude fluctuations \cite{ManivPRB06}. \bigskip
\section{Introduction}
The use of multilayers of superconductors with a larger critical temperature $T\msub{c}$ on top of Nb to increase the accelerating gradient $E\msub{acc}$ of superconducting radiofrequency (SRF) cavities was first proposed by Gurevich \cite{Gurevich_mulitlayers}. He suggested a structure with interleaved insulating layers to prevent flux penetration into the Nb substrate. In \cite{kubo2014radio} this structure was studied within London theory and it was concluded that a layered structure without insulators can also potentially yield a larger field of first flux penetration $H\msub{entry}$ compared to an uncoated substrate. The highest $E\msub{acc}$ with Nb cavities is achieved by low temperature baking at \unit[120]{$\degree$C} in vacuum for \unit[48]{h} following wet acid chemistry. Low energy muon spin rotation measurements have shown that there is a change in the Meissner screening of low temperature baked Nb, i.e. a depth dependent mean free path \cite{Romanenko_muSR}. The material can therefore be considered as an effective multilayer system. Kubo proposed two mechanisms that link the change in Meissner screening to the enhanced $H\msub{entry}$ \cite{kubo2016multilayer}. (1) A counter current flow at the boundary between the two superconductors suppresses the surface current and therefore enhances the theoretical field limit. This yields a maximum $H\msub{entry}$ above the individual superheating fields of the substrate and the coating. (2) There is a second energy barrier at the boundary between the two superconductors. It has to be noted that (1) requires that both materials can be operated in a superheated state. This is rather unlikely, due to vortex penetration at defects. In fact, experimental results suggest that technical superconductors can generally not be superheated. An exception is \unit[120]{$\degree$C} baked Nb as used for SRF cavities. However, even in this case the maximum field values are below the prediction of the effective two-layer model. Kubo suggested that this system should be described by an infinite number of thin superconductors continuously piled up on a substrate. Therefore, the theoretical field limit for this structure, especially with defects and surface roughness, is hard to estimate \cite{kubo2015field}. Checchin \cite{checchin2016physics} introduced a sigmoidal function for the Ginzburg-Landau parameter $\kappa$, representing the depth dependent mean free path. He solved the normalized one-dimensional Ginzburg-Landau equations to estimate the forces acting on a vortex, and then examined the Bean-Livingston barrier \cite{Bean_Livingston} created by the forces from the Meissner current and an image vortex introduced to fulfill the boundary condition at the superconductor-vacuum (SV) interface. He found that the energy barrier is enhanced for a sigmoidal compared to a constant $\kappa$. This enhancement depends on the distance over which $\kappa$ changes and on the thickness of the outer layer. If the latter becomes large compared to the former, the model reduces to an effective bi-layer system with two distinct energy barriers. \\
\section{Experiment}
None of the theoretical considerations for $H\msub{entry}$ reviewed above is restricted to the RF case. In fact, there are several non-intrinsic field limitations of SRF cavities, i.e. field emission, multipacting and premature quench, which in general prevent reaching the intrinsic $H\msub{entry}$. It is therefore beneficial to use a DC method to measure $H\msub{entry}$.
For this purpose we have established a muon spin rotation ($\mu$SR) experiment \cite{Junginger_muSR_Overview}. Spin polarized muons with an average stopping distance of \unit[130]{$\mu$m} are implanted one at a time into the sample. When the muon decays (half-life \unit[2.197]{$\mu$s}) it emits a fast decay positron, preferentially along the direction of its spin. By detecting the location of the emitted positrons as a function of time with two detectors, the spin precession of the muons, and therefore the magnetic field properties, can be inferred through an asymmetry signal
\begin{equation}
Asy(t)=\frac{N\msub{l}(t)-\alpha N\msub{r}(t)}{N\msub{l}(t)+\alpha N\msub{r}(t)},
\label{eq:Alpha}
\end{equation}
where $N\msub{l}(t)$ and $N\msub{r}(t)$ are the number of counts in the left and right detectors. The parameter $\alpha$ is added to account for detector efficiencies and to remove any bias between the left and right detectors caused by uneven solid angles. In the case where the detector efficiencies are identical, $\alpha$ assumes a value of 1. Samples are placed in a cryostat surrounded by field-inducing coils. For field penetration measurements, samples are cooled in zero field to below $T\msub{c}$ in a horizontal gas flow cryostat, which allows a base temperature of about \unit[2.5]{K} to be reached, and then a static magnetic field is applied perpendicular to the initial spin polarization to probe whether field has penetrated the sample. Specifically, the asymmetry signal gives information on the volume fraction of the host material sampled by the muons that does not contain magnetic field. This signal can be used to characterize the superconducting state, particularly the transition from the Meissner to the mixed state. The total asymmetry function is a sum of two terms. The first one is the dynamic Kubo-Toyabe function \cite{Hayano} with initial asymmetry $a_0$. The second term is a damped oscillating function caused by the penetrated external field:
\begin{eqnarray}
Asy(t)&=&a_0\cdot P_{\rm ZF}^{\rm dyn.}(t) + \\ \nonumber
& & a_1\cdot\exp{\left( -\frac{1}{2}\Delta^2t^2\right) }\cdot \cos{\left( \omega t + \frac{\pi\phi}{180}\right) }
\label{eq:Asy}
\end{eqnarray}
with
\begin{equation}
\omega=2 \pi \gamma_\mu H\msub{int},
\end{equation}
where $\gamma_\mu$=\unit[13.55]{kHz/G} is the gyromagnetic ratio of the muon and $\phi$ the phase. The value of $a_0$ compared to its initial low field value is a measure of the volume fraction being in the field-free Meissner state. \\
\section{Sample preparation}
All samples were made from fine grain niobium from Tokyo Denkai with a residual resistance ratio (RRR) above 300. In \cite{Junginger_muSR_Overview} a detailed study of geometry and pinning has been carried out. Depending on sample and field geometry, $H\msub{entry}$ measurements can potentially give an artificially large value due to flux pinning. Careful precautions have been taken to avoid this for the measurements presented here. Ellipsoidal samples with the magnetic field applied along the major axis and the muons implanted at the equator are rather insensitive to pinning and ideally suited for $H\msub{entry}$ measurements. This configuration is used for one Nb$_3$Sn and one Nb sample. For the MgB$_2$ measurements coin samples have been used to simplify the coating procedure. Here the field is applied in the radial direction. In \cite{Junginger_muSR_Overview} it has been shown that this geometry is only slightly more sensitive to pinning than the ellipsoidal geometry.
For comparison, Nb and Nb$_3$Sn samples of a coin shape have also been produced for this study. These coin samples have a thickness of \unit[3]{mm} and are \unit[20]{mm} in diameter. They were cut by water jet from sheets. The prolate ellipsoids were machined to the dimensions of a semi-major axis of \unit[22.9]{mm} and a semi-minor circular cross-section of \unit[6.3]{mm} radius. After machining, the samples were treated by buffered chemical polishing (BCP) to remove \unit[100]{$\mu$m} of outer material. Afterwards all samples, except the ones which were used for Nb$_3$Sn coating, received a \unit[1400]{$\degree$C} annealing at TRIUMF. This treatment is effective in releasing virtually all pinning \cite{Junginger_muSR_Overview}. After the heat treatment the samples received an additional BCP to remove another \unit[30]{$\mu$m} of material. The Nb ellipsoid also received a \unit[120]{$\degree$C} baking in vacuum at TRIUMF after initial testing. The Nb$_3$Sn coatings were produced by vapor diffusion at Cornell University. This process includes heating up to \unit[1100]{$\degree$C}, which also strongly releases pinning. Furthermore, in \cite{Junginger_muSR_Overview} Nb$_3$Sn has been studied in different geometries and it was concluded that pinning is rather weak for this sample. The MgB$_2$ coatings were carried out at Temple University using the Hybrid Physical-Chemical Vapor Deposition (HPCVD) technique. For details about the coating procedures refer to \cite{posen2017nb3sn} and \cite{zeng2002situ}. Since all Nb substrates have received a high temperature annealing and pinning is mainly a bulk effect, the measurements presented here have no ambiguity concerning $H\msub{entry}$ vs pinning.
\section{Experimental Results}
Figure \ref{fig:Asy} shows the normalized fit parameter $\widetilde{a}_0$ as a function of the applied field, corrected for geometrical field enhancement, $H\msub{0}$, where $H\msub{0}$/$H\msub{a}$=0.91 and 0.87 for the coin and the ellipsoid respectively \cite{brandt2000superconductors}. For the coin, the geometry is approximated by a long strip with rectangular cross section. Comparing the results of the annealed Nb coin to the bullet shows that this approximation is valid. Furthermore, the field has been scaled to \unit[0]{K} assuming the empirical relation
\begin{equation}
H_0(T)=H_0(0\text{K})\left(1-\left(\frac{T}{T\msub{c}}\right)^2\right),
\label{eq:Hc1(T)}
\end{equation}
with $T\msub{c}$=\unit[9.25]{K} for Nb \cite{finnemore1966superconducting}. The lower estimate for $H\msub{entry}$ is the largest measured $H_0$ for which $\widetilde{a}_0>0.95$ or $a_1$=0 holds. The upper estimate is the smallest $H_0$ for which $\widetilde{a}_0<0.05$ holds. These criteria have been chosen since there have been fluctuations in $\widetilde{a}_0$ on the order of \unit[10]{\%} for some data sets like the Nb$_3$Sn bullet. This is most likely related to the position and polarization of the incoming muon beam. The uncertainty in $H\msub{entry}$ is the difference in $H_0$ for these two data points plus an additional \unit[1]{\%} to account for additional error sources, mainly potential misalignment. Figure \ref{fig:HentryvsT} shows $H\msub{entry}$ as a function of temperature for the two Nb$_3$Sn and the \unit[300]{nm} MgB$_2$ sample. The data has been fitted to \refE{eq:Hc1(T)}. Here $T\msub{c}$ was used as a common fit parameter for all three samples.
Its value \unit[9.45(0.02)]{K} is slightly above the literature value from Finnemore \unit[9.25]{K} \cite{finnemore1966superconducting} but inconsistent with the much larger critical temperatures of MgB$_2$ and Nb$_3$Sn. $H\msub{entry}(0)$ was fitted individually for each sample. For all coated samples, a value significantly above $H\msub{c1$\mid$Nb}$ is found, close to the superheating field of Nb $H\msub{sh}\approx$\unit[240]{mT} \cite{PhysRevB.83.094505}, see \refT{tab:Hentry}. Baking at \unit[120]{$\degree$C} pushes $H\msub{entry}$ from \unit[178(7)]{mT} to \unit[188(4)]{mT}. This can also be correlated to an energy barrier built up at the interface between the dirty layer and the clean bulk as predicted in \cite{checchin2016physics}. Note that \unit[120]{$\degree$C} baking reduces the mean free path at the surface and therefore also $H\msub{c1}$ \cite{Romanenko_muSR}.\\ \begin{table} \centering \begin{tabular}{l|c} Sample & $\mu_0H\msub{entry}$(0K) \\ \hline Nb \unit[1400]{$\degree$C} (bullet) & 178(7)/- \\ Nb \unit[1400]{$\degree$C}+\unit[120]{$\degree$C} (bullet) & 188(4)/-\\ Nb \unit[1400]{$\degree$C} (coin) & 177(7)/- \\ Nb$_3$Sn bullet & 233(11)/238(43) \\ Nb$_3$Sn coin & 210(18)/210(43)\\ MgB$_2$ 50nm & 216(11)/- \\ MgB$_2$ 150nm & 233(9)/- \\ MgB$_2$ 300nm & 223(9)/216(42)\\ \end{tabular} \caption{$H\msub{entry}$(0K) for all samples used in this study. The first value is from a measurement at about \unit[2.5]{K} and corrected to \unit[0]{K} using \refE{eq:Hc1(T)}. The second one is from a fit to \refE{eq:Hc1(T)} with common $T\msub{c}$. For details see text.} \label{tab:Hentry} \end{table} \begin{figure} \centering \includegraphics[width=0.90\columnwidth]{Asy.pdf} \caption{Normalized fit parameter $\widetilde{a}_0$ as a function of $H_0$. } \label{fig:Asy} \end{figure} \begin{figure} \centering \includegraphics[width=0.90\columnwidth]{HentryvsT.pdf} \caption{$H\msub{entry}$ as a function of temperature.} \label{fig:HentryvsT} \end{figure} \section{Simulation} There is no clear trend in $H\msub{entry}$ vs. layer thickness. This suggests that the superconductor-superconductor (SS) boundary is providing effective shielding up to $H\msub{sh$\mid$ Nb}$, while the superconductor-vacuum (SV) boundary is not providing shielding above its lower critical field $H\msub{c1}$. Note that realistic surfaces contain defects. A possible explanation why the SS but not the SV boundary provides shielding above $H\msub{c1}$ could be that the proximity effect recovers the order parameter $\psi$ in the vicinity of defects at the SS boundary. In the following we present a simplified simulation model to strengthen support for this hypothesis. It has to be noted that this model cannot serve as a quantitative description of the experimental results, due to several simplifications as outlined in the following. In the interests of simple calculations, we assume that the defects are approximately the same size, and are very frequent, such that they can be approximated in a 1D model. The systems of interest are SN and SNS systems comprised of a semi-infinite superconducting slab for $x<0$, a normal conducting plane with finite thickness for $0<x<d$ and for the SNS case, another superconducting slab for $x>d$, see \refF{fig:SNS} (Cartoon on top). Such systems are well studied from the point of view of Josephson junctions. Here the same methods as described in \cite{chapman1995ginzburg} are applied. 
This treatment uses the non-dimensional Ginzburg-Landau functionals for each domain and minimizes the sum of the functionals with respect to the normalized order parameter $\psi$ and the normalized vector potential $\textbf{A}$. In the superconducting region the dimensionless Ginzburg-Landau equations are (see e.g. \cite{chapman1995ginzburg}):
\begin{equation}
\left( \frac{i}{\kappa} \nabla + \textbf{A} \right)^2 \psi - \psi + \left|\psi \right|^2 \psi = 0,
\end{equation}
\begin{equation}
\left(\nabla\times\nabla\times \textbf{A} \right)=\frac{-i}{2\kappa} \left(\psi^{*} \nabla \psi - \psi \nabla \psi^{*} \right) - \left|\psi \right|^2 \textbf{A},
\end{equation}
where $\kappa=\frac{\lambda}{\xi}$ is the Ginzburg-Landau parameter. To arrive at these equations, the normalizations
\begin{eqnarray}
\widetilde{\psi}=\sqrt{\frac{\mid a \mid}{b}}\psi, &\quad \widetilde{x}=\sqrt{\frac{\widetilde{m} b c^2}{4\pi\mid a \mid e^2 \mu\msub{s}}}x=\frac{x}{\lambda\sqrt{\mu_s}} \\ \nonumber
\widetilde{\textbf{A}}=\sqrt{\frac{2\mid a \mid c^2 \widetilde{m}_s}{e^2}}\textbf{A}, &\quad \widetilde{\textbf{H}}=\sqrt{\frac{8\pi a^2}{b \mu_s}} \textbf{H} \\ \nonumber
\end{eqnarray}
were applied (the tilde indicates physical units). $a$ and $b$ are constants, $e\msub{s}$ and $m\msub{s}$ are twice the electron charge and mass respectively, $c$ is the speed of light and $\mu_s$ is the permeability. The modified Ginzburg-Landau equations for the superconducting charge carriers within the normal conducting domain are \cite{chapman1995ginzburg}:
\begin{equation}
\left( \frac{i}{\kappa} \nabla + \textbf{A} \right)^2 \psi + \alpha \psi= 0\,
\label{eq:GL_N1}
\end{equation}
\begin{equation}
\left(\nabla\times\nabla\times \textbf{A} \right)=\frac{-1}{m_n}\left(\frac{-i}{2\kappa} \left(\psi^{*} \nabla \psi - \psi \nabla \psi^{*} \right) - \left|\psi \right|^2 \textbf{A} \right),
\label{eq:GL_N2}
\end{equation}
where $\alpha=\widetilde{m}\msub{n} a\msub{n}/(\widetilde{m}\msub{s} \mid a\msub{s}\mid) $. $\psi$ is taken to be real, which can be done for our problem in one dimension without loss of generality and simplifies the calculations to a boundary value problem for real second order ordinary differential equations, since it results in
\begin{equation}
\psi^{*} \nabla \psi - \psi \nabla \psi^{*}=0. \nonumber
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{SNS.pdf}
\caption{On the top it is shown that for small thicknesses of normal conducting layers, an SNS system should have a higher superheating field than an SN system. For large thicknesses of normal conducting layers, there is only a depression, as there are essentially just two independent SN systems, as Cooper pairs do not travel across the normal conducting layer. The reason for the depressed superheating field for large N is the local depression at the interface from the normal conducting layer, as opposed to a vacuum/insulator layer. On the bottom is $\Delta H=H\msub{SNS}-H\msub{SN}$, as a function of the thickness of the N layer.}
\label{fig:SNS}
\end{figure}
The SNS case was examined for identical superconductors only. This was done so as to avoid an issue with \refE{eq:GL_N1} and \refE{eq:GL_N2}, where the parameters used would depend on which superconductor the superconducting charge carriers originated from. Without this assumption, the equations would have to be reformulated to also include the effects of superconducting charge carriers with different normalizations mixing.
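For orientation, the resulting one-dimensional boundary value problem in a single superconducting half-space can be solved with a standard collocation solver. A minimal sketch follows (real $\psi$, $\textbf{A}=(0,a(x),0)$, lengths in units of the penetration depth; the applied field value, domain size and initial guess are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

kappa = 1.04   # clean Nb, cf. maxfield1965superconducting
H = 0.5        # applied surface field in the dimensionless units above
L = 25.0       # domain length approximating the half-space

def rhs(x, y):
    # y = [psi, psi', a, a']; psi'' = kappa^2 (a^2 psi - psi + psi^3),
    # a'' = psi^2 a  (real-psi reduction of the GL equations above)
    psi, dpsi, a, da = y
    return np.vstack([dpsi, kappa**2 * (a**2*psi - psi + psi**3),
                      da, psi**2 * a])

def bc(y0, yL):
    # x = 0 (surface): psi'(0) = 0, a'(0) = H; x = L (bulk): psi = 1, a = 0
    return np.array([y0[1], y0[3] - H, yL[0] - 1.0, yL[2]])

x = np.linspace(0.0, L, 500)
guess = np.vstack([np.tanh(kappa*x/np.sqrt(2.0)),
                   kappa/np.sqrt(2.0)/np.cosh(kappa*x/np.sqrt(2.0))**2,
                   H*np.exp(-x), -H*np.exp(-x)])
sol = solve_bvp(rhs, bc, x, guess)
print(sol.status, sol.y[0, 0])   # status 0 = converged; psi at the surface
\end{verbatim}
Ramping $H$ upward until the solver no longer converges to a Meissner-like solution then gives a numerical estimate of the superheating field, in the spirit of the criterion described next.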
The superheating field was taken to be the largest field for which a Meissner state solution could be found. To evaluate this, a magnetic field is applied at the outer surface of the N-layer. This assumes that a vortex has penetrated the SV boundary, and reflects the hypothesis that only the SS but not the SV boundary provides shielding above $H\msub{c1}$. With just one type of superconductor, $\psi$ and $\textbf{A}$ are continuous and differentiable at the interfaces. The systems were solved explicitly. For each domain, two of the four continuity conditions were forced to be the value on the boundary of the other domain, and the other conditions were allowed to vary. Then, the solutions were iterated until they converged within a tolerance. Using thicknesses of 1000 and \unit[1500]{nm} \footnote{These layer thicknesses are used to allow a decay of the wave function to zero, so that the standard boundary conditions $d\psi/dx(x\mapsto\infty)=0$ and curl$\textbf{A}(x\mapsto\infty)=H$ can be used in a finite domain with reasonable calculation time.} for the outer and inner superconductor respectively, $H\msub{sh}$ was obtained for the SN and SNS systems, see \refF{fig:SNS}. The red line shows the depression of $H\msub{sh}$ as a function of the normal conducting layer thickness $d$. Mass, charge and permeability were taken to be the same between the normal conducting layer and the superconducting layer. This corresponds to Cooper pairs in a normal conducting plane of Nb on top of superconducting Nb. The blue line shows two bulk S layers (the same superconductor with $\kappa$=1.04 for clean Nb \cite{maxfield1965superconducting}), separated by a thin N layer. In both cases, there is no change for thick N layers, which is intuitively expected, as Cooper pairs will only penetrate a distance on the order of the coherence length $\xi$ into the N layer, so beyond that, the bulk SCs are independent. For large $d$ and $d\rightarrow 0$, there are no substantial differences found between SN and SNS systems. This can be intuited from the assumption that Cooper pairs penetrate a distance $\approx \xi$ into the normal layer. Then, for a thick layer $(d\gg \xi)$, there should be no difference, as the SNS system effectively becomes an SN system. For a thin layer ($d\rightarrow 0$) there should also be no difference, as both the SN and SNS systems are reduced to a bulk superconductor with no N-layer. For intermediate layers $(d \approx \xi)$, the density of the penetrating Cooper pairs in the N-layer will be greater, resulting in increased shielding.
\section{Conclusion}
In \cite{kubo2016multilayer} it was theoretically shown that there is a second energy barrier at the SS interface of a layered superconductor. However, no reasoning was given as to the circumstances under which this boundary would provide shielding above $H\msub{c1$\mid$substrate }$, or whether this boundary is more stable than the SV boundary. The experimental results and the simulation presented here suggest that the proximity effect can recover the stability of $\psi$ near defects and therefore increase $H\msub{entry}$ up to $H\msub{sh}$ at the SS boundary. However, the 1D simulation model is a rather crude approximation of a bi-layer superconductor with defects. A more realistic simulation will require higher dimensions \cite{du2005numerical}. In order to be useful for SRF applications, a thin overlayer thickness $d<\lambda$ is needed to avoid strong dissipation from vortices within the relevant London layer of a few nm.
The results with $d \gg \lambda$ are interesting in the sense that they provide insight into the physics of layered superconductors. Considering the hypothesis that the proximity effect recovers the stability of $\psi$ near defects, layers with $d \approx \xi\msub{layer}$ could potentially be sufficient to increase $H\msub{entry}$. This is consistent with the increased $H\msub{entry}$ reported here for a \unit[120]{$\degree$C} baked sample and with the recently reported increased $E\msub{acc}$ for SRF cavities treated with modified low temperature baking recipes \cite{grassellino2017unprecedented}. Furthermore, the proposed hypothesis gives an alternative explanation for recently reported magnetometry measurements on MgB$_2$ on Nb ellipsoidal samples \cite{tan2016magnesium} if pinning is also taken into account. As shown in \cite{Junginger_muSR_Overview}, pinning is not negligible for this geometry if the samples are not annealed. Further simulations with smaller superconducting layers in higher dimensions and further $H\msub{entry}$ measurements with different layer thicknesses are necessary to further test the proposed hypothesis.
\section{Acknowledgment}
This research was supported by a Marie Curie International Outgoing Fellowship within the EU Seventh Framework Programme for Research and Technological Development (2007-2013). The authors would like to thank T. Tan, M. Wolak, W. Withanage, X. Xi, D. Hall, S. Posen and M. Liepe for providing the samples used in this study. Thanks to B. Waraich, J. Keir, J. Wong and L. Lambert for producing the bullet samples and performing the heat treatments.
\section{Introduction} \subsection{Settings of the problem} We are interested in stabilizing the density dependent Navier-Stokes equations around some stationary state $(\rho_{s},v_{s})$ (where $(\rho_{s},v_{s},p_{s})$ is a stationary solution) in a two dimensional channel $\Omega$. For that we will use an appropriate boundary control $u_{c}$ acting on the velocity in the inflow part of the boundary $\partial{\Omega}$.\\ Let $d$ be a positive constant. Throughout this article we will use the following notations (see Figure 1.) \begin{equation}\label{domain} \begin{array}{l} \Omega =(0,d)\times (0,1),\quad \Gamma=\partial \Omega,\quad Q_{T}=\Omega \times (0,T),\quad \Sigma_{T}=\Gamma \times (0,T)\quad \mathrm{for}\quad 0<T\leqslant \infty. \end{array} \end{equation} The unit outward normal to the boundary $\Gamma$ is denoted by $n.$ The velocity, density and pressure of the fluid are denoted respectively by ${v},$ ${\rho}$ and $p.$ The viscosity $\nu>0$ of the fluid is a positive constant. We consider the following control system \begin{equation}\label{1-3} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \rho}{\partial t} + \mbox{div}(\rho {v})=0&\quad \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \rho=\rho_s &\quad \mbox{on}\quad \{(x,t)\in\Sigma_{\infty}\suchthat (v(x,t)\cdot n(x))<0\}, \vspace{1.mm}\\ \displaystyle \rho(x,0)=\rho_{s}+\rho_0&\quad\mbox{in}\quad\Omega, \vspace{1.mm}\\ \displaystyle \rho \left(\frac{\partial {v}}{\partial t} +({v}\cdot\nabla){v} \right) - \nu \Delta {v} + \nabla p = 0&\quad \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \mbox{div}({v}) = 0&\quad \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle {v}=v_{s} + u_{c}\chi_{\Gamma_{c}} &\quad \mbox{on}\quad \Sigma_{\infty}, \vspace{1.mm}\\ \displaystyle {v}(x,0)={v}_s+{v}_{0}&\quad\mbox{in}\quad\Omega, \end{array} \right. \end{equation} where $u_{c}\chi_{\Gamma_{c}}$ is a control function for the velocity $v$ with $\chi_{\Gamma_{c}}$ denoting the characteristics function of a set $\Gamma_{c}$ which is compactly supported on $\Gamma.$ The set $\Gamma_{c}$ will be precisely defined shortly afterwards. The equation \eqref{1-3}$_{1}$ is the mass balance equation and \eqref{1-3}$_{4}$ is the momentum balance equation. The triplet $(\rho_{s},v_{s},p_{s})$ is the Poiseuille profile defined as follows \begin{equation}\label{vs} \begin{array}{l} \rho_{s}(x_{1},x_{2})=1,\quad{v}_s(x_{1},x_{2})= \ \begin{bmatrix} x_2(1-x_2) \\ 0 \end{bmatrix},\quad p_{s}=-2\nu x_{1},\quad\mbox{in}\quad\Omega. \end{array} \end{equation} Observe that $(\rho_{s},v_{s},p_{s})$ (given by \eqref{vs}) is a stationary solution of the Navier-Stokes equations \eqref{1-3}. We remark that in the definition \eqref{vs} of the Poiseuille profile we can choose $\rho_{s}$ to be any positive constant in place of one up to modifying $p_{s}$ accordingly. 
Also in the definition \eqref{vs} one can consider $v_{s}=(\alpha x_{2}(1-x_{2}),0),$ for a positive constant $\alpha>0.$ The strategy and results of our analysis apply for any constant $\rho_{s}>0$ and $\alpha>0.$\\
The aim of this article is to determine a feedback boundary control ${u}_{c}$ (the control of the velocity) such that the solution $(\rho, v)$ of the controlled system is exponentially stable around the stationary solution $(\rho_{s},v_{s})$, provided the perturbation $(\rho_{0},v_{0})$ of the steady state $(\rho_{s},v_{s})$ is sufficiently small (in some suitable norm).\\
In view of the stationary profile \eqref{vs}, it is natural to control the inflow part of the boundary, i.e. we will consider the control function $u_{c}$ supported on
\begin{equation}\label{inflow}
\begin{array}{l}
\Gamma_{in}=\{x\in \Gamma \mid (v_{s}\cdot n)(x) < 0\}=\{0\}\times (0,1).
\end{array}
\end{equation}
In fact we do slightly more and control only on some open subset $\Gamma_{c}$ of $\Gamma_{in}.$ We consider $\Gamma_{c}$ of the following form
\begin{equation}\label{dgc}
\begin{array}{l}
\Gamma_{c}=\{0\}\times (L,1-L)\subset\Gamma_{in},
\end{array}
\end{equation}
for some fixed $0<L<\frac{1}{2}.$
\begin{remark}
We consider a control zone of the form \eqref{dgc} to simplify the notations. In fact our analysis allows us to consider any subset $\{0\}\times (A,B)$ ($0<A<B<1$) of $\Gamma_{in}$ as the control zone.
\end{remark}
To state our results precisely, we introduce some appropriate functional spaces.
\subsection{Functional framework for the Navier-Stokes equation}\label{funcframe}
Let $ H^s (\Omega;\mathbb{R}^N)\,\mbox{and}\,L^2(\Omega;\mathbb{R}^N)$ denote the vector-valued Sobolev spaces. If it is clear from the context, we may simply denote these spaces by $H^{s}(\Omega)$ and $L^{2}(\Omega)$ both for scalar and vector-valued functions. The same notational conventions will be used for the trace spaces. We now introduce different spaces of divergence-free functions and some suitable spaces of boundary data:
\begin{equation}\nonumber
\begin{array}{l}
\displaystyle {V}^{s}(\Omega) = \{ {y} \in {H}^s(\Omega;\mathbb{R}^{2})\suchthat \mbox{div} {y} =0 \quad \mbox{in} \quad \Omega \}\quad \mbox{for} \quad s\geqslant 0, \vspace{1.mm}\\
{V}^{s}_{n} (\Omega) = \{{y} \in {H}^{s} (\Omega;\mathbb{R}^{2})\suchthat \mbox{div}{y}=0\quad \mbox{in} \quad \Omega, \quad {y}\cdot {n} =0\quad \mbox{on}\quad \Gamma\}\quad \mbox{for} \quad s\geqslant 0, \vspace{1.mm}\\
{V}^{s}_{0}(\Omega)=\{{y} \in {H}^{s}(\Omega;\mathbb{R}^{2}) \suchthat \mbox{div}{y}=0\quad \mbox{in} \quad \Omega,\quad {y}=0 \quad \mbox{on} \quad \Gamma \}\quad \mbox{for} \quad s\in(\frac{1}{2},\frac{3}{2}), \vspace{1.mm}\\
\displaystyle{V}^{s} (\Gamma)= \{{y} \in {H}^{s} (\Gamma;\mathbb{R}^{2})\suchthat \int\limits_{\Gamma}y\cdot n\,dx=0 \}\quad\mbox{for}\quad s\geqslant 0.
\end{array}
\end{equation}
The spaces ${V}^{s}(\Omega)$ and ${V}^{s}(\Gamma)$ are respectively equipped with the usual norms of ${H}^{s}(\Omega)$ and ${H}^{s}(\Gamma),$ which will be denoted by $\|\cdot \|_{{V}^{s}(\Omega)}$ and $\|\cdot \|_{{V}^{s}(\Gamma)}.$\\
From now onwards we will identify the space $V^{0}_{n}(\Omega)$ with its dual.\\
For $0<T\leqslant\infty$ let us introduce the following functional spaces, adapted to functions of the time and space variables.
\begin{equation}\nonumber
\begin{array}{l}
{V}^{s,\tau}(Q_{T}) = {{H}^\tau}(0,T;{V}^0 (\Omega))\cap { L}^2 (0,T;{V}^{s}(\Omega)) \quad \mbox{for} \quad s , \tau \geq 0, \vspace{1.mm}\\
{V}^{s,\tau}(\Sigma_{T}) = {{H}^\tau}(0,T;{V}^0 (\Gamma))\cap {L}^2 (0,T;{V}^s(\Gamma)) \quad \mbox{for} \quad s , \tau \geq 0.
\end{array}
\end{equation}
We also fix the convention that for any two Banach spaces $\mathcal{X}$ and $\mathcal{Y},$ the product space $\mathcal{X}\times\mathcal{Y}$ is endowed with the norm
$$\forall\,\, (x,y)\in\mathcal{X}\times\mathcal{Y},\,\,\|({x},{y})\|_{\mathcal{X}\times\mathcal{Y}}=\|{x}\|_{\mathcal{X}}+\|{y}\|_{\mathcal{Y}},$$
where $\|.\|_{\mathcal{X}}$ and $\|.\|_{\mathcal{Y}}$ denote the norms in the corresponding spaces.\\
\subsection{The main result}
We now state our main result precisely, in the form of the following theorem.
\begin{thm}\label{main}
Let $\beta>0,$ $A_{1}\in (0,\frac{1}{2}).$ There exists a constant $\delta>0$ such that for all $(\rho_{0},{v}_{0})\in L^{\infty}(\Omega)\times{V}^{1}_{0}(\Omega)$ satisfying
\begin{equation}\label{1-4}
\begin{array}{l}
\mbox{supp}(\rho_{0})\subset [0,d]\times (A_{1},1-A_{1}),
\end{array}
\end{equation}
and
\begin{equation}\nonumber
\begin{array}{l}
\| (\rho_{0},{v}_{0})\|_{L^{\infty}(\Omega)\times{V}^{1}_{0}(\Omega)}\leqslant \delta,
\end{array}
\end{equation}
there exists a control ${u}_{c}\in H^{1}(0,\infty;C^{\infty}(\overline{\Gamma}_{c})),$ for which the system \eqref{1-3} admits a solution
$$(\rho,v)\in L^{\infty}(Q_{\infty})\times{V}^{2,1}(Q_{\infty}),$$
satisfying the following stabilization requirement
\begin{equation}\label{1-6}
\begin{array}{l}
\| e^{\beta t}(\rho-\rho_{s},{v}-v_{s}) \|_{L^{\infty}(Q_{\infty})\times{V}^{2,1}(Q_{\infty})} \leqslant C \| ({\rho_{0}},{v}_{0})\|_{L^{\infty}(\Omega)\times{V}^{1}_{0}(\Omega)},
\end{array}
\end{equation}
for some constant $C>0.$ Moreover, $\rho=\rho_{s}$ for $t$ sufficiently large.
\end{thm}
We now make precise the structure of the control function $u_{c}$ we are going to construct. We will show the existence of a natural number $N_{c},$ and a family
$$\{{g_{j}}\suchthat 1\leqslant j\leqslant N_{c}\},$$
of smooth functions supported on $\Gamma_{c},$ such that the control ${u}_{c}$ acting on the velocity is given as follows
\begin{equation}\label{findcon}
\begin{array}{l}
{u}_{c}(x,t)=e^{-\beta t}\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x),
\end{array}
\end{equation}
where $w_{c}(t)=(w_{1}(t),...,w_{N_{c}}(t))$ is the control variable and is given in terms of a feedback operator $\mathcal{K}.$ More precisely, $w_{c}=(w_{1},...,w_{N_{c}})$ satisfies the following ODE
\begin{equation}\nonumber
\begin{array}{l}
w_{c}^{'}=-\gamma w_{c}+\mathcal{K}\begin{pmatrix} P(v-v_{s})\\w_{c} \end{pmatrix}\quad \mbox{in}\quad (0,\infty),\quad w_{c}(0)=0,
\end{array}
\end{equation}
where $\gamma$ is a positive constant, $P$ is the Leray projector from $L^{2}(\Omega)$ onto $V^{0}_{n}(\Omega)$ (\cite[Section 1.4]{temam}) and $\mathcal{K}\in\mathcal{L}(V^{0}_{n}(\Omega)\times\mathbb{R}^{N_{c}},\mathbb{R}^{N_{c}})$ (the feedback operator $\mathcal{K}$ is determined in Section \ref{efcl}).\\
The boundary control \eqref{findcon} we construct has a finite dimensional range and resembles the control designed in \cite{ray2f}. The construction of our control basis $\{g_{j}\suchthat 1\leqslant j \leqslant N_{c}\}$ is, however, different from the one in \cite{ray2f}.
In \cite{ray2f} it is constructed using generalized eigenvectors of the adjoint of the Oseen operator, while we construct it using only eigenvectors of the adjoint of the Oseen operator, relying on the construction of \cite{raym}. We will not consider any control on the transport equation modeling the density and, as for the homogeneous Navier-Stokes equations, we show that considering a control $u_{c}$ of the velocity is enough to stabilize the whole system \eqref{1-3}.\\ The stabilizability of the constant density (or homogeneous) incompressible Navier-Stokes equations (with Dirichlet or mixed boundary conditions) by a finite dimensional feedback Dirichlet boundary control has already been studied in the literature. For instance in \cite{ray2f} it is proved that in a $C^{4}$ domain the velocity profile $v,$ solution to system \eqref{1-3}$_{4}$-\eqref{1-3}$_{7}$ with $\rho=1,$ is locally stabilizable around a steady state ${v}_{s}$ (${v}_{s}\in H^{3}(\Omega;\mathbb{R}^{2})$) by a finite dimensional Dirichlet boundary control localized on a portion of the boundary; moreover, the control $u_{c}$ is given as a feedback of the velocity field.\\ Unlike the constant density incompressible Navier-Stokes equations (which are of parabolic nature), the system \eqref{1-3} obeys a coupled parabolic-hyperbolic dynamics. Local exact controllability to trajectories of the system \eqref{1-3} was studied in \cite{erv1}. In the present article we answer the question posed in \cite{erv1} on the stabilizability of the system \eqref{1-3} around the Poiseuille profile. In proving the controllability results, one of the main geometric assumptions of \cite{erv1} is that \begin{equation}\label{geoasm} \begin{array}{l} \overline{\Omega}=\Omega^{T}_{{out}}=\{x\in \overline{\Omega}\suchthat \exists t\in (0,T)\,\mbox{s.t.}\, \overline{X}(x,t,0)\in \mathbb{R}^{d}\setminus\overline{\Omega}\}, \end{array} \end{equation} where $\overline{X}$ is the flow corresponding to the target velocity trajectory $\overline{v}_{s},$ defined as \begin{equation}\nonumber \begin{array}{l} \forall (x,t,s)\in \mathbb{R}^{d}\times [0,T]^{2},\quad \partial_{t}\overline{X}(x,t,s)=\overline{v}_{s}(\overline{X}(x,t,s),t),\quad \overline{X}(x,s,s)=x. \end{array} \end{equation} In the article \cite{erv1} the assumption \eqref{geoasm} plays the key role in controlling the density of the fluid. In our case, since the target velocity trajectory is $v_{s}$ (defined in \eqref{vs}), the assumption \eqref{geoasm} is not satisfied because $v_{s}$ vanishes at the lateral boundary of the domain $\Omega.$ Hence to control the density we make a parallel assumption \eqref{1-4}. Indeed, the assumption \eqref{1-4} implies that $\mbox{supp}(\rho_{0})\Subset\Omega^{T}_{{out}}.$ The assumption \eqref{1-4} exploits the hyperbolic nature of the continuity equation \eqref{1-3}$_{1}$ in order to control the coupled system \eqref{1-3}. The condition \eqref{1-4} in fact guarantees that the density exactly equals ${\rho}_{s}=1$ after some time $T_{1}>T_{A_{1}}=\frac{d}{\inf\limits_{x_{2}\in[A_{1},1-A_{1}]}x_{2}(1-x_{2})}$ (this will be detailed in Section \ref{density}), so that the non-homogeneous Navier-Stokes equations become homogeneous after some finite time. In \cite{erv1} the authors use two control functions (one for the density and one for the velocity) for the purpose of controlling the non-homogeneous fluid. Contrary to that, we use only one control acting on the velocity to stabilize the coupled system \eqref{1-3}.
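Let us record the elementary computation behind the exit time $T_{A_{1}}$ mentioned above (a sketch, assuming the normalization $\alpha=1$ in \eqref{vs}). Since $v_{s}=(x_{2}(1-x_{2}),0)$ is stationary and horizontal, the corresponding flow $\overline{X}$ of $v_{s}$ is explicit: \begin{equation}\nonumber \begin{array}{l} \overline{X}(x,t,0)=(x_{1}+t\,x_{2}(1-x_{2}),\,x_{2}),\quad x\in\overline{\Omega},\quad t\geqslant 0. \end{array} \end{equation} For $x_{2}\in[A_{1},1-A_{1}]$ the parabola $x_{2}\mapsto x_{2}(1-x_{2})$ attains its infimum $A_{1}(1-A_{1})$ at the endpoints of the interval, so any particle starting in $[0,d]\times(A_{1},1-A_{1})$ satisfies $\overline{X}_{1}(x,t,0)>t\,A_{1}(1-A_{1})\geqslant d$ as soon as $t\geqslant\frac{d}{A_{1}(1-A_{1})}=T_{A_{1}},$ $i.e.$ it leaves $\overline{\Omega}$ through $\Gamma_{out}$ no later than $T_{A_{1}}.$ For the perturbed velocity field a slightly larger time $T_{1}>T_{A_{1}}$ is needed, which explains the strict inequality above.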
\subsection{Decomposition of the boundary $\Gamma$ and comment on the support of control} Based on the velocity profile $v_{s}$ (as defined in \eqref{vs}) we can rewrite the boundary of $\Omega$ as follows $$ \Gamma= \Gamma_{in}\cup \Gamma_{out}\cup \Gamma_0,$$ where \begin{equation}\label{1-2} \begin{array}{l} \Gamma_{in}\,\mbox{is\,defined\,in}\,\eqref{inflow}, \vspace{1.mm}\\ \Gamma_{out}=\{x\in \Gamma \mid (v_{s}\cdot n)(x) >0\}=\{d\}\times (0,1), \vspace{1.mm}\\ \Gamma_{0}=((0,d)\times \{0\})\cup ((0,d)\times \{1\})=\Gamma_{b}\cup\Gamma_{h}\quad(\mbox{Figure}\,1). \end{array} \end{equation} \begin{figure}[h!]\label{pic} \centering \begin{tikzpicture}[scale=0.75] \draw (5,0)node [below] {$\Gamma_b$}; \draw (0,0)node [below] {$0$}; \draw (10,0)node [below right] {$d$}; \draw (0,5)node [above] {$1$}; \draw (10,2.5)node [above right] {$\Gamma_{out}$}; \draw (0,4.5)node [right] {$1-L$}; \draw (0,.5)node [right] {$L$}; \draw (5,5) node [above] {$\Gamma_h$}; \draw (0,0) -- (10,0); \draw (10,0) -- (10,5); \draw (10,5) -- (0,5); \draw (0,5) -- (0,4.5); \draw (0,0.5) -- (0,0); \draw[blue] (0,4.5) -- (0,0.5); \draw [decorate,decoration={brace,amplitude=10pt},xshift=-0.5pt,yshift=0pt] (0,.5) -- (0,4.5) node [black,midway,xshift=-0.5cm] {\footnotesize $\Gamma_{c}$}; \draw [decorate,decoration={brace,amplitude=30pt},xshift=-1.5pt,yshift=0pt] (0,0) -- (0,5) node [black,midway,xshift=-1.5cm] {\footnotesize $\Gamma_{in}$}; \end{tikzpicture} \caption{Picture of the domain $\Omega$.} \end{figure} \begin{remark}\label{ngin} From now onwards we will use the notation $\Gamma_{in}$ to denote the inflow boundary of both the vector fields $v_{s}$ and $v.$ This is a slight abuse of notation, but we will prove the existence of the controlled trajectory $v$ in a small neighborhood (in a suitable norm) of $v_{s}$ provided the perturbation $v_{0}$ is small. This will guarantee that $\Gamma_{in}$ and the inflow boundary of the controlled vector field $v$ are identical. For the details we refer the reader to Corollary \ref{p3.0.2}. \end{remark} We will look for a control function ${u}_{c}$ of the form \eqref{findcon} which is compactly supported in $\Gamma_{c}.$ More precisely, we will construct the finite dimensional basis $\{{{g}_{j}}\suchthat{1\leqslant j \leqslant N_{c}}\}$ of the control space in such a way that $g_{j}$ ($\forall\, 1\leqslant j \leqslant N_{c}$) is smooth and supported in $\Gamma_{c}.$ \subsection{Strategy} (i) As our goal is to stabilize the solution $(\rho,v)$ of \eqref{1-3} around the stationary solution $(1,v_{s})$ with a rate $e^{-\beta t},$ we introduce \begin{equation}\label{chun} \begin{array}{l} y=e^{\beta t}({v}-{v}_s),\quad \sigma=e^{\beta t}(\rho - 1),\quad q=e^{\beta t}(p-p_{s}),\quad u=e^{\beta t}{u}_{c}. \end{array} \end{equation} To be consistent with the notations $y$ and $\sigma,$ we further introduce the following \begin{equation}\label{y0s0} \begin{array}{l} \sigma_{0}=\rho_{0},\quad y_{0}=v_{0}.
\end{array} \end{equation} As in our case the control \eqref{findcon} is supported on the inflow boundary, in view of the notations introduced in \eqref{1-2} and Remark \ref{ngin}, we use \eqref{findcon} to rewrite the system \eqref{1-3} in the following form \begin{equation}\label{2-1} \left\{\begin{array}{lll} \displaystyle &\displaystyle\frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ &\displaystyle\sigma (x,t)=0\quad&\mbox{on}\quad \Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ &\displaystyle\sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ &\displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla)v_s +\nabla q=\mathcal{F}(y,\sigma)\quad& \mbox{in}\quad Q_{\infty},\\[2.mm] &\displaystyle\mbox{div}\,{y}=0\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ &\displaystyle{y}=0\quad &\mbox {on} \quad (\Gamma_0\cup \Gamma_{out}) \times (0,\infty),\\[1.mm] &\displaystyle{y}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x)\quad& \mbox{on} \quad \ \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ &\displaystyle{y}(x,0)={y}_0\quad&\mbox{in}\quad\Omega, \end{array}\right. \end{equation} where $$\mathcal{F}({y},\sigma)=-e^{-\beta t}{\sigma}\frac{\partial {y}}{\partial t}-e^{-\beta t}({y}\cdot \nabla){y}-e^{-\beta t}{\sigma}(v_{s}\cdot \nabla){y} -e^{-\beta t}{\sigma}({y}\cdot \nabla)v_{s}-e^{-2\beta t}{\sigma}({y}\cdot \nabla){y}+\beta e^{-\beta t} \sigma {y}.$$ Let us point out that, since $v_{s}$ depends only on $x_{2}$ and its second component vanishes, one has $(v_{s}\cdot\nabla)v_{s}=0,$ which is why no term of the form $\sigma(v_{s}\cdot\nabla)v_{s}$ appears in $\mathcal{F}.$ To solve a nonlinear stabilization problem the usual method is to first solve the stabilization problem for the linearized system and then use a fixed point method to conclude the stabilizability of the original nonlinear problem \eqref{2-1}. In this article, due to regularity issues of the transport equation, we avoid linearizing the whole system. Instead, we only linearize the equation \eqref{2-1}$_{4}$ satisfied by $y,$ $i.e.$ we replace the nonlinear terms appearing in the equation \eqref{2-1}$_{4}$ by a non-homogeneous source term ${f}$ and we leave the equation of the density \eqref{2-1}$_{1}$ unchanged. Hence we start by analyzing the stabilizability of the system \begin{equation}\label{2-2} \left\{\begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad &\mbox{in} \quad Q_{\infty}, \vspace{1.mm}\\ \sigma (x,t)=0 \quad& \mbox{on} \quad \Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega,\\[1.mm] \displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla){v}_s +\nabla q={f}\quad &\mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ \mbox{div}\,{y}=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ {y}=0\quad& \mbox {on} \quad (\Gamma_{0}\cup\Gamma_{out}) \times (0,\infty),\\[1.mm] {y}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x) \quad& \mbox{on} \quad \ \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_0\quad&\mbox{in}\quad\Omega. \end{array}\right. \end{equation} (ii) Section \ref{velocity} is devoted to the study of the stabilization of the linearized Oseen equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$. In that direction we first write \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$ using operator notations. This is done in the spirit of \cite{raye} but with suitable modifications which are necessary since our domain is Lipschitz.
To prove the stabilizability of this system we look for a control of the form \eqref{findcon}. We will choose the functions $\{g_{j}\suchthat 1\leqslant j\leqslant N_{c}\},$ supported on $\Gamma_{c},$ so that we can prove some unique continuation property equivalent to the stabilizability of the system under consideration. This is inspired by \cite{raym}. Using the fact that $g_{j}$ (for all $1\leqslant j\leqslant N_{c}$) is supported on a smooth subset of $\Gamma,$ we further show that $g_{j}$ is in $C^{\infty}(\Gamma).$ This in particular implies that the control $u_{c},$ of the form \eqref{findcon}, is smooth in the space variable.\\ (iii) Next our aim is to find a boundary control which is given in terms of a feedback law. At the same time we have to design the control such that the velocity $y$ belongs to the space $V^{2,1}(Q_{\infty}).$ Indeed the $H^{2}(\Omega)$ regularity of the velocity field will be used later to prove the stabilization of the continuity equation. This creates another difficulty because, to prove the $V^{2,1}(Q_{\infty})$ regularity of $y,$ solution of \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$, one must have a compatibility between the initial velocity $y_{0},$ assumed to be in $V^{1}_{0}(\Omega),$ and the boundary condition ($i.e.$ the control $u$). We deal with this issue by adding a system of ordinary differential equations satisfied by $w_{c}.$ The corresponding extended system satisfied by $(y,w_{c})$ reads as follows \begin{equation}\label{2-7} \left\{\begin{array}{ll} \displaystyle \frac{\partial{y}}{\partial t}-\beta{y}-\nu\Delta{y}+(v_{s}\cdot \nabla){y}+({y}\cdot \nabla)v_{s}+\nabla q= {f}\quad& \mbox{in} \quad Q_{\infty}, \vspace{1.mm}\\ \mbox{div}\,{y}=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ {y}=0\quad &\mbox{on}\quad ({\Gamma}_{0}\cup \Gamma_{out})\times (0,\infty), \vspace{1.mm}\\ {y}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x)\quad &\mbox{on}\quad \Gamma_{in}\times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_{0}\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ w_{c}^{'}+{\gamma }w_{c}=\varphi_{c}\quad& \mbox{in}\quad (0,\infty), \vspace{1.mm}\\ w_{c}(0)=0,& \end{array} \right. \end{equation} where $\gamma>0$ is a constant and $\varphi_{c}\in \mathbb{R}^{N_{c}}$ is a new control variable which will be determined later as a feedback of the pair $(y,w_{c}).$ Since the initial datum $y_{0}\in V^{1}_{0}(\Omega)$ vanishes on $\Gamma,$ imposing $w_{c}(0)=0$ furnishes the desired compatibility between the initial and boundary conditions of $y,$ which is necessary to obtain the $V^{2,1}(Q_{\infty})$ regularity of $y.$\\ First we will construct the control $\varphi_{c},$ given in terms of a feedback operator, which is able to stabilize the homogeneous ($i.e.$ when $f=0$) extended system \eqref{2-7} by solving a Riccati equation. Then we show that the same control stabilizes the entire non-homogeneous ($i.e.$ with the non-homogeneous source term $f$) extended system \eqref{2-7}, assuming that the non-homogeneous term $f$ belongs to some appropriate space.\\[1.mm] (iv) In Section \ref{density}, we study the stability of the continuity equation \eqref{2-2}$_{1}$-\eqref{2-2}$_{3}$. We assume that the velocity field is in ${V}^{2,1}(Q_{\infty})$ and that $\sigma_{0}\in L^{\infty}(\Omega)$ satisfies \eqref{1-4} (recall from \eqref{y0s0} that $\sigma_{0}=\rho_{0}$).
Since $\sigma_{0}\in L^{\infty}(\Omega)$ and the transport equation has no regularizing effect, we expect that $\sigma\in L^{\infty}_{loc}(Q_{\infty}).$ The Cauchy problem for the continuity equation in the presence of an inflow boundary is rather delicate. In our case we use results from \cite{boy} for the existence of a unique renormalized weak solution of the problem \eqref{2-2}$_{1}$-\eqref{2-2}$_{3}$ in the space $L^{\infty}(Q_{\infty}).$ Our proof of the stabilization of the transport equation satisfied by the density relies on the fact that the characteristic equation corresponding to the velocity field is well posed. As we are dealing with velocity fields in $L^{2}(0,\infty;H^{2}(\Omega)),$ which is not embedded in $L^{1}_{loc}(0,\infty;W^{1,\infty}(\Omega))$ in dimension two, our analysis relies on \cite{zuazua} (see also \cite[Theorem 3.7]{Bahouri-Chemin-Danchin}), stating the well-posedness of the equation of the flow as a consequence of the Osgood condition. Then, considering the velocity field $(v_{s}+e^{-\beta t}y)$ as a small perturbation of $v_{s}$ (see \eqref{vs} for the definition), we prove that the characteristic curves corresponding to the perturbed velocity field stay close to those of $v_{s}$ in a suitable norm. Using the fact that the characteristics corresponding to the velocity fields $v_{s}$ and $(v_{s}+e^{-\beta t}y)$ are close, we show that the particles initially lying in the support of $\sigma_{0}$ are transported out of the domain in some finite time $T>T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}$ along the flow corresponding to the perturbed velocity field. Consequently, the solution $\rho$ of the equation \eqref{1-3}$_{1}$-\eqref{1-3}$_{3}$ reaches exactly the target density $\rho_{s}=1$ after the time $T.$ \\ (v) Finally in Section \ref{final}, we will use Schauder's fixed point theorem to conclude that the control designed in step (iii) locally stabilizes the nonlinear coupled system \eqref{2-2} and consequently Theorem \ref{main} follows. \subsection{Bibliographical comments} In the literature many works have been dedicated to the study of the incompressible Navier-Stokes equations. For the classical results concerning the existence-uniqueness and regularity issues of the constant density incompressible Navier-Stokes equations we refer the reader to \cite{temam}. The reader can also look into \cite{galdi} for a thorough analysis of the subject. Intricate situations may arise due to the lack of regularity when special geometric assumptions are imposed on the boundary $\partial\Omega.$ For example, the domain can have corners or edges of prescribed geometric shape. For the analysis of these situations the interested reader may look into \cite{mazya} and \cite{deuring}. In the present article the functional setting for the incompressible Navier-Stokes equations is motivated by \cite{raye}. The results of \cite{raye} are stated in a domain with smooth boundary. Thus to adapt the functional framework from \cite{raye} to the case of a rectangular domain we have used some results from \cite{gris} and \cite{Osborn}.\\ Regarding the Cauchy problem for the non-homogeneous Navier-Stokes equations, the existence of classical solutions with homogeneous Dirichlet boundary condition for the velocity in space dimension three is studied in \cite{anton}. Results concerning the existence-uniqueness of global in time strong solutions (with small initial data and small volume force) in space dimension three can be found in \cite{lady}.
In dimension two the existence and uniqueness of global in time solutions (without any smallness restriction on the data) are also proved in \cite{lady}. In both of these references the velocity field is Lipschitz and the initial condition of the density is smooth enough, hence the transport equation satisfied by the density can be classically solved using the method of characteristics. To deal with less regular velocity fields, the concept of renormalized solution was initially developed in \cite{Lio} and later suitably adapted in several contexts. For instance, one can find an application of a suitable variation of the DiPerna-Lions theory to prove an existence and uniqueness result for the inhomogeneous Navier-Stokes equation in \cite{des}. All of these articles assume that the velocity field satisfies $v\cdot n=0.$ In the present article we are dealing with the target velocity $v_{s},$ which is inflow on a part of the boundary $\partial\Omega.$ For a velocity field with inflow, one must assume a suitable boundary condition for the density so that the transport equation satisfied by the density is well posed. This problem is analyzed in the articles \cite[Chapter VI]{boy} and \cite{boy2}, where the authors suitably define the trace for the weak solution of the transport equation. They also prove that these traces enjoy the renormalization property. In the present article we use the existence, uniqueness and stability results for the transport equation from \cite{boy} and \cite{boy2}. For a more intricate case involving nonlinear outflow boundary conditions, similar results can be found in \cite{boy1}.\\ There is a rich literature where the question of the feedback boundary stabilization of the constant density incompressible Navier-Stokes equations is investigated. For the feedback boundary stabilization of a general semilinear parabolic equation one can look into the article \cite{fur2}. The feedback stabilization of the 2D and 3D constant density Navier-Stokes equations can be found in the articles \cite{fur1} and \cite{fur3} respectively. Concerning the stabilization of the homogeneous Navier-Stokes equations one can also consult \cite{ray2f} and \cite{ray3}, where the feedback boundary controls are achieved by solving optimal control problems. We would also like to mention the articles \cite{munt} and \cite{barb}, where the authors prove the feedback stabilization of the same model around the Poiseuille profile by using normal velocity controllers. The idea of constructing a finite dimensional boundary feedback control to stabilize a linear parabolic equation dates back to the work \cite{tri1}. In our case we adapt the ideas from the articles \cite{raym} and \cite{ray2f} in order to construct a feedback boundary control with finite dimensional range to stabilize the linear Oseen equations. Actually, for constant density fluids, the article \cite{raym} deals with a more intricate case involving mixed boundary conditions. Control properties of the variable density Navier-Stokes equations have been studied in the article \cite{frcara}, which proves several optimal control results in the context of various cost functionals. We also refer to the article \cite{erv1} where the authors prove the local exact controllability to a smooth trajectory of the non-homogeneous incompressible Navier-Stokes equation.\\ The study of the controllability and stabilizability issues of a system coupling equations of parabolic and hyperbolic nature is relatively new in the literature.
We would like to quote a few articles in that direction. Null-controllability of a system of linear thermoelasticity (coupling wave and heat equations) in an $n$-dimensional, compact, connected $C^{\infty}$ Riemannian manifold is studied in \cite{lebeauzua}. Controllability and stabilizability issues of compressible Navier-Stokes equations are investigated in \cite{raymcho}, \cite{rammy}, \cite{eggp} (in dimension $1$) and \cite{ervgugla} (in dimensions $2$ and $3$). The compressible Navier-Stokes equations are also modeled by a coupled system of momentum balance and mass balance equations, but the coupling is different from the one we consider in system \eqref{1-3}.\\ Let us emphasize that in the system \eqref{1-3} the control acts only on the velocity of the fluid and not on the density. In the literature there are articles dealing with controllability issues of a system of PDEs in which the controls act only on some components of the system. We would like to quote a few of them. We refer to \cite{lissy}, where the authors prove local null-controllability of the three dimensional incompressible Navier-Stokes equations using a distributed control with two vanishing components. A related result concerning the stabilizability of the two-dimensional incompressible Navier-Stokes equations using a control acting on the normal component of the upper boundary is proved in \cite{ervcho}. In \cite{lebeauzua}, to prove the null-controllability of a system of linear thermoelasticity, the authors consider the control on the wave equation, $i.e.$ on the hyperbolic part, and not on the parabolic equation modeling the temperature. On the other hand, controllability and stabilizability issues of one-dimensional compressible Navier-Stokes equations have been studied in \cite{raymcho} and \cite{rammy} by using only a control acting on the velocity. In the present article we also consider the control on the velocity and not on the density, but our approach exploits more directly and in a more intuitive manner the geometry of the flow of the target velocity in order to control the hyperbolic transport equation modeling the density. \subsection{Outline} In Section \ref{velocity} we study the feedback stabilization of the velocity. Section \ref{density} is devoted to the stabilization of the density. In Section \ref{final} we use a fixed point argument to prove the stabilizability of the coupled system \eqref{1-3}. Finally in Section \ref{furcom} we briefly comment on how to adapt our analysis if one wishes to control the outflow boundary $\Gamma_{out}$ or the lateral boundary $\Gamma_{0}$ of the channel $\Omega.$ \section{Stabilization of the Oseen equations}\label{velocity} The goal of this section is to discuss the stabilization of the Oseen equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$. We will first design a localized boundary control with finite dimensional range to stabilize the linear Oseen equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}.$ We will then construct the control as a feedback of $(y,w_{c}),$ where the pair $(y,w_{c})$ solves the extended system \eqref{2-7}. The plan of this section is as follows.\\ (i) In Section \ref{stablin}, we study the stabilization of the homogeneous linear system (with $f=0$) \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$, using a finite dimensional boundary control.\\ (ii) We will analyze the feedback stabilization of the extended system \eqref{2-7} in Section \ref{stabext}.
Moreover with this feedback control we will prove the $V^{2,1}(Q_{\infty})$ regularity of the solution of the linear Oseen equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$. Using a further regularity estimate (see \eqref{dofcon}) of the control $u$ we show that $(e^{-\beta t}y+v_{s})$ has the same inflow and outflow as $v_{s},$ provided the initial condition $y_{0}$ and the non-homogeneous source term $f$ (appearing in \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$) are suitably small (see Corollary \ref{p3.0.2}). \subsection{Stabilization of the linear Oseen equations}\label{stablin} In what follows we will define some operators and present some of their properties which help in studying the linearized Oseen equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}.$ \subsubsection{Writing the equations with operators} The following results are taken from \cite{raye}, where they are stated in a $C^{2}$ domain. It is necessary to make suitable changes to adapt those results to our setting, since the domain $\Omega$ is Lipschitz. Without going into the details of the proofs, we will just comment on how to adapt those results in our case.\\ Let $P$ be the orthogonal projection operator from $L^{2}(\Omega)$ onto ${V}^{0}_{n}(\Omega),$ known as the Helmholtz or Leray projector (see \cite[Section 1.4]{temam}). \newline We denote by $ (A,\mathcal{D}(A)) $ (the Oseen operator) and $(A^*,\mathcal{D}(A^*))$ the unbounded operators in ${V}^{0}_{n}(\Omega),$ defined by \begin{equation}\label{oprA} \begin{array}{ll} \mathcal{D}({A})={V}^{2}(\Omega)\cap {V}^{1}_{0}(\Omega),\quad & A{y}=\nu P \Delta {y}+\beta {y}-P(({v}_s\cdot \nabla){y})-P(({y}\cdot \nabla){v}_s), \vspace{1.mm}\\ {\mathcal{D}}({A}^*)={V}^{2}(\Omega)\cap {V}^{1}_{0}(\Omega), \quad & A^{*}{y}=\nu P\Delta {y} +\beta {y}+P(({v}_s\cdot \nabla){y})-P((\nabla {v}_s)^{T}{y}). \end{array} \end{equation} For the $H^{2}(\Omega)$ regularity of the solutions of the homogeneous Dirichlet boundary value problems corresponding to the operators $A$ and $A^{*}$ in a rectangular domain $\Omega,$ one can apply \cite[Theorem 3.2.1.3]{gris}. Since $v_{s}$ is smooth with $\mbox{div}(v_{s})=0,$ we can prove the following lemma. \begin{lem}\label{estres} \cite[Section 2.2]{ray2} There exists $\lambda_0 >0$ in the resolvent set of $A$ such that the following hold \begin{equation}\label{2-4} \begin{array}{l} \langle (\lambda_0 I -A){y},{y}\rangle _{{V}^{0}_{n}(\Omega)}\geq \frac{1}{2}|{y} |^{2}_{V^{1}_{0}(\Omega)}\quad \mbox{for all}\quad {y} \in \mathcal{D}(A), \vspace{.1cm}\\ {\mbox{and}}\\[1.mm] \langle(\lambda_0 I-A^*){y},{y}\rangle_{{V}^{0}_{n}(\Omega)}\geq\frac{1}{2}|{y} |^{2}_{V^{1}_{0}(\Omega)}\quad \mbox{for all} \quad {y} \in \mathcal{D}(A^*). \end{array} \end{equation} \end{lem} In Lemma \ref{estres} we can always choose $\lambda_{0}>\beta,$ taking $\lambda_{0}$ larger if necessary. Throughout this article we will stick to this assumption. Now, Lemma \ref{estres} can be used to prove the following. \begin{lem}\label{t2-1} The unbounded operator $(A,\mathcal{D}(A))$ (respectively $(A^{*},\mathcal{D}(A^{*}))$) is the infinitesimal generator of an analytic semigroup on $V^{0}_{n}(\Omega)$. Moreover the resolvent of $A$ is compact. \end{lem} \begin{proof} The proof of the fact that $(A,\mathcal{D}(A))$ (respectively $(A^{*},\mathcal{D}(A^{*}))$) generates an analytic semigroup on $V^{0}_{n}(\Omega)$ uses the resolvent estimate \eqref{2-4} and can be found in \cite[Lemma 4.1]{raye}.
One can mimic the arguments used in \cite[Lemma 3.1]{fur1} to show that the resolvent of $A$ is compact. The reader can also look into \cite[Section 3]{ray2f}. \end{proof} Now we want to find a suitable operator $B$ to write down the Oseen equations as a boundary control system.\\ Consider the following system of equations \begin{equation}\label{liftopt} \left\{ \begin{array}{ll} \lambda_0 {y} - \nu \Delta {y}-\beta {y}+(({v}_s\cdot \nabla){y})+(({y}\cdot \nabla){v}_s)+\nabla q=0\quad &\mbox{in}\quad \Omega, \vspace{1.mm}\\ \mbox{div}({y})=0 \quad& \mbox{in}\quad \Omega, \vspace{1.mm}\\ {y}={u} \quad& \mbox{on}\quad \Gamma. \end{array}\right. \end{equation} \begin{lem}\label{h2reg} Let $\lambda_{0}$ be as in Lemma \ref{estres}. For $u\in V^{3/2}(\Gamma),$ the system \eqref{liftopt} admits a unique solution $(y,q)\in V^{2}(\Omega)\times {H^{1}(\Omega)}/{\mathbb{R}}$ and moreover the following inequality holds \begin{equation}\label{estyq} \begin{array}{l} \|y\|_{V^{2}(\Omega)}+\|q\|_{H^{1}(\Omega)/\mathbb{R}}\leqslant C\|u\|_{H^{3/2}(\Gamma)}, \end{array} \end{equation} for some constant $C>0.$ \end{lem} \begin{remark} Lemma \ref{h2reg} is inspired by \cite[Lemma B.1.]{raye}, where it is proved in the case of a $C^{2}$ domain. \end{remark} \begin{proof}[Proof of Lemma \ref{h2reg}] We write $(y,q)=(y_{1},q_{1})+(y_{2},q_{2}),$ such that $(y_{1},q_{1})$ satisfies \begin{equation}\label{estyq1} \left\{ \begin{array}{ll} \lambda_{0}y_{1}-\nu\Delta y_{1}-\beta y_{1}+\nabla q_{1}=0\quad&\mbox{in}\quad\Omega,\\ \mbox{div}(y_{1})=0\quad&\mbox{in}\quad\Omega,\\ y_{1}=u\quad&\mbox{on}\quad\Gamma \end{array}\right. \end{equation} and $(y_{2},q_{2})$ satisfies \begin{equation}\label{estyq2} \left\{ \begin{array}{ll} \lambda_0 {y}_{2} - \nu \Delta {y}_{2}-\beta {y}_{2}+(({v}_s\cdot \nabla){y}_{2})+(({y}_{2}\cdot \nabla){v}_s)+\nabla q_{2}=-(({v}_s\cdot \nabla){y}_{1})-(({y}_{1}\cdot \nabla){v}_s)\quad& \mbox{in}\quad \Omega, \vspace{1.mm}\\ \mbox{div}({y}_{2})=0 \quad &\mbox{in}\quad \Omega, \vspace{1.mm}\\ {y}_{2}=0 \quad &\mbox{on}\quad \Gamma. \end{array}\right. \end{equation} As $u\in V^{3/2}(\Gamma),$ the solution to \eqref{estyq1} satisfies $(y_{1},q_{1})\in V^{2}(\Omega)\times H^{1}(\Omega)/\mathbb{R}$ (see \cite{Osborn}) and the following inequality is true \begin{equation}\label{estyq3} \begin{array}{l} \|y_{1}\|_{V^{2}(\Omega)}+\|q_{1}\|_{H^{1}(\Omega)/\mathbb{R}}\leqslant C\|u\|_{H^{3/2}(\Gamma)}, \end{array} \end{equation} where $C>0$ is a constant. Using \eqref{estyq3} we observe that the right hand side of \eqref{estyq2}$_{1}$ is in $H^{1}(\Omega).$ Hence we get that $y_{2}\in V^{2}(\Omega)\cap V^{1}_{0}(\Omega).$ Then the corresponding pressure $q_{2}\in H^{1}(\Omega)/\mathbb{R}$ can be recovered using De Rham's theorem (see \cite[Section 1.4]{temam}). Using \eqref{estyq3} one also has the following inequality \begin{equation}\label{estyq4} \begin{array}{l} \|y_{2}\|_{V^{2}(\Omega)}+\|q_{2}\|_{H^{1}(\Omega)/\mathbb{R}}\leqslant C\|u\|_{H^{3/2}(\Gamma)}, \end{array} \end{equation} for some positive constant $C.$ The inequalities \eqref{estyq3} and \eqref{estyq4} together yield \eqref{estyq}.
\end{proof} Now for $u\in V^{3/2}(\Gamma),$ we define the Dirichlet lifting operators $D_{A}u=y$ and $D_{p}u=q,$ where $(y,q)$ is the solution of \eqref{liftopt} with Dirichlet data $u.$ \begin{lem}\label{l2-2} (i) The operator $D_A$ can be extended as a bounded linear map from $V^{0}(\Gamma)$ to $V^{1/2}(\Omega).$ Moreover $D_{A}\in \mathcal{L}({V}^{s}(\Gamma),{V}^{s+1/2}(\Omega))$ for all $0\leqslant s \leqslant 3/2.$\\ (ii) The operator $D_{A}^{*},$ the adjoint of $D_A$ computed as a bounded operator from $V^{0}(\Gamma)$ to $V^{0}(\Omega),$ is a bounded linear operator from $V^{0}(\Omega)$ to $V^{0}(\Gamma)$ and is given as follows \begin{equation}\label{representation} \begin{array}{l} \displaystyle D^{*}_{A}{g} = -\nu \frac{\partial {z}}{\partial {n}}+ \pi {n} -{\frac{1}{|\Gamma|}}\left(\int\limits_{\Gamma}\pi \right){n}, \end{array} \end{equation} where $(z,\pi)$ is the solution of \begin{equation}\label{defda*} \left\{ \begin{array}{ll} \lambda_{0} {z} - \nu \Delta {z}-\beta {z}-({v}_s\cdot \nabla){z}+(\nabla {v}_s)^{T}{z}+\nabla \pi=g\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ \textup{div}{z}=0 \quad& \mathrm{in}\quad \Omega, \vspace{1.mm}\\ {z}=0 \quad& \mathrm{on}\quad \Gamma. \end{array}\right. \end{equation} Here $|\Gamma|$ is the one-dimensional Lebesgue measure of $\Gamma$. Moreover $D_{A}^{*}\in\mathcal{L}({V}^{0}(\Omega),{V}^{1/2}(\Gamma)).$\\ (iii) The operator $D_{A}^{*}$ can be extended as a bounded linear operator from $H^{-\frac{1}{2}+\kappa}(\Omega)$ to $V^{\kappa}(\Gamma),$ for all $0<\kappa<\frac{1}{2},$ $i.e.$ \begin{equation}\label{regD*} \begin{array}{l} D_{A}^{*}\in\mathcal{L}({H}^{-\frac{1}{2}+\kappa}(\Omega),{V}^{\kappa}(\Gamma))\quad\mbox{for all}\quad 0<\kappa<\frac{1}{2}. \end{array} \end{equation} \end{lem} \begin{remark} The first two parts of Lemma \ref{l2-2} are inspired by \cite[Lemma B.4.]{raye}, where they are proved in the case of a $C^{2}$ domain. \end{remark} \begin{proof}[Proof of Lemma \ref{l2-2}] (i) In Lemma \ref{h2reg} we have proved that $D_{A}$ is a bounded operator from $V^{3/2}(\Gamma)$ into $V^{2}(\Omega).$ Following \cite[Theorem B.1]{raye} one obtains that the operator $D_{A}$ can be extended as a bounded linear map from $V^{0}(\Gamma)$ into $V^{1/2}(\Omega)$ (in the sense of variational formulation). Hence one can use interpolation to prove that $D_{A}$ is bounded from $V^{s}(\Gamma)$ to $V^{s+1/2}(\Omega),$ for all $0\leqslant s\leqslant 3/2.$\\ (ii) The second part can be done following the proof of \cite[Lemma B.4]{raye}. It is therefore left to the reader.\\ (iii) For the final part, in view of \cite[Appendix B, Lemma B.1.]{raye} one first observes that the map $\mathcal{M}_{g\rightarrow (z,\pi)},$ mapping $g$ to $(z,\pi)$ (where $g,$ $z$ and $\pi$ are as in \eqref{defda*}), satisfies the following \begin{equation}\label{interreg} \begin{array}{l} \mathcal{M}_{g\rightarrow (z,\pi)}\in \mathcal{L}(L^{2}(\Omega),(V^{2}(\Omega)\cap V^{1}_{0}(\Omega))\times H^{1}(\Omega)/\mathbb{R}). \end{array} \end{equation} Now following \cite[Appendix B, Theorem B.1.]{raye} one can use the method of transposition to define a weak type solution of the problem \eqref{defda*} when $g\in (H^{2}(\Omega)\cap H^{1}_{0}(\Omega))',$ where $(H^{2}(\Omega)\cap H^{1}_{0}(\Omega))'$ is the dual of the space $H^{2}(\Omega)\cap H^{1}_{0}(\Omega),$ provided that $L^{2}(\Omega)$ is identified with its dual.
In particular one has the following \begin{equation}\label{interreg1} \begin{array}{l} \mathcal{M}_{g\rightarrow (z,\pi)}\in \mathcal{L}((H^{2}(\Omega)\cap H^{1}_{0}(\Omega))',V^{0}(\Omega)\times (H^{1}(\Omega)/\mathbb{R})'). \end{array} \end{equation} Now let us assume $g\in H^{-1}(\Omega),$ where $H^{-1}(\Omega)$ denotes the dual of $H^{1}_{0}(\Omega)$ with $L^{2}(\Omega)$ as the pivot space. Using \eqref{interreg1} and the fact that $H^{-1}(\Omega)\subset (H^{2}(\Omega)\cap H^{1}_{0}(\Omega))'$ (since $(H^{2}(\Omega)\cap H^{1}_{0}(\Omega))$ is dense in $H^{1}_{0}(\Omega)$) one can write \eqref{defda*} as follows \begin{equation}\label{defda**} \left\{ \begin{array}{ll} - \nu \Delta {z}+\nabla \pi=g^{*}\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ \textup{div}{z}=0 \quad& \mathrm{in}\quad \Omega, \vspace{1.mm}\\ {z}=0 \quad& \mathrm{on}\quad \Gamma, \end{array}\right. \end{equation} where $$g^{*}=g-\lambda_{0} {z}+\beta {z}+({v}_s\cdot \nabla){z}-(\nabla {v}_s)^{T}{z}\in H^{-1}(\Omega).$$ Now \cite[Theorem IV.5.2]{boy} furnishes the following regularity \begin{equation}\label{penulest} \begin{array}{l} \mathcal{M}_{g\rightarrow (z,\pi)}\in \mathcal{L}(H^{-1}(\Omega),V^{1}_{0}(\Omega)\times L^{2}(\Omega)/\mathbb{R}). \end{array} \end{equation} Now from \eqref{interreg} and \eqref{penulest}, using the interpolation result \cite[Theorem 5.1.]{liomag}, one has \begin{equation}\label{penulest*} \begin{array}{l} \mathcal{M}_{g\rightarrow (z,\pi)}\in \mathcal{L}(H^{-\frac{1}{2}+\kappa}(\Omega),(V^{\frac{3}{2}+\kappa}(\Omega)\cap V^{1}_{0}(\Omega))\times H^{\frac{1}{2}+\kappa}(\Omega)/\mathbb{R}), \end{array} \end{equation} for $-\frac{1}{2}\leqslant\kappa\leqslant\frac{1}{2}.$\\ Finally the definition \eqref{representation} of $D^{*}_{A}$ and \eqref{penulest*} in particular provide that $$D_{A}^{*}\in\mathcal{L}({H}^{-\frac{1}{2}+\kappa}(\Omega),{V}^{\kappa}(\Gamma)),\,\,\mbox{for all}\,\,0<\kappa<\frac{1}{2}.$$ Hence we are done with the proof of Lemma \ref{l2-2}. \end{proof} \begin{remark} In part $(ii)$ of Lemma \ref{l2-2}, the operator $D_{A}^{*}$ is defined on the space of divergence free functions but in part $(iii)$ we extended this definition by removing the divergence free constraint on the elements of the domain of $D^{*}_{A}.$ This is possible since it is not necessary to have a divergence free function $g$ in order to solve \eqref{defda*}. \end{remark} In order to localize the control of the velocity on $\Gamma_{c}$ (defined in \eqref{dgc}), we introduce the operator $M,$ which is defined as follows \begin{equation}\label{docn} \begin{array}{l} \displaystyle M{g}(x)=m(x){g}(x)- \frac {m}{\displaystyle\int\limits_{\Gamma}m}\left(\int\limits_{\Gamma}m{g}\cdot {n} \right){n}(x)\quad\mbox{for all}\quad x\in\Gamma. \end{array} \end{equation} In the expression \eqref{docn} the weight function $m \in C^{\infty} (\Gamma)$ takes values in $[0,1]$ and is supported in $\Gamma_{c}\subset \Gamma_{in}.$ Moreover, $m$ equals $1$ on some open connected subset \begin{equation}\label{gammac+} \begin{array}{l} \Gamma_{c}^{+}\Subset \Gamma_{c}. \end{array} \end{equation} So the operator $M$ localizes the support of the control in $\Gamma_{c}$ and also guarantees that $Mg\in V^{0}(\Gamma)$ for any $g\in L^{2}(\Gamma).$ \begin{lem}\label{l2-3} \cite[Lemma 2.3]{ray2f} The operator $M \in \mathcal{L}({V}^{0}(\Gamma))$ (defined in \eqref{docn}) is symmetric.
\end{lem} Sometimes we might use the notation \begin{equation}\label{stten} \begin{array}{l} \mathbb{T}(v,p)=\nu(\nabla{v}+(\nabla v)^{T})-pI, \end{array} \end{equation} to denote the Cauchy stress tensor corresponding to a vector field $v$ and a pressure $p.$\\ We now define the operator \begin{equation}\label{DEFB} \begin{array}{l} B=(\lambda_{0}I-A)PD_{A}M\in \mathcal{L}({V}^{0}(\Gamma),(\mathcal{D}(A^{*}))'), \end{array} \end{equation} where $(\mathcal{D}(A^{*}))'$ denotes the dual of the space $\mathcal{D}(A^{*})$ with $V^{0}_{n}(\Omega)$ as the pivot space. \begin{prop}\label{p2-4} $(i)$ The adjoint of the operator $B,$ computed for the duality structure $\langle \cdot,\cdot\rangle_{(\mathcal{D}(A^{*})',\mathcal{D}(A^{*}))},$ that we will denote by $B^{*}$ in the following, satisfies $B^{*}\in \mathcal{L} (\mathcal{D}(A^{*}),{V}^{0}(\Gamma))$ and for all $\Phi\in\mathcal{D}(A^{*}),$ \begin{align} \label{exB*phi1} B^{*}\Phi&=M\left(-\nu \frac{\partial \Phi}{\partial {n}}+\left(\psi -{\frac{1}{|\Gamma|}}\left(\int\limits_{\Gamma}\psi \right)\right){n}\right)\\ \label{exB*phi2} &=-M\mathbb{T}\left(\Phi,\left(\psi -{\frac{1}{|\Gamma|}}\left(\int\limits_{\Gamma}\psi \right)\right)\right)n, \end{align} where \begin{equation}\label{exB*phi3} \begin{array}{l} \nabla \psi =(I-P)[\nu \Delta \Phi +({v}_s \cdot \nabla)\Phi-(\nabla {v}_s)^{T}\Phi], \end{array} \end{equation} and $\mathbb{T}$ denotes the stress tensor as defined in \eqref{stten}.\\ $(ii)$ There exists a positive constant $\omega>0$ such that the operator $B^{*}$ can be extended as a bounded linear map from $\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}})$ to $V^{\kappa}(\Gamma),$ for all $0<\kappa<\frac{1}{2}$ $i.e.$ \begin{equation}\label{extnregB*} \begin{array}{l} B^{*}\in\mathcal{L}(\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}}),V^{\kappa}(\Gamma))\quad\mbox{for all}\quad 0<\kappa<\frac{1}{2}. \end{array} \end{equation} \end{prop} \begin{proof} (i) From Lemma \ref{l2-2}, we know that \begin{equation}\label{B*phi} \begin{array}{l} \displaystyle B^{*}\Phi=MD^{*}_{A}P(\lambda_{0}I-A^{*})\Phi=M\left(-\nu \frac{\partial \hat\Phi}{\partial {n}}+\left(\psi -{\frac{1}{|\Gamma|}}\left(\int\limits_{\Gamma}\psi \right)\right){n}\right), \end{array} \end{equation} where \begin{equation}\label{defda*1} \left\{ \begin{array}{ll} \lambda_{0} {\hat{\Phi}} - \nu \Delta {\hat{\Phi}}-\beta {\hat{\Phi}}-(({v}_s\cdot \nabla){\hat{\Phi}})+(\nabla {v}_s)^{T}{\hat{\Phi}}+\nabla \psi=P(\lambda_{0}I-A^{*})\Phi\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ \textup{div}{\hat{\Phi}}=0 \quad& \mathrm{in}\quad \Omega, \vspace{1.mm}\\ {\hat{\Phi}}=0 \quad& \mathrm{on}\quad \Gamma. \end{array}\right. \end{equation} This gives $\hat{\Phi}=\Phi$ and the expression \eqref{exB*phi3}. Hence the representation \eqref{exB*phi1} directly follows from \eqref{B*phi}. Also \eqref{exB*phi2} follows from \eqref{exB*phi1} because $(\nabla\Phi)^{T}n=0$ on $\Gamma$ (this can be easily deduced from the fact that $\Phi$ on $\Gamma$ is zero and $\mbox{div}(\Phi)=0$ on $\Omega$).\\ (ii) Recall from Lemma \ref{t2-1} that $(A^{*},\mathcal{D}(A^{*}))$ generates an analytic semigroup on $V^{0}_{n}(\Omega).$ Hence one can always choose a large enough positive constant $\omega$ from the resolvent set of $A,$ such that the spectrum of $(A^{*}-\omega I)$ lies in the open left half-plane. Now following the definition \cite[p. 329, Section 7.4, Eq. 
7.4.3]{fattorini} one can define the operator $(\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}}$ where $0<\kappa<\frac{1}{2}.$ Let us consider $\Phi\in \mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}})$ where $0<\kappa<\frac{1}{2}.$ Since $$\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}})=[V^{2}(\Omega)\cap V^{1}_{0}(\Omega),V^{0}_{n}(\Omega)]_{\frac{1}{4}-\frac{\kappa}{2}}=V^{\frac{3}{2}+\kappa}(\Omega)\cap V^{1}_{0}(\Omega)$$ (for details on the characterization of domains of fractional powers we refer to \cite{lionsinterpole}), one observes the following \begin{equation}\label{incregB*} \begin{array}{l} P(\lambda_{0}I-A^{*})\Phi\in H^{-\frac{1}{2}+\kappa}(\Omega). \end{array} \end{equation} Now one can use the expression of $B^{*}$ as given by \eqref{B*phi} and part $(iii)$ of Lemma \ref{l2-2} to prove \eqref{extnregB*}. \end{proof} Now following \cite{raye} the Oseen equations \begin{equation}\label{2-5} \left\{ \begin{array}{ll} \displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla)v_s +\nabla q=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \mbox{div}{y}=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle {y}=0\quad& \mbox {on} \quad (\Gamma_0\cup\Gamma_{out}) \times (0,\infty), \vspace{1.mm}\\ \displaystyle {y}=M{u} \quad &\mbox{on} \quad \ \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ \displaystyle {y}(x,0)={y}_0\quad & \mbox{in}\quad \Omega, \end{array} \right. \end{equation} can be written in the following evolution equation form \begin{equation}\label{2-6} \left\{ \begin{array}{ll} P{y}'=AP{y}+B{u}\quad& \mbox{in}\quad(0,\infty), \vspace{.1cm}\\ P{y}(0)={y}_{0},&\\[1.mm] (I-P){y}=(I-P)D_{A}M{u} \quad& \mbox{in}\quad (0,\infty). \end{array}\right. \end{equation} In the following subsection we discuss some spectral properties of the Oseen operator $A$ and then we define a suitable control space in order to construct a control function which stabilizes the Oseen equations. \subsubsection{Spectral properties of $A$ and the stabilizability criterion} Since the resolvent of $A$ is compact (see Lemma \ref{t2-1}), the spectrum $spec(A)$ of the operator $A$ is discrete. Moreover since $A$ is the generator of an analytic semigroup (see Lemma \ref{t2-1}), $spec(A)$ is contained in a sector. Also the eigenvalues are of finite multiplicity and appear in conjugate pairs when they are not real.\\ We denote by $(\lambda_{k})_{k\in \mathbb{N}}$ the eigenvalues of $A.$ Without loss of generality we can always assume that there is no eigenvalue of $A$ with zero real part by fixing a slightly larger $\beta,$ if necessary. So we choose $N_{u}\in\mathbb{N}$ such that \begin{equation}\label{coev} \begin{array}{l} \dots\leqslant\mbox{Re}\lambda_{N_{u}+1}<0< \mbox{Re}\lambda_{N_{u}}\leqslant \dots \leqslant \mbox{Re}\lambda_{1}. \end{array} \end{equation} Following \cite{raym}, we now choose the control space as follows \begin{align}\label{U0} U_{0}=\mbox{vect}\oplus^{N_{u}}_{k=1}(\mbox{Re}B^{*}\mbox{ker}(A^{*}-\lambda_{k}I)\oplus \mbox{Im}B^{*}\mbox{ker}(A^{*}-\lambda_{k}I)). \end{align} The choice \eqref{U0} of the control space plays an important role in proving a unique continuation property which implies the stabilizability of the pair $(A,B).$ Let us choose the functions $g_{j}$ in \eqref{findcon} such that \begin{align}\label{basisU0} \{{{g}_{j}}\suchthat 1\leqslant j \leqslant N_{c}\}\, \mbox{is an orthonormal basis of}\, {U}_{0}.
\end{align} For later use we now prove an additional regularity result for the elements of the control space $U_{0}.$ The following regularity result is true only because the elements of $U_{0}$ are supported on a smooth subset of $\Gamma.$ \begin{lem}\label{smgci} The set $U_{0},$ defined in \eqref{U0}, is a subspace of $C^{\infty}(\Gamma).$ \end{lem} \begin{proof} The function $m$ is supported on $\Gamma_{c},$ which is $C^{\infty}$. In view of the representation \eqref{exB*phi1} of the operator $B^{*}$, we observe that to prove Lemma \ref{smgci} it is enough to show that for each $1\leqslant k\leqslant N_{u},$ any solution $(\phi,\psi)$ to the system \eqref{eivp1} is $C^{\infty}$ in some open set $\Omega_{\Gamma_{c}}$ ($\subset\Omega$) such that $\partial\Omega_{\Gamma_{c}}$ contains $\Gamma_{c}.$ Let us consider $k\in\{1,...,N_{u}\}$ and let $(\phi,\psi)$ solve the following \begin{equation}\label{eivp1} \left\{ \begin{array}{ll} \lambda_{k}\phi-\nu\Delta\phi-\beta\phi-((v_{s}\cdot\nabla)\phi)+(\nabla v_{s})^{T}\phi+\nabla\psi=0\quad&\mbox{in}\quad\Omega,\\[1.mm] \mbox{div}\phi=0\quad&\mbox{in}\quad\Omega,\\[1.mm] \phi=0\quad&\mbox{on}\quad\Gamma. \end{array}\right. \end{equation} We thus apply the elliptic regularity result \cite[Theorem 3.2.1.3]{gris} to show that \begin{equation}\label{phipsi} \begin{array}{l} \phi\in\mathcal{D}({A^{*}})= V^{2}(\Omega)\cap V^{1}_{0}(\Omega)\quad\mbox{and}\quad \psi\in H^{1}(\Omega). \end{array} \end{equation} We will work in a neighborhood of $\Gamma_{c}$ in order to avoid the singularities due to the presence of the corners $(0,0)$ and $(0,1).$ First consider a neighborhood $N^{b}_{\Gamma_{c}}$ of $\Gamma_{c}$ such that neither of the points $(0,0)$ and $(0,1)$ belongs to $N^{b}_{\Gamma_{c}}.$ Now we consider an open set $\Omega_{\Gamma_{c}}$ such that $\Omega_{\Gamma_{c}}\subset {\Omega},$ $\partial\Omega_{\Gamma_{c}}$ (the boundary of $\Omega_{\Gamma_{c}}$) is $C^{\infty}$ and $\partial\Omega_{\Gamma_{c}}\cap \Gamma=N^{b}_{\Gamma_{c}}.$ Let $\Theta\in C^{\infty}(\bar{\Omega}_{\Gamma_{c}})$ be such that $\Theta=1$ on a subset of $\bar\Omega_{\Gamma_{c}}$ containing $\Gamma_{c}$ and $\Theta=0$ on $\partial\Omega_{\Gamma_{c}}\setminus N^{b}_{\Gamma_{c}}.$ One can check, using \eqref{eivp1} to substitute for $-\nu\Delta\phi+\nabla\psi,$ that the function $(\Theta\phi,\Theta\psi)$ satisfies the following \begin{equation}\label{eqpro} \begin{array}{l} - \nu \Delta ({\Theta{\phi}})+\nabla (\Theta\psi)=F(\Theta,\phi,\psi)\quad\mbox{in}\quad \Omega_{\Gamma_{c}}, \end{array} \end{equation} where $$F(\Theta,\phi,\psi)=(\beta-\lambda_{k})\Theta\phi+\Theta(v_{s}\cdot\nabla)\phi-\Theta(\nabla v_{s})^{T}\phi-\nu(\Delta\Theta)\phi-2\nu(\nabla\Theta\cdot\nabla)\phi+\psi\nabla\Theta,$$ and also $\Theta\phi=0$ on $\partial\Omega_{\Gamma_{c}},$ which implies $\displaystyle \int_{\Omega_{\Gamma_{c}}}\textup{div}{({\Theta{\phi}})}=\int_{\partial\Omega_{\Gamma_{c}}}{({\Theta{\phi}})}\cdot\,n=0.$ Using \eqref{phipsi} one verifies that $$F(\Theta,\phi,\psi)\in H^{1}(\Omega_{\Gamma_{c}})\quad\mbox{and}\quad\mbox{div}(\Theta\phi)=\phi\cdot\nabla\Theta\in H^{2}(\Omega_{\Gamma_{c}}).$$ Now we apply \cite[Theorem {IV.5.8}]{boy} to obtain $(\Theta\phi,\Theta\psi)\in H^{3}(\Omega_{\Gamma_{c}})\times H^{2}(\Omega_{\Gamma_{c}}).$ We can use a bootstrap argument to conclude that $(\Theta\phi,\Theta\psi)\in C^{\infty}(\Omega_{\Gamma_{c}}).$ Hence we finally have ${g_{j}}\in C^{\infty}(\Gamma),$ for all $1\leqslant j\leqslant N_{c}.$ \end{proof} We are looking for a control $u$ taking values in $U_{0}.$ We write \begin{equation}\label{fdc} \begin{array}{l} u(x,t)=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){g_{j}}(x),
\end{array} \end{equation} where $w_{c}=(w_{1},...,w_{N_{c}})\in L^{2}(0,\infty;\mathbb{R}^{N_{c}})$ is the control variable. Again in view of \cite{raym} we define a new control operator $\mathcal{B}\in \mathcal{L}(\mathbb{R}^{N_{c}},(\mathcal{D}(A^{*}))')$ as \begin{equation}\label{decB} \begin{array}{l} \mathcal{B}w_{c}=\sum\limits_{j=1}^{N_{c}}{w_{j}}B{g_{j}}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(\lambda_{0}I-A)PD_{A}{g_{j}}. \end{array} \end{equation} Observe that $\mathcal{B}$ is defined by restricting the action of the operator $B$ to $U_{0}.$\\ Let us consider the controlled system \begin{equation}\label{consans} \begin{array}{l} Py'=APy+Bu\,\,\mbox{in}\,\,(0,\infty),\quad Py(0)=y_{0}, \end{array} \end{equation} which we obtain from \eqref{2-6}$_{1}$-\eqref{2-6}$_{2}.$ With the definition \eqref{decB} and a control of the form \eqref{fdc}, the system \eqref{consans} takes the form $$Py'=APy+\mathcal{B}w_{c}\,\,\mbox{in}\,\,(0,\infty),\quad Py(0)=y_{0}.$$ \begin{thm}\label{thstab} Assume that the spectrum of $A$ obeys the condition \eqref{coev}, that the functions $\{{g_{j}}\suchthat 1\leqslant j\leqslant N_{c}\}$ are chosen as in \eqref{basisU0} and that the operator $\mathcal{B}$ is defined by \eqref{decB}. Then the pair $(A,\mathcal{B})$ is stabilizable in $V^{0}_{n}(\Omega).$ \end{thm} Before going into the proof of Theorem \ref{thstab}, let us recall that the pair $(A,\mathcal{B})$ is stabilizable in $V^{0}_{n}(\Omega)$ iff for all $y_{0}\in V^{0}_{n}(\Omega),$ there exists a control $w_{c}\in L^{2}(0,\infty;\mathbb{R}^{N_{c}})$ such that the controlled system $$Py'=APy+\mathcal{B}w_{c}\,\,\mbox{in}\,\,(0,\infty),\quad Py(0)=y_{0},$$ obeys $$\int\limits_{0}^{\infty}\|Py(t)\|^{2}_{V^{0}_{n}(\Omega)}dt<\infty.$$ The proof of Theorem \ref{thstab} in a more intricate situation involving mixed boundary conditions can be found in \cite{raym}. In \cite{raym} the localization operator $M,$ localizing the control, is simply the cutoff function $m,$ whereas in our case $M$ is as defined in \eqref{docn}. For the sake of completeness, we present the proof of Theorem \ref{thstab} below, which follows step by step the one of \cite{raym} up to minor modifications.
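Before giving the proof, let us illustrate the finite dimensional mechanism behind the Hautus criterion used below (a heuristic sketch; the matrices $A_{0},B_{0}$ are purely illustrative and unrelated to the operators of this article). Consider in $\mathbb{R}^{2}$ the controlled system $y'=A_{0}y+B_{0}u$ with \begin{equation}\nonumber \begin{array}{l} A_{0}=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix},\qquad B_{0}=\begin{pmatrix} b_{1}\\ b_{2} \end{pmatrix}. \end{array} \end{equation} The only unstable eigenvalue is $\lambda=1,$ the kernel $\mbox{ker}(\lambda I-A_{0}^{T})$ is spanned by $\phi=(1,0)^{T}$ and $B_{0}^{T}\phi=b_{1}.$ If $b_{1}\neq 0,$ the feedback $u=ky_{1}$ with $1+kb_{1}<0$ damps the unstable mode, whereas if $b_{1}=0$ the unstable equation $y_{1}'=y_{1}$ is completely decoupled from the control and no stabilization is possible. The criterion $\mbox{ker}(\lambda I-A^{*})\cap\mbox{ker}(\mathcal{B}^{*})=\{0\},$ required for every unstable eigenvalue $\lambda,$ rules out precisely this degeneracy, and this is what we verify below for the Oseen operator.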
\begin{proof}[Proof of Theorem \ref{thstab}] \begin{figure}[h!]\label{pic2} \centering \begin{tikzpicture}[scale=0.75] \draw (5,0)node [below] {$\Gamma_b$}; \draw (0,0)node [below] {$0$}; \draw (10,0)node [below right] {$d$}; \draw (0,5)node [above] {$1$}; \draw (10,2.5)node [above right] {$\Gamma_{out}$}; \draw (5,5) node [above] {$\Gamma_h$}; \draw (5,2.5) node [above] {$\Gamma_{0}=\Gamma_{h}\cup\Gamma_{b}$}; \draw (0,0) -- (10,0); \draw (10,0) -- (10,5); \draw (10,5) -- (0,5); \draw (0,5) -- (0,4); \draw (0,1) -- (0,0); \draw [thick,dash dot] (0,1) -- (0,4); \draw [decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt] (0,1) -- (0,4) node [black,midway,xshift=0.8cm] {\footnotesize $\Gamma^{+}_{c}$}; \draw (0,1) -- (-2,1); \draw (-2,1) -- (-2,4); \draw (-2,4) -- (0,4); \end{tikzpicture} \caption{Domain $\Omega_{ex}$.} \end{figure} According to \cite[Theorem 1.2]{raypa} (one can also consult \cite[Chapter V]{ben} for related results), proving the stabilizability of the pair $(A,\mathcal{B})$ is equivalent to verifying the Hautus criterion: \begin{align} \mbox{ker}(\lambda_{k}I-A^{*})\cap \mbox{ker}(\mathcal{B}^{*})=\{0\},\quad\mbox{for all}\quad 1\leqslant k \leqslant N_{u}.\label{Hutus} \end{align} Let $\phi\in \mbox{ker}(\lambda_{k}I-A^{*}).$ Also suppose that $\psi$ is the pressure associated with $\phi,$ $i.e.$ the pair $(\phi,\psi)$ solves \eqref{eivp1}. Now one can use \eqref{decB} and Proposition \ref{p2-4} in order to verify that \begin{align}\label{exB*} \displaystyle \mathcal{B}^{*}\phi&=-\left(\int\limits_{\Gamma_{c}}{g_{j}}\overline{M\mathbb{T}(\phi,\psi)n}\,dx \right)_{1\leqslant j \leqslant N_{c}}\\ &=-\left(\int\limits_{\Gamma_{c}}{g_{j}}M\mbox{Re}\mathbb{T}(\phi,\psi)n\, dx \right)_{1\leqslant j \leqslant N_{c}}+i\left(\int\limits_{\Gamma_{c}}{g_{j}}M\mbox{Im}\mathbb{T}(\phi,\psi)n\, dx \right)_{1\leqslant j \leqslant N_{c}}. \end{align} One can notice that $M\mbox{Re}\mathbb{T}(\phi,\psi)n\in U_{0}$ and $M\mbox{Im}\mathbb{T}(\phi,\psi)n\in U_{0}.$ On the other hand we know that $\{{g_{j}}\}_{1\leqslant j\leqslant N_{c}}$ forms a basis of $U_{0}.$ Hence $\mathcal{B}^{*}\phi=0$ implies that $$M(\mathbb{T}(\phi,\psi)n)\mid_{\Gamma_{c}}=0.$$ This implies that \begin{equation}\label{sgmagc} \begin{array}{l} \mathbb{T}(\phi,\psi)n=C_{0}n\quad\mbox{on}\quad \mbox{supp}\,(m), \end{array} \end{equation} where $C_{0}$ is a constant given by \begin{align} \displaystyle C_{0}=\frac{1}{\displaystyle\int\limits_{\Gamma}m}\left(\int\limits_{\Gamma}m\mathbb{T}(\phi,\psi)n\right).\nonumber \end{align} Now recall that $\phi=0$ on $\Gamma$ and the unit outward normal on $\Gamma_{c}^{+}$ is $(-1,0).$ Also since $\phi\in V^{2}(\Omega),$ one can consider the trace of $\mbox{div}\phi$ on $\Gamma$ to obtain that $\mbox{div}\phi=0$ on $\Gamma.$ Using these facts one can at once deduce from \eqref{sgmagc} that $\displaystyle\frac{\partial\phi}{\partial n}=0$ and $\psi=C_{0}$ on $\Gamma^{+}_{c}.$\\ Now consider the domain $\Omega_{ex}$ which is an extension of the domain $\Omega$ (see Figure 2). Extend the function $\phi$ into $\Omega_{ex}$ by defining it to be zero outside $\Omega,$ and denote the extension also by $\phi.$ Extend $\psi$ into $\Omega_{ex}$ by the constant $C_{0}$ outside $\Omega.$ We denote the extension of $\psi$ by $\psi$ itself.
It is not hard to verify that the extended pair $(\phi,\psi)\in V^{2}(\Omega_{ex})\times H^{1}(\Omega_{ex})/\mathbb{R}$ solves the eigenvalue problem \eqref{eivp1} in the extended domain $\Omega_{ex}.$ Finally the unique continuation property from \cite{lebeau} shows that $\phi=0$ in $\Omega_{ex},$ thus in particular in $\Omega.$ Hence we are done with the proof of the Hautus test \eqref{Hutus}. \end{proof} From Theorem \ref{thstab} we know that the pair $(A,\mathcal{B})$ is stabilizable by a control $w_{c}\in L^{2}(0,\infty;\mathbb{R}^{N_{c}}).$ Hence there exists a control $u$ (of the form \eqref{fdc}) which belongs to the finite dimensional space $U_{0}$ (see \eqref{U0}) and stabilizes the pair $(A,B).$\\ Now our aim is to construct $w_{c}$ such that it is given in terms of a feedback control law. For that we will study the stabilization of the extended system \eqref{2-7} in the following section. \subsection{Stabilization of the extended system \eqref{2-7} by a feedback control}\label{stabext} \subsubsection{Evolution equation associated with the extended system \eqref{2-7}} We set \begin{equation}\label{deftZ} \begin{array}{l} {\widetilde Z}=V^{0}_{n}(\Omega)\times \mathbb{R}^{N_{c}}. \end{array} \end{equation} Depending on the context the notation $I$ denotes the identity operator on any of the spaces $V^{0}_{n}(\Omega),$ $\mathbb{R}^{N_{c}}$ and $\widetilde{Z}.$ We equip the space ${\widetilde Z}$ with the inner product $$({\widetilde {\zeta}_{1}},{\widetilde \zeta}_{2})_{\widetilde Z}=(\zeta_{1},\zeta_{2})_{V^{0}_{n}(\Omega)}+(w_{1},w_{2})_{\mathbb{R}^{N_{c}}},$$ where ${\widetilde {\zeta}_{1}}=({\zeta_{1}},w_{1})$ and ${\widetilde \zeta_{2}}=(\zeta_{2},w_{2}).$ \newline We fix a positive constant $\gamma$ (where $\gamma$ is the constant appearing in the extended system \eqref{2-7}).\\ Now let us recall the representation \eqref{2-6} of the system \eqref{2-5}. In the same vein, it follows that $\widetilde{y}=(P{y},w_{c})$ is a solution to equation \eqref{2-7} iff $(Py,w_{c})$ solves the following set of equations \begin{equation}\label{3-4} \left\{ \begin{array}{ll} {\widetilde {y}'}= {{\begin{pmatrix} P{y}\\ w_{c} \end{pmatrix}}}'=\begin{pmatrix} A & \mathcal{B}\\ 0 & -{\gamma I} \end{pmatrix} \begin{pmatrix} P{y}\\ w_{c} \end{pmatrix}+\begin{pmatrix} 0\\ I \end{pmatrix}\varphi_{c}+ \widetilde{{f}}\quad&\mbox{in}\quad (0,\infty),\\[1.mm] {\widetilde {y}}(0)=\widetilde{y}_{0}=\begin{pmatrix} {y}_{0}\\0 \end{pmatrix},& \vspace{1.mm}\\ (I-P){y}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(I-P)D_{A}{{g}_{j}}\quad&\mbox{in}\quad (0,\infty), \end{array}\right. \end{equation} where ${\widetilde{f}}=(P{f},0)$ and $\mathcal{B}$ is the operator defined in \eqref{decB}. Now we define the operator $(\widetilde{A},\mathcal{D}(\widetilde{A}))$ in $\widetilde{Z}$ as follows \begin{equation}\label{DAt} \begin{array}{l} \mathcal{D}(\widetilde A)=\{(\zeta,w_{c})\in {\widetilde Z}\mid A\zeta+\mathcal{B}w_{c}\in {V}^{0}_{n}(\Omega)\}\quad \mbox{and} \quad {\widetilde A}=\begin{pmatrix} A & \mathcal{B}\\ 0 & -{\gamma I} \end{pmatrix}. \end{array} \end{equation} As we have identified $V^{0}_{n}(\Omega)$ with its dual, the spaces $\widetilde{Z}$ and $\widetilde{Z}^{*}$ are also identified.
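The block triangular form of the adjoint introduced below can be read off from the following formal computation (a sketch; the pairings are understood in the $\widetilde{Z}$ inner product, for $\widetilde{\zeta}_{1}=(\zeta_{1},w_{1})\in\mathcal{D}(\widetilde{A})$ and $\widetilde{\zeta}_{2}=(\zeta_{2},w_{2})\in\mathcal{D}(A^{*})\times\mathbb{R}^{N_{c}}$): \begin{equation}\nonumber \begin{array}{l} (\widetilde{A}\widetilde{\zeta}_{1},\widetilde{\zeta}_{2})_{\widetilde{Z}}=(A\zeta_{1}+\mathcal{B}w_{1},\zeta_{2})_{V^{0}_{n}(\Omega)}-\gamma(w_{1},w_{2})_{\mathbb{R}^{N_{c}}}=(\zeta_{1},A^{*}\zeta_{2})_{V^{0}_{n}(\Omega)}+(w_{1},\mathcal{B}^{*}\zeta_{2}-\gamma w_{2})_{\mathbb{R}^{N_{c}}}, \end{array} \end{equation} so that the first component of the adjoint acting on $\widetilde{\zeta}_{2}$ only involves $A^{*}\zeta_{2},$ while the control component collects the contribution $\mathcal{B}^{*}\zeta_{2}-\gamma w_{2}.$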
We define the adjoint of $({\widetilde A},\mathcal{D}({\widetilde A}))$ in $\widetilde{Z}$ as follows \begin{equation}\label{DAt*} \begin{array}{l} \mathcal{D}({\widetilde A}^{*})=\mathcal{D}(A^{*})\times {\mathbb R}^{N_{c}}=\mathcal{D}(A)\times {\mathbb R}^{N_{c}}\quad \mbox{and}\quad {\widetilde A}^{*}=\begin{pmatrix} A^{*} & 0\\ \mathcal{B}^{*} & -{\gamma I} \end{pmatrix}. \end{array} \end{equation} \begin{remark} We emphasize that due to the compatibility condition involved in the definition \eqref{DAt}, $\mathcal{D}(\widetilde{A})$ cannot be written as $\mathcal{D}(A)\times\mathbb{R}^{N_{c}}.$ Contrary to that, one does not require any compatibility condition in defining the domain of $\widetilde{A}^{*},$ which is given by \eqref{DAt*}. \end{remark} \begin{thm}\label{t3-3} The operator $({\widetilde A},\mathcal{D}({\widetilde A}))$ is the infinitesimal generator of an analytic semigroup on ${\widetilde Z}.$ \end{thm} \begin{proof} We will prove that $({\widetilde A},\mathcal{D}({\widetilde A}))$ generates an analytic semigroup on ${\widetilde Z}$ by proving that $({\widetilde A}^{*},\mathcal{D}({\widetilde A}^{*}))$ generates an analytic semigroup on $\widetilde{Z}.$ This is enough since one has the following by using \cite[Theorem 2.16.5, p. 56]{hilleph} $$\|\mathcal{R}(\lambda,\widetilde{A})\|_{\mathcal{L}(\widetilde{Z})}=\|(\lambda I-\widetilde{A})^{-1}\|_{\mathcal{L}(\widetilde{Z})}=\|((\lambda I-\widetilde{A})^{-1})^{*}\|_{\mathcal{L}(\widetilde{Z})}=\|(\overline{\lambda} I-\widetilde{A}^{*})^{-1}\|_{\mathcal{L}(\widetilde{Z})}=\|\mathcal{R}(\overline{\lambda},\widetilde{A}^{*})\|_{\mathcal{L}(\widetilde{Z})},$$ where $\mathcal{R}(\lambda,\cdot)$ denotes the resolvent of the respective operator (see \cite[Section 2.16]{hilleph} for details on the resolvent); hence the fact that $({\widetilde A},\mathcal{D}({\widetilde A}))$ generates an analytic semigroup on $\widetilde{Z}$ follows, as a consequence of \cite[p. 163, Def. 5.4.5]{tucsnak}, from the fact that $({\widetilde A}^{*},\mathcal{D}({\widetilde A}^{*}))$ generates an analytic semigroup on $\widetilde{Z}.$\\ Let us notice that the operator $\widetilde{A}^{*}$ can be decomposed as follows $$\widetilde{A}^{*}=\widetilde{A}_{1}+\widetilde{A}_{2},$$ where $$\widetilde{A}_{1}=\begin{pmatrix} A^{*} & 0\\ 0 & -\gamma I \end{pmatrix}\quad\mbox{and}\quad \widetilde{A}_{2}=\begin{pmatrix} 0 & 0\\ \mathcal{B}^{*} & 0 \end{pmatrix}.$$ Since $(A,\mathcal{D}(A))$ generates an analytic semigroup on $Z=V^{0}_{n}(\Omega)$ (see Lemma \ref{t2-1}), $(A^{*},\mathcal{D}(A^{*}))$ generates an analytic semigroup on $Z$ (this follows from the argument used at the beginning of the proof).
Consequently the operator $$\widetilde{A}^{0}_{1}=\begin{pmatrix} A^{*} & 0\\ 0 & 0 \end{pmatrix}$$ generates an analytic semigroup on $\widetilde{Z}.$ Since $\widetilde{A}_{1}$ is a bounded perturbation of the operator $\widetilde{A}^{0}_{1},$ one uses \cite[Corollary 2.2, Section 3.2]{pazy} to conclude that $\widetilde{A}_{1}$ with domain $\mathcal{D}(A^{*})\times\mathbb{R}^{N_{c}}$ generates an analytic semigroup on $\widetilde{Z}.$ On the other hand the definition \eqref{decB} of $\mathcal{B}$ and part $(ii)$ of Proposition \ref{p2-4} furnish that $$\mathcal{B}^{*}\in\mathcal{L}(\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}}),\mathbb{R}^{N_{c}})\quad \mbox{for all}\quad 0<\kappa<\frac{1}{2}.$$ This implies the following \begin{equation}\label{A2d} \begin{array}{l} \widetilde{A}_{2}\in \mathcal{L}(\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}})\times \mathbb{R}^{N_{c}},\widetilde{Z})\quad \mbox{for all}\quad 0<\kappa<\frac{1}{2}. \end{array} \end{equation} Now observe that $(\widetilde{A}_{1}-\omega I)$ is a diagonal operator, hence the semigroup $e^{t(\widetilde{A}_{1}-\omega I)}$ on $\widetilde{Z}$ generated by $(\widetilde{A}_{1}-\omega I)$ is of the form $$e^{t(\widetilde{A}_{1}-\omega I)}(\zeta_{1},w_{1})=(e^{t(A^{*}-\omega I)}\zeta_{1},e^{t(-\gamma-\omega)I}w_{1}),\quad\mbox{for all}\quad (\zeta_{1},w_{1})\in\widetilde{Z}.$$ Hence one can use the definition \cite[p.329]{fattorini} of the domain of fractional powers to obtain the following \begin{equation}\label{fracpwr} \begin{array}{l} \mathcal{D}((\omega I-\widetilde{A}_{1})^{\frac{3}{4}+\frac{\kappa}{2}})=\mathcal{D}((\omega I-A^{*})^{\frac{3}{4}+\frac{\kappa}{2}})\times\mathbb{R}^{N_{c}}\quad\mbox{for all}\quad 0<\kappa<\frac{1}{2}. \end{array} \end{equation} Finally, in view of \eqref{A2d} and \eqref{fracpwr}, the result \cite[p. 420, Lemma 12.38]{renardy} furnishes that $(\widetilde{A}^{*},\mathcal{D}(\widetilde{A}^{*}))$ is the infinitesimal generator of an analytic semigroup on $\widetilde{Z}.$ This in turn gives that $({\widetilde A},\mathcal{D}({\widetilde A}))$ generates an analytic semigroup on ${\widetilde Z}.$ \end{proof} From the definition \eqref{DAt} of the operator $\widetilde{A}$ one can easily observe that the spectrum of $\widetilde{A}$ is discrete and is explicitly given as follows $$spec(\widetilde{A})=spec(A)\cup\{-\gamma\}.$$ \subsubsection{Existence of a feedback control law}\label{efcl} We introduce the notation ${{\widetilde J}}=(0,I). $ Let us notice that ${{\widetilde J}}$ belongs to ${\mathcal L}(\mathbb{R}^{N_{c}},{\widetilde Z}).$ This section is devoted to the construction of a feedback control $\varphi_{c}$ which is able to stabilize the linear equation \begin{equation}\label{linsys} \begin{array}{l} \widetilde{y}{'}=\widetilde{A}\widetilde{y}+{\widetilde J}\varphi_{c}\quad\mbox{in}\quad (0,\infty),\qquad \widetilde{y}(0)=\widetilde{y}_{0}, \end{array} \end{equation} which is obtained from \eqref{3-4}$_{1}$-\eqref{3-4}$_{2}$ after neglecting the non-homogeneous source term $\widetilde{f}.$ \begin{prop}\label{stabextr} The pair $({\widetilde {A}},{{\widetilde J}})$ is stabilizable.
More precisely, there exists a feedback operator $\mathcal{K}\in\mathcal{L}(\widetilde{Z},\mathbb{R}^{N_{c}})$ such that the operator $(\widetilde A+\widetilde{J}\mathcal{K})$ with domain $\mathcal{D}(\widetilde{A})$ generates an exponentially stable analytic semigroup on $\widetilde{Z}.$ \end{prop} Before going into the proof of Proposition \ref{stabextr}, let us recall that the pair $(\widetilde{A},\widetilde{J})$ is stabilizable in $\widetilde{Z}$ if and only if for all $\widetilde{y}_{0}\in \widetilde{Z},$ there exists a control $\varphi_{c}\in L^{2}(0,\infty;\mathbb{R}^{N_{c}})$ such that the controlled system $${\widetilde{y}}'=\widetilde{A}\widetilde{y}+\widetilde{J}\varphi_{c}\,\,\mbox{in}\,\,(0,\infty),\qquad \widetilde{y}(0)=\widetilde{y}_{0},$$ satisfies $$\displaystyle \|\widetilde{y}\|_{L^{2}(0,\infty;\widetilde{Z})}<\infty.$$ \begin{proof}[Proof of Proposition \ref{stabextr}] We check the stabilizability of the pair $(\widetilde{A},{\widetilde J})$ by verifying the Hautus criterion \cite[Theorem 1.2]{raypa} (one can also consult \cite[Chapter V]{ben} for related results): \begin{equation}\label{Hutus2} \begin{array}{l} \mbox{Ker}(\widetilde\lambda_{k} I-\widetilde{A}^{*})\cap\mbox{Ker}({\widetilde J}^{*})=\{0\}\quad\mbox{for all}\quad \widetilde\lambda_{k}\in spec(\widetilde{A})\quad\mbox{with}\quad\mbox{Re}\widetilde{\lambda}_{k}>0. \end{array} \end{equation} Let us prove \eqref{Hutus2}. We consider $$\begin{pmatrix} \phi\\w \end{pmatrix}\in \mbox{Ker}(\widetilde\lambda_{k} I-\widetilde{A}^{*})\cap\mbox{Ker}({\widetilde J}^{*}). $$ Recall that ${\widetilde J}^{*}( \phi,w )=w.$ This gives $w=0.$\\ Now we use the relation $( \phi,0 )\in\mbox{Ker}(\widetilde\lambda_{k} I-\widetilde{A}^{*})$ to obtain $$(\widetilde\lambda_{k} I-A^{*})\phi=\mathcal{B}^{*}\phi=0.$$ Hence $\phi=0,$ since the pair $(A,\mathcal{B})$ is stabilizable (see Theorem \ref{thstab}). This furnishes the stabilizability of the pair $(\widetilde{A},{\widetilde J}).$\\ We consider the following Riccati equation \begin{equation}\label{are} \left\{ \begin{array}{l} \widetilde{\mathcal{P}}\in \mathcal{L}(\widetilde{Z},\widetilde{Z}),\quad \widetilde{\mathcal{P}}=\widetilde{\mathcal{P}}^{*}>0,\\ \widetilde{\mathcal{P}}\widetilde{A}+\widetilde{A}^{*}\widetilde{\mathcal{P}}-\widetilde{\mathcal{P}}{\widetilde J}{\widetilde J}^{*}\widetilde{\mathcal{P}}=0,\\ \widetilde{\mathcal{P}}\,\mathrm{is\, invertible}. \end{array}\right.
\end{equation} Using \cite[Theorem 3]{kes}, there exists a solution $\widetilde{\mathcal{P}}$ to the Riccati equation \eqref{are}, and the operator $\mathcal{K}=-\widetilde{J}^{*}\widetilde{\mathcal{P}}\in\mathcal{L}(\widetilde{Z},\mathbb{R}^{N_{c}})$ provides a stabilizing feedback for $(\widetilde{A},\widetilde{J}).$ The operator $(\widetilde{A}+\widetilde{J}\mathcal{K})$ with domain $$\mathcal{D}(\widetilde{A}+\widetilde{J}\mathcal{K})=\mathcal{D}(\widetilde{A})$$ is the generator of an exponentially stable analytic semigroup on $\widetilde{Z}.$ \end{proof} From now on we will not use the explicit expression of the feedback operator $\mathcal{K}$ constructed in the proof of Proposition \ref{stabextr}; we will only use that $\mathcal{K}\in\mathcal{L}(\widetilde{Z},\mathbb{R}^{N_{c}})$ and $\mathcal{D}(\widetilde{A}+\widetilde{J}\mathcal{K})=\mathcal{D}(\widetilde{A}).$ \subsubsection{Stabilization of the closed loop extended system with a non-homogeneous source term} Using the feedback control $\mathcal{K}$, we write the equation \eqref{3-4}$_{1}$-\eqref{3-4}$_{2}$ as the following closed loop system \begin{equation}\label{closedloop} \left\{ \begin{array}{l} {\widetilde {y}'}= \widetilde{A}\widetilde{y}+{\widetilde J}\mathcal{K}\widetilde{y}+ \widetilde{{f}}\quad\mbox{in}\quad (0,\infty),\\ {\widetilde {y}}(0)=\widetilde{y}_{0}. \end{array}\right. \end{equation} From now on the constant $K(>0)$ appearing in the inequalities will denote a generic positive constant which may change from line to line. If we want to specify a constant (for later use) we will denote it by $K_{i},$ for some natural number $i.$ \begin{lem}\label{l3-8} Let the following hold \begin{equation}\label{cofy0} \begin{array}{l} \widetilde{f}\in L^{2}(0,\infty;V^{0}_{n}(\Omega)\times\mathbb{R}^{N_{c}})\quad\mbox{and}\quad \widetilde{y}_{0}\in V^{1}_{0}(\Omega)\times\{0\}, \end{array} \end{equation} where $\{0\}$ denotes the zero element of $\mathbb{R}^{N_{c}}.$ Then the equation \eqref{closedloop} admits a unique solution $$\widetilde{y}\in H^{1}(0,\infty;{\widetilde Z})\cap L^{2}(0,\infty;{\mathcal{D}}({\widetilde{A}}))$$ which obeys \begin{equation}\label{3-24} \begin{array}{l} \| \widetilde{y} \|_{H^{1}(0,\infty;{\widetilde Z})\cap L^{2}(0,\infty;{\mathcal{D}}({\widetilde{A}}))}\leqslant K(\| \widetilde{y}_{0} \|_{{V}^{1}_{0}(\Omega)\times\mathbb{R}^{N_{c}}}+\| \widetilde{f} \|_{L^{2}(0,\infty;L^{2}(\Omega)\times\mathbb{R}^{N_{c}})}), \end{array} \end{equation} for some positive constant $K.$ \end{lem} \begin{proof} Observe that $$\widetilde{y}_{0}\in V^{1}_{0}(\Omega)\times\{0\}=[\mathcal{D}(A),V^{0}_{n}(\Omega)]_{1/2}\times\{0\}\underset{(i)}{=} [\mathcal{D}(A)\times\{0\},V^{0}_{n}(\Omega)\times\{0\}]_{1/2}\underset{(ii)}{\subset}[\mathcal{D}(\widetilde{A}),\widetilde{Z}]_{1/2};$$ the steps $(i)$ and $(ii)$ in the calculation above follow directly from the definition of interpolation spaces provided in \cite[p. 92, Theorem 14.1]{liomag} and \cite[Remark 14.1]{liomag}.\\ Since $$\widetilde{f}\in L^{2}(0,\infty;\widetilde{Z})\quad\mbox{and}\quad \widetilde{y}_{0}\in [\mathcal{D}(\widetilde{A}),\widetilde{Z}]_{1/2},$$ one can use the isomorphism theorem \cite[Part II, Section 3.6.3, Theorem 3.1]{ben} to conclude the proof of Lemma \ref{l3-8}. \end{proof} \begin{corollary}\label{c3-8} Let the following hold \begin{equation}\label{cofy00} \begin{array}{l} {f}\in L^{2}(Q_{\infty})\quad\mbox{and}\quad {y}_{0}\in V^{1}_{0}(\Omega).
\end{array} \end{equation} Then the equation \begin{equation}\label{114r} \left\{ \begin{array}{ll} \displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla){v}_s +\nabla q={f}\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \mathrm{div}{y}=0\quad &\mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ {y}=0\quad &\mbox {on} \quad(\Gamma_{0}\cup\Gamma_{out}) \times (0,\infty),\\[1.mm] {y}=\sum\limits_{j=1}^{N_{c}}w_{j}(t){{g}_{j}}(x) \quad &\mbox{on} \quad \ \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_0\quad &\mbox{in}\quad\Omega,\\[2.mm] \displaystyle w_{c}^{'}+{\gamma }w_{c}-\mathcal{K}(Py,w_{c})=0\quad &\mbox{in}\quad (0,\infty), \vspace{1.mm}\\ w_{c}(0)=0,& \end{array}\right. \end{equation} where $w_{c}=(w_{1},...,w_{N_{c}})$ and the $g_{j},$ $1\leqslant j\leqslant N_{c},$ are defined in \eqref{basisU0}, admits a unique solution $( y,w_{c} )$ in $V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})$ and the pair $ (y,w_{c}) $ obeys the following estimate \begin{equation}\label{esty} \begin{array}{l} \| (y,w_{c})\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant K_{1}(\| {y}_{0} \|_{{V}^{1}_{0}(\Omega)} +\| {f} \|_{L^{2}(Q_{\infty})}), \end{array} \end{equation} for some positive constant $K_{1}.$\\ In addition, there exists a constant $K_{2}>0$ such that the control \begin{equation}\label{concor} \begin{array}{l} u(x,t)=\sum\limits_{j=1}^{N_{c}}w_{j}(t){g_{j}}(x), \end{array} \end{equation} satisfies the following estimate \begin{equation}\label{dofcon} \begin{array}{l} \|{u}(x,t) \|_{L^{\infty}(\Sigma_{\infty})}\leqslant K_{2}(\|{y}_{0} \|_{{V}^{1}_{0}(\Omega)}+\|{f} \|_{L^{2}(Q_{\infty})}). \end{array} \end{equation} \end{corollary} \begin{proof} Using the notation of \eqref{3-4}, one observes that $\|\widetilde{y}_{0}\|_{V^{1}_{0}(\Omega)\times\mathbb{R}^{N_{c}}}=\|(y_{0},0)\|_{V^{1}_{0}(\Omega)\times\mathbb{R}^{N_{c}}}=\|y_{0}\|_{V^{1}_{0}(\Omega)}$ and $\|\widetilde{f}\|_{L^{2}(0,\infty;L^{2}(\Omega)\times\mathbb{R}^{N_{c}})}=\|(Pf,0)\|_{L^{2}(0,\infty;L^{2}(\Omega)\times\mathbb{R}^{N_{c}})}=\|Pf\|_{L^{2}(Q_{\infty})}.$ Since the closed loop system \eqref{closedloop} along with \eqref{3-4}$_{3}$ is the operator representation of \eqref{114r}, one can use Lemma \ref{l3-8} (particularly the estimate \eqref{3-24}) to obtain the following \begin{equation}\label{esPy} \begin{array}{l} \| (P{y},w_{c}) \|_{H^{1}(0,\infty;V^{0}_{n}(\Omega)\times\mathbb{R}^{N_{c}})\cap L^{2}(0,\infty;\mathcal{D}({\widetilde{A}}))}\leqslant K(\|y_{0}\|_{V^{1}_{0}(\Omega)}+\|Pf\|_{L^{2}(Q_{\infty})}). \end{array} \end{equation} Now we estimate $$(I-P)y=\sum\limits_{j=1}^{N_{c}}w_{j}(I-P)D_{A}{{g}_{j}}.$$ We know that there exists a positive constant $K$ such that for all $1\leqslant j \leqslant N_{c}$ \begin{equation}\label{estI-P1} \begin{array}{l} \|D_{A}{g_{j}}\|_{V^{2}(\Omega)}\leqslant K\|{g_{j}}\|_{H^{3/2}(\Gamma)}\leqslant K. \end{array} \end{equation} Estimates \eqref{estI-P1} and \eqref{esPy} yield \begin{equation}\label{estI-P2} \begin{array}{l} \|(I-P)y\|_{H^{1}(0,\infty;H^{2}(\Omega))}\leqslant K(\|y_{0}\|_{V^{1}_{0}(\Omega)}+\|Pf\|_{L^{2}(Q_{\infty})}).
\end{array} \end{equation} Once again using \eqref{esPy} and \eqref{estI-P2} one has \begin{equation}\label{inregy} \begin{split} \displaystyle \|y\|_{H^{1}(0,\infty;L^{2}(\Omega))}&\leqslant \|Py\|_{H^{1}(0,\infty;V^{0}_{n}(\Omega))}+\|(I-P)y\|_{H^{1}(0,\infty;H^{2}(\Omega))}\\ &\leqslant K(\|y_{0}\|_{V^{1}_{0}(\Omega)}+\|Pf\|_{L^{2}(Q_{\infty})}). \end{split} \end{equation} To prove higher regularity of $y$ we will use a bootstrap argument. First we write \eqref{114r}$_{1}$-\eqref{114r}$_{5}$ as follows \begin{equation}\label{114r*} \left\{ \begin{array}{ll} \displaystyle -\nu \Delta {y}+\nabla q=f^{*}\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \mathrm{div}{y}=0\quad &\mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ {y}=0\quad &\mbox {on} \quad(\Gamma_{0}\cup\Gamma_{out}) \times (0,\infty),\\[1.mm] {y}=\sum\limits_{j=1}^{N_{c}}w_{j}(t){{g}_{j}}(x) \quad &\mbox{on} \quad \ \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_0\quad &\mbox{in}\quad\Omega, \end{array}\right. \end{equation} where $$f^{*}=f-\frac{\partial {y}}{\partial t}+\beta {y}- ({v}_s \cdot \nabla){y}-({y} \cdot \nabla){v}_s.$$ Using \eqref{cofy00} and \eqref{inregy} we obtain that $f^{*}\in L^{2}(0,\infty;H^{-1}(\Omega))$ and the following holds \begin{equation}\label{estf*} \begin{array}{l} \|f^{*}\|_{L^{2}(0,\infty;H^{-1}(\Omega))}\leqslant K(\|f\|_{L^{2}(Q_{\infty})}+\|y_{0}\|_{V^{1}_{0}(\Omega)}). \end{array} \end{equation} Also \begin{equation}\label{regcon} \begin{array}{l} \sum\limits_{j=1}^{N_{c}}w_{j}(t){{g}_{j}}(x)\in H^{1}(0,\infty;V^{0}(\Gamma)\cap C^{\infty}(\Gamma)), \end{array} \end{equation} ($g_{j}\in V^{0}(\Gamma)$ follows from the definition \eqref{basisU0} and Proposition \ref{p2-4}, whereas the $C^{\infty}(\Gamma)$ regularity follows from Lemma \ref{smgci}). Hence one can use \cite[Theorem IV.5.2]{boy}, \eqref{estf*} and \eqref{esPy} to get $y\in L^{2}(0,\infty;V^{1}(\Omega))$ and the following inequality \begin{equation}\label{incregy} \begin{array}{l} \|y\|_{L^{2}(0,\infty;V^{1}(\Omega))}\leqslant K(\|f\|_{L^{2}(Q_{\infty})}+\|y_{0}\|_{V^{1}_{0}(\Omega)}). \end{array} \end{equation} The regularity \eqref{inregy}, \eqref{incregy} and \eqref{cofy00} furnish $f^{*}\in L^{2}(Q_{\infty})$ and \begin{equation}\label{incregf} \begin{array}{l} \|f^{*}\|_{L^{2}(Q_{\infty})}\leqslant K(\|f\|_{L^{2}(Q_{\infty})}+\|y_{0}\|_{V^{1}_{0}(\Omega)}). \end{array} \end{equation} In view of \eqref{regcon} and \eqref{incregf} one further obtains that $y\in L^{2}(0,\infty;V^{2}(\Omega))$ (using the regularity result from \cite{Osborn}) and the following \begin{equation}\label{finalregy} \begin{array}{l} \|y\|_{L^{2}(0,\infty;V^{2}(\Omega))}\leqslant K(\|f\|_{L^{2}(Q_{\infty})}+\|y_{0}\|_{V^{1}_{0}(\Omega)}). \end{array} \end{equation} Hence $y\in V^{2,1}(Q_{\infty})$ and using \eqref{inregy} and \eqref{finalregy} one has the following \begin{equation}\label{esty1} \begin{array}{l} \|y\|_{V^{2,1}(Q_{\infty})}\leqslant K(\|y_{0}\|_{V^{1}_{0}(\Omega)}+\|f\|_{L^{2}(Q_{\infty})}). \end{array} \end{equation} Finally, \eqref{esPy} and \eqref{esty1} provide the desired estimate \eqref{esty}.\\ Since ${g_{j}}\in L^{\infty}(\Gamma),$ the estimate \eqref{dofcon} follows readily from \eqref{esty}. \end{proof} The following result justifies our choice of denoting the inflow and outflow boundaries of $v_{s}$ and of a perturbation of $v_{s}$ by the same notation.
\begin{corollary}\label{p3.0.2} If we take \begin{equation}\label{smalasum} (\|{y}_{0} \|_{{V}^{1}_{0}(\Omega)}+\| {f} \|_{L^{2}(Q_{\infty})})\leqslant \frac{L(1-L)}{2K_{2}}, \end{equation} where $K_{2}$ is the constant in \eqref{dofcon}, then $$\|{y}\mid_{\Sigma_{\infty}}\|_{L^{\infty}(\Sigma_{\infty})}\leqslant \frac{L(1-L)}{2}$$ and hence in particular for all $t>0,$ \begin{equation}\label{iny0f} \left\{ \begin{array}{ll} (e^{-\beta t}{y}(\cdot,t)+{v}_{s})\cdot{n}<0\,\,&\mbox{on}\,\,\Gamma_{in},\\ (e^{-\beta t}{y}(\cdot,t)+v_{s})\cdot {n}=0\,\,&\mbox{on}\,\, \Gamma_{0},\\ (e^{-\beta t}{y}(\cdot,t)+{v}_{s})\cdot{n}>0\,\,&\mbox{on}\,\,\Gamma_{out}, \end{array}\right. \end{equation} where $( y,w_{c} )$ is the solution to \eqref{114r}. This means that for all time $t>0$, $\Gamma_{in}$ and $\Gamma_{out}$ are still the inflow and the outflow boundaries for the perturbed vector field $(v_{s}+e^{-\beta t}{y}).$ \end{corollary} \begin{proof} The proof is a direct consequence of Corollary \ref{c3-8}, in particular the estimate \eqref{dofcon}. \end{proof} \section{Stability of the continuity equation}\label{density} This section is devoted to the study of the transport equation satisfied by the density, which is modeled by {\eqref{2-2}}$_{1}$ together with \eqref{2-2}$_{2}$ and \eqref{2-2}$_{3}$. This equation is linear in $\sigma$ but nonlinear in $(\sigma,{y}).$ First let us briefly discuss the stabilization of the linearized transport equation modeling the density with the zero inflow boundary condition. This will give us an idea about how to obtain analogous results for its nonlinear counterpart. \subsection{Comments on the linear transport equation at velocity $v_{s}$}\label{lte} The linearized continuity equation with the zero inflow boundary condition is given by \begin{equation}\label{4-1} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+({v}_s \cdot \nabla)\sigma-\beta\sigma=0\quad&\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \sigma(x,t)=0 \quad &\mbox{on} \quad \Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ \displaystyle \sigma(x,0)=\sigma_0\quad&\mbox{in}\quad\Omega. \end{array}\right. \end{equation} We can explicitly solve \eqref{4-1} to obtain \begin{equation}\label{4-2} \sigma(x,t)=\left\{ \begin{array}{l} \displaystyle e^{\beta t}\sigma_{0}(x_{1}-(x_{2}(1-x_{2}))t,x_{2})\quad \mbox{for} \quad t\leqslant \frac{1}{(x_{2}(1-x_{2}))}x_{1}, \vspace{4.mm}\\ 0\quad \mbox{for}\quad t>\displaystyle \frac{1}{(x_{2}(1-x_{2}))}x_{1}, \end{array}\right. \end{equation} for all $(x_{1},x_{2})\in\Omega$ (a derivation by the method of characteristics is sketched at the end of this subsection). In particular, if we assume that $\sigma_{0}$ satisfies the condition \eqref{1-4}, the solution $\sigma$ to \eqref{4-1} vanishes after some finite time $T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}.$ Hence we see that with the zero inflow boundary condition the solution of the linearized transport equation is automatically stabilized (in fact controlled) after some finite time. The equation \eqref{4-1} is just a prototype of the transport equation \eqref{2-2}$_{1,2,3}$ exhibiting a similar property, and we will discuss this in the following subsection.
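For the reader's convenience, let us briefly indicate how the formula \eqref{4-2} is obtained. Recalling that $v_{s}(x_{1},x_{2})=(x_{2}(1-x_{2}),0),$ the characteristics of \eqref{4-1} are horizontal lines travelled at the speed $x_{2}(1-x_{2}),$ and along them \eqref{4-1}$_{1}$ reduces to an ordinary differential equation: $$\frac{d}{ds}\Big[\sigma\big(x_{1}-(t-s)(x_{2}(1-x_{2})),x_{2},s\big)\Big]=\beta\,\sigma\big(x_{1}-(t-s)(x_{2}(1-x_{2})),x_{2},s\big).$$ Hence $\sigma(x_{1},x_{2},t)=e^{\beta t}\sigma_{0}(x_{1}-(x_{2}(1-x_{2}))t,x_{2})$ as long as the foot $x_{1}-(x_{2}(1-x_{2}))t$ of the backward characteristic is non-negative, i.e. for $t\leqslant \frac{x_{1}}{x_{2}(1-x_{2})};$ for larger times the backward characteristic exits $\Omega$ through $\Gamma_{in},$ where the boundary datum vanishes, which yields \eqref{4-2}.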
\subsection{Stability of the transport equation \eqref{2-2} satisfied by density}\label{nlte} We consider the transport equation satisfied by the density with the nonlinearity $({y}\cdot\nabla)\sigma.$ We assume that $\|y\|_{V^{2,1}(Q_{\infty})}$ is small enough and the following holds \begin{equation}\label{cony.n} \begin{array}{l} (e^{-\beta t}{y}+{v}_{s})\cdot{n}<0\,\,\mbox{on}\,\,\Gamma_{in},\,\,(e^{-\beta t}{y}+v_{s})\cdot {n}=0\,\,\mbox{on}\,\,\Gamma_{0},\,\,\mbox{and}\,\,(e^{-\beta t}{y}+{v}_{s})\cdot{n}>0\,\, \mbox{on}\,\, \Gamma_{out}. \end{array} \end{equation} Recalling Corollary \ref{p3.0.2}, the condition \eqref{cony.n} is automatically satisfied when $y$ solves \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$ and \eqref{smalasum} holds. Notice that the role of the condition \eqref{cony.n} is only to guarantee that even if we perturb the vector field $v_{s}$ by adding $e^{-\beta t}{y},$ the inflow boundary of the fluid remains unchanged. \newline Here the transport equation satisfied by the density is given by \begin{equation}\label{4-3} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad&\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \sigma (x,t)=0 \quad &\mbox{on} \quad \Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ \displaystyle \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \end{array}\right. \end{equation} where ${y}$ belongs to ${V}^{2,1}(Q_{\infty}),$ \eqref{cony.n} holds, and $\sigma_{0}\in L^{\infty}(\Omega)$ satisfies the condition \eqref{1-4} (recall from \eqref{y0s0} that $\sigma_{0}=\rho_{0}$). Provided $y$ is suitably small in the norm $V^{2,1}(Q_{\infty})$, \eqref{4-1} can be seen as an approximation of \eqref{4-3}, and as we will see in Theorem \ref{t4-1}, solutions of \eqref{4-1} and of \eqref{4-3} share a similar behavior. \newline We look for a unique solution of \eqref{4-3} in the space $L^{\infty}(Q_{\infty})$. In the following discussion we will borrow several results from \cite{boy} on the existence, uniqueness and stability of the continuity equation. For later use, we shall consider a general transport equation of the form \begin{equation}\label{4-3-bis} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(v \cdot \nabla)\sigma-\beta\sigma=0\quad&\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \sigma (x,t)=0 \quad &\mbox{on} \quad \Sigma_{in,v,\infty}, \vspace{1.mm}\\ \displaystyle \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \end{array}\right. \end{equation} where $v$ is a divergence-free vector field in $L^2(0,\infty; V^2(\Omega))$, and $$ \Sigma_{in,v,T} = \{(x,t) \in \Gamma \times (0,T)\suchthat v(x,t) \cdot n(x) < 0 \},\qquad T\in(0,\infty]. $$ First let us define the notion of weak solution for the transport equation \eqref{4-3-bis}$_{1}$. \begin{mydef}\label{dd1} Let $T>0$ and let $v$ be a divergence-free vector field such that $v \in L^{2}(0,T; V^2(\Omega))$. A function $\sigma \in L^{\infty}( Q_{T})$ is said to be a weak solution of \eqref{4-3-bis}$_{1}$ if the following is true $$ \int\limits_{0}^{T}\int\limits_{\Omega}\sigma(\partial_{t}\phi +v\cdot \nabla \phi+\beta\phi)dxdt=0,$$ for any test function $\phi \in C^{\infty}(\bar{\Omega}\times [0,T])$ with $\phi (\cdot,T)=0=\phi(\cdot,0)$ in $\Omega$ and $\phi=0$ on $\Sigma_{T}.$ \end{mydef} One can interpret the boundary trace of a weak solution (as defined in Definition \ref{dd1}) of \eqref{4-3-bis}$_{1}$ in a weak sense.
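As a simple illustration of this notation, for the velocity field $v=v_{s}$ one has $v_{s}\cdot n<0$ on $\Gamma_{in},$ $v_{s}\cdot n=0$ on $\Gamma_{0}$ and $v_{s}\cdot n>0$ on $\Gamma_{out}$ (see \eqref{iny0f} with $y=0$), so that $$\Sigma_{in,v_{s},T}=\Gamma_{in}\times(0,T),$$ and \eqref{4-3-bis} then reduces to the linearized problem \eqref{4-1} discussed in Section \ref{lte}.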
Following \cite{boy} we introduce some notations which will be used to define the trace of a weak solution of \eqref{4-3-bis}$_{1}.$\\ Let $m$ denote the boundary Lebesgue measure on $\Gamma.$ Now for any $T>0,$ associated with the vector field $v,$ we introduce the measure $$d\mu_{v}=(v \cdot {n})dmdt\quad \mbox{on}\quad \Sigma_{T}$$ and denote by $d\mu_{v}^{+}$ (respectively $d\mu_{v}^{-}$) its positive (resp. negative) part in such a way that $|d\mu_{v}| =d\mu_{v}^{+}+d\mu_{v}^{-}.$ The support of $d\mu_{v}^{+}$ (resp. $d\mu_{v}^{-}$) is the outflow (resp. inflow) part of $\Sigma_{T}$ corresponding to the vector field $v$.\\ The following two theorems, Theorem \ref{trace} and Theorem \ref{l4-2}, are stated in \cite{boy} under a weaker assumption on the velocity field $v.$ Here we state the results with $v\in L^2(0,T; V^{2}(\Omega))$ for the particular equation \eqref{4-3-bis}. \begin{thm}\label{trace} \cite[Theorem \textrm{VI.1.3}]{boy} Let $T>0,$ $v\in L^{2}(0,T; V^2(\Omega))$ and $\sigma\in L^{\infty}(Q_{T})$ be a weak solution of \eqref{4-3-bis}$_{1}$ in the sense of Definition \ref{dd1}. Then the following hold:\\ $(i)$\,The function $\sigma$ lies in $C^{0}([0,T],L^{p}(\Omega))$ for all $1\leqslant p<+\infty.$\\ $(ii)$\,There exists a unique function $\gamma_{\sigma}\in L^{\infty}(\Sigma_{T},|d\mu_{v}|)$ such that for any test function $\phi\in C^{0,1}(\bar{Q}_{T})$ and for any $[t_{0},t_{1}]\subset [0,T]$ we have \begin{equation}\label{4-4} \begin{split} \int\limits_{t_{0}}^{t_{1}}\int\limits_{\Omega}\sigma\left(\frac{\partial \phi}{\partial t}+v\cdot \nabla\phi +\beta \phi \right)dxdt&-\int\limits_{t_{0}}^{t_{1}}\int\limits_{\Gamma}\gamma_{\sigma} \phi d\mu_{v}\\ & +\int\limits_{\Omega}\sigma(t_{0})\phi(t_{0}) dx -\int\limits_{\Omega}\sigma(t_{1})\phi(t_{1}) dx=0. \end{split} \end{equation} $(iii)$ The renormalization property: For any function $\xi: \mathbb{R}\to \mathbb{R}$ of class $C^1$, for any $\phi\in C^{0,1}(\bar{Q}_{T})$ and for any $[t_{0},t_{1}]\subset [0,T]$ we have \begin{multline}\label{Renormalized} \int\limits_{t_{0}}^{t_{1}}\int\limits_{\Omega}\xi(\sigma)\left(\frac{\partial \phi}{\partial t}+v\cdot \nabla\phi \right)dxdt + \int\limits_{t_{0}}^{t_{1}}\int\limits_{\Omega}\beta \sigma \xi'(\sigma) \phi \,dxdt -\int\limits_{t_{0}}^{t_{1}}\int\limits_{\Gamma}\xi(\gamma_{\sigma})\phi d\mu_{v}\\ +\int\limits_{\Omega}\xi(\sigma(t_{0}))\phi(t_{0}) dx -\int\limits_{\Omega}\xi(\sigma(t_{1}))\phi(t_{1}) dx=0. \end{multline} \end{thm} The following theorem states some results on the well-posedness of the weak solution $\sigma$ of the Cauchy-Dirichlet transport problem \eqref{4-3-bis}. \begin{thm}\label{l4-2} \cite[Theorem \textrm{VI.1.6}]{boy} Let $T>0,$ $\sigma_{0}\in L^{\infty}(\Omega)$ and $v\in L^{2}(0,T; V^2(\Omega))$.
Then there exists a unique function $\sigma\in L^{\infty}(Q_{T})$ such that\\ (i) The function $\sigma$ is a weak solution of the problem \eqref{4-3-bis}$_{1}$ in $Q_{T}$ in the sense of Definition \ref{dd1}.\\[1.mm] (ii) The trace $\gamma_{\sigma}$ of $\sigma$ satisfies the inflow boundary condition $\gamma_{\sigma}=0,$ $d\mu^{-}_{v}$-almost everywhere on $\Sigma_{in,v,T},$ and $\sigma$ satisfies the initial condition $\sigma(x,0)=\sigma_{0}$ in $\Omega$.\\ In the following, we call this function $\sigma$ satisfying $(i)$ and $(ii)$ the solution of \eqref{4-3-bis}.\\[1.mm] (iii) Moreover, for $0<t<T,$ the solution $\sigma$ of \eqref{4-3-bis} satisfies \begin{equation}\label{esorho} \begin{array}{l} \|\sigma(\cdot,t)\|_{L^{\infty}(\Omega)}\leqslant \|\sigma_{0}\|_{L^{\infty}(\Omega)}e^{\beta t}. \end{array} \end{equation} \end{thm} Let us also recall, for later use, the following stability result for the transport equation with respect to its velocity field: \begin{lem}\label{l4-4} \cite[Theorem \textrm{VI.1.9}]{boy} Let $T>0.$ Suppose that $\sigma_{0}\in L^{\infty}(\Omega)$ and let $\{{v}_{m}\}_{m}$ be a sequence of functions in ${L}^{2}(0,T; V^2(\Omega))$ for which there exists $v \in L^2(0,T; V^2(\Omega))$ such that $${v}_{m} \xrightarrow[m\rightarrow \infty]{} {v}\quad\mbox{in}\quad L^{1}(Q_{T}), \, \text{ and } \, v_m \cdot n \xrightarrow[m\rightarrow \infty]{} {v\cdot n} \quad\mbox{in}\quad L^{1}(\Sigma_{T}). $$ Now suppose that $\sigma_{m}\in L^{\infty}(Q_{T})$ is the unique weak solution (in the sense of Definition \ref{dd1}) of the following initial and boundary value problem \begin{equation}\label{4-12} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \sigma_{m}}{\partial t}+(v_m \cdot \nabla)\sigma_{m}- \beta\sigma_{m} = 0\quad&\mbox{in}\quad Q_{T}, \vspace{1.mm}\\ \sigma_{m} (x,t)=0 \quad& \mbox{on}\quad \Sigma_{in,v_{m},T}, \vspace{1.mm}\\ \sigma_{m} (x,0)=\sigma_0\quad &\mbox{in}\quad\Omega. \end{array}\right. \end{equation} If we denote by $\sigma$ the unique solution to the transport problem \eqref{4-3-bis} in $Q_{T},$ then we have \begin{equation}\label{4-13} \begin{array}{l} \sigma_{m}\xrightarrow[m\rightarrow \infty]{} \sigma\quad\mbox{in}\quad C^{0}([0,T],L^{p}(\Omega)),\quad\mbox{for any}\quad 1\leqslant p<+\infty. \end{array} \end{equation} \end{lem} Now we state the main theorem of this section: \begin{thm}\label{t4-1} Let $A_{1}\in(0,\frac{1}{2})$ and $T_{1}>T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}.$ There exists a constant $K_{3}>0$ such that if $y\in V^{2,1}(Q_{\infty})$ satisfies \begin{equation}\label{yK3} \begin{array}{l} \|{y}\|_{{V}^{2,1}(Q_{\infty})}<K_{3}, \end{array} \end{equation} \eqref{cony.n} holds, and ${\sigma}_{0}\in L^{\infty}(\Omega)$ satisfies the condition \eqref{1-4}, then the solution $\sigma$ of equation \eqref{4-3} satisfies the following \begin{equation}\label{fiessig} \begin{array}{l} (i)\, \forall t<T_{1},\,\, \sigma(\cdot,t)\,\mbox{satisfies the estimate }\,\eqref{esorho},\\ (ii)\, \forall t\geqslant T_{1},\,\, \|\sigma(\cdot,t)\|_{L^{\infty}(\Omega)}=0. \end{array} \end{equation} \end{thm} \begin{proof}[Proof of Theorem \ref{t4-1}] Item $(i)$ of \eqref{fiessig} is automatically satisfied as a consequence of item $(iii)$ of Theorem \ref{l4-2}. \newline We thus focus on the proof of item $(ii)$ of \eqref{fiessig}. Let $T_{1}>T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}$ be fixed. Our approach will be based on the flow $X$ corresponding to the vector field $v_{s}+e^{-\beta t}{y}$.
In order to introduce it in a more convenient manner, we first extend the domain into $\R^2$. Observe that the definition of $v_{s}$ can be naturally extended to $\R^2$ into a Lipschitz function by setting $v_s(x_1,x_2) = v_s(x_2)$ if $x_2 \in (0,1)$ and $0$ if $x_2 \in \R \setminus(0,1)$. We denote this extension by $v_{s}$ itself. For the following analysis we use the functional space $$H^{2,1}(\R^2 \times (0,\infty))=L^{2}(0,\infty;H^{2}(\R^2))\cap H^{1}(0,\infty;L^{2}(\R^2))$$ (this is consistent with the notations defined in Section \ref{funcframe}). Now we introduce an extension operator $\mathbb{E}$ from $\Omega$ to $\R^2$, $$ \mathbb{E}:L^2(\Omega)\longrightarrow L^2(\R^2), $$ such that: \begin{itemize} \item for every $y\in L^{2}(\Omega),$ $\mathbb{E}y\mid_{\Omega}=y$, \item the restriction of $\mathbb{E}$ to $H^2(\Omega)$ defines a linear operator from $H^2(\Omega)$ to $H^2(\R^2)$, \item the restriction of $\mathbb{E}$ to $H^2(\Omega)\cap W^{1,\infty} (\Omega)$ defines a linear operator from $H^2(\Omega)\cap W^{1,\infty} (\Omega)$ to $H^2(\R^2)\cap W^{1,\infty} (\R^2)$. \end{itemize} The existence of such an extension operator is a direct consequence of \cite[Theorem 2.2]{liomag}. \\ We now introduce the flow $X(x,t,s)$ defined for $x \in \R^2$ and $(t,s)\in[0,\infty)^{2}$ by the following differential equation: \begin{equation}\label{4-5} \left\{ \begin{array}{l} \displaystyle \frac{\partial X(x,t,s)}{\partial t}=(v_{s}+e^{-\beta t}\mathbb{E}{y})(X{(x,t,s)},t), \vspace{1.mm}\\ X(x,t,s)\mid_{t=s}=x\in \R^2. \end{array}\right. \end{equation} The integral formulation of \eqref{4-5} can be written as follows \begin{equation}\label{4-6} \begin{array}{l} \displaystyle \forall (x,t,s) \in \R^2 \times [0,\infty)^{2}, \quad X(x,t,s)=x+\int\limits_{s}^{t}(v_{s}+e^{-\beta \theta}\mathbb{E}{y})(X(x,\theta,s),\theta)d\theta. \end{array} \end{equation} As the vector field $$(v_{s}+e^{-\beta t}\mathbb{E}{y})\in L^{\infty}(0,\infty; {W}^{1,\infty}(\R^2)) + {H}^{2,1}(\R^2\times(0,\infty)),$$ due to the Osgood condition (see \cite{zuazua} and \cite[Theorem 3.7]{Bahouri-Chemin-Danchin}) we know that equation \eqref{4-6} has a unique continuous solution. Similarly, we introduce the flow $X_0$ corresponding to the vector field $v_{s}$ as the solution of the following differential equation: \begin{equation}\label{4-7} \left\{ \begin{array}{l} \displaystyle \frac{\partial X_{0}(x,t,s)}{\partial t}=v_{s}(X_{0}{(x,t,s)},t), \vspace{1.mm}\\ X_{0}(x,t,s)\mid_{t=s}=x\in\R^2. \end{array}\right. \end{equation} As $v_{s}$ is Lipschitz, the flow, which can also be seen as the solution of \begin{equation}\label{4-8} \begin{array}{l} \displaystyle X_{0}(x,t,s)=x+\int\limits_{s}^{t}v_{s}(X_{0}(x,\theta,s),\theta)d\theta, \quad (x,t,s) \in \R^2 \times [0,\infty)^{2}, \end{array} \end{equation} is well defined in the classical sense. \begin{lem}\label{ld3} Let $T>0.$ There exists a constant $K_{4}=K_{4}(T)>0$ such that for all $y\in V^{2,1}(Q_\infty),$ $(t,s)\in[0,T]^{2}$ and $x\in \R^2,$ the solutions of \eqref{4-5} and \eqref{4-7} satisfy the following \begin{equation}\label{4-9} \begin{array}{l} \mid X(x,t,s)-X_{0}(x,t,s)\mid < K_{4}(T)\|{y}\|_{{V}^{2,1}(Q_{\infty})}. \end{array} \end{equation} \end{lem} \begin{proof} The proof of Lemma \ref{ld3} relies on arguments which are standard in the literature; for the convenience of the reader we include it. It combines a uniform bound on the perturbation term (Step {\bf 1}) with Gr\"{o}nwall's inequality (Step {\bf 2}).
\\ {\bf 1.} As $H^{2}(\R^2)$ is embedded in $L^{\infty}(\R^2),$ using H\"{o}lder's inequality we can at once obtain the following estimate for all $(t,s)\in[0,T]^{2}$ and $x\in\R^2$, $$ \left|\int\limits_{s}^{t}e^{-\beta \theta} \mathbb{E}{y}(X(x,\theta,s),\theta) d\theta\right| \leqslant {K}\|\mathbb{E}{y}\|_{{H}^{2,1}(\R^2\times(0,\infty))},$$ for some constant $K>0.$ \vspace{1.mm}\\ {\bf 2.} Subtracting \eqref{4-6} from \eqref{4-8}, we get, for all $(t,s)\in[0,T]^{2}$ and $x\in\R^2$, \begin{equation}\nonumber \begin{aligned} |X(x,t,s)-X_{0}(x,t,s)| & \leqslant \left|\int\limits_{s}^{t} |v_{s}(X(x,\theta,s),\theta)-v_{s}(X_{0}(x,\theta,s),\theta)| d\theta\right| + \left|\int\limits_{s}^{t}e^{-\beta \theta} |\mathbb{E}{y}(X(x,\theta,s),\theta)|d\theta\right|\\ & \leqslant \|\nabla v_{s}(.) \|_{L^{\infty}(\Omega)} \left|\int\limits_{s}^{t} |X(x,\theta,s)-X_{0}(x,\theta,s)| d\theta\right| +{K}\|\mathbb{E}{y}\|_{{H}^{2,1}(\R^2 \times(0,\infty))}. \end{aligned} \end{equation} Since $\mathbb{E}$ is a bounded operator from $L^{2}(\Omega)$ to $L^{2}(\mathbb{R}^{2})$ and from $H^{2}(\Omega)$ to $H^{2}(\mathbb{R}^{2}),$ there exists a constant $K>0$ such that \begin{equation}\label{pregronlt} \begin{array}{l} \displaystyle |X(x,t,s)-X_{0}(x,t,s)|\leqslant \|\nabla v_{s}(.) \|_{L^{\infty}(\Omega)} \left|\int\limits_{s}^{t} |X(x,\theta,s)-X_{0}(x,\theta,s)| d\theta\right|+ K\|y\|_{V^{2,1}(Q_{\infty})}. \end{array} \end{equation} Now we can use Gr\"{o}nwall's inequality to obtain \eqref{4-9}. \end{proof} Recall that the solution of \eqref{4-1} vanishes after some finite time $T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}.$ At the same time Lemma \ref{ld3} shows that for any finite time $T>0,$ the flow $X_{0}(x,t,s)$ stays uniformly close to $X(x,t,s)$ in $\R^2\times(0,T)$ provided $\|y\|_{V^{2,1}(Q_{\infty})}$ is small enough. In view of these observations, in the following we design a Lyapunov functional corresponding to a localized energy in order to prove that $\sigma$ vanishes after the time $T_1 > T_{A_1}$ when $\|y\|_{V^{2,1}(Q_{\infty})}$ is small enough, which will prove Theorem \ref{t4-1}.\\ Let $\varepsilon$ be a fixed positive constant in $(0,A_1)$ such that \begin{equation}\label{varepsilon} \begin{array}{l} \displaystyle T_{1}=\frac{d+\varepsilon}{(A_{1}-\varepsilon)(1-A_{1}+\varepsilon)}. \end{array} \end{equation} Our primary goal is to prove that, for a velocity field $y$ satisfying \eqref{cony.n} and such that $\|y\|_{V^{2,1}(Q_{T_1})}$ is small enough, and for an initial condition ${\sigma}_{0}\in L^{\infty}(\Omega)$ satisfying \eqref{1-4}, the solution $\sigma$ of \eqref{4-3} satisfies \begin{equation} \label{Sigma-t-1=0} \displaystyle \sigma(x,T_{1})=0\quad\mbox{for all}\quad x\in\Omega. \end{equation} In fact, the condition \eqref{cony.n} does not play any role here. We shall thus prove a slightly more general result: there exists $K_{3}>0$ such that for any velocity field $y$ with $\|y\|_{V^{2,1}(Q_{T_1})}< K_3$ and any initial condition ${\sigma}_{0}\in L^{\infty}(\Omega)$ satisfying \eqref{1-4}, the solution $\sigma$ of \begin{equation}\label{4-3-ter} \left\{ \begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad&\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \displaystyle \sigma (x,t)=0 \quad &\mbox{on} \quad \Sigma_{in,y,\infty}, \vspace{1.mm}\\ \displaystyle \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \end{array}\right.
\end{equation} where $$ \Sigma_{in,y,\infty} = \{(x,t) \in \Gamma \times (0,\infty) \suchthat (v_s(x) + y(x,t) e^{-\beta t}) \cdot n(x) < 0 \}, $$ satisfies \eqref{Sigma-t-1=0}. We will achieve this goal in two steps. In the first one, we shall consider smooth ($\in V^{2,1} (Q_{T_1}) \cap L^2(0,T_1; W^{1,\infty}(\Omega))$) vector fields $y$. In the second one, we will explain how the same result can be obtained for all vector fields $y \in V^{2,1}(Q_{T_1})$. \smallskip {\it Case $y \in V^{2,1} (Q_{T_1}) \cap L^2(0,T_1; W^{1,\infty}(\Omega))$.} Here we assume that \begin{equation} \label{AdditionalAssumption} y \in V^{2,1} (Q_{T_1}) \cap L^2(0,T_1; W^{1,\infty}(\Omega)). \end{equation} With $\varepsilon >0$ given by \eqref{varepsilon}, we then define a function $\vartheta \in C^{\infty}(\R^2)$ taking values in $[0,1]$ and such that \begin{equation}\label{vartheta} \vartheta(x_{1},x_{2})=\left\{ \begin{array}{ll} 0\quad&\mbox{if}\quad (x_{1},x_2) \in [0,d] \times [A_1,1-A_1],\\ 1\quad&\mbox{if}\quad (x_{1},x_2) \in \R^2 \setminus [-\frac{\varepsilon}{2}, d + \frac{\varepsilon}{2}] \times [A_1 -\frac{\varepsilon}{2} ,1-A_1 + \frac{\varepsilon}{2}]. \end{array}\right. \end{equation} We consider the following auxiliary transport problem \begin{equation}\label{auxtrans} \left\{ \begin{array}{ll} \displaystyle \frac{\partial\Psi}{\partial t}+((v_{s}+e^{-\beta t}\mathbb{E}y)\cdot\nabla)\Psi=0\quad&\mbox{in}\quad \R^2\times (0,T_{1}),\\ \Psi(\cdot,0)=\vartheta\quad&\mbox{in}\quad \R^2. \end{array}\right. \end{equation} Since $v_{s}+e^{-\beta t}\mathbb{E}y$ belongs to $L^2(0,T_1; W^{1,\infty}(\R^2)),$ the system \eqref{auxtrans} can be solved using the characteristics formula to obtain \begin{equation}\label{reprP} \begin{array}{l} \Psi(x,t)=\vartheta(X(x,0,t))\quad\mbox{for all}\quad (x,t)\in\R^2\times[0,T_{1}], \end{array} \end{equation} where the flow $X(\cdot,\cdot,\cdot)$, defined by \eqref{4-5}, is globally Lipschitz in $\R^2\times[0,T_{1}]$. It follows that $\Psi$ is also globally Lipschitz in $\R^2\times[0,T_{1}]$. Besides, this formula immediately provides the non-negativity of $\Psi$ in $\R^2\times[0,T_{1}]$. We now introduce the following quantity: \begin{equation}\label{lyap} \begin{array}{l} \displaystyle E_{loc}(t)=\frac{1}{2}\int_{\Omega} \Psi(x,t) |\sigma(x,t)|^{2} dx \quad\mbox{for all}\quad t\in [0,T_{1}]. \end{array} \end{equation} The idea is that this quantity will measure the $L^2$ norm of $\sigma(\cdot,t)$ localized in the support of $\Psi(\cdot,t)$. \\ In order to evaluate how the quantity $E_{loc}$ evolves, we use the renormalization property \eqref{Renormalized} with $\xi(s) = s^2$ and we compute the time derivative of $E_{loc}$ (in $\mathcal{D}'(0,T_{1})$): \begin{equation}\label{timeder} \begin{array}{ll} \displaystyle \frac{d}{dt}E_{loc}(t) & \displaystyle =\frac{1}{2}\int\limits_{\Omega} \Big(\frac{\partial\Psi }{\partial t}+({v}_s+e^{-\beta t}\mathbb{E}{y})\cdot\nabla\Psi\Big)|\sigma|^{2}dx+\beta\int\limits_{\Omega}\Psi|\sigma|^{2} dx \\ & \displaystyle \quad -\frac{1}{2}\int\limits_{\Gamma}\Psi |\gamma_{\sigma}|^{2} (({v}_s+e^{-\beta t}\mathbb{E}{y})\cdot n)dm \\ & \displaystyle\leqslant\beta\int\limits_{\Omega}\Psi |\sigma|^{2}dx = 2 \beta E_{loc}(t). \end{array} \end{equation} In the above calculation we have used that $\Psi$ solves the equation \eqref{auxtrans}$_{1},$ that $\gamma_{\sigma}$ (the trace of $\sigma,$ see Theorem \ref{trace}, item $(ii)$) vanishes on $\Sigma_{in,y,T_1}$, and that $\Psi$ stays non-negative in $\R^2\times(0,T_{1})$.
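Let us make the sign of the boundary term discarded in the last step of \eqref{timeder} explicit. Writing $v=v_{s}+e^{-\beta t}\mathbb{E}{y}$ and using that $\gamma_{\sigma}$ vanishes on the inflow part $\{v\cdot n<0\}$ of the boundary while $\Psi\geqslant 0,$ one has, for a.e. $t\in(0,T_{1}),$ $$-\frac{1}{2}\int\limits_{\Gamma}\Psi |\gamma_{\sigma}|^{2}\, (v\cdot n)\,dm=-\frac{1}{2}\int\limits_{\{v\cdot n>0\}}\Psi |\gamma_{\sigma}|^{2}\, (v\cdot n)\,dm\leqslant 0.$$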
Now using Gr\"{o}nwall's inequality in \eqref{timeder}, we get \begin{equation}\label{pregron} \begin{array}{l} \displaystyle \frac{1}{2}\int\limits_{\Omega}\Psi(x,T_1) |\sigma(x,T_{1})|^{2}dx=E_{loc}(T_{1})\leqslant e^{2\beta T_{1}}E_{loc}(0) = 0, \end{array} \end{equation} where the last identity comes from the fact that ${\sigma}_{0}\in L^{\infty}(\Omega)$ satisfies the condition \eqref{1-4} and the choice of $\Psi$ in \eqref{vartheta}, \eqref{auxtrans}.\\ We now prove that \begin{equation} \label{Psi-T1} \forall x \in \Omega, \quad \Psi(x, T_1) = 1. \end{equation} In order to prove \eqref{Psi-T1}, we will rely on the formula \eqref{reprP} and Lemma \ref{ld3}. Indeed, for $x = (x_1,x_2) \in \Omega$, we have $$ X_0(x,0,T_1) = \begin{pmatrix} x_{1}-T_{1}(x_{2}(1-x_{2}))\\ x_{2} \end{pmatrix}. $$ Therefore, if $x = (x_1,x_2) \in \Omega$ satisfies $x_2 \in (A_1 - \varepsilon, 1- A_1+\varepsilon)$, as one has $x_{2}(1-x_{2})>(A_{1}- \varepsilon)(1-A_{1} + \varepsilon)$, $(X_0(x, 0,T_1))_1 \leq d - T_1 (A_1-\varepsilon)(1-A_1+\varepsilon) = - \varepsilon$ thanks to \eqref{varepsilon}. Similarly, if $x_2 \in [0,1] \setminus (A_1-\varepsilon, 1-A_1+\varepsilon)$, then $(X_0(x,0,T_1))_2 = x_{2} \in [0,1] \setminus (A_1-\varepsilon, 1-A_1+\varepsilon)$. In particular, one obtains that for all $x=(x_{1},x_{2})\in \Omega$ \begin{equation}\label{X0es} \begin{array}{l} X_{0}(x,0,T_{1})\in \R^2 \setminus (-\varepsilon, d+\varepsilon) \times (A_{1}-\varepsilon,1-A_{1} + \varepsilon). \end{array} \end{equation} Now set $\displaystyle K_{3}=K_{3}(T_{1})=\frac{\varepsilon}{2K_{4}(T_{1})}>0,$ where $\displaystyle K_{4}(T_{1})$ is the constant appearing in Lemma \ref{ld3}, and assume that \begin{equation}\label{smally} \begin{array}{l} \displaystyle \|y\|_{V^{2,1}(Q_{T_1})}<K_{3}. \end{array} \end{equation} The inequality \eqref{4-9}, \eqref{X0es} and the assumption \eqref{smally} furnish that for all $x\in \Omega$, \begin{equation}\label{charnear} \begin{array}{l} \displaystyle X(x,0,T_{1})\in \R^2 \setminus [-\frac{\varepsilon}{2},d+\frac{\varepsilon}{2}]\times[A_1 -\frac{\varepsilon}{2} ,1-A_1 + \frac{\varepsilon}{2}]. \end{array} \end{equation} Now using the representation \eqref{reprP} of $\Psi$, we immediately deduce \eqref{Psi-T1}. The estimate \eqref{pregron} then yields that $\sigma$ vanishes at time $T_1$ in the whole set $\Omega$, i.e. the identity \eqref{Sigma-t-1=0}.\\ {\it The general case $y \in V^{2,1}(Q_{T_1})$.} We now discuss the case in which $y$ does not satisfy the regularity \eqref{AdditionalAssumption} and $y$ only belongs to $V^{2,1}(Q_\infty)$ as stated in Theorem \ref{t4-1}. In order to deal with this case, we use the density of $V^{2,1}(Q_{T_1})\cap L^2(0,T_1; W^{1, \infty}(\Omega))$ in $V^{2,1}(Q_{T_1}) $. In particular, if $y$ belongs to $V^{2,1}(Q_\infty)$ and satisfies \eqref{smally}, we can find a sequence $y_n$ of functions of $V^{2,1}(Q_{T_1}) \cap L^2(0,T_1; W^{1, \infty}(\Omega))$ such that $y_n$ strongly converges to $y$ in $V^{2,1}(Q_{T_1})$ and for all $n$, $\| y_n \|_{V^{2,1}(Q_{T_1})} < K_3$. Using then the previous arguments, we can show that for all $n$, $\sigma_n(x,T_1) = 0$ for all $x \in \Omega$, where $\sigma_n$ denotes the solution of \eqref{4-12} on the time interval $(0,T_1)$ associated with the velocity field $v_{n}=v_{s}+e^{-\beta t}y_{n}$. The strong convergence of $(y_n)$ to $y$ in $V^{2,1}(Q_{T_1})$, hence of $v_{n}$ to $v=v_{s}+e^{-\beta t}{y}$ in $L^1(Q_{T_1})$ and of $v_{n}\cdot n$ to $v\cdot n$ in $L^1(\Sigma_{T_1})$, and Lemma \ref{l4-4} then imply \eqref{Sigma-t-1=0}.\\ {\it End of the proof of Theorem \ref{t4-1}}.
It remains to show that, when $y \in V^{2,1}(Q_\infty)$ satisfies the condition \eqref{smally}, the solution $\sigma$ of \eqref{4-3} stays zero for times larger than $T_1$. This is immediate, as one can replace \eqref{4-3}$_{3}$ by $\sigma(x,T_{1})=0$ on $\Omega$ and solve the Cauchy problem \eqref{4-3} in the time interval $[T_{1},\infty)$ to obtain, by the uniqueness part of Theorem \ref{l4-2}, that $\sigma$ is the trivial solution $$\sigma(x,s)=0\quad\mbox{for all}\quad (x,s)\in\Omega\times[T_{1},\infty).$$ This concludes the proof of Theorem \ref{t4-1}. \end{proof} \begin{remark} In the above proof, we have handled separately the case $y \in V^{2,1}(Q_{T_1}) \cap L^2(0,T_1; W^{1, \infty}(\Omega))$ from the case of a general vector field $y \in V^{2,1}(Q_{T_1})$, because the solution $\Psi$ of \eqref{auxtrans} for a vector field $y \in V^{2,1}(Q_{T_1})$ has \emph{a priori} only H\"older regularity (see in particular \cite[Theorem 3.7]{Bahouri-Chemin-Danchin}), and thus cannot be used directly as a test function in the weak formulation \eqref{Renormalized} to obtain \eqref{timeder}. \end{remark} \begin{remark} In general, to prove the stabilizability of a nonlinear problem it is usual to first study the stabilizability of the corresponding linear problem and then to treat the nonlinear terms as source terms in order to obtain an analogous stabilizability result for the complete nonlinear system. The reader may notice that, contrary to this usual method, we did not consider the nonlinear term $({y}\cdot\nabla)\sigma$ (nonlinear in $(\sigma,y)$ but linear in $\sigma$) as a source term while dealing with the system \eqref{4-3}. This is because the transport equation has no regularizing effect on its solution; hence it is not possible to treat the nonlinear term in \eqref{4-3} as a source term and still recover the solution in $L^{\infty}(Q_{\infty}).$ \end{remark} \section{Stabilization of the two-dimensional Navier-Stokes equations}\label{final} \begin{proof}[Proof of Theorem \ref{main}] We will prove Theorem \ref{main} using the Schauder fixed point theorem. We now discuss the strategy of the proof.\\[1.mm] (i) First we define an appropriate fixed point map. This will be done in Section \ref{dfp}.\\ (ii) Then we fix a suitable ball which is stable under the map defined in step (i). This is done in Section \ref{sbit}.\\ (iii) In Section \ref{coco} we show that the ball defined in step (ii) is compact in a suitable topology. We then prove that the fixed point map from step (i) is continuous in that topology.\\ (iv) Finally, we combine these steps to conclude the proof of Theorem \ref{main}.
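Schematically, the fixed point map constructed in Section \ref{dfp} below factors as $$(\widehat{y},\widehat{w}_{c})\ \longmapsto\ \widehat{\sigma}\ \longmapsto\ ({y},w_{c})=\chi(\widehat{y},\widehat{w}_{c}),$$ where $\widehat{\sigma}$ solves the transport equation associated with the velocity field $v_{s}+e^{-\beta t}\widehat{y}$ and $({y},{w}_{c})$ solves the closed loop velocity system with the source term $\mathcal{F}(\widehat{y},\widehat{\sigma}).$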
\subsection{Definition of a fixed point map}\label{dfp} Let us recall the fully nonlinear system (including the boundary controls) under consideration: \begin{equation}\label{5-2*} \left\{\begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \sigma(x,t)=0 \quad &\mbox{on} \quad \Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \vspace{2.mm}\\ \displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla)v_s +\nabla q=\mathcal{F}({y},\sigma)\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \mbox{div}({y})=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ {y}=0\quad& \mbox {on} \quad (\Gamma_0\cup \Gamma_{out} )\times (0,\infty), \vspace{1.mm}\\ {y}=\sum\limits_{j=1}^{N_{c}}{w}_{j}(t){{g}_{j}}(x) \quad &\mbox{on} \quad \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_0\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ {w}_{c}^{'}+{\gamma }{w}_{c}-\mathcal{K}(Py,w_{c})=0\quad& \mbox{in}\quad (0,\infty), \vspace{1.mm}\\ {w}_{c}(0)=0,& \end{array}\right. \end{equation} where $$\mathcal{F}({y},\sigma)=-e^{-\beta t}{\sigma}\frac{\partial {y}}{\partial t}-e^{-\beta t}({y}\cdot \nabla){y}-e^{-\beta t}{\sigma}(v_{s}\cdot \nabla){y} -e^{-\beta t}{\sigma}({y}\cdot \nabla)v_{s}-e^{-2\beta t}{\sigma}({y}\cdot \nabla){y}+\beta e^{-\beta t} \sigma {y},$$ and $w_{c}=(w_{1},...,w_{N_{c}}).$ To prove the existence of a solution of the system \eqref{5-2*} we are going to define a suitable fixed point map.\\ Now assume that $\sigma_{0}\in L^{\infty}(\Omega)$ satisfies \eqref{1-4}. Recall the definition of the ${g_{j}}$'s from \eqref{basisU0}. Let us suppose that $\widehat{y}\in {V}^{2,1}(Q_{\infty})$ satisfies \eqref{yK3} and that on the boundary it is given in the following form \begin{equation}\label{hatybd} \widehat{y}\mid_{\Sigma_{\infty}}=\sum\limits_{j=1}^{N_{c}}\widehat{w}_{j}(t){g_{j}}(x), \end{equation} where $\widehat{w}_{c}=(\widehat{w}_{1},...,\widehat{w}_{N_{c}})\in H^{1}(0,\infty;\mathbb{R}^{N_{c}}).$ In addition, the coefficients $\widehat{w}_{c}$ are assumed to be such that $\widehat{y}$ satisfies the following boundary estimate \begin{equation}\label{concor**} \|\widehat{y}\mid_{\Sigma_{\infty}}\|_{L^{\infty}(\Sigma_{\infty})}\leqslant \frac{L(1-L)}{2}, \end{equation} where the constant $L$ was fixed in \eqref{dgc}.
We further assume that $y_{0}\in V^{1}_{0}(\Omega).$\\ We consider the following set of equations \begin{equation}\label{5-2} \left\{\begin{array}{ll} \displaystyle \frac{\partial {\widehat{\sigma}}}{\partial t}+(({v}_s+e^{-\beta t}\widehat{y}) \cdot \nabla){\widehat{\sigma}}-\beta{\widehat{\sigma}}=0\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ {\widehat{\sigma}} (x,t)=0 \quad &\mbox{on} \quad\Gamma_{in} \times(0,\infty), \vspace{1.mm}\\ {\widehat{\sigma}} (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega, \vspace{2.mm}\\ \displaystyle \frac{\partial y}{\partial t}-\beta y-\nu \Delta y+ ({v}_s \cdot \nabla)y+(y \cdot \nabla)v_s +\nabla q=\mathcal{F}(\widehat{y},{\widehat{\sigma}})\quad &\mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ \mbox{div}(y)=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{1.mm}\\ y=0\quad &\mbox {on} \quad (\Gamma_0\cup\Gamma_{out}) \times (0,\infty), \vspace{1.mm}\\ y=\sum\limits_{j=1}^{N_{c}}{w}_{j}(t){{g}_{j}}(x) \quad &\mbox{on} \quad \Gamma_{in} \times (0,\infty), \vspace{1.mm}\\ y(x,0)={y}_0\quad&\mbox{in}\quad\Omega, \vspace{1.mm}\\ {w}_{c}^{'}+{\gamma }{w}_{c}-\mathcal{K}(Py,w_{c})=0\quad& \mbox{in}\quad (0,\infty), \vspace{1.mm}\\ {w}_{c}(0)=0,& \end{array}\right. \end{equation} where $$\mathcal{F}(\widehat{y},{\widehat{\sigma}})=-e^{-\beta t}{\widehat{\sigma}}\frac{\partial \widehat{y}}{\partial t}-e^{-\beta t}(\widehat{y}\cdot \nabla)\widehat{y}-e^{-\beta t}{\widehat{\sigma}}(v_{s}\cdot \nabla)\widehat{y} -e^{-\beta t}{\widehat{\sigma}}(\widehat{y}\cdot \nabla)v_{s}-e^{-2\beta t}{\widehat{\sigma}}(\widehat{y}\cdot \nabla)\widehat{y}+\beta e^{-\beta t} \widehat{\sigma} \widehat{y}$$ and $w_{c}=(w_{1},...,w_{N_{c}}).$ Since \eqref{hatybd} and \eqref{concor**} hold, one can verify that $\widehat{y}$ satisfies \eqref{cony.n}. Hence we can solve \eqref{5-2}$_{1}$-\eqref{5-2}$_{3}$ for $\widehat{\sigma}$ in $L^{\infty}(Q_{\infty})$ (see Section \ref{density}). Now using this $\widehat{\sigma}$ and $\widehat{y}$ one can solve \eqref{5-2}$_{4}$-\eqref{5-2}$_{10}$ (see Section \ref{velocity}) for $(y,w_{c})$ provided $\mathcal{F}(\widehat{y},\widehat{\sigma})\in L^{2}(Q_{\infty}).$ This is indeed the case since $\widehat{y}\in V^{2,1}(Q_{\infty})$ and $\widehat{\sigma}\in L^{\infty}(Q_{\infty});$ the detailed estimates are carried out in Lemma \ref{l5-2}.\\ At this point we fix $T_{1}>T_{A_{1}}=\frac{d}{A_{1}(1-A_{1})}$ in Theorem \ref{t4-1}. We also fix the constant $K_{3}$ appearing in Theorem \ref{t4-1}. Let $0<\mu<K_{3}.$ We define a convex set $D_{\mu}$ as follows \begin{equation}\label{Dmu} D_{\mu}=\left\{ \begin{split} & \begin{pmatrix} \widehat{y}\\\widehat{w}_{c} \end{pmatrix}\in {V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})\suchthat\|(\widehat{y},\widehat{w}_{c})\|_{{V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant\mu\\ &\,\,\,\,\mbox{and}\, \widehat{y}\mid_{\Sigma_{\infty}} \mbox{is of the form}\,\,\eqref{hatybd}\,\, \mbox{and satisfies the condition}\,\,\eqref{concor**} \end{split}\right\}.
\end{equation} Notice that $(0,0)$ belongs to $D_{\mu},$ hence $D_{\mu}$ is non-empty.\\ Let $(\widehat{\sigma},y,w_{c})\in L^{\infty}(Q_{\infty})\times{V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})$ be the solution of system \eqref{5-2} corresponding to $(\widehat{y},\widehat{w}_{c})\in D_{\mu}.$ We consider the following map \begin{equation}\label{chi} \begin{matrix} \displaystyle \chi: & D_{\mu} & \longrightarrow & {V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})\\ \displaystyle &(\widehat{y},\widehat{w}_{c}) &\mapsto & ( \displaystyle {{y},w_{c}} ). \end{matrix} \end{equation} In the sequel we will choose the constant $\mu\in(0,K_{3})$ small enough that $\chi$ maps $D_{\mu}$ into itself.\\ We will then look for a fixed point of the map $\chi.$ Indeed, if $( y_{f},w_{{f},c} )$ is a fixed point of the map $\chi,$ then by construction there exists a function $\sigma_{f}$ such that the triplet $(\sigma_{{f}},{y}_{f},w_{{f},c})$ solves \eqref{5-2*}. Hence in order to prove Theorem \ref{main} it is enough to show that the map $\chi$ has a fixed point in $D_{\mu}$. \subsection{$\chi$ maps $D_{\mu}$ into itself}\label{sbit} In this section we will choose a suitable constant $\mu$ such that $\chi$ maps $D_{\mu}$ into itself, provided the initial data are small enough.\\ Now given $( \widehat{y},\widehat{w}_{c} )\in D_{\mu},$ we can use \eqref{fiessig} in order to show that $\widehat{\sigma},$ the solution of \eqref{5-2}$_{1}$-\eqref{5-2}$_{3},$ satisfies the following \begin{equation}\label{bdtran} \begin{array}{l} \|\widehat{\sigma}\|_{L^{\infty}(Q_{\infty})}\leqslant e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}. \end{array} \end{equation} \begin{lem}\label{l5-2} If $( \widehat{y},\widehat{w}_{c} )$ belongs to $D_{\mu}$ (defined in \eqref{Dmu}) and $\widehat{\sigma}$ is the solution of the problem \eqref{5-2}$_{1}$-\eqref{5-2}$_{3},$ then $\mathcal{F}(\widehat{y},\widehat{\sigma})\in L^{2}(Q_{\infty}).$ Besides, there exist constants $K_{5}>0,$ $K_{6}>0$ such that for all $(\widehat{y},\widehat{w}_{c})\in D_{\mu}$ and for all $(\sigma_{0},y_{0})$ with $\sigma_{0}$ satisfying \eqref{1-4} and $e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}<1,$ the following estimate is true: \begin{equation}\label{5-9} \begin{array}{l} \| \mathcal{F}(\widehat{y},\widehat{\sigma})\|_{L^{2}(Q_{\infty})}\leqslant K_{5}e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)} +K_{6}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})}^{2}. \end{array} \end{equation} \end{lem} \begin{proof} First we use \eqref{bdtran} to show \begin{equation}\label{5-3} \begin{split} \|{\widehat{\sigma}}\frac{\partial\widehat{y}}{\partial t}\|_{L^{2}(Q_{\infty})} &\leqslant e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})}. \end{split} \end{equation} Recall that $v_{s}\in C^{\infty}(\bar{\Omega}).$ Hence we again apply \eqref{bdtran} to get \begin{equation}\label{5-4} \|{\widehat{\sigma}}(v_{s}\cdot\nabla)\widehat{y}\|_{L^{2}(Q_{\infty})}\leqslant e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|v_{s}\|_{W^{1,\infty}(\Omega)}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})} \end{equation} and \begin{equation}\label{5-5} \|{\widehat{\sigma}}(\widehat{y}\cdot\nabla)v_{s}\|_{L^{2}(Q_{\infty})}\leqslant e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|v_{s}\|_{W^{1,\infty}(\Omega)}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})}.
\end{equation} Now we estimate $(\widehat{y}\cdot\nabla)\widehat{y}$ in $L^{2}(Q_{\infty}).$ We know that $V^{2,1}(Q_{\infty})$ is continuously embedded in the space $L^{\infty}(0,\infty;H^{1}(\Omega)).$ Hence $\widehat{y}\in L^{\infty}(0,\infty;H^{1}(\Omega)),$ $\nabla \widehat{y}\in L^{2}(0,\infty;H^{1}(\Omega)),$ and using the two dimensional Sobolev embedding $H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)$ the following holds \begin{equation}\label{5-6} \begin{split} \|(\widehat{y}\cdot\nabla)\widehat{y}\|_{L^{2}(Q_{\infty})} & \leqslant K\|\widehat{y}\|_{L^{\infty}(0,\infty;H^{1}(\Omega))}\|\nabla\widehat{y}\|_{L^{2}(0,\infty;H^{1}(\Omega))}\leqslant K \|\widehat{y}\|^{2}_{{V}^{2,1}(Q_{\infty})}. \end{split} \end{equation} Similarly \begin{equation}\label{5-7} \begin{array}{l} \|{\widehat{\sigma}}(\widehat{y}\cdot\nabla)\widehat{y}\|_{L^{2}(Q_{\infty})}\leqslant Ke^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|\widehat{y}\|^{2}_{{V}^{2,1}(Q_{\infty})} \end{array} \end{equation} and \begin{equation}\label{5-8} \begin{array}{l} \|\beta{\widehat{\sigma}}\widehat{y}\|_{L^{2}(Q_{\infty})}\leqslant |\beta|e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})}. \end{array} \end{equation} Now observe that \begin{equation}\label{esmu0mu} \begin{array}{l} e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}\|\widehat{y}\|_{V^{2,1}(Q_{\infty})} \leqslant \frac{1}{2}( e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}+\|\widehat{y}\|_{V^{2,1}(Q_{\infty})}^{2}). \end{array} \end{equation} Hence we use the estimates \eqref{5-3}-\eqref{5-8} together with \eqref{esmu0mu} to conclude the proof of Lemma \ref{l5-2} and of the estimate \eqref{5-9}. \end{proof} \begin{lem}\label{eohy} There exist constants $K_{7}>\max\{1,K_{5},K_{6}\}>0,$ $K_{8}>\max\{K_{5},K_{6}\}>0$ such that for all $(\widehat{y},{\widehat{w}_{c}})\in D_{\mu}$ (defined in \eqref{Dmu}), for all $(\sigma_{0},y_{0})$ with $\sigma_{0}$ satisfying \eqref{1-4}, $e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}<1,$ and for $\widehat{\sigma}$ uniquely solving \eqref{5-2}$_{1}$-\eqref{5-2}$_{3},$ the pair $({y},w_{c} )=\chi(\widehat{y},\widehat{w}_{c})$ solving \eqref{5-2}$_{4}$-\eqref{5-2}$_{10}$ is well defined and satisfies the following inequality \begin{equation}\label{5-15} \begin{array}{l} \|({y},w_{c})\|_{{V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant K_{7}{\max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}+K_{8}\|\widehat{y}\|_{{V}^{2,1}(Q_{\infty})}^{2}. \end{array} \end{equation} \end{lem} \begin{proof} Corollary \ref{c3-8} shows that $( y,w_{c} )$ satisfies the following estimate \begin{equation}\label{3-24re} \begin{array}{l} \|( y,w_{c} )\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant K_{1}(\| {y}_{0} \|_{{V}^{1}_{0}(\Omega)} +\| \mathcal{F}(\widehat{y},{\widehat{\sigma}})\|_{L^{2}(Q_{\infty})}). \end{array} \end{equation} Now using \eqref{5-9} in \eqref{3-24re}, we get the desired result. \end{proof} From now on we will consider initial data $\sigma_{0}\in L^{\infty}(\Omega)$ and $y_{0}\in V^{1}_{0}(\Omega)$ satisfying \begin{equation}\label{smu0} \left\{ \begin{array}{l} \displaystyle \sigma_{0}\,\,\mbox{satisfies}\,\,\eqref{1-4},\\ \displaystyle \mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}<\min\,\left\{\frac{L(1-L)}{8K_{2}K_{7}},\frac{K_{3}}{2K_{7}},\frac{1}{4K_{7}K_{8}},1\right\}, \end{array}\right. \end{equation} where $K_{2},$ $K_{7}$ and $K_{8}$ are the constants appearing respectively in \eqref{dofcon} and \eqref{5-15}. Each of these thresholds plays a role in the proof of Lemma \ref{l5-3} below: with $\mu$ chosen as in \eqref{setmu}, the first one yields the boundary estimate \eqref{concor**} via \eqref{smalasum}, the second one guarantees $\mu<K_{3}$ so that Theorem \ref{t4-1} applies, the third one gives $K_{8}\mu^{2}\leqslant\mu/2,$ and the last one ensures $e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)}<1.$
\begin{lem}\label{l5-3} For all $(\sigma_{0},y_{0})$ satisfying \eqref{smu0}, setting \begin{equation}\label{setmu} \begin{split} \displaystyle \mu={2K_{7}\mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}}, \end{split} \end{equation} where $K_7$ is the constant in \eqref{5-15}, the map $\chi$ (defined in \eqref{chi}) maps $D_{\mu}$ (defined in \eqref{Dmu}) into itself. \end{lem} \begin{proof} In view of \eqref{smu0}$_{2}$ and \eqref{setmu}, one observes in particular that \begin{equation}\label{mu0} \begin{array}{l} \displaystyle 0<\mu<\min\,\left\{\frac{L(1-L)}{4K_{2}},{K_{3}},\frac{1}{2K_{8}}\right\}. \end{array} \end{equation} Now we will verify that with the choice \eqref{setmu} of $\mu,$ the map $\chi$ maps $D_{\mu}$ into itself. Let $(\widehat{y},{\widehat{w}_{c}})\in D_{\mu}.$ For $(\sigma_{0},y_{0})$ obeying \eqref{smu0} and $\widehat{\sigma}$ uniquely solving \eqref{5-2}$_{1}$-\eqref{5-2}$_{3},$ the pair $({y},w_{c})=\chi(\widehat{y},\widehat{w}_{c})$ solves \eqref{5-2}$_{4}$-\eqref{5-2}$_{10}.$ We claim that $(y,w_{c})\in D_{\mu}.$\\ First of all, in view of \eqref{5-15}, \eqref{smu0}$_{2},$ \eqref{setmu} and \eqref{mu0} we observe that \begin{equation}\nonumber \begin{split} \displaystyle \|({y},w_{c})\|_{{V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant K_{7}\mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}+K_{8}\mu^{2}\leqslant \mu. \end{split} \end{equation} Indeed, \eqref{setmu} gives $K_{7}\,\mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}=\mu/2,$ while \eqref{mu0} gives $K_{8}\mu^{2}\leqslant\mu/2.$ Since $({y},w_{c} )=\chi(\widehat{y},\widehat{w}_{c})$ solves \eqref{5-2}$_{4}$-\eqref{5-2}$_{10},$ the function $y$ on the boundary is given by $\sum\limits_{j=1}^{N_{c}}{w}_{j}(t){{g}_{j}}(x)$. This verifies \eqref{hatybd}.\\ Finally \begin{equation}\label{L1L} \begin{split} \|{y}_{0} \|_{{V}^{1}_{0}(\Omega)}+\| \mathcal{F}(\widehat{y},\widehat{\sigma})\|_{L^{2}(Q_{\infty})}&\leqslant (1+K_{5})\mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\} +K_{6}\mu^{2}\\ & \leqslant\frac{3}{8}\frac{L(1-L)}{K_{2}}\leqslant\frac{L(1-L)}{2K_{2}}, \end{split} \end{equation} where in \eqref{L1L}$_{1}$ we have used \eqref{smu0}$_{2},$ \eqref{5-9} and in \eqref{L1L}$_{2}$ we have used \eqref{smu0}$_{2}$, \eqref{mu0} and the fact that $K_{7}>\max\{1,K_{5},K_{6}\}>0,$ $K_{8}>\max\{K_{5},K_{6}\}>0$ (which follows from the statement of Lemma \ref{eohy}). Now using Corollary \ref{p3.0.2} one verifies \eqref{concor**} for $y\mid_{\Sigma_{\infty}}.$\\ Hence we have verified that $(y,w_{c})\in D_{\mu}$ and the proof of Lemma \ref{l5-3} is finished. \end{proof} At this point we fix $\mu$ as in Lemma \ref{l5-3}. \subsection{Compactness and continuity}\label{coco} To start with, let us define the weighted space \begin{equation}\nonumber \begin{array}{l} \displaystyle L^{2}(0,\infty,(1+t)^{-1}dt;{ L^{2}}(\Omega)\times\mathbb{R}^{N_{c}})\\ \displaystyle=\left\{\overline{z}=\begin{pmatrix} z(x,t)\\w_{c}(t) \end{pmatrix}\in L^{2}(Q_{\infty})\times L^{2}((0,\infty);\mathbb{R}^{N_{c}})\suchthat \int\limits_{0}^{\infty}{(1+t)^{-2}}{\|\overline{z}\|_{{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}}^{2}}dt<\infty\right\}.
\end{array} \end{equation} We endow the set $D_{\mu},$ defined in \eqref{Dmu}, with the norm induced from $L^{2}(0,\infty,(1+t)^{-1}{dt};{L^{2}}(\Omega)\times\mathbb{R}^{N_{c}}).$ \begin{lem}\label{l5-4} The set $D_{\mu}$ is compact in $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ \end{lem} \begin{proof} We divide the proof into two steps.\\ Step 1.\,\,We claim that $D_{\mu}$ is closed in the space $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ Consider a sequence $\{\overline{y}_{n}\}_{n}$ $\left(\mbox{where}\,\,\overline{y}_{n}=( y_{n},{w_{n,c}} ) \right)$ in $D_{\mu}$ such that $\{\overline{y}_{n}\}_{n}$ converges to some $\overline{y}$ $\left(\mbox{where}\,\,\overline{y}=( y,{w_{c}} ) \right)$ in the space $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ We will check that $\overline{y}\in D_{\mu}.$ Since for all $n,$ $\overline{y}_{n}\in D_{\mu},$ the definition of $D_{\mu}$ (see \eqref{Dmu}) yields \begin{equation}\label{bndyn} \begin{array}{l} \|\overline{y}_{n}\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant \mu. \end{array} \end{equation} In view of \eqref{bndyn}, up to a subsequence, $\{\overline{y}_{n}\}_{n}$ converges weakly in $V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}}),$ and since $\{\overline{y}_{n}\}_{n}$ converges to $\overline{y}$ in $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}),$ this weak limit must coincide with $\overline{y}.$ Using the lower semi-continuity of the norm with respect to weak convergence, one obtains \begin{equation}\label{barymu} \begin{array}{l} \|\overline{y}\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant\mu. \end{array} \end{equation} Now we will verify that \begin{equation}\label{tracey} \begin{array}{l} y\mid_{\Sigma_{\infty}}=\sum\limits_{j=1}^{N_{c}}w_{j}(t){{g}_{j}}(x)\quad\mbox{for all}\quad (x,t)\in\Sigma_{\infty}, \end{array} \end{equation} where ${w_{c}}=(w_{1},...,w_{N_{c}}).$ From \eqref{bndyn} one has the following weak convergence $$y_{n}\rightharpoonup y\quad\mbox{in}\quad L^{2}(0,\infty;H^{2}(\Omega)),\quad\mbox{and}\quad w_{n,c}\rightharpoonup w_{c}\quad\mbox{in}\quad H^{1}(0,\infty;\mathbb{R}^{N_{c}}).$$ As the trace operator is linear and bounded from $H^{2}(\Omega)$ onto $H^{3/2}(\Gamma),$ $y_{n}\mid_{\Sigma_{\infty}}$ converges weakly to $y\mid_{\Sigma_{\infty}}$ in $L^{2}(0,\infty;H^{3/2}(\Gamma)).$ On the other hand, as $\overline{y}_{n}\in D_{\mu},$ for each $n$ $${y}_{n}\mid_{\Sigma_{\infty}}=\sum\limits_{j=1}^{N_{c}}w_{n,j}(t){{g}_{j}}(x),$$ where $w_{n,c}=(w_{n,1},...,w_{n,N_{c}}).$ Now since ${w_{n,c}}$ converges weakly to ${w_{c}}$ in $H^{1}(0,\infty;\mathbb{R}^{N_{c}}),$ we have the following convergence in the sense of distributions $${y}_{n}\mid_{\Sigma_{\infty}} \xrightarrow[n\rightarrow \infty]{} \sum\limits_{j=1}^{N_{c}}w_{j}(t){{g}_{j}}(x)\quad\mbox{in}\quad \mathscr{D}'(\Sigma_{\infty}).$$ Since the distributional limit and the weak limit (in the space $L^{2}(0,\infty;H^{3/2}(\Gamma))$) of $y_{n}\mid_{\Sigma_{\infty}}$ coincide, one at once obtains the expression \eqref{tracey} of $y\mid_{\Sigma_{\infty}}.$ Also, using the continuous embedding $H^{1}(0,\infty)\hookrightarrow L^{\infty}(0,\infty),$ one observes that $$y_{n}\mid_{\Sigma_{\infty}}\stackrel{\ast}{\rightharpoonup}y\mid_{\Sigma_{\infty}}\quad\mbox{in}\quad L^{\infty}(\Sigma_{\infty}).$$ Hence, by lower semi-continuity of the norm with respect to the above weak-$\ast$ convergence, one has $$\|y\mid_{\Sigma_{\infty}}\|_{L^{\infty}(\Sigma_{\infty})}\leqslant \frac{L(1-L)}{2}. $$ Hence $y\mid_{\Sigma_{\infty}}$ satisfies \eqref{concor**}.
This finishes the proof of $\overline{y}\in D_{\mu}.$\\[2.mm] Step 2.\,\,Now, to prove Lemma \ref{l5-4}, it is enough to show that ${V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})$ is compactly embedded in $L^{2}(0,\infty,(1+t)^{-1}{dt};{L^{2}}(\Omega)\times\mathbb{R}^{N_{c}}).$ Let $\{\overline{z}_{n}\}_{n}$ be a sequence in ${V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})$ such that $$\| \overline{z}_{n}\|_{{V}^{2,1}(Q_{\infty})\times H^{1}((0,\infty);\mathbb{R}^{N_{c}})}\leqslant 1.$$ This implies that for any $T>0$ \begin{equation}\label{5-16} \begin{array}{l} \displaystyle \int\limits_{T}^{\infty}{(1+t)^{-2}}{\|\overline{z}_{n}\|_{{L}^{2}(\Omega)\times \mathbb{R}^{N_{c}}}^{2} }dt\leqslant \frac{1}{(1+T)^{2}}, \end{array} \end{equation} for all $n\in\mathbb{N}.$ Let $\epsilon>0.$ Choose $T_{\epsilon}>0$ such that $$\frac{1}{(1+T_{\epsilon})^{2}}\leqslant \epsilon.$$ So using \eqref{5-16} we have \begin{equation}\label{5-17} \begin{array}{l} \|\overline{z}_{n}-\overline{z}_{m}\|^{2}_{L^{2}(T_{\epsilon},\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})}\leqslant 4\epsilon, \end{array} \end{equation} for all $m,n\in \mathbb{N}.$ \newline We know from Rellich's compactness theorem and the Aubin-Lions lemma \cite{aubin} that the embedding of ${V}^{2,1}(Q_{T_{\epsilon}})\times H^{1}(0,T_{\epsilon};\mathbb{R}^{N_{c}})$ into $L^{2}(0,T_{\epsilon},{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$ is compact. Hence, up to a subsequence (denoted by the same notation), $\{\overline{z}_{n}\}_{n}$ is Cauchy in $L^{2}(0,T_{\epsilon},{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ \newline So it follows that there exists $N_{0}\in\mathbb{N}$ such that for all natural numbers $m,n\geqslant N_{0},$ \begin{equation}\label{5-18} \begin{array}{l} \|\overline{z}_{n}-\overline{z}_{m}\|^{2}_{L^{2}(0,T_{\epsilon},{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})}\leqslant\epsilon. \end{array} \end{equation} Now combining \eqref{5-17}, \eqref{5-18} and a diagonal extraction argument, we can construct a subsequence $\{\overline{z}_{n}\}_{n}$ which is a Cauchy sequence in the Banach space $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$. \newline The proof is complete. \end{proof} \begin{lem}\label{l5-5} If a sequence $\{\overline{z}_{n}\}$ in $D_{\mu}$ converges weakly to some $\overline{z}$ in ${V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}}),$ then up to a subsequence \begin{equation}\label{5-19} e^{-\beta t}\overline{z}_{n} \xrightarrow[n\rightarrow \infty]{}e^{-\beta t}\overline{z}\quad\mbox{strongly in}\quad L^{2}(0,\infty;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}). \end{equation} \end{lem} \begin{proof} The proof follows from the arguments used in proving Lemma \ref{l5-4} and is left to the reader. \end{proof} \begin{lem}\label{l5-6} The map $\chi$ is continuous on $D_{\mu},$ endowed with the norm of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$.
\end{lem} \begin{proof} Let $\{\overline{\widehat{y}}_{n}\}_{n}$ $\left( \mbox{where}\,\, \overline{\widehat{y}}_{n}=( {\widehat{y}}_{n},{\widehat{w}_{n,c}} )\right)$ be a sequence in $D_{\mu}$ and assume that this sequence $\{\overline{\widehat{y}}_{n}\}_{n}$ strongly converges to $\overline{\widehat{y}}$ $\left( \mbox{where}\,\, \overline{\widehat{y}}=( {\widehat{y}},\widehat{w}_{c} )\right)$ in the norm of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ \newline As $\|\overline{\widehat{y}}_{n}\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{{N_{c}}})}\leqslant\mu$ for all $n\in\mathbb{N},$ up to a subsequence we have the following weak convergence \begin{equation}\label{conyn} \begin{array}{l} \{\overline{\widehat{y}}_{n}\}_{n}\rightharpoonup \overline{\widehat{y}}\quad\mbox{in}\quad V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})\quad\mbox{as}\quad n\rightarrow\infty. \end{array} \end{equation} Now, corresponding to the vector field ${\widehat{y}_{n}},$ let us denote by ${\widehat{\sigma}_{n}}$ the solution to \eqref{5-2}$_{1}$-\eqref{5-2}$_{3}$. Similarly, $\widehat{\sigma}$ is the solution to \eqref{5-2}$_{1}$-\eqref{5-2}$_{3}$ which corresponds to the vector field ${\widehat{y}}.$ As ${\widehat{y}_{n}}$ converges strongly to ${\widehat{y}}$ in the norm of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)),$ for any $T>0,$ ${\widehat{y}_{n}}$ converges to ${\widehat{y}}$ in particular in the norm of $L^{1}(Q_{T}).$ Besides, the strong $L^1(\Sigma_T)$ convergence of ${\widehat{y}_{n}} \cdot \vec{n}$ towards ${\widehat{y}} \cdot \vec{n}$ is obvious in view of the identities \eqref{hatybd} and the strong convergence of $\widehat{w}_{n,c}$ to $\widehat{w}_{c}$ in $L^1(0,T)$, which follows from the weak convergence of $\widehat{w}_{n,c}$ to $\widehat{w}_{c}$ in $H^1(0,\infty)$ together with the compactness of the embedding $H^{1}(0,T)\hookrightarrow L^{1}(0,T)$. Hence from Lemma \ref{l4-4}, we obtain that ${\widehat{\sigma}_{n}}$ strongly converges to $\widehat{\sigma}$ in $C^{0}([0,T],L^{q}(\Omega))$ for all $1\leqslant q <+\infty.$ Due to the suitable choice of $\mu$ in Lemma \ref{l5-3}, we can conclude from Theorem \ref{t4-1} (in particular from \eqref{fiessig}) that each of ${\widehat{\sigma}_{n}}$ and $\widehat{\sigma}$ vanishes for $t\geqslant T_{1}.$ So \begin{equation}\label{5-20} \begin{array}{l} \widehat{\sigma}_{n} \xrightarrow[n\rightarrow \infty]{}\widehat{\sigma}\quad\mbox{strongly in}\quad L^{\infty}(0,\infty;L^{q}(\Omega))\,\,\forall\,\,1\leqslant q<+\infty, \vspace{1.mm}\\ \forall n\in\mathbb{N} ,\,{\widehat{\sigma}_{n}} (t)=\widehat{\sigma}(t) =0\quad \mbox{for all}\quad t\geqslant T_{1}. \end{array} \end{equation} Also, from \eqref{bdtran} and \eqref{smu0} we know that the $L^{\infty}(Q_{\infty})$ norm of the sequence ${\widehat{\sigma}_{n}}$ is uniformly bounded. \newline We will now check that $\mathcal{F}(\widehat{y}_{n},{\widehat{\sigma}_{n}})$ converges weakly in $L^{2}(Q_{\infty})$ to $\mathcal{F}(\widehat{y},\widehat{\sigma}).$ As $({\widehat{y}_{n}},{\widehat{w}_{n,c}})\in D_{\mu},$ from the estimate \eqref{5-9} we obtain a uniform bound for $\| \mathcal{F}({\widehat{y}_{n}},{\widehat{\sigma}_{n}})\|_{L^{2}(Q_{\infty})}.$ So every subsequence of $\mathcal{F}(\widehat{y}_{n},{\widehat{\sigma}_{n}})$ admits a further subsequence which converges weakly in $L^{2}(0,\infty;{L}^{2}(\Omega)).$ It is therefore enough to show that the sequence $\mathcal{F}({\widehat{y}_{n}},{\widehat{\sigma}_{n}})$ converges to $\mathcal{F}({\widehat{y}},\widehat{\sigma})$ in $\mathscr{D}'(Q_{\infty})$ (\emph{i.e.,} in the sense of distributions).
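Before doing so, let us record the elementary observation on which all the product convergences below rest: if $\sigma_{n}\rightarrow\sigma$ strongly in $L^{2}(Q_{\infty})$ and $z_{n}\rightharpoonup z$ weakly in $L^{2}(Q_{\infty}),$ then for every test function $\varphi\in\mathscr{D}(Q_{\infty}),$ \begin{equation}\nonumber \begin{array}{l} \displaystyle \int\limits_{Q_{\infty}}(\sigma_{n}z_{n}-\sigma z)\varphi=\int\limits_{Q_{\infty}}(\sigma_{n}-\sigma)z_{n}\varphi+\int\limits_{Q_{\infty}}\sigma(z_{n}-z)\varphi\xrightarrow[n\rightarrow \infty]{}0, \end{array} \end{equation} since the first integral is bounded by $\|\varphi\|_{L^{\infty}(Q_{\infty})}\|\sigma_{n}-\sigma\|_{L^{2}(Q_{\infty})}\sup_{m}\|z_{m}\|_{L^{2}(Q_{\infty})}$ (weakly convergent sequences being bounded), while the second integral converges to zero because $\sigma\varphi\in L^{2}(Q_{\infty}).$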
\newline Let us first check the weak convergence of the term $-e^{-\beta t}{\widehat{\sigma}_{n}}\frac{\partial{\widehat{y}_{n}}}{\partial t}.$ From \eqref{5-20} we know that ${\widehat{\sigma}_{n}}$ strongly converges to $\widehat{\sigma}$ in $L^{2}(Q_{\infty})$ and each of ${\widehat{\sigma}_{n}}$ and $\widehat{\sigma}$ vanishes for all $t\geqslant T_{1}.$ Also, from \eqref{conyn} we have that $\frac{\partial \widehat{y}_{n}}{\partial t}$ converges weakly to $\frac{\partial \widehat{y}}{\partial t}$ in $L^{2}(Q_{\infty}).$ Hence their product ${\widehat{\sigma}_{n}}\frac{\partial \widehat{y}_{n}}{\partial t}$ converges weakly to $\widehat{\sigma}\frac{\partial \widehat{y}}{\partial t}$ in $L^{1}(Q_{\infty}).$ So it is now easy to verify that $e^{-\beta t}{\widehat{\sigma}_{n}}\frac{\partial \widehat{y}_{n}}{\partial t}$ converges to $e^{-\beta t}\widehat{\sigma}\frac{\partial \widehat{y}}{\partial t}$ weakly in $L^{1}(Q_{\infty}).$\\ Now we consider $e^{-2\beta t}({\widehat{y}_{n}}\cdot\nabla){\widehat{y}_{n}}.$ As $\widehat{y}_{n}$ is bounded and weakly convergent to $\widehat{y}$ in ${V}^{2,1}(Q_{\infty}),$ using Lemma \ref{l5-5}, we have \begin{equation}\label{5-22} \begin{array}{l} e^{-2\beta t}{\widehat{y}_{n}} \xrightarrow[n\rightarrow \infty]{} e^{-2\beta t}\widehat{y}\quad \mbox{strongly in}\quad L^{2}(Q_{\infty}), \end{array} \end{equation} and \begin{equation}\label{5-23} \begin{array}{l} \nabla{\widehat{y}_{n}} \rightharpoonup\nabla\widehat{y}\quad\mbox{in}\quad L^{2}(Q_{\infty})\quad\mbox{as}\quad\,n\rightarrow\infty. \end{array} \end{equation} Therefore $e^{-2\beta t}(\widehat{y}_{n}\cdot\nabla)\widehat{y}_{n}$ converges to $e^{-2\beta t}(\widehat{y}\cdot\nabla)\widehat{y}$ weakly in $L^{1}(Q_{\infty}).$ \newline Since $\widehat{y}_{n}$ converges weakly to $\widehat{y}$ in $V^{2,1}(Q_{\infty}),$ the sequence $\{\nabla\widehat{y}_{n}\}_{n}$ is bounded in $L^{\infty}(0,\infty;L^{2}(\Omega))\cap L^{2}(0,\infty;H^{1}(\Omega))$ and $$ \nabla\widehat{y}_{n}\rightharpoonup \nabla\widehat{y}\quad\mbox{in}\quad L^{2}(0,\infty;H^{1}(\Omega))\quad\mbox{as}\quad n\rightarrow\infty. $$ We use the interpolation result \cite[Theorem II.5.5]{boy} to obtain, in particular, \begin{equation}\label{congy} \begin{array}{l} \nabla\widehat{y}_{n}\rightharpoonup \nabla\widehat{y}\quad\mbox{in}\quad L^{3}(0,\infty;L^{3}(\Omega))\quad\mbox{as}\quad n\rightarrow\infty. \end{array} \end{equation} Using \eqref{5-20}, \eqref{5-22} and \eqref{congy} one has the following weak convergence $$ e^{-2\beta t}{\widehat{\sigma}_{n}}(\widehat{y}_{n}\cdot\nabla)\widehat{y}_{n} \rightharpoonup e^{-2\beta t}\widehat{\sigma}(\widehat{y}\cdot\nabla)\widehat{y}\quad\mbox{in}\quad L^{1}(Q_{\infty})\quad\mbox{as}\quad n\rightarrow\infty.
$$ The convergences of the remaining terms $e^{-\beta t}{\widehat{\sigma}_{n}}(v_{s}\cdot\nabla)\widehat{y}_{n},$ $e^{-\beta t}{\widehat{\sigma}_{n}}(\widehat{y}_{n}\cdot\nabla)v_{s}$ and $\beta e^{-\beta t}{\widehat{\sigma}_{n}}{\widehat{y}_{n}}$ can be analyzed similarly using the convergences \eqref{conyn} and \eqref{5-20}$_{1}.$ We thus conclude that $\mathcal{F}(\widehat{y}_{n},{\widehat{\sigma}_{n}})$ converges to $\mathcal{F}(\widehat{y},\widehat{\sigma})$ in $\mathscr{D}'(Q_{\infty}),$ and hence this is also the $L^{2}(Q_{\infty})$ weak limit.\\ From Corollary \ref{c3-8}, we know that for the closed loop system \eqref{114r}, the map \begin{equation}\nonumber \begin{array}{l} \begin{matrix} L^{2}(0,\infty;{L}^{2}(\Omega))\times {V}^{1}_{0}(\Omega) & \longrightarrow & {V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})\\ ({f},{y}_{0}) & \mapsto & \overline{y} \end{matrix} \end{array} \end{equation} is linear and bounded. Hence we obtain that $\overline{y}_{n}=\chi(\overline{{\widehat{y}}}_{n})$ ($\overline{y}_{n}=( y_{n}, w_{n,c} )$) weakly converges to $\overline{y}=\chi(\overline{{\widehat{y}}})$ ($\overline{y}=( y, w_{c} )$) in $(D_{\mu},\|.\|_{V^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}).$ Finally, as $D_{\mu}$ is compact in $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$ (see Lemma \ref{l5-4}), $\overline{y}_{n}$ strongly converges to $\overline{y}$ in $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ The proof of Lemma \ref{l5-6} is complete. \end{proof} \subsection{Conclusion}\label{conc} Let $\mu$ be as in Lemma \ref{l5-3}. Then\\ $(i)$\, For an initial datum $(\sigma_{0},y_{0})$ satisfying \eqref{smu0}, the map $\chi$ defined in \eqref{chi} maps $D_{\mu}$ defined in \eqref{Dmu} into itself.\\ $(ii)$\, The non-empty convex set $D_{\mu}$ is compact in the topology of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$ (see Lemma \ref{l5-4}).\\ $(iii)$\, The map $\chi$ is continuous on $D_{\mu},$ endowed with the norm of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}})$ (Lemma \ref{l5-6}).\\ One observes that all the assumptions of the Schauder fixed point theorem are satisfied by the map $\chi$ on $D_{\mu},$ endowed with the norm of $L^{2}(0,\infty,{(1+t)^{-1}}dt;{L}^{2}(\Omega)\times\mathbb{R}^{N_{c}}).$ Therefore, the Schauder fixed point theorem yields a fixed point $( {y}_{f}, w_{{f},c} )$ of the map $\chi$ in $D_{\mu}.$ Hence the trajectory $( \sigma_{{f}}, y_{f},w_{f,c} )$ solves the nonlinear problem \eqref{5-2*}.
Moreover, as a consequence of Theorem \ref{t4-1} the following holds \begin{equation}\label{stabreq} \begin{array}{l} \sigma_{{f}}(.,t)=0\quad\mbox{in}\quad\Omega\quad\mbox{for}\quad t\geqslant T_{1}.\\ \end{array} \end{equation} Using \eqref{setmu} in \eqref{5-15} and \eqref{mu0}, one further obtains \begin{equation}\label{bndywc} \begin{array}{l} \|({y}_{f},w_{f,c})\|_{{V}^{2,1}(Q_{\infty})\times H^{1}(0,\infty;\mathbb{R}^{N_{c}})}\leqslant C\mbox{max}\,\{e^{\beta T_{1}}\|\sigma_{0}\|_{L^{\infty}(\Omega)},\|{y}_{0}\|_{{V}^{1}_{0}(\Omega)}\}, \end{array} \end{equation} for some positive constant $C.$ Once again using Theorem \ref{t4-1}, the estimate \eqref{bndywc} furnishes the following continuous dependence on the initial data \begin{equation}\label{finalest} \begin{array}{l} \|(\sigma_{{f}},y_{f})\|_{L^{\infty}(Q_{\infty})\times V^{2,1}(Q_{\infty})}\leqslant C\|(\sigma_{0},y_{0})\|_{L^{\infty}(\Omega)\times V^{1}_{0}(\Omega)}, \end{array} \end{equation} for some positive constant $C.$ Now, in view of the change of unknowns \eqref{chun}, we obtain the existence of a trajectory $(\rho,v)\in L^{\infty}(Q_{\infty})\times V^{2,1}(Q_{\infty})$ which solves \eqref{1-3} and satisfies the decay estimate \eqref{1-6}. The proof of Theorem \ref{main} is complete. \end{proof} \section{Further comments}\label{furcom} In our result, the control $u_{c}$ is supported on $\Gamma_{c},$ an open subset of the inflow part $\Gamma_{in}$ (see \eqref{dgc}) of the boundary. It is indeed natural to control the inflow boundary of the channel. At the same time, we remark that our analysis applies if one wants to control the outflow boundary $\Gamma_{out}$ or the lateral boundary $\Gamma_{0}$ of the channel $\Omega.$ In what follows we briefly discuss these cases.\\ (i) \textbf{Controlling the outflow boundary.} In this case the control zone $\Gamma_{c}$ is an open subset of $\Gamma_{out}.$ After the change of unknowns \eqref{chun}, one can imitate the linearization procedure (as done while transforming \eqref{2-1} into \eqref{2-2}). In this linearized system the transport equation modeling the density \eqref{2-2}$_{1}$-\eqref{2-2}$_{3}$ will remain unchanged, but the boundary conditions on the velocity equations \eqref{2-2}$_{4}$-\eqref{2-2}$_{8}$ should be replaced by $y=0$ on $(\Gamma_{0}\cup\Gamma_{in})\times(0,\infty)$ and $y=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x)$ on $\Gamma_{out}\times(0,\infty).$ Still, the proof of the boundary controllability of the Oseen equations can be carried out in a similar way as in Section \ref{velocity}, and in the same spirit as Corollary \ref{p3.0.2}, one can prove that if the initial condition $y_{0}$ and the non-homogeneous term $f$ are suitably small, then the inflow and the outflow boundaries of the perturbed vector field $(v_{s}+e^{-\beta t}y)$ coincide with those of $v_{s}.$ Since the transport equation \eqref{2-2}$_{1}$-\eqref{2-2}$_{3}$ remains unchanged in this case, the analysis done in Section \ref{density} applies without any change. The fixed point argument done in Section \ref{final} to prove the stabilization of the coupled system \eqref{1-3} also applies without change.\\ (ii) \textbf{Controlling the lateral boundary.} In this case the control zone $\Gamma_{c}$ is an open subset of $\Gamma_{0}.$ In particular, we assume that $\Gamma_{c}\subset\Gamma_{b}$ (where $\Gamma_{b}=(0,d)\times\{0\}\subset \Gamma_{0}$).
Now the inflow and outflow boundaries of the velocity vector $(e^{-\beta t}y+v_{s})$ cannot be characterized by using the notations $\Gamma_{in}$ and $\Gamma_{out}$ (as defined in \eqref{1-2}), since $\Gamma_{c}$ can contain both an inflow part and an outflow part, and one cannot prove a result similar to Corollary \ref{p3.0.2}. More precisely, here we use the following notations for time $t>0,$ \begin{equation}\label{infl} \left\{ \begin{array}{l} \Gamma^{*}_{in,y}(t)=\Gamma_{in}\cup \{x\in\Gamma_{c}\suchthat ( v_s (x)+ e^{-\beta t} y(x,t))\cdot n(x)<0\}\subset \Gamma_{in}\cup\Gamma_{b},\\ \Gamma_{h}=(0,d)\times\{1\}. \end{array}\right. \end{equation} In a similar way as we obtained \eqref{2-2} from \eqref{1-3}, one gets the following system \begin{equation}\label{2-2*} \left\{\begin{array}{ll} \displaystyle \frac{\partial \sigma}{\partial t}+(({v}_s+e^{-\beta t}{y}) \cdot \nabla)\sigma-\beta\sigma=0\quad &\mbox{in} \quad Q_{\infty}, \vspace{1.mm}\\ \sigma (x,t)=0 \quad& \mbox{on} \quad \bigcup_{t\in(0,\infty)}(\Gamma^{*}_{in,y}(t) \times\{t\}), \vspace{1.mm}\\ \sigma (x,0)=\sigma_0\quad&\mbox{in}\quad\Omega,\\[1.mm] \displaystyle \frac{\partial {y}}{\partial t}-\beta {y}-\nu \Delta {y}+ ({v}_s \cdot \nabla){y}+({y} \cdot \nabla){v}_s +\nabla q={f}\quad &\mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ \mbox{div}{y}=0\quad& \mbox{in}\quad Q_{\infty}, \vspace{2.mm}\\ {y}=0\quad& \mbox {on} \quad (\Gamma_{in}\cup \Gamma_{h}\cup \Gamma_{out}) \times (0,\infty),\\[1.mm] {y}=\sum\limits_{j=1}^{N_{c}}{w_{j}}(t){{g}_{j}}(x) \quad& \mbox{on} \quad \ \Gamma_{b} \times (0,\infty), \vspace{1.mm}\\ {y}(x,0)={y}_0\quad&\mbox{in}\quad\Omega. \end{array}\right. \end{equation} One can use arguments similar to the ones in Section \ref{velocity} in order to stabilize $y$ solving \eqref{2-2*}$_{4}$-\eqref{2-2*}$_{8}.$ The functions ${g_{j}}$ can be constructed with compact support in $\Gamma_{b}$ (imitating the construction \eqref{basisU0}), and we can recover the $C^{\infty}$ regularity of the boundary control and the $V^{2,1}(Q_{\infty})$ regularity of $y.$ Hence the flow corresponding to the vector field $(e^{-\beta t}y+v_{s})$ is well defined in the classical sense; consequently, one can adapt the arguments used in Section \ref{density} to prove that $\sigma,$ the solution of \eqref{2-2*}$_{1}$-\eqref{2-2*}$_{3},$ belongs to $L^{\infty}(Q_{\infty})$ and vanishes after some finite time, provided the initial condition $\sigma_{0}$ is supported away from the lateral boundaries and $y$ is small enough. The use of a fixed point argument to prove the stabilizability of the solution of \eqref{1-3} is again a straightforward adaptation of the arguments used in Section \ref{final}. \bibliographystyle{plain}
\section{Experimental Setup} \label{sec:appendix-exp} \subsection{Black Box Models: Training and Performance} For tabular data, we train four models: a logistic regression model, a gradient-boosted tree model (50 estimators), a random forest model (50 estimators), and a densely-connected feed-forward neural network (with 4 hidden layers with ReLU activation consisting of 50, 100, 100, and 50 neurons, respectively). For the COMPAS dataset, we train the four models based on a 70\%-30\% train-test split of the dataset, using features to predict COMPAS risk score group. The test accuracies of the four models are 0.84, 0.83, 0.82, and 0.84, respectively. For the German Credit dataset, we train the same four models based on an 80\%-20\% train-test split of the dataset, using features to predict credit risk group. The test accuracies of the four models are 0.74, 0.69, 0.75, and 0.70, respectively. For text data, we train a widely-used LSTM-based text classifier, based on 120,000 training samples and 7,600 test samples, to predict the news category of the article from which a sentence was obtained. The model achieves 90.67\% test accuracy. The architecture comprises an embedding layer of dimension 300, followed by an LSTM layer of hidden size 256 connected to a four-dimensional output layer. For image data, we use the pre-trained ResNet-18 model \cite{he2016deep} and analyze explanations generated for predictions made to classify images to one of the 1000 classes. This model achieves 69.758\% and 89.078\% on the Accuracy@1 and Accuracy@5 metrics\footnote{\url{https://pytorch.org/vision/stable/models.html}}, respectively. \subsection{Explanation Methods} For tabular data, the perturbation-based explanation methods (LIME and KernelSHAP) were applied to explain all four models, while the gradient-based explanation methods (Vanilla Gradients, Integrated Gradients, Gradient*Input, and SmoothGRAD) were applied to explain the logistic regression and neural network models on samples from the test set (1,482 samples for the COMPAS dataset and 200 samples for the German Credit dataset). Because gradients are not computed for tree-based models, the gradient-based explanation methods were not applied to the random forest and gradient-boosted tree models. When applying explanation methods with a sample size hyperparameter (LIME, KernelSHAP, Integrated Gradients, SmoothGRAD), we performed a convergence check and selected the sample size at which an increase in the number of samples does not significantly change the explanations. Change in explanations is measured by the L2 distance between the feature attributions at the current versus previous sample size; a minimal sketch of this check is given below. For both the COMPAS and German Credit datasets, we used the following number of perturbations/samples/steps for the following explanation methods: LIME (3,000), Integrated Gradients (1,500), SmoothGRAD (1,500). For the COMPAS dataset, when applying KernelSHAP, since the number of features is small, we used a sample size large enough to cover the entire coalition space ($2^7=128$ samples), thereby calculating exact Shapley values. For the German Credit dataset, when applying KernelSHAP, we used 3,000 samples, based on the convergence analysis. For text data, we applied all six explanation methods on the LSTM-based classifier to explain predictions for 7,600 samples in the test set.
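The following is a minimal sketch of the convergence check described above; the \texttt{explainer(x, n)} interface, the candidate sample sizes, and the tolerance are hypothetical choices for illustration, not part of any library API.

\begin{verbatim}
import numpy as np

def converged_sample_size(explainer, x, sizes, tol=1e-3):
    """Return the first sample size at which attributions stabilize.

    explainer(x, n) is assumed to return a 1-D numpy array of feature
    attributions for input x computed with n perturbations/samples/steps
    (hypothetical interface).
    """
    prev = explainer(x, sizes[0])
    for n in sizes[1:]:
        curr = explainer(x, n)
        # L2 distance between attributions at current vs. previous size
        if np.linalg.norm(curr - prev) < tol:
            return n
        prev = curr
    return sizes[-1]  # fall back to the largest size tried
\end{verbatim}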
For LIME and KernelSHAP, we follow the convergence analysis described above and find that attributions do not change significantly beyond 500 perturbations; hence, we use 500 perturbations for LIME and KernelSHAP. Integrated Gradients explanations were generated using 500 steps, which is higher than the recommended number of steps mentioned in \cite{sundararajan2017axiomatic}. SmoothGRAD explanations were generated using 500 samples to get the most confident attribution, which is significantly higher than the recommended number of 50 samples \cite{smilkov2017smoothgrad}. For image data, we applied all six explanation methods on the ResNet-18 model \cite{he2016deep} to explain predictions for the PASCAL VOC 2012 test set of 1,449 samples. Integrated Gradients explanations were generated using 400 steps, significantly higher than the recommendation of 300 \cite{sundararajan2017axiomatic}, to obtain a stable and confident attribution map. Similarly, SmoothGRAD explanations were generated using a sample size of 200, which is also higher than the recommended sample size of 50 \cite{smilkov2017smoothgrad}. For LIME and KernelSHAP, we chose 100 perturbations to train the surrogate model, as we did not notice any significant changes in attributions beyond 50 perturbations. KernelSHAP and LIME were used to compute attributions of super-pixels annotated in PASCAL VOC 2012 segmentation maps. Due to the larger feature space in images compared to the tabular and text datasets, disagreement metrics based on top-$k$ features may not provide a clear picture. Hence, we use rank correlation and cosine distance between attribution maps generated by a pair of explanation methods as the disagreement metrics. A higher cosine distance between attribution maps indicates larger disagreement between explanation methods. \clearpage \section{Results from Empirical Analysis of Disagreement Problem} \label{appendix:results} \subsection{COMPAS Dataset} \label{appendix-compas} \subsubsection{Figure description: metrics measuring agreement among a set of selected features} \label{fig-descrip-allfeatures} Disagreement between explanation methods as measured by rank correlation (left column) and pairwise rank agreement (right column) over test set data points. Both metrics are calculated across all features. Heatmaps show the average metric value and boxplots show the distribution of metric values for each pair of explanation methods. In heatmaps, lighter colors indicate stronger disagreement. Minimum and maximum standard errors are indicated below each heatmap. \subsubsection{Figure description: metrics measuring agreement among top-$k$ features} \label{fig-descrip-varyingk} Disagreement between explanation methods as measured by rank agreement, feature agreement, sign agreement, and signed rank agreement (each row is one metric). By definition, when $k$ equals the full set of features, feature agreement equals one. Heatmaps show the average metric value for each pair of explanation methods, with lighter colors indicating stronger disagreement. Minimum and maximum standard errors are indicated below each heatmap. \clearpage \noindent\textbf{Neural Network} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-nn-a.png} \centering \caption{Disagreement between explanation methods for neural network model trained on COMPAS dataset.
Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-nn-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-nn-c.png} \centering \caption{Disagreement between explanation methods for neural network model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Logistic Regression} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-logistic-a.png} \centering \caption{Disagreement between explanation methods for logistic regression model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-logistic-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-logistic-c.png} \centering \caption{Disagreement between explanation methods for logistic regression model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Random Forest} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-rf-a.png} \centering \caption{Disagreement between explanation methods for random forest model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-rf-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-rf-c.png} \centering \caption{Disagreement between explanation methods for random forest model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Gradient-Boosted Tree} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-gb-a.png} \centering \caption{Disagreement between explanation methods for gradient-boosted tree model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-gb-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-compas-gb-c.png} \centering \caption{Disagreement between explanation methods for gradient-boosted tree model trained on COMPAS dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \subsection{German Credit Dataset} \label{appendix-german} \noindent\textbf{Neural Network} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-nn-a.png} \centering \caption{Disagreement between explanation methods for neural network model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-nn-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-nn-c.png} \centering \caption{Disagreement between explanation methods for neural network model trained on German Credit dataset. 
Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Logistic Regression} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-logistic-a.png} \centering \caption{Disagreement between explanation methods for logistic regression model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-logistic-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-logistic-c.png} \centering \caption{Disagreement between explanation methods for logistic regression model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Random Forest} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-rf-a.png} \centering \caption{Disagreement between explanation methods for random forest model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-rf-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-rf-c.png} \centering \caption{Disagreement between explanation methods for random forest model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \noindent\textbf{Gradient-Boosted Tree} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-gb-a.png} \centering \caption{Disagreement between explanation methods for gradient-boosted tree model trained on German Credit dataset. Figure description in Appendix \ref{fig-descrip-allfeatures}.} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-gb-b.png} \includegraphics[width=\linewidth]{images/tabular_appendix/app-german-gb-c.png} \centering \caption{Disagreement between explanation methods for gradient-boosted tree model trained on German Credit dataset. 
Figure description in Appendix \ref{fig-descrip-varyingk}.} \end{figure} \clearpage \subsection{AG\_News Dataset} \begin{figure}[!h] \includegraphics[width=0.5\linewidth]{images/text_final/nn_overlap_k11_distr.png} \centering \caption{Box plot for feature agreement on AG\_News dataset} \label{text:box2} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{images/text_final/topk_boxplot_by_method_avgprop_nn.png} \centering \caption{Box plot for feature agreement on AG\_News dataset} \label{text:box} \end{figure} \subsection{ImageNet Dataset} \begin{table*}[!ht] \centering \small \begin{tabular}{p{5cm}p{2cm}} \toprule \textbf{Metrics} & \textbf{ResNet-18} \\ \cline{1-2} \textbf{Rank correlation} & 0.8977 \\ \textbf{Pairwise rank agreement} & 0.9302 \\ \textbf{Feature agreement} & 0.9535 \\ \textbf{Rank agreement} & 0.8478 \\ \textbf{Sign agreement} & 0.9218 \\ \textbf{Signed rank agreement} & 0.8193 \\ \bottomrule \end{tabular} \caption{Disagreement on ImageNet between LIME and KernelSHAP} \label{tab:imagenet} \end{table*} \begin{figure}[!htb] \begin{minipage}{0.7\textwidth} \centering \includegraphics[width=\linewidth]{images/vision/cosine_nn_k5.png} \caption{Rank correlation for explanations computed at pixel level by gradient-based explanation methods}\label{vision:cosine} \end{minipage} \end{figure} \section{Omitted Details from Section~\ref{section-user-study}} \subsection{Screenshots of UI}\label{sec:appendix-ui} In Figures \ref{fig:appendix-screenshot-1} and \ref{fig:appendix-screenshot-2}, we present screenshots of the UI that participants are presented with before beginning the study. The purpose of this introduction page is to familiarize the participants with the COMPAS prediction setting, the six explainability methods we use, and the explainability plots we show in each of the prompts. \begin{figure}[H] \centering \includegraphics[width=0.95\linewidth]{images/study/ui_intro.png} \caption{This is a screenshot of the first half of the introductory page, describing our COMPAS risk score prediction setting and briefly summarizing the six explainability algorithms used (with links to their corresponding papers for the interested participant).} \label{fig:appendix-screenshot-1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.95\linewidth]{images/study/ui_yourtask.png} \caption{This is a screenshot of the second half of the introductory page, describing the concrete task and an explanation of what is shown in the explainability plots.} \label{fig:appendix-screenshot-2} \end{figure} \subsection{Prompts Used} In this section, we share the 15 prompts that we showed users. Each prompt highlights a pair of different explainability algorithms on a COMPAS data point. For each pair, we chose the data point from the entire COMPAS set that maximized the rank correlation between the explanations.
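To make this selection criterion concrete, the following is a minimal sketch of how such a prompt point can be picked; the array names and shapes are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def pick_prompt_point(attrs_a, attrs_b):
    """Index of the data point whose two explanations have the highest
    rank correlation.

    attrs_a, attrs_b: (n_points, n_features) arrays of feature
    attributions produced by two explanation methods (hypothetical names).
    """
    corrs = [spearmanr(a, b)[0] for a, b in zip(attrs_a, attrs_b)]
    return int(np.argmax(corrs))
\end{verbatim}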
\begin{figure} \includegraphics[width=.49\textwidth]{images/study/compas_Gradient_Gradient_Input.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_Gradient_Input_IntegratedGradients.png}\hfill \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_Gradient_Input_SmoothGRAD.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_Gradient_IntegratedGradients.png} \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_Gradient_SmoothGRAD.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_IntegratedGradients_SmoothGRAD.png}\hfill \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_KernelSHAP_Gradient_Input.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_KernelSHAP_Gradient.png} \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_KernelSHAP_IntegratedGradients.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_KernelSHAP_SmoothGRAD.png}\hfill \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_LIME_Gradient_Input.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_LIME_Gradient.png} \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_LIME_IntegratedGradients.png}\hfill \includegraphics[width=.49\textwidth]{images/study/compas_LIME_KernelSHAP.png}\hfill \\[\smallskipamount] \includegraphics[width=.49\textwidth]{images/study/compas_LIME_SmoothGRAD.png}\hfill \caption{Images showing the 15 prompts we used. Each prompt shows the explanation of the same input point with two different interpretability algorithms.}\label{fig:ui-prompts} \end{figure} \subsection{User Study Questions}\label{sec:appendix-questions} In each of the five prompts, we asked participants the following questions, which we refer to as \textit{Set 1}. Questions 3-4 were only shown if the user selected \textit{Mostly agree}, \textit{Mostly disagree}, or \textit{Completely disagree} to Question (1). \begin{enumerate} \item To what extent do you think the two explanations shown above agree or disagree with each other? (choice between \textit{Completely agree, Mostly agree, Mostly disagree, Completely disagree}) \item Please explain why you chose the above answer. \item Since you believe that the above explanations disagree (to some extent), which explanation would you rely on? (choice between \textit{Algorithm 1 explanation, Algorithm 2 explanation, It depends}) \item Please explain why you chose the above answer. \end{enumerate} \noindent After answering all five prompts, the user was then asked the following set of questions, which we refer to as \textit{Set 2}. Questions 4-9 were only shown if the user selected \textit{Yes} to Question 3. \begin{enumerate} \item (Optional) What is your name? \item What is your occupation? (eg: PhD student, software engineer, etc.) \item Have you used explainability methods in your work before? (\textit{Yes/No}) \item What do you use explainability methods for? \item Which data modalities do you run explainability algorithms on in your day to day workflow? (eg: tabular data, images, language, audio, etc.) \item Which explainability methods do you use in your day to day workflow? (eg: LIME, KernelSHAP, SmoothGrad, etc.) \item Which methods do you prefer, and why? \item Do you observe disagreements between explanations output by state of the art methods in your day to day workflow? 
\item How do you resolve such disagreements in your day to day workflow? \end{enumerate} \subsection{Further analysis of overall agreement levels} \label{sec:appendix-agreement} In this section, we present further plots analyzing responses to Question (1) in Set 1. As shown in Figure \ref{fig:overall-agreement}, only 32\% of responses were \textit{Mostly agree/Completely agree} and 68\% were \textit{Mostly disagree/Completely disagree}, indicating that participants experienced the disagreement problem. We also grouped the responses by prompt, shown in Figure \ref{fig:prompt-agreement}, highlighting that different pairs of algorithms can have different levels of disagreement. We removed prompts with fewer than 4 total responses. We see that there are varying levels of disagreement among prompts. For example, all participants who were shown the Gradient vs. SmoothGrad prompt believed they agreed to some extent, while all participants who were shown the Gradient vs. Integrated Gradients prompt believed they disagreed to some extent. \begin{figure}[ht!] \centering \subfloat[This figure shows the distribution of responses in aggregation over all prompts. The x-axis shows the four possible responses, and the y-axis shows the number of times that response was chosen. Observe that in 68\% of cases, participants indicated that the explanations shown in the prompts mostly or completely disagreed.]{\label{fig:overall-agreement} \centering \includegraphics[width=0.45\linewidth]{images/study/overall_agreement.png} } \hfill \subfloat[This figure shows the distribution of responses, sorted by prompt. The y-axis shows the pair of explainability algorithms shown in the prompt, and the x-axis shows the frequency that each response was chosen.]{\label{fig:prompt-agreement} \centering \includegraphics[width=0.45\linewidth]{images/study/prompt_agreement.png} } \caption{These figures show the distribution of answers to Question (1) in Set (1) from Section \ref{sec:appendix-agreement} in aggregation over all participants.} \end{figure} \subsection{Further analysis of reasons participants chose specific algorithms}\label{sec:appendix-reasons-algorithms} In this section, we analyze the responses to Set 1, Question (3) in Section \ref{sec:appendix-questions}. We saw in Section \ref{subsubsection:q2} that algorithms such as KernelSHAP were favored over other algorithms. In Table \ref{table:method-reasons}, we list the top reasons the four most frequently chosen algorithms were preferred, showcasing direct quotes from participants.
\begin{table*}[htp] \caption{Reasons participants chose the top four most favored explainability algorithms (KernelSHAP, SmoothGrad, LIME, and Integrated Gradients) over others when explanations disagreed.} \centering \begin{tabular}{ |p{0.25\textwidth}|p{0.67\textwidth}| } \hline Algorithm & Reasons that algorithm was chosen in disagreement \\ \hline \multirow{3}{10em}{\textbf{KernelSHAP}} &\textbullet \, [36\%] SHAP is better for tabular data (\textit{"SHAP is more commonly used [than Gradient] for tabular data"}) \\ &\textbullet \, [25\%] SHAP is more familiar (\textit{"More information present + more familiarity"}) \\ &\textbullet \, [14\%] SHAP is a better algorithm overall (\textit{"SHAP seems more methodical than LIME"}, \textit{"SHAP is a more rigorous approach [than LIME] in theory"})\\ \hline \multirow{2}{10em}{\textbf{SmoothGrad}} &\textbullet \, [33\%] SmoothGrad paper is newer or better (\textit{"SmoothGrad is apparently more robust", "SmoothGrad is often considered improved verison of grad"}) \\ &\textbullet \, [58\%] Reasons based on the explainability map shown (\textit{"directionality of the attributions ... [agree] with intuition"}, \textit{"gradient has unstability problems [, so] smoothgrad"}) \\ \hline \multirow{2}{10em}{\textbf{LIME}} &\textbullet \, [54\%] LIME is better for tabular data (\textit{"I use LIME for structured data."}) \\ &\textbullet \, [15\%] LIME is more familiar/easier to interpret (\textit{"I am more familiar with LIME"}, \textit{"LIME is easy to interpret"}) \\ \hline \multirow{1}{10em}{\textbf{Integrated Gradients}} &\textbullet \, [86\%] Integrated Gradients paper is better (\textit{"IG came after gradients and paper shows improvements"}, \textit{"integrated gradients paper showed improvements [over Gradient $\times$ Input]"} \\ \hline \end{tabular} \label{table:method-reasons} \end{table*} \subsection{Analysis of reasons participants chose neither algorithm} In this section, we analyze the responses to Set 1, Question (4) in Section \ref{sec:appendix-questions}, focusing on when participants selected \textit{``It depends''} in Question (3), which was chosen in 38\% of cases. Again, we present an overarching summary of the reasons participants made this decision in Table \ref{table:method-reasons-neither}. \begin{table*}[tph] \caption{Reasons people answered \textit{"It depends"} after being asked to choose between disagreements} \centering \begin{tabular}{ |p{0.25\textwidth}|p{0.67\textwidth}| } \hline Rationale & Representative Quote \\ \hline \textbf{1. Need more information} &\textbullet \, \textit{"need to see the final prediction of the model and the feature values"} \\ \hline \textbf{2. Pick neither explanation} &\textbullet \, \textit{"No compelling reason to choose one over the other. Both don't align with intuition."} \\ \hline \textbf{3. Unsure/Don't know} &\textbullet \, \textit{"I'm not sure which of the two methods is more trustworthy"} \\ \hline \textbf{4. Would consult an expert} &\textbullet \, \textit{"I would ask a domain expert for his/her opinion"} \\ \hline \textbf{5. Combine explanations} &\textbullet \, \textit{"I would combine both -- note that age might be doing weird things, but that length of stay and race both contribute to a negative prediction"} \\ \hline \textbf{6. 
Depends on use case} &\textbullet \, \textit{"The two methods have different interpretations - it depends on if I'm more interested in comparing my explanation to some baseline individual state versus just interested in understanding the immediate local behavior"} \\ \hline \end{tabular} \label{table:method-reasons-neither} \end{table*} \subsection{Further analysis of concluding questionnaire}\label{sec:appendix-practice} In this section, we extend the analysis presented in Section \ref{subsubsection:q3}, analyzing the responses to questions in Set 2 of Section \ref{sec:appendix-questions}. As stated in Section \ref{subsubsection:q3}, we received a total of 20 positive responses to Question (3), but one declined to answer Questions (4) through (9). Therefore, we analyze the remaining 19 responses. In Question (4), we found that study participants use explainability methods for a variety of reasons, such as understanding models, debugging models, explaining models to clients, and research. In Question (5), we found that 16 of 19 participants employed explanations for tabular data, 6 of 19 participants for text and language data, 11 of 19 participants for image data, and 1 of 19 for audio data. In Question (6), we found that 14 of 19 participants used LIME, 14 of 19 participants used SHAP, and 13 of 19 participants used some sort of gradient-based methods. Participants also indicated using methods like GradCAM, dimensionality reduction, MAPLE, and rule-based methods. In Question (7), 9 of 19 participants stated that they preferred both LIME and SHAP, with another 3 of 19 participants stating LIME only. We showcase some intriguing answers from Question (7) below: \begin{itemize} \item ``LIME and SHAP seem to be the most universally applicable and I can understand.'' \item ``Methods with underlying theoretical justifications such as KernelSHAP and Integrated Gradients'' \item ``lime and shap ... [easy to implement] and can work with black box'' \item ``shap and lime because ... [they are] easy to understand and have standard implementations'' \item ``LIME, because everything else isn't necessarily capturing what I actually want to know about the local behavior'' \end{itemize} Finally, we provide additional quotes highlighting the responses to Questions (8) and (9), which were briefly analyzed in Section \ref{subsubsection:q3}. These are shown in Table \ref{table:final-reasons-overall}. \begin{table*}[htp] \caption{Representative quotes highlighting themes of how participants address the disagreement problem in their day to day work} \centering \begin{tabular}{ |p{0.20\textwidth}|p{0.74\textwidth}| } \hline Category of Response & Sample Quotes \\ \hline \multirow{3}{0.20\textwidth}{\textbf{1. Make arbitrary decisions (50\%).}} &\textbullet \, \textit{"Such disagreements are resolved by data scientists picking their favorite algorithm"} \\ &\textbullet \, \textit{"I try to use rules of thumb based on results in research papers and/or easy to understand outputs."}\\ &\textbullet \, \textit{"I favor lime and shap because there is well documented packages on github"}\\ \hline \multirow{2}{0.20\textwidth}{\textbf{2. Unsure/Don't know/Don't resolve (36\%)}} &\textbullet \, \textit{"there is no clear answer to me. I hope research community can provide some guidance"} \\ &\textbullet \, \textit{"unfortunately there is no good answer at my end ... I hope you can help me with finding an answer"}\\ \hline \multirow{2}{0.20\textwidth}{\textbf{3. Use other metrics (fidelity) (14\%).
}} &\textbullet \, \textit{"By quantitative assessment of feature importance methods that assess specific properties like faithfulness"} \\ &\textbullet \, \textit{"I might try and use some metric to measure fidelity."}\\ \hline \end{tabular} \label{table:final-reasons-overall} \end{table*} \section{Conclusions and Discussion} We introduced and studied the \emph{disagreement problem} in explainable ML. More specifically, we formalized the notion of disagreement between explanations, and analyzed how often such disagreements occur in practice and how practitioners resolve them. We conducted interviews with data scientists to understand what constitutes disagreement between explanations, and introduced a novel quantitative framework to formalize this understanding. We then leveraged this framework to carry out a rigorous empirical analysis with four real-world datasets, six state-of-the-art post hoc explanation methods, and eight different predictive models, to measure the extent of disagreement between the explanations generated by various popular explanation methods. We also carried out an online user study with data scientists to understand how they resolve explanation disagreements. Our results indicate that state-of-the-art explanation methods often disagree in terms of the explanations they output, and worse yet, there does not seem to be any principled approach that ML practitioners employ to resolve these disagreements. For instance, 84\% of our interview participants reported encountering the disagreement problem in their day-to-day workflow. Our empirical analysis with real world data further confirmed that explanations generated by state-of-the-art methods often disagree with each other, and that this phenomenon persists across various models and data modalities. Furthermore, 86\% of our online user study responses indicated that ML practitioners either employed arbitrary heuristics (e.g., choosing a favorite method) or simply did not know how to resolve the disagreement problem. We shed light on a unique problem that poses a critical challenge to adopting post hoc explanations in practice and pave the way for several interesting future research directions. First, it would be interesting to systematically study the reasons behind the occurrence of the explanation disagreement problem. Second, it would be interesting to propose novel approaches to address this problem. One way to do this is to come up with principled evaluation metrics which can help practitioners readily discern a reliable explanation from an unreliable one when there is a disagreement. Third, it would also be interesting to rethink the problem of explaining ML from scratch, and potentially develop a whole new set of algorithms that are built on a common set of guiding principles to avoid these kinds of disagreements. Lastly, it would also be incredibly important to regularly educate data scientists and practitioners about state-of-the-art approaches (e.g., novel evaluation metrics) that can be used to resolve disagreements between explanations. \section{Empirical Analysis of Explanation Disagreement} \label{sec:quantitative} We leverage the metrics outlined in Section~\ref{metrics-descrip} and carry out a comprehensive empirical analysis with six state-of-the-art explanation methods and four real-world datasets to study the explanation disagreement problem.
In this section, we describe the datasets that we use (Section~\ref{sec:exp-datasets}), our experimental setup (Section~\ref{sec:exp-implementations}), and key findings (Section~\ref{sec:exp-results}). \subsection{Datasets} \label{sec:exp-datasets} To carry out our empirical analysis, we leverage four well-known datasets spanning three different data modalities (tabular, text, and images). For \textbf{tabular} data, we use the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) dataset \cite{compas} and the German Credit dataset \cite{german}. The COMPAS dataset comprises seven features capturing information about the demographics, criminal history, and prison time of 4,937 defendants. Each defendant in the data is labeled either as high-risk or low-risk for recidivism based on the COMPAS algorithm's risk score. The German Credit dataset contains twenty features capturing the demographics, credit history, bank account balance, loan information, and employment information of 1,000 loan applicants. The class label here is a loan applicant's credit risk (high or low). For \textbf{text} data, we use Antonio Gulli (AG)'s corpus of news articles (AG\_News)~\cite{ag}. The dataset contains 127,600 sentences (collected from 1,000,000+ articles from 2,000+ sources with a vocabulary size of 95,000+ words). The class label is the topic of the article from which a sentence was obtained (World, Sports, Business, or Science/Technology). For \textbf{image} data, we use the ImageNet-$1k$~\cite{ILSVRC15, imagenet} object recognition dataset. It contains 1,381,167 images belonging to 1000 object categories. We experiment with images from PASCAL VOC 2012~\cite{voc}, which provides segmentation maps that can be directly used as super-pixels for the explanation methods. \subsection{Experimental Setup} \label{sec:exp-implementations} We train a variety of black box models on the data. In the case of tabular data, we train four models: logistic regression, densely-connected feed-forward neural network, random forest, and gradient boosted trees. In the case of text data, we train a widely-used vanilla LSTM-based text classifier on the AG\_News corpus~\cite{zhang2015character}. For image data, we use the pre-trained ResNet-18~\cite{he2016deep} for ImageNet. Next, we apply six state-of-the-art post hoc explanation methods to explain the black box models' predictions for a set of test data points. We apply two perturbation-based explanation methods (LIME \cite{ribeiro16:naacl-demo} and KernelSHAP \cite{lundberg2017unified}), and four gradient-based explanation methods (Vanilla Gradient~\cite{simonyan2013saliency}, Gradient*Input~\cite{Shrikumar2016NotJA}, Integrated Gradients~\cite{sundararajan2017axiomatic}, and SmoothGrad~\cite{smilkov2017smoothgrad}). In the case of explanation methods with a sample size hyper-parameter, we either run the explanation method to convergence (i.e., select a sample size such that an increase in the number of samples does not significantly change the explanations) or use a sample size that is much higher than the sample size recommended by previous work. We then evaluate the (dis)agreement between the explanation methods using the metrics described in Section~\ref{metrics-descrip}. For tabular and text data, we apply rank correlation and pairwise rank agreement across all features; and feature agreement, rank agreement, sign agreement, and signed rank agreement across top-$k$ features for varying values of $k$; a sketch of these top-$k$ metrics is given below.
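For concreteness, the following is a minimal sketch of how the top-$k$ agreement metrics can be computed from a pair of attribution vectors; the function names are our own, and ties in absolute attribution are assumed to be broken arbitrarily.

\begin{verbatim}
import numpy as np

def top_k(attrs, k):
    # Indices of the k features with largest |attribution|,
    # in decreasing order of importance.
    return np.argsort(-np.abs(attrs))[:k]

def feature_agreement(a, b, k):
    # Fraction of top-k features shared by the two explanations.
    return len(set(top_k(a, k)) & set(top_k(b, k))) / k

def rank_agreement(a, b, k):
    # Fraction of top-k features appearing at the same rank in both.
    return float(np.mean(top_k(a, k) == top_k(b, k)))

def sign_agreement(a, b, k):
    # Fraction of shared top-k features with matching attribution sign.
    shared = set(top_k(a, k)) & set(top_k(b, k))
    return sum(np.sign(a[i]) == np.sign(b[i]) for i in shared) / k

def signed_rank_agreement(a, b, k):
    # Fraction of top-k features with matching rank and sign.
    ta, tb = top_k(a, k), top_k(b, k)
    same = [(ta[i] == tb[i]) and (np.sign(a[ta[i]]) == np.sign(b[tb[i]]))
            for i in range(k)]
    return sum(same) / k
\end{verbatim}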
For image data, metrics that operate on the top-$k$ features are more applicable to super-pixels. Thus, we apply the six disagreement metrics to explanations output by LIME and KernelSHAP (which leverage super-pixels), and calculate rank correlation (across all pixels as features) between the explanations output by gradient-based methods. See Appendix~\ref{sec:appendix-exp} for details. \subsection{Results and Insights} \label{sec:exp-results} We discuss the results of our empirical analysis for each of the three data modalities. \begin{figure}[ht!] \includegraphics[width=\textwidth]{images/tabular_final/result1.png} \centering \caption{Disagreement between explanation methods for the neural network model trained on the COMPAS dataset, measured by six metrics: rank correlation and pairwise rank agreement across all features, and feature, rank, sign, and signed rank agreement across the top $k=5$ features. Heatmaps show the average metric value over test set data points for each pair of explanation methods, with lighter colors indicating stronger disagreement. Across all six heatmaps, the standard error ranges between 0 and 0.009. } \label{mainfig-compas-constantk} \end{figure} \begin{figure}[ht!] \includegraphics[width=\textwidth]{images/tabular_final/compas-varyingk-2metrics-no-se.png} \centering \caption{Disagreement between explanation methods for the neural network model trained on the COMPAS dataset, measured by rank agreement (top row) and signed rank agreement (bottom row) at the top-$k$ features for increasing values of $k$. Each cell in the heatmap shows the metric value averaged over test set data points for each pair of explanation methods, with lighter colors indicating stronger disagreement. Across all six heatmaps, the standard error ranges between 0 and 0.003. } \label{mainfig-compas-varyingk} \end{figure} \subsubsection{\textbf{Tabular Data}} Figure~\ref{mainfig-compas-constantk} shows the disagreement between various pairs of explanation methods for the neural network model trained on the COMPAS dataset. We computed the six metrics outlined in Section~\ref{metrics-descrip}, using $k=5$ (out of 7 features) for the metrics that focus on the top features. Each cell in the heatmap shows the metric value averaged over the test data points for each pair of explanation methods, with lighter colors indicating more disagreement. We see that explanation methods tend to exhibit slightly higher values on the pairwise rank agreement and feature agreement metrics, and relatively lower values on the other metrics (indicating more disagreement). We next study the effect of the number of top features on the degree of disagreement. Figure \ref{mainfig-compas-varyingk} shows the disagreement between explanation methods for the neural network model trained on the COMPAS dataset. We computed rank agreement (top row) and signed rank agreement (bottom row) at the top-$k$ features for increasing values of $k$. We see that as the number of top-$k$ features increases, rank agreement and signed rank agreement decrease. This indicates that, as $k$ increases, the top-$k$ features of a pair of explanation methods are less likely to contain shared features with the same rank (as measured by rank agreement) or shared features with the same rank and sign (as measured by signed rank agreement). These patterns are consistent across other models trained on the COMPAS dataset; see Appendix \ref{appendix-compas}.
In addition, across all metrics, values of $k$, and models, the specific explanation method pairs of Grad-SmoothGrad and Grad*Input-IntGrad consistently exhibit strong agreement, while the pairs of Grad-IntGrad, Grad-Grad*Input, SmoothGrad-Grad*Input, and SmoothGrad-IntGrad consistently exhibit strong disagreement. This suggests a dichotomy among gradient-based explanation methods, i.e., certain gradient-based explanation methods are consistent with one another while others are inconsistent with one another. See Appendix \ref{appendix-compas} for more details. Furthermore, there are varying degrees of disagreement among pairs of explanation methods. For example, for the neural network model trained on the COMPAS dataset, rank correlation displays a wide range of values across explanation method pairs, with 10 out of 15 explanation method pairs even exhibiting negative rank correlation when explaining multiple data points. This is shown in the left panel of Figure \ref{text_tabular:boxplot}, which displays the rank correlation over all the features among all pairs of explanation methods for the neural network model trained on the COMPAS dataset. All the patterns discussed above are also generally reflected in the German Credit dataset (Appendix \ref{appendix-german}). However, explanation methods tend to display stronger disagreement for the German Credit dataset than for the COMPAS dataset. For example, rank agreement and signed rank agreement are lower for the German Credit dataset than for the COMPAS dataset at the top 25\%, 50\%, 75\%, and 100\% of features for both the logistic regression and neural network models (Appendices \ref{appendix-compas} and \ref{appendix-german}). One possible reason is that the German Credit dataset has a larger set of features than the COMPAS dataset, resulting in a larger number of possible ranking and sign combinations assigned by a given explanation method and making it less likely for two explanation methods to produce consistent explanations. Lastly, explanation methods display trends associated with model complexity. For example, the disagreement between explanation methods is as strong or stronger for the neural network model as for the logistic regression model across metrics and values of $k$, for both the COMPAS and German Credit datasets (Appendices \ref{appendix-compas} and \ref{appendix-german}). In addition, explanation methods show similar levels of disagreement for the random forest and gradient-boosted tree models. These trends suggest that disagreement among explanation methods may increase with model complexity. As the complexity of the black box model increases, it may be more difficult to accurately approximate the black box model with a simpler model (LIME's strategy, for example) and more difficult to disentangle the contribution of each feature to the model's prediction. Thus, the higher the model complexity, the more difficult it may be for different explanation methods to recover the true explanation, and the more likely it may be for different explanation methods to generate explanations that are incorrect in different ways, leading to stronger disagreement among explanation methods. \begin{figure}[ht!]
\includegraphics[width=\textwidth]{images/tabular_final/result2.png} \centering \caption{Distribution of rank correlation over all features for the neural network model trained on COMPAS (left), and rank correlation across all features (middle) and signed rank agreement across the top-$11$ features (right) for the neural network model trained on AG\_News.} \label{text_tabular:boxplot} \end{figure} \subsubsection{\textbf{Text Data}} \begin{figure}[ht!] \includegraphics[width=\textwidth]{images/text_final/combined.png} \centering \caption{Disagreement between explanation methods for the LSTM model trained on the AG\_News dataset, using $k=11$ features for metrics operating on the top-$k$ features, and all features for the other metrics. Each heatmap shows the metric value averaged over test data for each pair of explanation methods. Lighter colors indicate more disagreement. The standard error ranges from 0.0 to 0.0025 for all six metrics. } \label{text:heatmap} \end{figure} In the case of text data, we deal with a high-dimensional feature space where words are features. We plot the six metrics for $k = 11$, which is approximately 25\% of the average text length of a sentence (data point) in the dataset (Figure \ref{text:heatmap}). As can be seen, we observe severe disagreements across all six disagreement metrics. Rank agreement and signed rank agreement are the lowest among the metrics, with values under 0.1 in most cases, indicating disagreement on over 90\% of the top-$k$ features. Trends are quite similar for rank correlation and feature agreement, with better agreement between gradient-based explanation methods, such as a feature agreement of 0.61 between Grad*Input and the Gradient method. \par Beyond the overall disagreement, we also notice specific patterns of agreement between groups of explanation methods. Based on the middle and right panels of Figure \ref{text_tabular:boxplot}, we notice that there is a high rank correlation between pairs of gradient-based explanations. Although Integrated Gradients has the lowest correlation with the rest of the gradient methods, this correlation is still significantly higher than its correlation with KernelSHAP and LIME. We also notice that disagreement is lower between LIME and other explanation methods than between KernelSHAP and other methods (e.g., rank correlation of 0.4--0.6 for LIME as opposed to 0.2--0.4 for KernelSHAP; see Appendix~\ref{appendix:results} for other metrics). Finally, we see higher disagreement between explanation methods for text than for tabular data, which suggests that disagreement may worsen as the number of features increases. The agreement pattern between LIME and other methods mirrors what we observed earlier in our experiments with tabular datasets, indicating that LIME explanations are the most aligned with those of other post hoc explanation methods.
\subsubsection{\textbf{Image Data}} While LIME and KernelSHAP consider super-pixels of images as features, gradient-based methods consider pixels as features. Furthermore, the notion of top-$k$ features and the metrics we define on top-$k$ features are not semantically meaningful when we consider pixels as features. Given this, we compute all six metrics to capture disagreement between explanations output by LIME and KernelSHAP with super-pixels as features. We also compute rank correlation on all the pixels (features) to capture disagreement between explanations output by gradient-based methods. Unlike the earlier trends with tabular and text data, we see higher agreement between KernelSHAP and LIME on all six metrics: rank correlation of 0.8977, pairwise rank agreement of 0.9302, feature agreement of 0.9535, rank agreement of 0.8478, sign agreement of 0.9218, and signed rank agreement of 0.8193. However, the trends are quite the opposite when we compute rank correlation at the pixel level for gradient-based methods (see Appendix \ref{appendix:results}). For instance, the rank correlation between Integrated Gradients and SmoothGrad is 0.001 (indicating high disagreement). The disagreement is similarly quite high for other pairs of gradient-based methods. This suggests that disagreement could vary significantly based on the granularity of the image representation. \section{Introduction} As machine learning (ML) models are increasingly being deployed to make consequential decisions in domains such as healthcare, finance, and policy, there is a growing emphasis on ensuring that these models are readily interpretable to ML practitioners and other domain experts (e.g., doctors, policy makers). In order to assess when and how much to rely on these models, and to detect systematic errors and potential biases in them, practitioners often seek to understand the behavior of these models~\cite{doshi2017towards}. However, the increasing complexity as well as the proprietary nature of predictive models make it challenging to understand these complex black boxes, and thus motivate the need for tools and techniques that can explain them in a faithful and human-interpretable manner. To this end, several techniques have been proposed in recent literature to explain complex models in a \emph{post hoc} fashion~\cite{ribeiro16:model-agnostic,lundberg2017unified,simonyan2013saliency, sundararajan2017axiomatic, selvaraju2017grad,smilkov2017smoothgrad}. Most of the popular \emph{post hoc explanation methods} focus on explaining individual predictions (i.e., local explanations) of any given model, and can be broadly categorized into \emph{perturbation-based} (e.g., LIME, SHAP~\cite{ribeiro2016should,lundberg2017unified}) and \emph{gradient-based} (e.g., Gradient*Input, SmoothGrad, Integrated Gradients, GradCAM~\cite{simonyan2013saliency, sundararajan2017axiomatic, selvaraju2017grad,smilkov2017smoothgrad}) methods.
Owing to their generality, post hoc explanation methods are increasingly being utilized to explain a number of complex models in high-stakes domains such as medicine, finance, law, and science~\cite{elshawi2019interpretability,ibrahim2019global,whitmore2016mapping}. Therefore, it becomes critical to ensure that the explanations generated by these methods are reliable. To this end, prior works~\cite{liu2021synthetic,petsiuk2018rise,slack2021reliable,zhou2021evaluating} proposed various evaluation metrics to quantify how \emph{faithfully} or \emph{accurately} a given explanation mimics the behavior of the underlying model. However, one of the biggest drawbacks of these metrics is that they are not general enough to be applicable to all model classes and real-world settings. For example,~\citet{liu2021synthetic} evaluate the fidelity (faithfulness) of post hoc explanations by comparing them with the ground truth (e.g., true feature importances) of the underlying model. However, such ground truth is typically unavailable in most real-world applications where post hoc explanations are employed to understand complex black boxes~\cite{zhou2021evaluating}.~\citet{hooker2018evaluating} proposed ``Remove and Retrain'' (ROAR), which measures the fidelity of an explanation by retraining the underlying model with and without the features deemed most important by the explanation. However, post hoc explanations are often employed in settings where there is no access to the underlying model, let alone the ability to retrain it. Prior work also leveraged some of the aforementioned metrics to analyze the behavior of post hoc explanation methods and their vulnerabilities -- e.g.,~\citet{ghorbani2019interpretation} and~\citet{slack2019can} demonstrated that methods such as LIME and SHAP may result in explanations that are not only inconsistent and unstable, but also prone to adversarial attacks and fair washing~\cite{aivodji2019fairwashing}. While prior research has already taken the first steps towards analyzing the behavior of explanation methods, several critical aspects pertaining to these methods still remain unexplored. For instance, data scientists and ML practitioners do not typically rely on a single explanation method, but instead employ multiple such methods simultaneously to understand the rationale behind individual model predictions~\cite{kaur2020interpreting}. While ML practitioners can obtain a coherent understanding of model behavior if multiple methods generate consistent explanations, this may not always be the case. There may be instances for which explanations generated by various methods disagree with each other -- e.g., the top-$k$ most important features output by different methods may differ. When faced with such a \emph{disagreement problem}, practitioners need to decide which explanation to rely on. The extent to which this disagreement problem occurs in practice is unclear, because there is little to no research on understanding how often explanations produced by state-of-the-art methods disagree with each other. Furthermore, if and when the disagreement problem occurs, practitioners need to tackle it carefully, as they may otherwise end up relying on misleading explanations, which may in turn lead to catastrophic consequences -- e.g., trusting and deploying racially biased models, trusting incorrect model predictions and recommending sub-optimal treatments to patients, etc.~\cite{slack2019can}.
However, the lack of reliable, general-purpose evaluation metrics (as discussed in the previous paragraph) which can help ascertain and compare the quality of explanations may pose a serious challenge to addressing the disagreement problem in practice. Given all the above, it is critical to not only understand and quantify how often explanations output by state-of-the-art methods disagree with each other, but also study how such disagreements are currently being resolved by ML practitioners. However, there is no existing work that focuses on these important aspects. We address the aforementioned gaps by introducing and studying the \emph{disagreement problem} in explainable ML. To the best of our knowledge, this work is the first to highlight the disagreement problem, determine the extent to which it occurs in the real world, and understand how it is being resolved in practice. We make the following key contributions: a) We first obtain practitioner inputs on what constitutes explanation disagreement, and the extent to which they encounter this problem in their day-to-day workflow. To this end, we conduct semi-structured interviews\footnote{All the user interviews and studies in this work were approved by our institution's IRB.} with data scientists (N = 25) who regularly work with explainability tools. Note that our work focuses on local explanation methods which output feature attributions, such as LIME, SHAP, and gradient-based methods. b) Using the insights obtained from the aforementioned interviews, we formalize the notion of explanation disagreement, and propose a novel evaluation framework which can quantitatively measure the disagreement between any two explanations that explain the same model prediction. c) We leverage the aforementioned framework to carry out a rigorous empirical analysis with real-world data to quantify the level of disagreement between popular post hoc explanations. We experiment with four real-world datasets, six state-of-the-art explanation methods, and various popular predictive models (e.g., logistic regression, tree-based models, deep neural networks, recurrent neural networks such as LSTMs, and convolutional neural networks such as ResNets). d) Lastly, we study how explanation disagreements are resolved in practice. We carry out an online user study with data scientists (N = 24) where we show them pairs of explanations that disagree with each other, and ask them which explanation (if any) they would rely on and why. At the end of this survey, we also ask participants to provide a high-level description of the strategies they use to resolve explanation disagreements in their day-to-day workflow. \\ Results from our empirical analysis, user interviews, and studies indicate that state-of-the-art explanation methods often disagree in terms of the explanations they output, and worse yet, there do not seem to be any principled, well-established approaches that ML practitioners employ to resolve these disagreements. More specifically, 84\% of our interview participants reported encountering the disagreement problem in their day-to-day workflow. Our empirical analysis further confirmed that explanations generated by state-of-the-art methods often disagree with each other, and that this phenomenon persists across various model classes and data modalities.
Furthermore, 86\% of our online user study responses indicated that ML practitioners either employed arbitrary heuristics (e.g., choosing a favorite method) or simply did not know how to resolve the disagreement problem. Our findings not only shed light on the previously unexplored disagreement problem, but also underscore the importance of developing principled evaluation metrics to effectively compare explanations, and of educating practitioners about them. \section{Understanding and Measuring Disagreement between Model Explanations} \label{sec:disagree} In this section, we discuss practitioner perspectives on what constitutes disagreement between two explanations, and then formalize the notion of explanation disagreement. To this end, we first describe the study that we carry out with data scientists to understand what constitutes explanation disagreement, and the extent to which they encounter this problem in practice. We then discuss the insights from this study, and leverage these insights to propose a novel framework which can quantitatively measure the disagreement between any two explanations. \subsection{Characterizing Explanation Disagreement Using Practitioner Inputs}\label{sec:survey-characterize} Here, we describe the study that we conducted with data scientists to characterize explanation disagreement, and then outline our findings and insights from this study. \subsubsection{\textbf{Interviews with Practitioners: Study Details}} We conducted 30-minute semi-structured interviews with 25 data scientists who employ explainability techniques to understand model behavior and explain it to their customers and managers. All of these data scientists were employed in for-profit organizations, and worked for various companies in the technology and financial services sectors in the United States. Furthermore, all the participants used state-of-the-art (local) post hoc explanation methods such as LIME, SHAP, and gradient-based methods in their day-to-day workflow. 19 of these participants (76\%) were male, and 6 of them (24\%) were female. 16 participants (64\%) had more than 2 years of experience working with explainability techniques, and the remaining 9 (36\%) had about 8 to 12 months of experience. Our interviews included, but were not limited to, the following questions: Q1) \emph{How often do you use multiple explanation methods to understand the same model prediction?} Q2) \emph{What constitutes disagreement between two explanations that explain the same model prediction?} Q3) \emph{How often do you encounter disagreements between explanations output by different methods for the same model prediction?} \subsubsection{\textbf{Findings and Insights}} Our study revealed a wealth of information about how data scientists utilize explanation methods and their perspectives on disagreement between explanations. 22 out of the 25 participants (88\%) said that they almost always use multiple explanation methods to understand the same model prediction. Furthermore, 21 out of the 25 participants (84\%) mentioned that they have often run into some form of disagreement between explanations output by different methods for the same prediction. They also elaborated on when they think two explanations disagree: \paragraph{Top features are different:} Most of the popular post hoc explanation methods (e.g., LIME, SHAP, gradient-based methods) return a feature importance value associated with each feature.
These values indicate which features contribute the most, either positively or negatively, to the prediction (i.e., the top features). 21 out of the 25 participants (84\%) in our study mentioned that such a set of top features is \emph{``the most critical piece of information''} that they rely on in their day-to-day workflow. They also noted that they typically look at the top 5 to 10 features provided by an explanation for each prediction. When two explanations have different sets of top features, they consider it to be a disagreement. \paragraph{Ordering among top features is different:} 18 out of 25 participants (72\%) in our study indicated that they also consider the ordering among the top features very carefully in their workflow. Therefore, they consider a mismatch in the ordering of the top features provided by two different explanations to be a disagreement. \paragraph{Direction of top feature contributions is different:} 19 out of 25 participants (76\%) mentioned that the \emph{sign} or \emph{direction} of the feature contribution (is the feature contributing positively or negatively to the predicted class?) is another critical piece of information. Any mismatch in the signs of the top features between two explanations is a sign of disagreement. As remarked by one of the participants, ``\emph{I saw an explanation indicating that a top feature bankruptcy contributes positively to a particular loan denial, and another explanation saying that it contributes negatively. That is a clear disagreement. The model prediction can be trusted with the former explanation, but not with the latter.}'' \paragraph{Relative ordering of certain features is different:} 16 of our study participants (64\%) indicated that they also look at the relative ordering between certain features of interest; if explanations provide contradicting information about this aspect, then it is considered a disagreement. For example, one of the participants remarked, \emph{``I often check if salary is more important than credit score in loan approvals. If one explanation says salary is more important than credit score, and another says credit score is more important than salary; then it is a disagreement.''} \\ A very striking finding from our study is that participants typically characterize explanation disagreement based on factors such as mismatches in the top features, feature ordering, and directions of feature contributions, but not on the feature importance values output by different explanation methods. 24 out of 25 participants (96\%) in our study opine that feature importance values output by different explanation methods are not directly comparable. They note that this is because, while LIME outputs the coefficients of a linear model as feature importance values, SHAP outputs Shapley values as feature attributions, which sum to the probability of the predicted class. Consequently, they do not characterize explanation disagreement in terms of these values being unequal or dissimilar. One of our participants succinctly summarized practitioners' perspective on this explanation disagreement problem -- \emph{``The values generated by different explanation methods are clearly different. So, I would not characterize disagreement based on that. But, I would at least want the explanations they output to give me consistent insights. The explanations should agree on what are the most important features, the ordering among them and so on for me to derive consistent insights.
But, they don't!''} \subsection{Formalizing the Notion of Explanation Disagreement} \label{metrics-descrip} Our study indicates that ML practitioners consider the following key aspects when they think about explanation disagreement: a) the extent to which explanations differ in the top-$k$ features, the signs (or directions of contribution) and the ordering of these top-$k$ features, and b) the extent to which explanations differ in the relative ordering of certain features of interest. To capture these intuitions about explanation disagreement, we propose six different metrics, namely, \emph{feature agreement}, \emph{rank agreement}, \emph{sign agreement}, \emph{signed rank agreement}, \emph{rank correlation}, and \emph{pairwise rank agreement}. While the first four metrics capture disagreement w.r.t. the top-$k$ features of the explanations, the last two metrics capture disagreement w.r.t. a selected set of features which could be provided as input by an end user. \subsubsection{\textbf{Measuring Disagreement w.r.t. Top-k Features}} We now define four metrics, which capture specific aspects of explanation disagreement w.r.t. the top-$k$ features.\footnote{The top-$k$ features of an explanation are typically computed only based on the magnitude of the feature importance values and not the signs.} Lower values indicate higher disagreement for all the metrics. \paragraph{Feature Agreement: } ML practitioners in our study (Section~\ref{sec:survey-characterize}) clearly indicated that a key notion of disagreement between a pair of explanations is that they output different top-$k$ features. To capture this notion, we introduce the feature agreement metric, which computes the fraction of common features between the sets of top-$k$ features of two explanations. Given two explanations $E_a$ and $E_b$, the feature agreement metric can be formally defined as: \[FeatureAgreement(E_a, E_b, k) = \frac{| top\_features(E_a, k) \cap top\_features(E_b, k)|}{k} \] where $top\_features(E,k)$ returns the set of top-$k$ features (based on the magnitude of the feature importance values) of the explanation $E$. If the sets of top-$k$ features of explanations $E_a$ and $E_b$ match, then $FeatureAgreement(E_a, E_b, k) = 1$. \paragraph{Rank Agreement: } Practitioners in our study also indicated that if the ordering of the top-$k$ features is different for two explanations (even if the feature sets are the same), then they consider it to be a disagreement. To capture this notion, we introduce the rank agreement metric, which computes the fraction of features that are not only common between the sets of top-$k$ features of two explanations, but also have the same position in the respective rank orders. Rank agreement is a stricter metric than feature agreement since it also considers the ordering of the top-$k$ features. Given two explanations $E_a$ and $E_b$, the rank agreement metric ($RankAgreement(E_a, E_b, k)$) can be formally defined as: $$ \frac{|\{ s \in S \mid s \in top\_features(E_a,k) \wedge s \in top\_features(E_b,k) \wedge rank(E_a, s) = rank(E_b, s) \}|}{k} $$ where $S$ is the complete set of features in the data, $top\_features(E,k)$ is defined as above, and $rank(E,s)$ returns the position (or the rank) of the feature $s$ according to the explanation $E$. If the rank-ordered lists of top-$k$ features of explanations $E_a$ and $E_b$ match, then $RankAgreement(E_a, E_b, k) = 1$.
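To make these definitions concrete, the following is a minimal sketch (our own illustrative helpers, not released tooling) of how feature agreement and rank agreement might be computed, assuming each explanation is represented as a dictionary mapping feature names to importance values. \begin{verbatim}
def top_features(explanation, k):
    """Top-k features, ranked by the magnitude of their importance."""
    ranked = sorted(explanation, key=lambda f: abs(explanation[f]),
                    reverse=True)
    return ranked[:k]

def feature_agreement(e_a, e_b, k):
    """Fraction of features shared between the two top-k sets."""
    common = set(top_features(e_a, k)) & set(top_features(e_b, k))
    return len(common) / k

def rank_agreement(e_a, e_b, k):
    """Fraction of top-k positions where both explanations agree."""
    pairs = zip(top_features(e_a, k), top_features(e_b, k))
    return sum(f_a == f_b for f_a, f_b in pairs) / k
\end{verbatim}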
\paragraph{Sign Agreement: } In our study, practitioners also mentioned that they consider two explanations to disagree if the feature attribution signs, or the directions of feature contribution (does a feature contribute positively or negatively to the prediction?), do not align for the top-$k$ features. To capture this notion, we introduce the sign agreement metric, which computes the fraction of features that are not only common between the sets of top-$k$ features of two explanations, but also share the same sign (direction of contribution) in both explanations. Sign agreement is a stricter metric than feature agreement since it also considers the signs (directions of contributions) of the top-$k$ features. More formally, $SignAgreement(E_a, E_b, k)$ is defined as: $$ \frac{|\{ s \in S \mid s \in top\_features(E_a,k) \wedge s \in top\_features(E_b,k) \wedge sign(E_a, s) = sign(E_b, s) \}|}{k} $$ where $sign(E,s)$ returns the sign (direction of contribution) of the feature $s$ according to the explanation $E$. \paragraph{Signed Rank Agreement: } This metric fuses together all the above notions, and computes the fraction of features that are not only common between the sets of top-$k$ features of two explanations, but also share the same feature attribution sign (direction of contribution) and position (rank) in both explanations. Signed rank agreement is the strictest of all the aforementioned metrics since it considers both the ordering and the signs (directions of contributions) of the top-$k$ features. More formally, $SignedRankAgreement(E_a, E_b, k)$ is formulated as: \begin{equation*} \frac{\splitdfrac{|\{ s \in S \mid s \in top\_features(E_a,k) \wedge s \in top\_features(E_b,k)}{ \wedge \, sign(E_a, s) = sign(E_b, s) \wedge rank(E_a, s) = rank(E_b, s) \}|}}{k} \end{equation*} where $top\_features$, $sign$, and $rank$ are all as defined above. $SignedRankAgreement(E_a, E_b, k) = 1$ if the top-$k$ features of the two explanations match on all aspects (i.e., features, feature attribution signs, rank ordering) barring the exact feature importance values. \subsubsection{\textbf{Measuring Disagreement w.r.t. Features of Interest}}\label{metrics-chosen-feature-set} Practitioners also indicated that they consider two explanations to be different if the relative ordering of features of interest (e.g., salary and credit score, discussed in Section~\ref{sec:survey-characterize}) differs between the two explanations. To formalize this notion, we introduce the two metrics below. \paragraph{Rank Correlation: } We adopt a standard rank correlation metric (i.e., Spearman's rank correlation coefficient) to measure the agreement between the feature rankings provided by two explanations for a selected set of features. In practice, this selected set corresponds to features that are of interest to end users, and can be provided by them as input. Given two explanations $E_a$ and $E_b$, rank correlation can be computed as: \normalsize $$RankCorrelation(E_a, E_b, F) = r_{s}(Ranking(E_a, F), Ranking(E_b, F))$$ \normalsize where $F$ is a selected set of features potentially input by an end user, $r_s$ computes Spearman's rank correlation coefficient, and $Ranking(E,F)$ assigns ranks to features in $F$ based on explanation $E$. Lower values indicate higher disagreement.
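Continuing the sketch above (same assumed dictionary representation and \texttt{top\_features} helper), sign agreement, signed rank agreement, and rank correlation might be computed as follows. Whether ranks are taken over signed values or magnitudes is a design choice; for consistency with the top-$k$ metrics, this sketch uses magnitudes. \begin{verbatim}
from scipy.stats import spearmanr

def sign(x):
    """-1, 0, or +1, i.e., the direction of a feature's contribution."""
    return (x > 0) - (x < 0)

def sign_agreement(e_a, e_b, k):
    """Fraction of shared top-k features with matching signs."""
    common = set(top_features(e_a, k)) & set(top_features(e_b, k))
    return sum(sign(e_a[f]) == sign(e_b[f]) for f in common) / k

def signed_rank_agreement(e_a, e_b, k):
    """Fraction of top-k positions matching in both feature and sign."""
    pairs = zip(top_features(e_a, k), top_features(e_b, k))
    return sum(f_a == f_b and sign(e_a[f_a]) == sign(e_b[f_b])
               for f_a, f_b in pairs) / k

def rank_correlation(e_a, e_b, features):
    """Spearman correlation of importances over a chosen feature set."""
    vals_a = [abs(e_a[f]) for f in features]
    vals_b = [abs(e_b[f]) for f in features]
    return spearmanr(vals_a, vals_b).correlation
\end{verbatim}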
\paragraph{Pairwise Rank Agreement: } Pairwise rank agreement takes as input a set of features that are of interest to the user, and captures whether the relative ordering of every pair of features in that set is the same for both explanations, i.e., if feature A is more important than feature B according to one explanation, then the same should be true for the other explanation. More specifically, this metric computes the fraction of feature pairs for which the relative ordering is the same between two explanations. More formally: \small $$PairwiseRankAgreement(E_a, E_b, F) = \frac{ \sum\limits_{i < j} \mathbbm{1}[RelativeRanking(E_a, f_i, f_j) = RelativeRanking(E_b, f_i, f_j)]}{\binom{|F|}{2}}$$ \normalsize where $F = \{f_1, f_2, \cdots\}$ is a selected set of features input by an end user, and $RelativeRanking(E,f_i,f_j)$ is an indicator function which returns $1$ if feature $f_i$ is more important than feature $f_j$ according to explanation $E$, and $0$ otherwise. \section{Related Work} \label{sec:related} Our work builds on the vast literature in explainable ML. We discuss prior works and their connections to this research. \xhdr{Inherently Interpretable Models and Post hoc Explanations} Many approaches learn inherently interpretable models for various tasks including classification and clustering. Examples of such models include decision trees, decision lists~\cite{letham15:interpretable}, decision sets~\cite{lakkaraju16:interpretable}, prototype based models~\cite{bien2009classification, kim14:the-bayesian}, and generalized additive models~\cite{lou2012intelligible,caruana15:intelligible}. However, complex models such as deep neural networks often achieve higher accuracy than simpler models~\cite{ribeiro2016should}; thus, there has been a lot of interest in constructing post hoc explanations to understand their behavior. To this end, several techniques have been proposed to construct \emph{post hoc explanations} of complex models. These techniques differ in their access to the complex model (i.e., black box vs. access to internals), scope of approximation (e.g., global vs. local), search technique (e.g., perturbation based vs. gradient based), and basic units of explanation (e.g., feature importance vs. rule based). For instance, LIME, SHAP, Anchors, BayesLIME, and BayesSHAP~\cite{ribeiro16:model-agnostic,lundberg17:a-unified,slack2021reliable,ribeiro2018anchors} are \emph{perturbation-based local} explanations as they leverage perturbations of individual instances to construct interpretable local approximations (e.g., linear models). On the other hand, methods such as Gradient*Input, SmoothGrad, Integrated Gradients, and GradCAM~\cite{simonyan2013saliency, sundararajan2017axiomatic, selvaraju2017grad,smilkov2017smoothgrad} are \emph{gradient-based local} explanations as they leverage gradients computed with respect to the input dimensions of individual instances to explain model predictions. An alternate class of methods, known as \emph{global} explanations, attempts to summarize the behavior of black box models as a whole~\cite{bastani2017interpretability,lakkaraju19:faithful}. In contrast, our work focuses on analyzing the disagreements between explanations generated by state-of-the-art methods. \xhdr{Analyzing and Evaluating Post hoc Explanations} Prior research has studied several notions of explanation quality such as fidelity, stability, consistency, and sparsity~\cite{liu2021synthetic,petsiuk2018rise,slack2021reliable,zhou2021evaluating}.
Several metrics to quantify each of these aspects of explanation quality have also been proposed~\cite{zhou2021evaluating,carvalho2019machine,gilpin2018explaining,liu2021synthetic}. As discussed in the introduction, most of these metrics are not general enough to cater to all models or real-world settings. Follow-up works leveraged these properties and metrics to theoretically and empirically analyze the behavior of popular post hoc explanations~\cite{ghorbani2019interpretation, slack2019can,dombrowski2019explanations,adebayo2018sanity,alvarez2018robustness,levine2019certifiably,pmlr-v119-chalasani20a,agarwal2021towards}. More specifically, it has been shown that these explanations can be inconsistent or unstable \cite{ghorbani2019interpretation, slack2019can}, prone to fair washing~\cite{lakkaraju2020fool,slack2019can,aivodji2019fairwashing}, and unfaithful to the model to the extent that their usefulness can be severely compromised~\cite{rudin2019stop}. However, none of these works highlight or study the disagreement problem in explainable ML, which is the focus of our work. The work that is closest to ours is the research by~\citet{neely2021order}, which demonstrates that certain post hoc explanation methods (e.g., LIME, Integrated Gradients, DeepLIFT, Grad-SHAP, Deep-SHAP, and attention based explanations) disagree with each other based on the rank correlation (Kendall's $\tau$) metric. However, their work neither formalizes the notion of explanation disagreement by leveraging practitioner inputs, nor studies how explanation disagreements are resolved in practice, which are the key contributions of this work. \xhdr{Human Factors in Explainability} Many user studies evaluate how well humans can understand and utilize explanations~\cite{doshi2017towards}. \citet{kaur2020interpreting} show that data scientists do not have a good understanding of the state-of-the-art interpretability techniques, and are unable to effectively leverage them in debugging ML models. \citet{bhatt2020explainable} conduct a survey to understand the use cases for local explanations. \citet{hong2020human} conduct a similar survey to identify a variety of stakeholders across the model lifecycle, and highlight core goals: improving the model, making decisions, and building trust in the model. Furthermore, \citet{lakkaraju2020fool} study if misleading explanations can fool domain experts into deploying racially biased models. Similarly, \citet{poursabzi2018manipulating} find that supposedly-interpretable models can lead to a decreased ability to detect and correct model mistakes. \citet{lage2019evaluation} use insights from rigorous human-subject experiments to inform the regularizers used in explanation algorithms. However, none of these works focus on understanding if and how often practitioners face explanation disagreement, and how they resolve it. \section{Resolving the Disagreement Problem in Practice: A Qualitative Study} \label{section-user-study} In order to understand how practitioners resolve the disagreement problem, we conducted a qualitative user study targeted towards explainability practitioners. We now describe our user study design and discuss our findings. \subsection{User Study Design} \label{subsection:user-setup} In total, our study included 25 participants, 13 from academia and 12 from industry. Participants from academia were graduate students and postdoctoral researchers, while participants from industry were data scientists and ML engineers from three different firms.
20 of these participants indicated that they had used explainability methods in their work in a variety of ways, including doing research, helping clients explain their models, and debugging their own models. Following the setup in Section \ref{sec:quantitative}, we asked participants to compare the output of five pairs of explainability methods on the predictions made by the neural network we trained on the COMPAS dataset. We chose the COMPAS dataset because it only has 7 features, making it easy for participants to understand the explanations. First, the participants were shown an information page explaining the COMPAS risk score binary prediction setting and the various explainability algorithms. We indicated that we trained a neural network to predict the COMPAS risk score (low or high) from the seven COMPAS features. We also gave a brief description of each of these seven features to the participant and told them to assume that the criminal defendant's risk of recidivism is correctly predicted to be high risk. In this information page, we also briefly introduced and summarized the six explainability algorithms we used in the study (LIME, KernelSHAP, Gradient, Gradient*Input, SmoothGrad, and Integrated Gradients). Finally, we provided links to the papers describing each of the algorithms. We include a screenshot of this information page in Appendix \ref{sec:appendix-ui}. Next, each participant was shown a series of 5 prompts, a sample of which is shown in Figure \ref{fig:study}. Each prompt presents two explanations of our neural network model's prediction for a particular data point, generated using two different explanation methods (e.g., LIME and KernelSHAP in Figure \ref{fig:study}). For each of the 15 pairs of explanation methods, we chose a different data point from COMPAS on which to run the two methods, giving us a set of 15 prompts. These prompts were picked to showcase various levels of agreement. We display the full set of $k=7$ COMPAS features, showing the feature importance of each feature. Red and blue bars indicate that the feature contributes negatively and positively, respectively, to the predicted class. The participants were first asked the question \textit{``To what extent do you think the two explanations shown above agree or disagree with each other?''} and given four choices: \textit{completely agree, mostly agree, mostly disagree}, and \textit{completely disagree}. If the participant indicated any level of disagreement (any of the latter 3 choices), we then asked \textit{``Since you believe that the above explanations disagree (to some extent), which explanation would you rely on?''} and presented three choices: the two explainability methods shown and \textit{``it depends''}. Participants were then asked to explain their response. Participants were allowed to take as much time as they wanted to complete the study. \begin{figure}[th] \includegraphics[width=\linewidth]{images/study/explainability_ui.png} \centering \caption{The user interface for a prompt. The user is shown two explanations for a COMPAS data point, showing the feature importance value of each of the 7 features. Red and blue bars indicate negative and positive contributions to the predicted class, respectively. See the text for more details.} \label{fig:study} \end{figure} \subsection{Results and Insights}\label{subsection:user-findings-1} We now discuss the results and findings from our user study in Sections~\ref{subsubsection:q1}-\ref{subsubsection:practice}. \subsubsection{\textbf{Do practitioners observe disagreements?
}}\label{subsubsection:q1} We aggregated the responses to the first question in each prompt, \textit{``To what extent do you think the two explanations shown above agree or disagree with each other?''}. Overall, 4\%, 28\%, 50\%, and 18\% of responses indicated \textit{completely agree}, \textit{mostly agree}, \textit{mostly disagree}, and \textit{completely disagree}, respectively, highlighting that there is significant disagreement among our prompts. See Appendix \ref{sec:appendix-agreement} for more details. \subsubsection{\textbf{Are certain explanations favored over others?} }\label{subsubsection:q2} Next, since different algorithms have different levels of popularity, we analyzed whether certain algorithms are chosen more often in disagreements. Figure \ref{fig:disagreement-picked} shows the distribution of how participants resolved disagreements for each prompt (dropping prompts with 4 or fewer responses). We first emphasize that there is high variability in how participants chose to resolve disagreements, showing a lack of consensus for the majority of prompts. However, when participants did decide to choose an algorithm rather than abstaining, they often chose the same algorithm. For example, in the Gradient vs.\ SmoothGrad comparison (top row in Figure \ref{fig:disagreement-picked}), participants either chose SmoothGrad over Gradient or chose neither. We also aggregated these choices over all prompts, and in Figure \ref{fig:distribution-algorithms}, we plot how often each of the six explainability algorithms was chosen, finding that, indeed, certain algorithms were favored over others. While KernelSHAP was chosen 66.7\% of the time when there were disagreements, Gradient*Input was only chosen 7.0\% of the time. We include a further explanation of why participants chose each of the explanations in Appendix \ref{sec:appendix-reasons-algorithms}, including quotes from participants that supported each algorithm. \begin{figure}[!htp] \centering \subfloat[The frequency with which each of the explanations in a pair is selected upon disagreement. The blue, gold, and grey bars show the percentage of participants (X axis) that picked the left, right, and neither algorithm when presented with the pair of algorithms shown on the Y axis.]{\label{fig:disagreement-picked} \centering \includegraphics[width=0.55\linewidth]{images/study/which_picked_pair_2.png} } \hfill \subfloat[The frequency with which each of the explanations was chosen when there is a disagreement. The X axis indicates the explainability algorithms and the Y axis indicates the frequency.]{\label{fig:distribution-algorithms} \centering \includegraphics[width=0.4\textwidth]{images/study/algorithm_picked.png} } \caption{Sub-figures highlight which algorithms participants chose when the explanations they were shown disagreed. In (a), we show how participants resolved each particular prompt. In (b), we show the overall frequencies with which each explanation method was selected. } \label{fig:csetup} \end{figure} \subsubsection{\textbf{How do practitioners resolve disagreements?}}\label{subsubsection:q3} Across all six explanation methods, we find three unifying themes that dictated why participants chose one explanation over the other. We give a high-level description of these themes below, highlighting direct quotes from participants in Table \ref{table:method-reasons-overall}. \\ \textit{1.
One method is inherently better than the other because of its associated theory or publication time (33\%): } Participants often indicated a preference towards a particular method without referencing the shown explanation, citing factors such as the paper's publication time (more recent papers are better), the theory behind the method, and the method's stability. \\ \textit{2. One of the generated explanations matches intuition better (32\%): } Participants frequently said that one method's explanation aligned better with their intuition, citing the absolute and relative values of specific features as evidence. \\ \textit{3. LIME and SHAP are better because the COMPAS dataset comprises tabular data (23\%): } Participants said that they mainly used LIME and SHAP for tabular data and commonly cited this as their sole reason. \begin{table*}[tph] \caption{Themes summarizing how participants decided between explanations when faced with disagreement, along with quotes.} \centering \begin{tabular}{ |p{0.25\textwidth}|p{0.67\textwidth}| } \hline Theme Highlighted & Sample Quotes \\ \hline \multirow{3}{0.25\textwidth}{\textbf{1. One method's paper/theory suggests that it's inherently better (33\%).}} &\textbullet \, \textit{``I have no reason to believe the gradient holds anywhere other than very locally.''} \\ &\textbullet \, \textit{``[IG is] more rigorous [than SmoothGrad] based on the paper and axioms''}\\ &\textbullet \, \textit{``gradient explanations are more unstable''}\\ \hline \multirow{3}{0.25\textwidth}{\textbf{2. One explanation matches intuition better (32\%). }} &\textbullet \, \textit{``seems unlikely that all features contributed to a positive classification''} \\ &\textbullet \, \textit{``features such as priors\_count and length of stay [are] important for determining''} \\ &\textbullet \, \textit{``Gradient*Input only consider[s] sensitive features (age, race) as impactful which could be a sign of a biased underlying data distribution''}\\ \hline \multirow{3}{0.25\textwidth}{\textbf{3. LIME/SHAP are better for tabular data (23\%).}} &\textbullet \, \textit{``I use LIME for structured data''} \\ &\textbullet \, \textit{``SHAP is more commonly used [than Gradient] for tabular data''}\\ & \\ \hline \end{tabular} \label{table:method-reasons-overall} \end{table*} \subsubsection{\textbf{Experiencing and resolving disagreements in day-to-day work:} }\label{subsubsection:practice} After answering all 5 prompts, participants were asked a set of questions to help us understand their experience with the disagreement problem in their day-to-day work. First, to filter out participants who did not use explainability methods, we asked: \textit{``Have you used explainability methods in your work before?''}. Of the 25 participants, 5 indicated that they had not. We asked the other 20 participants further questions to better understand their experience with the disagreement problem. The full set of questions can be found in Appendix \ref{sec:appendix-questions}. Having understood what participants look for to determine disagreement in Section \ref{sec:survey-characterize}, we next sought to answer two crucial questions related to the disagreement problem: \textit{(Q1): Do you observe disagreements between explanations output by state-of-the-art methods in your day-to-day workflow?} and \textit{(Q2): How do you resolve such disagreements in your day-to-day workflow?}. One of the 20 participants declined to respond to (Q1) and (Q2) because they were not a practitioner.
Of the other 19, 14 participants (74\%) responded \textit{``yes''} to (Q1), indicating that they did in fact encounter explanation disagreement in practice. Of the remaining 5 who said they did not, 3 said they had not really paid attention to the issue. Through (Q2), we aimed to uncover how participants dealt with the disagreement problem when it arose in practice. The responses to (Q2) of the 14 participants who answered yes to (Q1) can be grouped into three categories. 50\% had personal heuristics for choosing which algorithms to use (\textit{``data scientists picking their favorite algorithm''}, \textit{``rules of thumb based on results in papers''}). These heuristics varied among participants and included ease of implementation, groundedness of theory, recency of publication, ease of understanding, and documentation of packages. 36\% did not indicate any way to resolve these disagreements, but rather showed confusion and uncertainty (\textit{``no clear answer to me''}). Many of the responses indicated a desire for the research community to make progress and help (\textit{``I hope research community can provide some guidance''}). Therefore, we hope that these responses motivate and inspire future work in this direction. The remaining 14\% proposed using other metrics such as fidelity (\textit{``try and use some metric to measure fidelity''}). See Appendix \ref{sec:appendix-practice}.
\section{Introduction} Surveillance cameras, also known as closed-circuit television (CCTV) systems, have proliferated in the last several decades as the costs to record and store video have fallen dramatically. As of 2016, there were an estimated 350 million surveillance cameras worldwide~\cite{ihs2016}. The United States, with an estimated 50 million CCTV cameras installed, is believed to have the highest per capita number of surveillance cameras (15.3 CCTV cameras per 100 people) in the world~\cite{precise2019}. Past work has found that surveillance cameras may play an important role in crime prevention and investigation, but there is also growing concern about the dangers cameras pose to privacy and equity. Further, recent advances in facial recognition technology significantly amplify both the potential costs and the potential benefits of widespread surveillance, as it is now possible to identify and track specific individuals across space and time. While these technical advances promise to aid law enforcement efforts, they may also unjustly concentrate policing on more heavily monitored communities. This surveillance may also hinder longstanding freedoms of speech and association, as it becomes easier to identify those participating in public gatherings, potentially dissuading dissent. Despite the wide-ranging implications of surveillance cameras for public safety, police enforcement, and democratic governance, relatively little is known about the precise number and placement of cameras, hampering efforts to assess their impacts. Past work to gauge the prevalence and spatial distribution of surveillance cameras has either examined aggregate production or shipping numbers, or relied on public disclosures in select jurisdictions---approaches that suffer from limitations of scale and scope. To address these limitations, \citet{turtiainen2020cctv} note that researchers could, in theory, map surveillance cameras by applying computer vision algorithms to street view data, which provide nearly complete visual coverage of many cities. Building on that insight, here we describe and implement a scalable method for measuring the distribution of outdoor surveillance cameras across the United States, and, more generally, across the world. Specifically, we couple computer vision algorithms with verification by expert human annotators, together with statistical adjustment, to analyze a large-scale corpus of street view images. In this manner, we leverage the proliferation of cameras and image data themselves to quantify the prevalence of surveillance technology. To carry out this analysis, we use the public repository of images collected as part of Google's Street View service, launched in 2007. Since its inception, millions of 360-degree panoramas have been collected by cameras mounted on the roof racks of Google Street View cars, covering more than 10 million miles across 83 countries~\cite{raman2017}. This rich archive of historical street view images provides opportunities to understand the evolution of the built environment, particularly the adoption of surveillance cameras, on a global scale. However, it is still extremely challenging---if not impossible---for humans to eyeball millions of images and spot cameras in the diverse street view context: a camera usually occupies only 30--50 pixels out of the more than 400,000 pixels in a standard 640 $\times$ 640 street view image.
To scour this collection of images, we train and apply a computer vision algorithm to first filter the street view data to those candidate images likely to contain a surveillance camera. We specifically start with a random selection of 1.6 million street view images from 10 large U.S. cities and 6 other major cities, which yields approximately 6,000 positive model detections. This curated set of candidate images is then examined by human experts for verification. To go from verified camera detections in our sample to city-wide estimates, we further estimate both the recall of our model (which we find to be 0.63) and the proportion of the city covered by our sample. This latter quantity is computed based on the recorded camera position and angle, coupled with high-precision data on the road network and building footprints. We find substantial variation in the density of visible surveillance cameras across the 16 cities we consider, ranging from 0.07 cameras per linear kilometer along the road network in Seattle to 0.95 cameras per kilometer in Seoul. Examining the 10 U.S. cities in greater detail, we find that surveillance cameras are concentrated in commercial, industrial, and mixed city zones, and also in areas with higher shares of non-white residents. This concentration of cameras in majority-minority neighborhoods persists even after adjusting for zone, pointing to the potential disparate impacts of surveillance technology on communities of color. \section{Related work} Our work connects to several interrelated strands of research in computer vision, urban computing, and privacy, which we briefly summarize below. \subsection{Street View Understanding} Visual scene understanding~\cite{hoiem2015guest} is one of the most fundamental and challenging goals in computer vision. In part because of its potential to support self-driving vehicles, both the industrial and scientific communities have put considerable effort and investment into designing and creating labeled street view datasets for training and evaluating deep learning models, such as CamVid~\cite{brostow2008segmentation}, the KITTI Vision Benchmark Suite~\cite{geiger2013vision}, Cityscapes~\cite{cordts2016cityscapes}, and Mapillary Vistas~\cite{neuhold2017mapillary}. Building on these datasets, several studies exploit the characteristics of urban-scene images and propose object segmentation~\cite{choi2020cars,liu2015layered} and change detection~\cite{alcantarilla2018street} algorithms for general street view understanding. Related research has focused on detecting specific elements in street images, including greenery~\cite{li2015assessing}, buildings~\cite{kang2018building}, and city infrastructure such as utility poles~\cite{zhang2018using}. Of particular relevance to our work, \citet{neuhold2017mapillary} built an image segmentation model to identify---among other objects---CCTVs in street view data. The publicly available \citeauthor{neuhold2017mapillary} Mapillary Vistas Dataset contains over 20,000 labeled images but fewer than 100 labeled cameras, leading to relatively poor performance on the specific task of detecting cameras. More recently, \citet{turtiainen2020cctv} developed a state-of-the-art object detection model tailored specifically to CCTVs, based on nearly 10,000 images of cameras that they collected and labeled. That dataset, however, had not been publicly released at the time of writing.
As a result, we constructed (and have released) our own labeled camera dataset and built a camera detection model using standard computer vision techniques. \subsection{Urban Computing} Urban computing aims to tackle major issues in cities---such as traffic control, public health, and economic development---by modeling and analyzing urban data. A large body of research has shown that it is possible to infer socioeconomic information from satellite images~\cite{jean2016combining, Sheng_2020_CVPR_Workshops}, monitor human mobility~\cite{xu2018human}, and identify geo-tagged social network activities~\cite{schwartz2014social}. Recent studies using street view images have dramatically increased the accuracy of processed data, as well as the geographic resolution analyzed. By manually scoring street view images from 2,709 city blocks, \citet{hwang2014divergent} find that gentrification in Chicago from 2007 to 2009 was negatively associated with the concentration of minority groups. \citet{mooney2016use} labeled the characteristics of 532 intersections in New York City, such as curb cuts and crosswalks, to assess environmental contributions to pedestrian injury. As an alternative to relying on human experts to annotate street view images, modern computer vision algorithms have a much higher throughput at close to zero cost, enabling researchers to scale to multiple cities. For example, \citet{gebru2017using} enumerated 22 million automobiles (8\% of all vehicles in the U.S.) in 50 million street view images to accurately estimate local income, race, education, and voting patterns. In our work, we draw on the merits of both approaches, combining high-recall computer vision algorithms with high-precision human verification in a unified estimation pipeline. \input{figure/flowchart} \subsection{Surveillance and Privacy} While past work has found that surveillance cameras play an important role in crime investigation~\cite{king2008citris} and deterrence~\cite{welsh2015effectiveness}, cameras also pose significant challenges to privacy. Legal scholars have long considered the ramifications of cameras on First Amendment freedoms and the constitutional right to privacy~\cite{robb1980police}. More recently, scholars have been concerned with the role of surveillance cameras in predictive policing~\cite{joh2016discretion}, with their role in enabling the adverse effects of facial recognition and computer vision~\cite{stanley2019robot,calo2010fake,buolamwini2018shades,nkonde2020black}, and with the threat of surveillance hacking~\cite{hermann2018hack,quintin2015license}. These concerns have led to bans on facial-recognition technology by law enforcement in San Francisco, Boston, and Portland~\cite{banfacialrecognition}, as well as the drafting of federal legislation~\cite{frtbill}. Despite these concerns, there has been limited success in identifying the number and geospatial distribution of cameras. The Electronic Frontier Foundation (EFF) recently acquired the locations of cameras accessible by prosecutors in San Francisco~\cite{maass2019camera}. Other private-market researchers have estimated the prevalence of installed cameras at a national level through unit shipments~\cite{jenkins2019surveil}. However, neither of these approaches is able to estimate the prevalence and specific locations of public and private cameras at scale, hindering downstream analysis on the impacts of surveillance. 
\section{Data and Methods} For 16 major cities, we estimate the total number and spatial distribution of surveillance cameras visible from the street. We specifically consider the 10 cities with the highest urban density in the U.S., among those with at least 500,000 residents, and 6 other major cities in Asia and Europe. Our statistical estimation procedure involves three key steps. First, we compile a dataset of street view images both with and without cameras and label these images with segmentation masks. We then train a camera segmentation model on this dataset, and, importantly, estimate the recall of our detection algorithm on a held-out validation dataset. Second, we run our camera detection algorithm on a random sample of street view images. All positive camera detections are then reviewed by human experts to remove false positives. Finally, by combining the geometry of the camera angle, the road network, and building footprints, we calculate our sample's coverage of the road network. These three steps are outlined in Figure~\ref{fig:flowchart}. In the following sections, we describe the data used in our analysis and more fully detail each step in our estimation pipeline. \subsection{Data} \input{table/city-stats} \input{figure/example} We analyze the 16 cities listed in Table~\ref{tab:citystats}. For each city, we obtained the road network and building footprints from OpenStreetMap~\cite{OpenStreetMap,boeing2017osmnx}. U.S.\ Census maps were used to restrict the geospatial data to the city's administrative borders. All street view images used for model training and camera detection were accessed through the Google Street View Static API.\footnote{\url{https://developers.google.com/maps/documentation/streetview}} We further used San Francisco camera location data from the EFF~\cite{maass2019camera} to construct training and evaluation datasets for our model. \subsection{Step 1: Model Training and Evaluation} We start by creating training and evaluation datasets for our camera detection model. For each of the 2,660 geo-tagged cameras in San Francisco identified by the EFF, we pulled the closest street view images from 2012--2019 (provided a scene was available within 30 meters). Manually labeling the resulting 13,240 images yielded 861 positive instances containing 977 cameras. We note that many of the cameras listed in the EFF dataset appear to be indoors or otherwise are not visible from the street. In Figure~\ref{fig:example}, we show several labeled examples. We frame our camera detection problem as a binary image segmentation problem to maximize learning from a limited number of samples. We split the positive images by location into 70\%/15\%/15\% training/validation/test sets, making sure images from the same site always belong to the same split. We further include all camera instances from Mapillary Vistas in our training data. After mixing in the negative images, we end up with 5,298 images for training, 1,040 for validation, and 1,040 for testing. For ease, we use off-the-shelf methods to train our computer vision model (for state-of-the-art camera detection, see \citet{turtiainen2020cctv}). In particular, our segmentation model follows the architecture of DeepLab V3+~\cite{chen2017rethinking,chen2018encoder} with an EfficientNet-b3~\cite{tan2019efficientnet} backbone. We apply a random horizontal flip and randomly crop the original image ($640 \times 640$ pixels) to $320 \times 320$ before feeding it into the model during training. 
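To make this setup concrete, the snippet below sketches the model and augmentation configuration. It is an illustration rather than our exact training code: it assumes the third-party \texttt{segmentation\_models\_pytorch} and \texttt{albumentations} packages, and every name or hyperparameter beyond those stated above (DeepLab V3+ head, EfficientNet-b3 backbone, random flip, $640 \to 320$ random crop) is a placeholder.

\begin{verbatim}
# Illustrative training configuration (not our released pipeline code).
# Assumes `segmentation_models_pytorch` and `albumentations`; names not
# stated in the text are placeholders.
import albumentations as A
import segmentation_models_pytorch as smp
import torch

# Joint image/mask augmentation: random horizontal flip and random crop.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomCrop(height=320, width=320),
])

# Binary segmentation model: DeepLab V3+ head on an EfficientNet-b3 encoder.
model = smp.DeepLabV3Plus(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,  # single "camera" class
)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(image, mask):
    """One gradient step on an (H, W, 3) uint8 image and (H, W) 0/1 mask."""
    sample = augment(image=image, mask=mask)
    x = torch.from_numpy(sample["image"]).permute(2, 0, 1)[None].float() / 255
    y = torch.from_numpy(sample["mask"])[None, None].float()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}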
In the inference phase, we first crop the input image into four patches and then merge the output segmentation maps back to the original size. The segmentation model's performance is shown in Table~\ref{tab:performance}. To aggregate the pixel-level prediction to the instance level, we first apply a morphological dilation with a $3 \times 3$ kernel to merge detected areas and filter false detections by size. After validating several combinations of pixel-level probability thresholds and size filters, we decided to use a probability threshold of 0.75 and a size threshold of 50 pixels, which yields precision and recall equal to 0.58 and 0.63, respectively (see Figure~\ref{fig:performance}). In Figure~\ref{fig:failure}, we present several illustrative failures of our detection model. The model is occasionally confused by objects that share some of the visual features of cameras, such as building structures, parking meters, and street lamps. In some instances, our model also merges multiple cameras into one detection. These problems are mitigated by the human verification step, as described in the next section. \input{table/performance} \input{figure/performance} \input{figure/failure} \subsection{Step 2: Camera Detection and Verification} \input{figure/spatial} \input{figure/geometry} \input{figure/verification} For each city, we sampled street view images at $N=100,000$ points chosen uniformly at random from the road network.\footnote{For reference, there are more than 400,000 points covered with distinct street view panoramas in San Francisco.} For approximately 3\% of the selected points, there was no street view coverage within 10 meters, in which case we discarded and then re-sampled the location. Figure~\ref{fig:spatial} shows the spatial distribution of the sampled points for three example cities: San Francisco, New York, and Chicago. For each location, we then selected a 360-degree street view panorama. For London, Paris, and the 10 American cities, we selected the oldest available image taken between 2015 and 2021; for the remaining cities, we selected the oldest available image in the Google Maps corpus, which goes back to 2007. We note that this sampling strategy is the result of a coding error; our intention was to select the \emph{newest} available image at each location. Finally, for each location sampled, we randomly selected one out of the two 90-degree views with a midpoint perpendicular to the orientation of the road (see Figure~\ref{fig:geometry}). This approach provides the maximum view of the roadside. We ran our camera detection model on the resulting set of 100,000 images for each of the 16 cities, yielding 6,281 positive images with a total of 6,469 camera detections. Every detection was then verified by a human annotator, who received the raw image together with bounding boxes highlighting the predicted cameras, automatically generated from the model segmentation outputs. Figure~\ref{fig:verification} illustrates the pipeline from the raw image to segmentation and bounding boxes to human verification. In our subsequent analysis, we only consider these human-verified camera detections. \input{figure/result_detection} 
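For reference, the instance-level aggregation that produces these candidate boxes can be sketched in a few lines. The following is a minimal illustration of the scheme described in Step 1 (0.75 probability threshold, $3 \times 3$ dilation, 50-pixel size filter), not our production pipeline; the function and variable names are ours.

\begin{verbatim}
# Illustrative instance-level aggregation: probability map -> candidate
# boxes for human review.  Thresholds follow the text; names are ours.
import numpy as np
from scipy import ndimage

def candidate_boxes(prob_map, p_thresh=0.75, min_size=50):
    """Return bounding boxes (row/column slice pairs) of candidate cameras."""
    mask = prob_map >= p_thresh                            # pixel threshold
    mask = ndimage.binary_dilation(mask, np.ones((3, 3)))  # merge nearby areas
    labeled, n = ndimage.label(mask)                       # connected components
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))    # pixels per component
    boxes = ndimage.find_objects(labeled)
    return [box for box, s in zip(boxes, sizes) if s >= min_size]
\end{verbatim}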
\subsection{Step 3: Road Network Coverage Estimation} \input{figure/coverage} The final step in our procedure is to estimate the fraction of the visible area covered by our randomly sampled images. To estimate how much of the total length of a city's road network ($D$) has been covered by our sample ($D_{\text{covered}}$), we estimate the average length of road covered by one street view image ($\bar{d}$), which we can then multiply by the number of images sampled ($N$). We estimate $\bar{d}$ based on the geometry illustrated in Figure~\ref{fig:geometry}. Each street view image comes with the exact latitude and longitude of where it was taken. Given an image's point location $p$, we find the closest point $p'$ within the nearby buildings' footprint, and denote the distance between $p'$ and $p$ by $\delta$. As discussed above, we chose the street view's heading to be perpendicular to the road orientation, and restricted it to a 90-degree view. As a result, we estimate the length of the road segment covered by the image to be $d = 2\delta$. We repeat this procedure for each sampled street view image. We remove the relatively small number of images taken at locations more than 30 meters from a building---corresponding to 60 meters of street coverage---since at further distances, cameras become too small to be reliably detected by either humans or computer vision algorithms. The remaining images cover about 25--30 meters of street on average: the mean road segment covered by an image, $\bar{d}$, is 24, 29, and 28 meters in San Francisco, Chicago, and New York City, respectively, as shown in Figure~\ref{fig:coverage}. We then estimate the proportion of a city's road network covered by our sample as $c = (N \bar{d}) / (2D)$, where $N$ is the total number of samples for a city (within a given time period) and $D$ is the total length of the city's road network. The factor of 2 accounts for the fact that our street view images only cover one of the two sides of a street at any sampled point. Finally, putting all the above pieces together, we can estimate the number of cameras $K_{i}$ in city $i$: \begin{equation} \hat{K}_{i} = \frac{n_{i}}{c_{i}r} \label{eq:estimation}, \end{equation} where $r$ is the recall of our model, $n_{i}$ is the number of verified camera detections, and $c_{i}$ is the proportion of the road network of city $i$ covered by our sample. Similarly, we model variance by treating each sampled instance as a draw from a Bernoulli distribution with detection probability $p_{i} = n_{i} / N_{i}$. Assuming that recall and coverage are both exact, we can estimate the standard error of the number of cameras $K_{i}$ as: \begin{equation} \hat{\text{se}}(\hat{K}_{i}) = \frac{\sqrt{N_{i} \cdot p_{i} \cdot (1-p_{i})}}{c_{i}r}. \label{eq:variance} \end{equation} 
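For concreteness, Eqs.~\eqref{eq:estimation} and \eqref{eq:variance} can be implemented directly, as in the short sketch below; the code is illustrative and the variable names are of our choosing. A 95\% confidence interval then follows as $\hat{K}_{i} \pm 1.96\,\hat{\text{se}}(\hat{K}_{i})$.

\begin{verbatim}
# Direct implementation of Eqs. (1) and (2); variable names are ours.
import numpy as np

def estimate_cameras(n, N, d_bar, D, r=0.63):
    """Point estimate and standard error for a city's camera count.

    n     -- verified camera detections in the sample
    N     -- number of sampled street view images
    d_bar -- mean road length covered per image (meters)
    D     -- total road network length (meters)
    r     -- model recall (0.63 on our held-out test set)
    """
    c = (N * d_bar) / (2 * D)                # share of road network covered
    K_hat = n / (c * r)                      # Eq. (1)
    p = n / N                                # per-image detection probability
    se = np.sqrt(N * p * (1 - p)) / (c * r)  # Eq. (2)
    return K_hat, se
\end{verbatim}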
\section{Results} Applying the methods described above, we now estimate the total number and spatial distribution of cameras on the road network for all 16 cities. In addition, for the U.S. cities, we estimate the prevalence of cameras across city zones, and examine the racial composition of the neighborhoods in which cameras are concentrated. \subsection{Camera Prevalence} \input{table/estimation} \input{figure/result_density_cities} Table \ref{tab:results} shows the number of identified cameras for each city---after human verification---along with point estimates and 95\% confidence intervals for camera density and for the total number of cameras, following Eqs.~\eqref{eq:estimation} and \eqref{eq:variance}. The same density estimates are also depicted in descending order in Figure \ref{fig:result-density}. We find that camera density varies widely between cities: for example, Boston and New York City, the U.S. cities with the highest camera density, have almost four times as many cameras per kilometer as Seattle and Los Angeles.\footnote{% Our computer vision model was trained on San Francisco data, and so it is possible that the camera identification rate in San Francisco is inflated due to over-fitting. We note, however, that while model precision varies between cities, Philadelphia and Boston both have higher precision than San Francisco, which suggests our model does indeed transfer well across contexts. } We note that our estimates exclude indoor cameras, as well as outdoor cameras not captured by street view images. Perhaps due to these limitations, our estimate of 10,100 cameras in New York City is lower than the 18,000 cameras that the NYPD reportedly has access to~\cite{nypd_camera_count}. \subsection{Camera Placement} \input{figure/result_zone} \input{figure/result_race} The detection maps in Figure~\ref{fig:detections} show that cameras are not distributed uniformly across a city. Despite sampling uniformly over the road network, we find densely covered regions in each city, representing neighborhoods with a high concentration of cameras. We examine these patterns in more detail for the 10 U.S. cities we consider, analyzing the rate of (verified) camera identifications per street image across zoning designations and neighborhood racial composition. Figure~\ref{fig:result-zone} shows the camera identification rate for different zoning designations aggregated over the 10 U.S. cities we analyze. We find that images from mixed, industrial, and commercial zones are more likely to contain an identified camera than images from public (such as parks and other public facilities) and residential areas. For example, the identification rate in mixed zones (2.1\%) is more than three times the rate in residential zones (0.6\%). This pattern holds for the majority of our chosen cities. To compute the camera identification rate, we assigned each sampled point to the zoning designation of the closest parcel of land. To do so, we collected land use and zoning designation data for all 10 cities, and then standardized the zoning code into one of the following five categories: mixed, industrial, commercial, public, and residential. Zones with codes that represent planned development or that did not clearly fit into the aforementioned categories are labeled as unknown and omitted in the following analysis. We find that 60\% of sampled points are classified as residential, and unknown codes comprise less than 3\% of sampled points. We next examine the relationship between the camera identification rate and the share of residents in the surrounding area who identify as belonging to a minority racial or ethnic group, aggregated over our 10 U.S. cities. To compute this relationship, we assigned each sampled image to the minority proportion of the census block group in which it is located, as estimated by the 2018 American Community Survey. For purposes of this analysis, we define ``minority'' as comprising those individuals who identify either as Hispanic (regardless of their race) or who do not identify as white. Figure \ref{fig:result-race} shows the results. The blue line is a regression (with both linear and quadratic terms) fit to the data, and indicates that an increase in the share of minority residents in a neighborhood is associated with an increase in the camera identification rate. For example, the identification rate in census blocks with a 50\% minority share (0.38\%) is roughly twice as high as in those blocks with a 10\% minority share (0.2\%). We see qualitatively similar results with higher-order polynomial fits. 
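The model specification can be made concrete with a short sketch. The code below is illustrative rather than our exact analysis code: it assumes the \texttt{pandas} and \texttt{statsmodels} packages, and the column names and toy rows are placeholders (the real data has one row per sampled street view image).

\begin{verbatim}
# Illustrative linear probability model for camera detections.
# Column names and toy rows are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "detected": [0, 1, 0, 0, 1, 0, 0, 1],
    "city":     ["SF", "SF", "SF", "NYC", "NYC", "NYC", "NYC", "SF"],
    "zone":     ["residential", "commercial", "mixed", "residential",
                 "mixed", "commercial", "residential", "industrial"],
    "minority": [0.12, 0.55, 0.70, 0.33, 0.80, 0.45, 0.20, 0.65],
})

# OLS on a binary outcome is a linear probability model; C() encodes the
# categorical covariates, and I() adds the quadratic minority-share term.
fit = smf.ols("detected ~ C(city) + C(zone) + minority + I(minority ** 2)",
              data=df).fit()
print(fit.params)
\end{verbatim}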
\input{table/regression} The observed concentration of cameras in majority-minority neighborhoods persists even after adjusting for zone category. Specifically, Table~\ref{tab:regression} shows the results of a linear probability model that predicts camera detections as a function of city, zone, and racial composition---where we again use a quadratic term to account for the curvature seen in Figure~\ref{fig:result-race}. The fitted model confirms that camera identifications increase with the minority share of residents before plateauing at approximately a 60\% share, as in Figure~\ref{fig:result-race}. It is unclear what is driving the apparent concentration of cameras in high-minority neighborhoods. However, regardless of the underlying mechanism, these results point to the potential impacts that video surveillance can have on communities of color. \section{Discussion and Conclusion} By applying computer vision, human verification, and statistical analysis to large-scale, geo-tagged image data, we have---for the first time---estimated the number and spatial distribution of outdoor surveillance cameras in 16 major cities around the world. Further, the approach we have developed has the potential to scale to even more cities across the country and the world, providing a new perspective on the state of video surveillance. In the 16 cities we analyzed, we found considerable variation in the estimated number of surveillance cameras. Among U.S. cities, our analysis also shows that cameras are more likely to be found in industrial, commercial, and mixed zones as compared to residential areas. Finally, even after adjusting for zone category, we find a greater concentration of cameras in majority-minority neighborhoods, highlighting the need to carefully consider the potential disparate impacts of surveillance technology on communities of color. While our computational approach is able to provide a novel quantitative perspective into the state of surveillance, it is still subject to several important limitations, which we outline below. First, our method relies on being able to see cameras from the street, and, more specifically, from street view images. Indoor cameras, as well as outdoor cameras obscured from view, are not counted by our estimation pipeline. Further, due to the limited resolution of street view images, small cameras---such as increasingly popular doorbell cameras---are difficult to detect by either humans or algorithms. Higher resolution and higher coverage image data could mitigate these issues in the future; until then, our results likely underestimate the density of cameras in a city. Second, our human annotators may not perfectly label cameras in the candidate images selected by the model, skewing our final estimates. For example, it is possible that they rule out an actual camera (leading to an undercount) or, conversely, that they report a camera that is not in fact there (leading to an overcount). To minimize these errors, every candidate image is independently labeled by three human annotators, but at least some errors likely remain. Third, errors in the estimated recall of our computer vision model---and, similarly, errors in the estimated coverage of our images---can bias our final estimates. 
Estimating city-specific model recall is particularly challenging, as it requires city-specific labeled datasets. In our analysis, we thus estimated recall for a single city, San Francisco, where the locations of some surveillance cameras had already been compiled, and then applied that estimate to other jurisdictions. Further, our variance estimates treat the recall and coverage as known quantities; accounting for errors in their measurement would increase the variance of our final estimates. Finally, our method does not provide any information about the cameras other than what can be inferred from their appearance. For example, we cannot determine whether identified cameras are decoys, are malfunctioning, or otherwise are not in use. We likewise cannot always tell who owns the cameras (e.g., a government agency or a private citizen), who has access to the video, and whether the camera footage is stored. All of these factors are critical in assessing the downstream consequences of video surveillance. Although difficult, future work may be able to answer some of these questions by conducting a more intensive audit of a sample of the identified cameras. Despite these limitations, we believe our approach and results constitute an important step toward understanding the use of surveillance technology across the world. More broadly, our general statistical estimation pipeline can be extended and applied to characterize the prevalence and spatial distribution of a variety of other city elements detectable from street images. Looking forward, we hope this work spurs further theoretical and empirical research at the intersection of computer vision, urban computing, and public policy. \section*{Publication Note} This version of the paper is updated from our original publication in two important respects. First, we now credit \citet{turtiainen2020cctv} both for creating a state-of-the-art camera detection model and for suggesting that computer vision could, in theory, be applied to street view data to map surveillance cameras. We were aware of their work when initially conducting our research, but we unfortunately failed to include a citation to their paper. We thank \citeauthor{turtiainen2020cctv} for bringing this to our attention and we apologize for the omission. Second, we discovered a coding error in our image sampling strategy that corrupted our analysis of camera density over time. We have now removed the results of that analysis. \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \setcounter{equation}{0} Low-rank updated matrices are of great importance in real applications \cite{Chu,G. H. Golub}. Specifically, the eigenproblem with respect to a low-rank updated matrix is of great interest \cite{J. R. Bunch,G. H. Golub,GV,Gu,J. H. Wilkinson,Wu,WW}. However, most work is devoted to discussing the case of symmetric low-rank perturbations \cite{J. R. Bunch,G. H. Golub,Gu}, the Jordan form \cite{Moro,Wu}, and the characteristic polynomial of a low-rank updated matrix \cite{WW}. Recently, Farrell \cite{P. E. Farrell} gave an upper bound for the number of distinct eigenvalues of arbitrary matrices perturbed by updates of arbitrary rank, which is the central result in \cite{P. E. Farrell}. This result can be utilized to estimate the number of Krylov iterations required for solving a perturbed linear system. Very recently, by separating the spectra into two disjoint sets, Xu presented an improved upper bound in \cite{X.F. Xu}. Let us briefly introduce some definitions and notation that will be used in this paper. Given a matrix $M\in \mathbb{C}^{n\times n}$, let $\Lambda(M)$ be the set of all distinct eigenvalues of $M$, and let $|\Lambda(M)|$ be the cardinality of the set $\Lambda(M)$. The \emph{algebraic multiplicity} $m_a (M,\lambda)$ of an eigenvalue $\lambda\in\Lambda(M)$ is the multiplicity of $\lambda$ as a zero of the characteristic polynomial of $M$. The dimension of the eigenspace of $M$ corresponding to $\lambda$, denoted by $m_g (M,\lambda)$, is called its \emph{geometric multiplicity}. Recall that the geometric multiplicity of an eigenvalue is never greater than its algebraic multiplicity. If $m_g (M,\lambda)<m_a (M,\lambda)$ for some $\lambda\in \Lambda(M)$, then $M$ is called a \emph{defective} matrix. If $m_g (M,\lambda)=m_a (M,\lambda)$ for all $\lambda\in \Lambda(M)$, then $M$ is said to be \emph{nondefective} or \emph{diagonalizable}. Clearly, $m_g (M,\lambda)\geq 1$ for all $\lambda\in \Lambda(M)$. If $m_g (M,\lambda)\equiv 1$ for all $\lambda\in \Lambda(M)$, then $M$ is said to be \emph{nonderogatory}; otherwise, it is referred to as a \emph{derogatory} matrix. We denote by $r(M)$ the rank of the matrix $M$. If $S_1$ and $S_2$ are two sets, then $S_1\backslash S_2$ stands for the complement of $S_2$ with respect to $S_1$. The following definition introduces the defectivity of an eigenvalue and that of a matrix: \begin{definition} {\cite{P. E. Farrell}} The defectivity of an eigenvalue, $d(M,\lambda)\geq 0$, is the difference between its algebraic and geometric multiplicities, \begin{equation}\label{211} d(M,\lambda)=m_a (M,\lambda)-m_g (M,\lambda). \end{equation} The defectivity of a matrix is then the sum of the defectivities of its eigenvalues, \begin{equation}\label{2.2} d(M)=\sum\limits_{\lambda\in \Lambda(M)} d(M,\lambda). \end{equation} \end{definition} As $m_a (M,\lambda)\geq m_g (M,\lambda)$ for all $\lambda\in \Lambda(M)$, we have $d(M)\geq d(M,\lambda)\geq 0$. Indeed, the defectivity of a matrix can be considered as a quantitative measure of its nondiagonalizability \cite{P. E. Farrell}. Note that a matrix $M$ is diagonalizable if and only if $d(M) = 0$. For completeness, if $\lambda \not\in \Lambda(M)$, we simply set $m_a (M,\lambda)=0$ in this paper. Therefore, \begin{equation}\label{2.3} \lambda \not\in \Lambda(M)\Longleftrightarrow m_a (M,\lambda)=0\Longleftrightarrow m_g (M,\lambda)=0. \end{equation} The following result is the central theorem of \cite{P. E. Farrell}. 
It relates the number of distinct eigenvalues of the perturbed matrix to that of the original matrix. \begin{theorem}\label{Thm211}\cite[Theorem 1.3]{P. E. Farrell} Let $A,B\in\mathbb{C}^{n\times n}$. If $C=A+B$, then \begin{equation}\label{1.1} |\Lambda(C)|\leq (r(B)+1)|\Lambda(A)|+d(A). \end{equation} \end{theorem} However, the upper bound \eqref{1.1} may not be sharp and can even be invalid in practice \cite{X.F. Xu}. In order to enhance this result, the following bound was given, which is the main result of \cite{X.F. Xu}. \begin{theorem}\label{Thm2.2}\cite[Theorem 3.1]{X.F. Xu} Assume that $A,B\in\mathbb{C}^{n\times n}$ and let $C=A+B$. Then we have \begin{equation}\label{1.2} |\Lambda(C)|\leq (r(B)+1)|\Lambda(A)|+d(A)-d(C). \end{equation} \end{theorem} The next result plays an important role in estimating the number of Krylov iterations after a rank-one update \cite{X.F. Xu}. \begin{theorem}\label{Thm1.3}\cite[Corollary 4.2]{X.F. Xu} Suppose that $A \in\mathbb{C}^{n\times n}$ is diagonalizable, $r(B)=1$ and let $C=A+B$. If $C$ is also diagonalizable, then $|\Lambda(C)|\leq 2|\Lambda(A)|.$ If $C$ is not diagonalizable, then \begin{equation}\label{1.3} |\Lambda(C)|\leq 2|\Lambda(A)|-1. \end{equation} \end{theorem} In this paper, we show that none of the bounds \eqref{1.1}, \eqref{1.2} and \eqref{1.3} is sharp enough, and that they can substantially overestimate $|\Lambda(C)|$ in practice. Thus, it is necessary to establish new upper bounds for this problem. We give some refined upper bounds on the number of distinct eigenvalues of a perturbed matrix. Further, we provide some {\it a priori} upper bounds that rely only on information about $A$ and $B$. The key is to first separate $\Lambda(A)\cup\Lambda(C)$ into three disjoint sets, and then pay special attention to the set $\Lambda(A)\setminus\Lambda(C)$, i.e., the distinct eigenvalues that are in $\Lambda(A)$ but not in $\Lambda(C)$. Examples show the tightness of our new results, as well as their superiority over those provided in \cite{P. E. Farrell,X.F. Xu}. The number of distinct singular values of a matrix after perturbation is also discussed. \section{The main results} \setcounter{equation}{0} In this section, we propose refined bounds on the number of distinct eigenvalues of a perturbed matrix, which further improve the results given in Theorem \ref{Thm211}--Theorem \ref{Thm1.3}. Inspired by the definition of the derogatory index of a matrix \cite{X.F. Xu}, we define the derogatory index of an eigenvalue as follows. \begin{definition} The derogatory index of an eigenvalue $\lambda\in \Lambda(M)$ is defined as \begin{equation*} I(M,\lambda)=m_g (M,\lambda)-1. \end{equation*} \end{definition} Since $m_g (M,\lambda)-I(M,\lambda)=1$ for all $\lambda\in \Lambda(M)$, we have \begin{equation} |\Lambda(M)|=\sum\limits_{\lambda\in \Lambda(M)} \big(m_g (M,\lambda)-I(M,\lambda)\big). \label{21} \end{equation} We are ready to prove the following two lemmas. \begin{lemma}\label{Lem3.1} Let $M\in \mathbb{C}^{n\times n}$ and let $S$ be a subset of $\Lambda(M)$. Then \begin{equation*} \sum\limits_{\lambda\in S}m_g(M,\lambda)=n-\sum\limits_{\lambda\in \Lambda(M)\backslash S} m_g(M,\lambda)-d(M), \label{2.1} \end{equation*} where $\Lambda(M)\backslash S$ denotes the complement of $S$ with respect to $\Lambda(M)$. 
\end{lemma} \begin{proof} Recall that $\sum\limits_{\lambda\in \Lambda(M)} m_a (M,\lambda)=n$, so we have from \eqref{211} and \eqref{2.2} that \begin{eqnarray*} d(M) &=& n-\sum\limits_{\lambda\in \Lambda(M)} m_g (M,\lambda)\\ &=& n-\sum\limits_{\lambda\in S} m_g (M,\lambda)-\sum\limits_{\lambda\in \Lambda(M)\backslash S} m_g (M,\lambda). \end{eqnarray*} Hence, \begin{eqnarray*} \sum\limits_{\lambda\in S}m_g(M,\lambda)=n-\sum\limits_{\lambda\in \Lambda(M)\backslash S} m_g(M,\lambda)-d(M). \end{eqnarray*} \end{proof} \begin{lemma}\label{Thm3.1} Let $A, B\in \mathbb{C}^{n\times n}$ and $C=A+B$. Denote $S_1=\Lambda(A)\cap \Lambda(C), S_2=\Lambda(C)\setminus S_1$, and $S_3=\Lambda(A)\setminus S_1$. Then \begin{equation}\label{3.1} |\Lambda(C)|\leq (r(B)+1)|\Lambda(A)|+d(A)-d(C)-N(A,B,C), \end{equation} where \begin{equation}\label{3.111} N(A,B,C)=|S_3|+\sum\limits_{\lambda\in S_3}\big(r(B)-m_g(A,\lambda)\big)+\sum\limits_{\lambda\in S_2}I(C,\lambda) \end{equation} is a nonnegative number. \end{lemma} \begin{proof} We note that \begin{equation} |\Lambda(C)|=|S_1|+|S_2|, \label{3.2} \end{equation} \begin{equation} |S_1|=|\Lambda(A)|-|S_3|, \label{3.3} \end{equation} and it follows from \eqref{21} that \begin{equation} |S_2|=\sum\limits_{\lambda\in S_2} \big(m_g(C,\lambda)-I(C,\lambda)\big). \label{3.4} \end{equation} As $S_1=\Lambda(C)\backslash S_2$, we have from Lemma \ref{Lem3.1} that \begin{eqnarray*} \sum\limits_{\lambda\in S_2} m_g(C,\lambda) = n-\sum\limits_{\lambda\in S_1} m_g(C,\lambda)-d(C). \end{eqnarray*} Moreover, we obtain from \cite[(1.7c)]{P. E. Farrell} that \begin{equation}\label{3.7} m_g(A,\lambda)-m_g(C,\lambda)\leq r(B). \end{equation} Thus, \begin{eqnarray*} \sum\limits_{\lambda\in S_2} m_g(C,\lambda) &\leq & n+\sum\limits_{\lambda\in S_1}\big(r(B)- m_g(A,\lambda)\big)-d(C) \nonumber\\ &= & r(B)|S_1|+n-\sum\limits_{\lambda\in S_1}m_g(A,\lambda)-d(C). \end{eqnarray*} On the other hand, since $S_3=\Lambda(A)\backslash S_1$, we obtain from Lemma \ref{Lem3.1} that \begin{equation*} n-\sum\limits_{\lambda\in S_1}m_g(A,\lambda)=d(A)+\sum\limits_{\lambda\in S_3}m_g(A,\lambda). \end{equation*} Hence, \begin{equation}\label{3.9} \sum\limits_{\lambda\in S_2} m_g(C,\lambda) \leq r(B)|S_1|+d(A)+\sum\limits_{\lambda\in S_3}m_g(A,\lambda)-d(C). \end{equation} Combining \eqref{3.2}--\eqref{3.9}, we arrive at {\small\begin{eqnarray*} |\Lambda(C)| &=& |S_1|+|S_2|=|S_1|+\sum\limits_{\lambda\in S_2} \big(m_g(C,\lambda)-I(C,\lambda)\big)\\ &\leq&|S_1|+r(B)|S_1|+d(A)+\sum\limits_{\lambda\in S_3}m_g(A,\lambda)-d(C)-\sum\limits_{\lambda\in S_2} I(C,\lambda)\\ &=&(r(B)+1)(|\Lambda(A)|-|S_3|)+d(A)+\sum\limits_{\lambda\in S_3}m_g(A,\lambda)-d(C)-\sum\limits_{\lambda\in S_2} I(C,\lambda)\\ &=&(r(B)+1)|\Lambda(A)|-|S_3|-r(B)|S_3|+d(A)+\sum\limits_{\lambda\in S_3}m_g(A,\lambda)-d(C)-\sum\limits_{\lambda\in S_2} I(C,\lambda)\\ &=&(r(B)+1)|\Lambda(A)|+d(A)-d(C)-|S_3|-\sum\limits_{\lambda\in S_3}\big(r(B)-m_g(A,\lambda)\big)-\sum\limits_{\lambda\in S_2} I(C,\lambda)\\ &=&(r(B)+1)|\Lambda(A)|+d(A)-d(C)-N(A,B,C). \end{eqnarray*}} Furthermore, we note from \eqref{3.7} and \eqref{2.3} that $m_g (C,\lambda)=0,~\forall\lambda\in S_3$. Thus, \begin{eqnarray*} r(B)-m_g(A,\lambda)\geq 0, \ \ \ \ \forall \lambda\in S_3, \end{eqnarray*} from which we get $N(A,B,C)\geq 0$. \end{proof} \begin{remark} Notice that all three terms in $N(A,B,C)$ are greater than or equal to zero. Therefore, the new bound \eqref{3.1} is better than those given in \eqref{1.1} and \eqref{1.2}. 
\end{remark} \begin{example} We want to show the sharpness of \eqref{3.1}. Consider the matrices $$ A=\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 2\\ \end{array} \right],\quad B=\left[ \begin{array}{ccccc} 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right], $$ and \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccccc} 2 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 2 & 1 & 0\\ 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 3\\ \end{array} \right]. \end{eqnarray*} It is straightforward to show that $d(A)=d(C)=0,~\Lambda(A)=\{0,1,2\},~\Lambda(C)=\{\frac{3-\sqrt{5}}{2},~\frac{3+\sqrt{5}}{2},~3\}$, and $r(B)=3$. Thus, $|\Lambda(A)|=|\Lambda(C)|=3,~S_1=\emptyset,~S_2=\Lambda(C),~S_3=\Lambda(A)$, and $m_g(A,0)=m_g(A,1)=2,~m_g(A,2)=1,~I(C,\frac{3-\sqrt{5}}{2})=I(C,\frac{3+\sqrt{5}}{2})=1,~I(C,3)=0$. So we have $$ |S_3|=3,\quad \sum\limits_{\lambda\in S_3}\big(r(B)-m_g(A,\lambda)\big)=(3-2)+(3-2)+(3-1)=4,\quad \sum\limits_{\lambda\in S_2} I(C,\lambda)=2, $$ and $$ N(A,B,C)=9. $$ For this example, both \eqref{1.1} and \eqref{1.2} yield $$ |\Lambda(C)|\leq \big(r(B)+1\big)|\Lambda(A)|=12, $$ which is invalid because the order of $C$ is 5. As a comparison, \eqref{3.1} gives \begin{eqnarray*} |\Lambda(C)|\leq\big(r(B)+1\big)|\Lambda(A)|-N(A,B,C)=12-9=3. \end{eqnarray*} As $|\Lambda(C)|=3$, \eqref{3.1} is sharp. \end{example} However, the bound given in Lemma \ref{Thm3.1} is too complicated to use in practice. Indeed, it involves $m_g(A,\lambda)$ for each $\lambda\in S_3$ and $I(C,\lambda)$ for each $\lambda\in S_2$. We want to provide a more practical bound, and the idea is to use $|S_3|=|\Lambda(A)\backslash \Lambda(C)|$ in place of $N(A,B,C)$ in \eqref{3.1}. We now present the first main theorem of this paper. \begin{theorem}\label{Cor2.4} Let $A, B\in \mathbb{C}^{n\times n}$ and $C=A+B$. Then \begin{equation}\label{3.11} |\Lambda(C)|\leq \big(r(B)+1\big)|\Lambda(A)|+d(A)-d(C)-|\Lambda(A)\backslash \Lambda(C)|. \end{equation} Specifically, if the spectra of $A$ and $C$ are disjoint, then we have \begin{equation}\label{3.12} |\Lambda(C)|\leq r(B)|\Lambda(A)|+d(A)-d(C). \end{equation} \end{theorem} \begin{proof} The inequality \eqref{3.11} follows directly from \eqref{3.1}: the second and third terms in \eqref{3.111} are greater than or equal to zero, so $N(A,B,C)\geq |S_3|=|\Lambda(A)\backslash \Lambda(C)|$. For \eqref{3.12}, we have $|\Lambda(A)\backslash \Lambda(C)|=|\Lambda(A)|$ if the spectra of $A$ and $C$ are disjoint. By \eqref{3.11}, \begin{eqnarray*} |\Lambda(C)| &\leq & (r(B)+1)|\Lambda(A)|+d(A)-d(C)-|\Lambda(A)\backslash \Lambda(C)|\\ &= & (r(B)+1)|\Lambda(A)|+d(A)-d(C)-|\Lambda(A)|\\ &=& r(B)|\Lambda(A)|+d(A)-d(C). \end{eqnarray*} \end{proof} \begin{remark} Theorem \ref{Cor2.4} indicates that the upper bounds given in \eqref{1.1} and \eqref{1.2} can be enhanced substantially in many cases. The key idea is to first separate $\Lambda(A)\cup\Lambda(C)$ into three disjoint sets, and then pay special attention to the set $\Lambda(A)\setminus\Lambda(C)$. Obviously, our new bounds are better than those given in \eqref{1.1} and \eqref{1.2}. Note that $|S_3|>0$ {\it if and only if} there is at least one element in $\Lambda(A)$ that is not in $\Lambda(C)$. Specifically, $|S_3|=|\Lambda(A)|$ when the spectra of $A$ and $C$ are disjoint. \end{remark} 
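As a quick numerical illustration of Theorem \ref{Cor2.4} (not part of the derivations above), the following Python sketch recomputes the relevant quantities for the $5\times 5$ example preceding the theorem. Eigenvalue clustering and rank decisions rely on crude floating-point tolerances, which are adequate for this small example; here \eqref{3.11} evaluates to $9$, while the sharper bound \eqref{3.1} attains the exact value $3$ through $N(A,B,C)=9$.

\begin{verbatim}
# Numerical check of the bound (3.11) on the 5-by-5 example above.
import numpy as np

def distinct(vals, tol=1e-8):
    """Cluster floating-point eigenvalues into distinct values."""
    out = []
    for v in np.sort_complex(vals):
        if not out or abs(v - out[-1]) > tol:
            out.append(v)
    return np.array(out)

def defectivity(M, tol=1e-8):
    """d(M): sum over distinct eigenvalues of m_a - m_g."""
    n = M.shape[0]
    eigs = np.linalg.eigvals(M)
    d = 0
    for lam in distinct(eigs, tol):
        m_a = int(np.sum(np.abs(eigs - lam) <= tol))
        m_g = n - np.linalg.matrix_rank(M - lam * np.eye(n))
        d += m_a - m_g
    return d

A = np.diag([1.0, 0.0, 1.0, 0.0, 2.0])
B = np.array([[1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1.0]])
C = A + B
lamA = distinct(np.linalg.eigvals(A))
lamC = distinct(np.linalg.eigvals(C))
S3 = sum(np.min(np.abs(l - lamC)) > 1e-8 for l in lamA)
bound = (np.linalg.matrix_rank(B) + 1) * len(lamA) \
        + defectivity(A) - defectivity(C) - S3
print(len(lamC), "<=", bound)   # prints: 3 <= 9
\end{verbatim}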
\begin{example} The bound \eqref{3.12} requires that the spectra of $A$ and of its low-rank update $C$ be disjoint. In this example, we illustrate that this condition is not stringent, and is in fact typically satisfied in practice, at least for randomly generated matrices $A$ and $B$: given $n$, the size of the matrix $A$, $k$, the rank of the perturbation matrix $B$, and $m$, the number of tests, we run the following MATLAB code. Both the test matrices $A$ and the low-rank matrices $B$ are generated by using the MATLAB command {\tt randn}, where {\tt randn(M,N)} returns an M-by-N matrix containing pseudorandom values drawn from the standard normal distribution \cite{MATLAB}. {\it \begin{enumerate} \item ~ function num=distincteigen(n,k,m) \item ~{\bf for } i=1:m ~~\% Run the test $m$ times \item ~~~~~ A=randn(n,n); \item ~~~~~ D1=unique(eig(A));~~\% $\Lambda(A)$ \item ~~~~~ B=randn(n,k)*randn(k,n); ~~\% Generate a random rank-k matrix \item ~~~~~ C=A+B; \item ~~~~~ D2=unique(eig(C));~~\% $\Lambda(C)$ \item ~~~~~ D=setdiff(D1,D2);~~\% $\Lambda(A)\backslash \Lambda(C)$ \item ~~~~~ num(i)=size(D,1);~~ \% $|S_3|=|\Lambda(A)\backslash \Lambda(C)|$ \item ~{\bf end } \end{enumerate} } We run this code with $n=1000$, $k=1,2,\ldots,5$, and $m=1000$; in other words, we run the test $5000$ times altogether. The numerical results show that $num=1000$ for every one of the $5000$ runs. This implies that we often have $\Lambda(A)\cap\Lambda(C)=\emptyset$, at least for randomly generated matrices $A$ and $B$. Therefore, the condition $|\Lambda(A)\backslash \Lambda(C)|>0$ is typically satisfied, and the condition $|\Lambda(A)\backslash \Lambda(C)|=|\Lambda(A)|$ is not stringent in practice. \end{example} Next, we show the sharpness of \eqref{3.11} and \eqref{3.12}, and demonstrate their superiority over \eqref{1.1} and \eqref{1.2}. \begin{example} Consider $$ A=\left[ \begin{array}{ccc} 0 & 0 & -1 \\ -1 & 0 & -1 \\ -1 & -1 & 1 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right], $$ and \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \\ \end{array} \right]. \end{eqnarray*} We have $\Lambda(A)=\{-1.2470, 0.4450,1.8019\},~d(A)=0,~ \Lambda(C)=\{1,2\},~ d(C)=1,~ r(B)=1$, and $|S_3|=|\Lambda(A)|=3$. For this example, \eqref{1.1} and \eqref{1.2} give us $$ |\Lambda(C)|\leq \big(r(B)+1\big)|\Lambda(A)|+d(A)=6, $$ and $$ |\Lambda(C)|\leq \big(r(B)+1\big)|\Lambda(A)|+d(A)-d(C)=5, $$ respectively. Obviously, the above two bounds are invalid since the matrix $C$ is of order 3. As a comparison, both \eqref{3.11} and \eqref{3.12} yield \begin{eqnarray*} |\Lambda(C)|\leq r(B)|\Lambda(A)|+d(A)-d(C)=3-1=2, \end{eqnarray*} which is sharp since $|\Lambda(C)|=2$. \end{example} However, all the results provided in Theorem \ref{Thm2.2}, Lemma \ref{Thm3.1} and Theorem \ref{Cor2.4} require eigen-information on the perturbed matrix $C$, which is not available {\it a priori}. Thus, we aim to provide some {\it a priori} upper bounds that rely only on information about $A$ and $B$. Similar to \cite{P. E. Farrell}, we assume that the information on $|\Lambda(A)|$ is known in advance. We are in a position to give the second main theorem of this paper. \begin{theorem}\label{Cor2.6} Suppose that $A \in \mathbb{C}^{n\times n}$ is diagonalizable and $C=A+B$. Then {\rm (i)} If $C$ is diagonalizable and $|\Lambda(A)\backslash \Lambda(C)|\geq 1$, we have \begin{equation}\label{3.15} |\Lambda(C)|\leq (r(B)+1)|\Lambda(A)|-1. \end{equation} Specifically, if $\Lambda(A)\cap\Lambda(C)=\emptyset$, then \begin{equation}\label{41} |\Lambda(C)|\leq r(B)|\Lambda(A)|. 
\end{equation} {\rm (ii)} If $C$ is non-diagonalizable and $|\Lambda(A)\backslash \Lambda(C)|\geq 1$, then \begin{equation}\label{3.13} |\Lambda(C)|\leq (r(B)+1)|\Lambda(A)|-2. \end{equation} Specifically, if $\Lambda(A)\cap\Lambda(C)=\emptyset$, then \begin{equation}\label{3.14} |\Lambda(C)|\leq r(B)|\Lambda(A)|-1. \end{equation} \end{theorem} \begin{proof} Recall that $d(A)=d(C)=0$ if $A$ and $C$ are diagonalizable. So \eqref{3.15} follows directly from \eqref{3.11}. Specifically, if $\Lambda(A)\cap\Lambda(C)=\emptyset$, we have $|\Lambda(A)\backslash \Lambda(C)|=|\Lambda(A)|$, and thus \eqref{41} holds. On the other hand, if $A$ is diagonalizable while $C$ is non-diagonalizable, we have $d(A)=0$ and $d(C)\geq 1$. From \eqref{3.11} and the fact that $|\Lambda(A)\backslash \Lambda(C)|\geq 1$, we get \begin{eqnarray*} |\Lambda(C)| \leq (r(B)+1)|\Lambda(A)|-1-1 =(r(B)+1)|\Lambda(A)|-2. \end{eqnarray*} Specifically, if $C$ is non-diagonalizable and $\Lambda(A)\cap\Lambda(C)=\emptyset$, we obtain \eqref{3.14} from \eqref{3.12}. \end{proof} \begin{remark} Theorem \ref{Cor2.6} is an analogue of Theorem \ref{Thm1.3}, which plays an important role in estimating the number of Krylov iterations after a low-rank update. Compared with Theorem \ref{Thm1.3}, our new results are tighter and improve those given in \cite[Corollary 4.2]{X.F. Xu} significantly: First, when $C$ is diagonalizable, $r(B)=1$, and $|\Lambda(A)\backslash \Lambda(C)|\geq 1$ {\rm(}which typically holds in practice{\rm)}, it follows from \eqref{3.15} that $$ |\Lambda(C)|\leq 2|\Lambda(A)|-1< 2|\Lambda(A)|. $$ Second, if $C$ is non-diagonalizable, $r(B)=1$ and $\Lambda(A)\cap\Lambda(C)=\emptyset$ {\rm(}which is not stringent in practice{\rm)}, we have $$ |\Lambda(C)|\leq |\Lambda(A)|-1, $$ which is much smaller than $2|\Lambda(A)|-1$; see \eqref{1.3}. On the other hand, the upper bound given in Theorem \ref{Thm211} involves $d(A)$, i.e., the sum of the defectivities of all the eigenvalues of $A$, which is difficult to evaluate. In practice, we can use $ (r(B)+1)|\Lambda(A)|-1 $ as an upper bound on $|\Lambda(C)|$ if we only have $|\Lambda(A)|$ and $r(B)$ at hand and there is no other information available. Another reason is that it dominates the bounds \eqref{3.15}--\eqref{3.14}, and it is much better than the right-hand side of \eqref{1.1}. \end{remark} \begin{example} In this example, we try to show the sharpness of \eqref{3.15}--\eqref{3.14}. {\rm (i)}~ Consider the two $n$-by-$n$ matrices $$ A=\left[ \begin{array}{ccccc} 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccccc} 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \\ \end{array} \right]. $$ Then \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccccc} 1 & 1 & 0 & \cdots & 0 \\ 1 & 2 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ \end{array} \right]. \end{eqnarray*} Note that both $A$ and $C$ are diagonalizable since they are symmetric. Moreover, $\Lambda(A)=\{0,1\},~\Lambda(C)=\{1,~\frac{3-\sqrt{5}}{2},~\frac{3+\sqrt{5}}{2}\}$, and $r(B)=1$. Thus, $|\Lambda(A)|=2,~|\Lambda(C)|=3$, and $|\Lambda(A)\backslash \Lambda(C)|=1$, and \eqref{3.15} gives \begin{eqnarray*} |\Lambda(C)|\leq\big(r(B)+1\big)|\Lambda(A)|-1=4-1=3. 
\end{eqnarray*} We see that \eqref{3.15} is sharp because $|\Lambda(C)|=3$. {\rm (ii)}~~Consider $$ A=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right], $$ and \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccc} 2 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 4 \\ \end{array} \right]. \end{eqnarray*} In this example, both $A$ and $C$ are diagonalizable; moreover, $\Lambda(A)=\{1, 2, 3\},~r(B)=1$, $\Lambda(C)=\{1.3249, 2.4608, 5.2143\}$, and $\Lambda(A)\cap\Lambda(C)=\emptyset$. We have from \eqref{41} that \begin{eqnarray*} |\Lambda(C)|\leq r(B)|\Lambda(A)|=3, \end{eqnarray*} which is sharp since $|\Lambda(C)|=3$. {\rm (iii)}~~ We try to show the sharpness of \eqref{3.13}. Consider the two $n$-by-$n$ matrices $$ A=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \\ 0 & 2 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 2 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccccc} 2 & 0 & 1 & \cdots & 0 \\ 2 & 0 & 1 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \\ \end{array} \right]. $$ Then \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccccc} 3 & 0 & 1 & \cdots & 0 \\ 2 & 2 & 1 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 2 \\ \end{array} \right]. \end{eqnarray*} Obviously, $A$ is diagonalizable, $\Lambda(A)=\{1,2\},~ r(B)=1, ~\Lambda(C)=\{2, 3\}$ and $m_a(C,2)=n-1$. Thus, $|\Lambda(A)|=2,~|\Lambda(C)|=2$, and $|\Lambda(A)\backslash \Lambda(C)|=1$. It turns out that $C$ is non-diagonalizable. Indeed, \begin{eqnarray*} C-2I=\left[ \begin{array}{ccccc} 1 & 0 & 1 & \cdots & 0 \\ 2 & 0 & 1 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \\ \end{array} \right], \end{eqnarray*} where $I$ is the identity matrix. Therefore, $r(C-2I)=2$, implying that $m_g(C,2)=n-2< m_a(C,2)=n-1$. It follows from \eqref{3.13} that \begin{eqnarray*} |\Lambda(C)|\leq\big(r(B)+1\big)|\Lambda(A)|-2=4-2=2. \end{eqnarray*} As $|\Lambda(C)|=2$, \eqref{3.13} is sharp. {\rm (iv)}~~ Consider $$ A=\left[ \begin{array}{ccc} 1 & -1 & 1 \\ -1 & 0 & 1 \\ -1 & -2 & 1 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccc} 1 & 2 & -1 \\ 1 & 2 & -1 \\ 1 & 2 & -1 \\ \end{array} \right], $$ and \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccc} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \\ \end{array} \right]. \end{eqnarray*} It is seen that $A$ is diagonalizable while $C$ is not. We have $\Lambda(A)=\{1.6506, 0.1747 + 1.5469{\bf i}, 0.1747 - 1.5469{\bf i}\}$, $r(B)=1$, and $\Lambda(A)\cap\Lambda(C)=\emptyset$, where ${\bf i}$ is the imaginary unit. For this example, \eqref{3.14} gives \begin{eqnarray*} |\Lambda(C)|\leq r(B)|\Lambda(A)|-1=2. \end{eqnarray*} Since $|\Lambda(C)|=2$, \eqref{3.14} is sharp. \end{example} The singular values of low-rank updated matrices are of great interest in many applications \cite{Chu}. Finally, we give the following result on the number of distinct singular values of a matrix after perturbation. \begin{corollary} Let $A, B\in \mathbb{C}^{n\times n}$ and $C=A+B.$ Then \begin{equation}\label{4.1} |\sigma(C)|\leq (2r(B)+1)|\sigma(A)|-|\sigma(A)\backslash \sigma(C)|, \end{equation} where $\sigma(\cdot)$ denotes the set of distinct singular values of a matrix, and $|\sigma(\cdot)|$ denotes the cardinality of the set $\sigma(\cdot)$. 
\end{corollary} \begin{proof} Notice that \begin{eqnarray*} C^*C=(A+B)^*(A+B)=A^*A+A^*B+B^*(A+B), \end{eqnarray*} where $C^*$ represents the conjugate transpose of $C$. From $r\big(A^*B+B^*(A+B)\big)\leq 2r(B)$, $d(A^*A)=d(C^*C)=0$ and \eqref{3.11}, we obtain \begin{eqnarray*} |\Lambda(C^*C)| &\leq& \big(r\big(A^*B+B^*(A+B)\big)+1\big)|\Lambda(A^*A)|-|\Lambda(A^*A)\backslash \Lambda(C^*C)|\\ &\leq& \big(2r(B)+1\big)|\Lambda(A^*A)|-|\Lambda(A^*A)\backslash \Lambda(C^*C)|. \end{eqnarray*} Or equivalently, \begin{equation*} |\sigma(C)|\leq \big(2r(B)+1\big)|\sigma(A)|-|\sigma(A)\backslash \sigma(C)|. \end{equation*} \end{proof} \begin{example} In this example, we demonstrate the sharpness of \eqref{4.1}. Consider $$ A=\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right],\quad B=\left[ \begin{array}{ccccc} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right]. $$ It is easy to check that \begin{eqnarray*} C=A+B=\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 2 \\ \end{array} \right], \end{eqnarray*} $\sigma(C)=\{1, 2.9208, 0.6847\}$ and $|\sigma(A)\backslash \sigma(C)|=0.$ We have from \eqref{4.1} that \begin{eqnarray*} |\sigma(C)|\leq\big(2r(B)+1\big)|\sigma(A)|-|\sigma(A)\backslash \sigma(C)|=3-0=3. \end{eqnarray*} Recall that $|\sigma(C)|=3$, so \eqref{4.1} is sharp. \end{example} \bigskip \section*{Acknowledgments} We would like to thank Dr. Hongkui Pang for carefully reading the manuscript and for helpful discussions. \newpage
\section{Methods} The process for obtaining the reconstruction shown in Fig.~\ref{4Panel}d was as follows: \begin{enumerate} \item{Tilted plane correction was applied to each of the 90 diffraction patterns in the full dataset.} \item{The standard ePIE algorithm \cite{Maiden2009} was applied to the corrected data, with subpixel scan position precision handled as in Maiden {\it et al.} \cite{Maiden2011} (the core ePIE update is sketched in the code following this list). A starting guess for the probe was calculated using knowledge of the sample-to-focus distance (300 ${\rm {\upmu}m}$). The object starting guess was set to unity and the probe guess was normalized to contain the same energy as the average diffraction pattern in the dataset. The algorithm was allowed to update the probe guess in parallel with the object guess at each sub-iteration. The algorithm was run in this way for 20 full ptychographic iterations, at which point the probe guess had made much more progress towards convergence than the object guess. The object guess was reinitialized to unity, the algorithm was restarted using the new probe guess, and it was allowed to run for 100 iterations, long enough for both the object and probe to converge to stable solutions.} \item{The object guess was re-initialized as described in step 2, and the probe guess was set to that found at the end of step 2. The subpixel position correction method \cite{Zhang2013a} was applied to the ePIE, and the overlap constraint was applied with subpixel shifts of the probe~\cite{Maiden2011}. The position correction feedback parameter $\beta$ was started at a value of 50 and automated as in Zhang {\it et al.} \cite{Zhang2013a}. The probe guess was not allowed to update during this step. Again, the algorithm was run for 100 iterations, until the position corrections converged to $<$ 0.1 pixel.} \item{Finally, using the probe found in step 2 and the corrected scan positions found in step 3, and with the object guess reinitialized to unity, the algorithm was run for 200 iterations to achieve the final reconstruction.} \end{enumerate} Each full iteration (cycling through all 90 diffraction patterns) took approximately 30 seconds on a personal computer, leading to a total reconstruction time of 3.5 hours. 
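For reference, the core ePIE object and probe update applied at each scan position can be reduced to a few lines. The following Python sketch is an illustrative reduction of the algorithm of Maiden and Rodenburg \cite{Maiden2009}, not the code used for the reconstructions reported here; it omits the tilted plane correction, subpixel shifts, and position refinement described in steps 1--3, and all variable names are ours.

\begin{verbatim}
# Minimal numpy sketch of one ePIE sub-iteration (one scan position).
import numpy as np

def epie_update(obj, probe, I_meas, top, left, alpha=1.0, beta=1.0):
    """Update object and probe with one diffraction pattern I_meas,
    recorded with the probe at pixel offset (top, left)."""
    h, w = probe.shape
    view = obj[top:top + h, left:left + w].copy()
    psi = probe * view                       # exit wave estimate
    Psi = np.fft.fft2(psi)
    # Fourier-magnitude constraint: keep the phase, impose the data.
    dpsi = np.fft.ifft2(np.sqrt(I_meas) * np.exp(1j * np.angle(Psi))) - psi
    # Parallel object and probe updates (Maiden & Rodenburg, 2009).
    obj[top:top + h, left:left + w] = (
        view + alpha * np.conj(probe) * dpsi / np.max(np.abs(probe)) ** 2)
    probe = probe + beta * np.conj(view) * dpsi / np.max(np.abs(view)) ** 2
    return obj, probe

# One full ptychographic iteration cycles this update over all 90
# diffraction patterns.
\end{verbatim}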
The sample used in the experiment was fabricated on a super-polished silicon wafer. The wafer was rinsed with acetone, isopropanol, and methanol, and baked on a hotplate for 20 minutes at $250^\circ$ C. It was then spin-coated with Microchem 2$\%$ PMMA in anisole, molecular weight 950, at 4000 r.p.m. for 45 seconds. Afterwards it was baked at $180^\circ$ C for 90 seconds. Electron beam lithography was performed using a FEI Nova NanoSEM 640, using Nanometer Pattern Generation System (NPGS) software and patterns. The resist was then developed by immersion in a 1:3 solution of methyl-isobutyl-ketone:isopropanol for 30 seconds. Approximately 30 nm of titanium was evaporated onto the surface using a CVC SC3000 3-boat thermal evaporator. The lift-off step was accomplished in acetone using a sonicator. \section{Author Contributions} M. S., B. Z., and D. A. designed the experiment. L. S. fabricated the sample. M. S., B. Z., D. A., and D. G. performed the experiments. M. S., B. Z., and D. A. analyzed the data. M. M. and H. K. designed the HHG source and planned the experiments. All authors contributed to the manuscript. \section{Acknowledgments} We gratefully acknowledge support from the Semiconductor Research Corporation grant 2013-OJ-2443, the DARPA PULSE program through a grant from AMRDEC, a National Security Science and Engineering Faculty Fellowship, and facilities provided by the National Science Foundation Engineering Research Center in EUV Science and Technology. D. G. and E. S. acknowledge support from an NSF IGERT program. \section{Supplementary Information} \subsection{High Harmonic Beam Characterization Through Ptychography} To ensure that our recovery algorithm as discussed in the main text was correctly retrieving the probe illumination, we first characterized the extreme ultraviolet (EUV), high harmonic generation (HHG) beam by scanning a 5 ${\rm {\upmu}m}$ diameter pinhole across the beam near its focus and reconstructed the illumination using ptychography. In this case, the pinhole can be thought of as the probe, while the beam is an effective object. The scan consisted of a $6 \times 6$ grid with 1 ${\rm {\upmu}m}$ step size between adjacent scan positions. The reconstructed beam is shown in Fig.~S\ref{beamComparison}a. The reconstructed beam was propagated to the sample position (200 ${\rm {\upmu}m}$ upstream of the pinhole probe location) and calculated on the tilted plane (at 45$^\circ$) using tilted plane correction, shown in Fig.~S\ref{beamComparison}b. Immediately after this ptychography scan, the pinhole probe was removed and the sample was translated such that the beam illuminated one of the star patterns on the sample (with reconstruction shown in Fig.~2d). We performed a $3 \times 3$ ptychographic scan across the star feature, with 2.5 ${\rm {\upmu}m}$ step size. In this case, a probe starting guess consisting of a Gaussian amplitude profile with random phase sufficed to consistently retrieve the probe amplitude shown in Fig.~S\ref{beamComparison}c. As can be seen by comparison of Figs.~S\ref{beamComparison}b and c, the two beam characterization methods show very good agreement in both the phase and the amplitude. It should be noted that the HHG beam drifted slightly inside the adjustable aperture during the course of the two scans, resulting in slightly different beam structure during the two measurements. \begin{figure}[htb] \includegraphics[width=6.in]{beamReconstructionComparisonSmaller.jpg} \caption {{\bfseries A comparison of separate reconstructions of the HHG illumination beam, using the beam as the object in one case and as the probe in the second case.} (a) Reconstruction of the HHG beam near the focus using a 5 ${\rm {\upmu}m}$ diameter pinhole probe. The main image displays the amplitude and the inset displays the phase. The scale bar has width 2 ${\rm {\upmu}m}$. (b) The result of propagating the reconstructed beam from (a) to the tilted sample plane. Again, the main image shows the amplitude and the inset shows the phase. The scale bar has width 5 ${\rm {\upmu}m}$. (c) The amplitude (main image) and phase (inset) of the reconstructed probe based on a $3 \times 3$ ptychographic scan across one of the features on the titanium sample discussed in the text. The scale bar is shared with (b). Note that the beam amplitudes in (b) and (c) are displayed in the tilted sample coordinates, resulting in elongation in the horizontal direction. 
} \label{beamComparison} \end{figure} As a further consistency check, the probe reconstruction discussed in the main text (shown in Fig.~2c) was propagated to the detector, and the tilted plane correction was undone in order to examine the result in the real coordinates of the detector. The result of these steps is shown in Fig.~S\ref{detectorComparison}a. A comparison was made with a direct measurement of the unscattered beam, obtained by translating the sample to a featureless region of the silicon substrate, shown in Fig.~S\ref{detectorComparison}b. As can be seen in Figs.~S\ref{detectorComparison}a and b, some beam drift again occurred during the course of the ptychographic scan, as in the sample plane comparison above; nevertheless, the reconstructed probe is entirely consistent with the high harmonic beam used to illuminate the sample. \begin{figure} \includegraphics[width=6.in]{beamDetectorComparisonSmaller.jpg} \caption {{\bfseries Comparison between the illumination reconstructed as a ptychographic probe and propagated to the detector, and the unscattered illumination measured directly on the detector (raw data).} (a) The probe reconstruction from Fig.~2c in the main text, propagated to the detector plane. (b) The HHG beam measured directly on the detector by translating the sample to a featureless region of the silicon substrate. The scale bar in (a) has width 1 mm and is shared by (a) and (b).} \label{detectorComparison} \end{figure} \subsection{Comparison between CDI reconstruction and SEM and AFM images} As mentioned in the main text, there are a number of defects visible in the sample image reconstructed through ptychographic coherent diffractive imaging (CDI) that are also visible in scanning electron microscope (SEM) and atomic force microscope (AFM) images. A visual comparison between the three techniques is shown in Fig.~S\ref{defects}. Of the 7 defects pointed out in the figure, only defects 1--5 are visible in all of the images. The 6th and 7th defects are only visible in the CDI phase image and the AFM image. This demonstrates that CDI has both amplitude contrast (analogous to SEM) and phase/height contrast (analogous to AFM). \begin{figure} \includegraphics[width=6.in]{DefectFigureSmaller.jpg} \caption {{\bfseries A visual comparison of the reconstructed CDI amplitude and phase with images obtained using SEM and AFM.} (a) Reconstructed CDI amplitude image of the sample. (b) Phase of the reconstructed image. (c) SEM image of the sample. (d) AFM image of the sample. In the above images, 7 defects have been pointed out (located above and to the right of each number). Defects 1--5 are visible in all of the images, whereas defects 6 and 7 are only visible in the reconstructed phase and in the AFM image. The circled defect in (c) and (d) was a result of contamination after the CDI measurements were taken.} \label{defects} \end{figure} \end{document}
\section{Introduction} \label{sec:INTRO} It has been shown in~\cite{I} that the invariance of the partition function of a molecular system under the change of variables induced by the (measure preserving) canonical transformation~\footnote{A sum over repeated spatial indices ($a,b,\ldots=1,\ldots,D$) is understood. We also frequently use the shorthand notation $\partial f(\vec r\,)/\partial r^a =\nabla^a f(\vec r\,)$.} \begin{eqnarray} && [\{\vec r\,\},\{\vec p\,\}]\rightarrow [\{\vec r\,'\},\{\vec p\,'\}]\, ,\label{CHV}\\ && r_i'^{a}= r_i^a+\epsilon^a\,(\vec r_i)\, ,\qquad i=1,2,\ldots, N\, ,\label{QDIFF}\\ && p_i^{a}=\frac{\partial r_i'^{a}(\vec r\,)} {\partial r^b}\Big{|}_{\vec r=\vec r_i}\,p_i'^{b}= \Big{[}\delta^{ab}+\frac{\partial\epsilon^a(\vec r\,)} {\partial r^b}\Big{|}_{\vec r=\vec r_i}\Big{]}p_i'^{b}\, ,\label{PDIFF}\end{eqnarray} where $\vec \epsilon\,(\vec r)$ represents the infinitesimal displacement of the body elements~\footnote{Boundary conditions matching those imposed on the body must be obeyed by the function $\vec \epsilon\,(\vec r)$.} at the point $\vec r$, implies for a system with short-range interactions the local equilibrium condition~\cite{LL1} \begin{equation} \nabla^b \tau^{ab}(\vec r)+ {\cal{F}}^a_{\rm{ext}} (\vec r)=0 \label{EQC}\end{equation} with $\tau^{ab}(\vec r\,)$ the stress tensor at the point $\vec r$ and ${\cal{F}}^a_{\rm ext} (\vec r\,)$ an external force (density). The local expression of $\tau^{ab}(\vec r\,)$ in terms of the degrees of freedom (dof's) of the elementary constituents of the body was derived by explicitly computing the functional derivative of the partition function with respect to the particle displacement and comparing with eq.~(\ref{EQC}). The generality of the approach guarantees its validity in any statistical {\it ensemble}, for whichever type of (short-range) potential and boundary conditions, and in a classical as well as in a quantum mechanical setting (see~\cite{I} for details). The purpose of this brief note is twofold. First of all, after recalling how the equilibrium condition~(\ref{EQC}) can be deduced from purely thermodynamic considerations, we prove that the set of stress tensor components can be given an elegant interpretation as the set of Lagrange multipliers needed to enforce the relation between the displacement vector, $\vec\epsilon\,(\vec r\,)$, and the deformation tensor, $\eta^{ab}(\vec r\,)$, given by the formula~\cite{LL1} \begin{equation} \eta^{ab}(\vec r\,)=\frac{1}{2}\big{[}\nabla^a\epsilon^b(\vec r\,)+ \nabla^b\epsilon^a(\vec r\,)\big{]}\, . \label{DISDEF}\end{equation} Secondly, we rederive the local expression of the stress tensor for a molecular system (endowed with short-range interactions) by moving from the ``passive'' interpretation of the transformation~(\ref{QDIFF}) (followed in~\cite{I}), where eqs.~(\ref{QDIFF}) and~(\ref{PDIFF}) are seen as a mere change of variables, to an ``active'' one, where we imagine that $\vec\epsilon\,(\vec r\,)$ is the actual infinitesimal displacement of the body elementary constituents at the point $\vec r$. As a fall-out of the approach presented in this paper, the question of the uniqueness of the stress tensor~\cite{UNI} can be neatly addressed, with the conclusion that no ambiguity affects the formula~(\ref{TAU}) below. We will show, in fact, that there is no freedom to add to this expression any arbitrary divergenceless, symmetric rank-two tensor, as {\it a priori} geometrically allowed by the structure of eq.~(\ref{EQC}).
\section{Mechanics and thermodynamics} \label{sec:THERMO} Let us start with a discussion of the physics of local body deformations. Assuming that the system is at mechanical equilibrium at fixed temperature, the principle of virtual work~\cite{LL2} ensures that the work done by the body under an infinitesimal local deformation is zero. Furthermore, if the deformation transformation is reversible, the variation of the free energy will be equal to minus the work done by the body~\cite{LL3}. Under these circumstances one can write the variation of the free energy of the system under an infinitesimal (reversible) deformation, as a function of the particle displacement vector and deformation tensor, in the form \begin{equation} d {\cal A}(\eta,\epsilon)=-\delta_{rev} L(\eta,\epsilon) = \int_V d^3r\, \tau^{ab} (\vec r\,)\eta^{ab}(\vec r\,) - \int_V d^3r\, {\cal{F}}^a_{\rm ext} (\vec r\,) \epsilon^{a}(\vec r\,)\, , \label{FEW} \end{equation} where the first term in the last equality corresponds to the work done by the body deformation and the second to the work done by the external force (if there is one). As recalled above, the sum of the two contributions vanishes provided the displacement vector and the deformation tensor are related as in eq.~(\ref{DISDEF}). The way to see what the condition of thermodynamic equilibrium implies for this constrained system is to introduce Lagrange multipliers, $\lambda^{ab}$, to enforce eqs.~(\ref{DISDEF}), and define the ``unconstrained'' variation of ${\cal A}$ \begin{eqnarray} &&d {\cal A}_{\rm uncon}(\eta,\epsilon;\lambda)=\int_V d^3r\,\tau^{ab} (\vec r\,)\eta^{ab}(\vec r\,) - \int_V d^3r\, {\cal{F}}^a_{\rm ext} (\vec r\,) \epsilon^{a}(\vec r\,)+\nonumber\\ &&-\int_V d^3r\, \lambda^{ab} (\vec r\,)\Big{[}\eta^{ab}(\vec r\,)- \frac{1}{2}\big{[}\nabla^a\epsilon^b(\vec r\,)+ \nabla^b\epsilon^a(\vec r\,)\big{]}\Big{]}\, . \label{FEC} \end{eqnarray} Imposing the vanishing of $d {\cal A}_{\rm uncon}(\eta,\epsilon;\lambda)$ immediately yields the relations \begin{eqnarray} &&\tau^{ab} (\vec r\,)-\lambda^{ab}(\vec r\,) =0 \, ,\label{EQ1}\\ &&{\cal{F}}^a_{\rm ext} (\vec r\,) +\nabla^b \lambda^{ab} (\vec r\,)=0\, .\label{EQ2} \end{eqnarray} Eq.~(\ref{EQ2}) follows upon integrating the multiplier term by parts, with the boundary contribution assumed to vanish. Eqs.~(\ref{EQ1}) provide the announced interpretation of the set of stress tensor components as the set of Lagrange multipliers which enforce the constraints~(\ref{DISDEF}). Eliminating $\lambda^{ab}$ between eqs.~(\ref{EQ1}) and~(\ref{EQ2}) gives back the body equilibrium condition~(\ref{EQC}). \section{Statistical Mechanics} \label{sec:STAT} We now want to get an explicit expression of the stress tensor in terms of the elementary dof's of the system. To this end we need to display the functional dependence of the free energy on $\eta^{ab}$ and $\epsilon^{a}$. Upon comparing with the form of eq.~(\ref{FEW}), one can then derive the desired formula for $\tau^{ab} (\vec r\,)$. The procedure outlined above can be straightforwardly implemented in Statistical Mechanics. Let us consider, in fact, a system interacting through the short-range potential ${\cal{U}}[\{q\}] $ and let ${\cal{U}}_{\rm{ext}}[\{q\}]$ be a generic external potential.
Working, for concreteness, in the {\it canonical ensemble} (but the argument that follows would similarly go through in the {\it micro-canonical ensemble}~\cite{I}), one has for the free energy the formulae \begin{eqnarray} && {\cal A}=-\frac{1}{\beta}\log {\cal Z}^0_c \, ,\label{FENL}\\ &&{\cal{Z}}^0_c=\int \prod^N (d^{D}p) \int_V \prod^N (d^{D}q) \,\exp\Big{(}-\beta\, {\cal{H}}^0_{\rm ext}[\{q\},\{p\}] \Big{)}\, , \label{ZETAC} \\\nonumber\\ &&{\cal{H}}^0_{\rm ext}[\{q\},\{p\}]= {\cal{H}}^0[\{q\},\{p\}] + {\cal{U}}_{\rm{ext}}[\{q\}]\, ,\label{HEXTT}\\ &&{\cal{H}}^0[\{q\},\{p\}]=\sum_{i=1}^{N}\frac{({\vec{p}_{i}})^2}{2m}+ {\cal{U}}[\{q\}]\, . \label{HH} \end{eqnarray} In eq.~(\ref{ZETAC}) the symbol $\prod^N (d^{D} p) \prod^N (d^{D} q)$ is a short-hand notation for the $D$-dimensional integration measure over the system phase space and $V$ is the volume of the box in which the system is contained. The superscript ``$^0$'' in the previous equations is to recall that they refer to the undeformed body in equilibrium. To find the functional dependence of the free energy upon $\eta^{ab}$ and $\epsilon^a$, we have to provide the expression of the Hamiltonian of a system subjected to a local deformation (of the type indicated in eq.~(\ref{QDIFF})). To this end we first notice that under the infinitesimal displacement $\vec\epsilon(\vec q\,)$ the line element squared changes according to the formula~\cite{LL1} \begin{equation} dq^adq^a\rightarrow dq^adq^a+2\eta^{ab}\,(\vec q\,)dq^a dq^b\, , \label{LEC}\end{equation} with $\eta^{ab}(\vec q\,)$ related to $\epsilon^{a}(\vec q\,)$ as in eq.~(\ref{DISDEF}). Consequently, the kinetic energy of the system also gets modified by the addition of the extra contribution coming from the second term in eq.~(\ref{LEC}). In fact, from eq.~(\ref{LEC}) one formally gets for the squared modulus of the velocity \begin{equation} \frac{dq^a}{dt}\frac{dq^a}{dt}\rightarrow \frac{dq^a}{dt}\frac{dq^a}{dt} +2\eta^{ab}\,(\vec q\,) \frac{dq^a}{dt}\frac{dq^b}{dt}\, . \label{VEC}\end{equation} The Hamiltonian of the deformed system will thus read \begin{eqnarray} \hspace{-1.5cm}&&{\cal H}_{\rm ext}[\{q\},\{p\};\eta,\epsilon]={\cal H}_{\rm ext}^0[\{q\},\{p\}]+\nonumber\\ \hspace{-1.5cm}&&-\sum_{i=1}^{N}\eta^{ab}(\vec q_i)\frac{p_i^a p_i^b}{m} - \frac{1}{2}\sum_{j \neq i =1}^{N}\eta^{ab}(\vec q_i)q_{ij}^a\,{\cal{F}}^b_{ij}[\{q\}]- \sum_{i=1}^{N}\epsilon^a(\vec q_i) {\cal{F}}_{i,\rm ext}^a[\{q\}]\, ,\label{DEFH}\end{eqnarray} where we have introduced the definitions \begin{eqnarray} &&{{\cal{F}}}_{ij}^a[\{q\}]=-\frac{\partial{\cal{U}}[\{q\}]} {\partial q_{ij}} \cdot \frac{q_{ij}^a}{q_{ij}}\, ,\qquad {\cal{F}}^a_{i,\rm{ext}}[\{q\}]=-\frac{\partial {\cal{U}}_{\rm{ext}}[\{q\}]}{\partial q_i^a}\, ,\label{F}\\ &&\vec q_{ij}=\vec q_i-\vec q_j\, ,\qquad q_{ij}= \sqrt{\vec q_{ij}^{\,\,2}}\, .\label{QIJ}\end{eqnarray} The first term in the second line in the r.h.s.\ of eq.~(\ref{DEFH}) comes directly from eq.~(\ref{VEC}) after passing from velocities to canonical momenta. The second and third terms arise as a consequence of the particle displacement $q_i^a\rightarrow q_i^a+\epsilon^a\,(\vec q_i)$ (eq.~(\ref{QDIFF})) in ${\cal{U}}[\{q\}]$ and ${\cal{U}}_{\rm{ext}}[\{q\}]$, respectively. While the structure of the third term is obvious, the form of the second needs some explanation. First of all, we observe that we can always consider a translationally invariant interaction potential as a function of the set of the two-particle distances $\{q_{ij}\}$.
Secondly, since $\cal U$ is assumed to be short-range on the macroscopic scale over which $\vec \epsilon$ can appreciably vary, we conclude that one can get non-vanishing contributions to the r.h.s.\ of eq.~(\ref{DEFH}) only from terms where all particle distances are very small (smaller than some typical microscopic length). In computing the variation of $q_{ij}$ under a particle displacement, we are then justified in writing \begin{eqnarray} \hspace{-1.5cm}&&q_i^a-q_j^a\rightarrow q_i^a+\epsilon^a(\vec q_i)-q_j^a-\epsilon^a(\vec q_j)= q_i^a-q_j^a+\nabla^b\epsilon^a(\vec q_i)(q_i^b-q_j^b)+\ldots \, ,\label{VDIST}\\ \hspace{-1.5cm}&&q_{ij}\rightarrow q_{ij}+\frac{1}{2}\big{[}\nabla^a\epsilon^b(\vec q_i)+ \nabla^b\epsilon^a(\vec q_i)\big{]}\frac{q_{ij}^aq_{ij}^b}{q_{ij}}+\ldots = q_{ij}+\eta^{ab}(\vec q_i)\frac{q_{ij}^aq_{ij}^b}{q_{ij}}+\ldots\, ,\label{NDIST} \end{eqnarray} where dots represent terms of higher order in the differences $q_{ij}^a$, which we neglect. When eq.~(\ref{NDIST}) is introduced in ${\cal{U}}[\{q\}]$, the second term immediately emerges by expanding in the small quantity $\eta^{ab}$. We stress that, as expected, in~(\ref{DEFH}) the body deformation is completely described by the tensor $\eta^{ab}$, while the displacement $\epsilon^a$ is directly coupled only to the external force. Inserting the Hamiltonian~(\ref{DEFH}) in the formulae for the partition function and free energy, one obtains the sought-for functional dependence on $\eta^{ab}$ and $\epsilon^a$. To first order in $\eta^{ab}$ and $\epsilon^a$ one thus gets for the free energy variation \begin{eqnarray} \hspace{-1.5cm}&&d{\cal A}(\eta,\epsilon)=\frac{1}{{\cal{Z}}^0_c} \int \prod^N (d^{D}p) \int_V \prod^N (d^{D}q) \,e^{-\beta\, {\cal{H}}^0_{\rm ext}[\{q\},\{p\}]}\cdot\nonumber\\ \hspace{-1.5cm}&&\cdot\Big{[}\!-\!\sum_{i=1}^{N}\eta^{ab}(\vec q_i)\frac{p_i^a p_i^b}{m} - \frac{1}{2}\sum_{j\neq i=1}^{N}\eta^{ab}(\vec q_i)q_{ij}^a\,{\cal{F}}^b_{ij}[\{q\}]- \sum_{i=1}^{N}\epsilon^a(\vec q_i) {\cal{F}}_{i,\rm ext}^a[\{q\}]\Big{]}\, . \label{FF} \end{eqnarray} For an easy comparison to the equations of sect.~\ref{sec:THERMO} it is convenient to introduce, in each term of the sum over the index $i$, the identity $\int_V d^3r\, \delta(\vec r -\vec q_i)=1$. Having done this, eq.~(\ref{FF}) can be cast in the form \begin{eqnarray} \hspace{-1.0cm}&&d{\cal A}(\eta,\epsilon)=\nonumber\\ \hspace{-1.0cm}&&=\int_V d^3r \,\eta^{ab}(\vec r\,) \Big{\langle}-\sum_{i=1}^{N}\delta(\vec r -\vec q_i)\frac{p_i^a p_i^b}{m} - \frac{1}{2}\sum_{j\neq i=1}^{N} \delta(\vec r -\vec q_i) q_{ij}^a\,{\cal{F}}^b_{ij}[\{q\}]\Big{\rangle}+\nonumber\\ \hspace{-1.0cm}&& - \int_V d^3r\, \epsilon^{a}(\vec r\,)\Big{\langle} \sum_{i=1}^{N} \delta(\vec r -\vec q_i) {\cal{F}}_{i,\rm ext}^a[\{q\}]\Big{\rangle}\, ,\label{FFD} \end{eqnarray} where $\langle\ldots\rangle$ means {\it ensemble} average. Comparing with eq.~(\ref{FEW}), the expression of $\tau^{ab}(\vec r\,)$ in terms of the elementary dof's of the system is readily identified as the tensor that multiplies the deformation tensor in the formula for the work done by the system under a local deformation. One finds in this way \begin{equation} \tau^{ab}(\vec r\,)=-\,\Big{\langle} \sum_{i=1}^{N}\, \delta(\vec r-\vec q_i)\Big{(}\frac{p_i^a p_i^b}{m}+ \frac{1}{2}\sum_{j\,(\neq i)=1}^{N}q_{ij}^a\,{\cal{F}}^b_{ij}[\{q\}] \Big{)}\,\Big{\rangle}\, .\label{TAU}\end{equation} This formula is in agreement with~\cite{I} and, once integrated over volume, with the expression that can be found in the classical papers of ref.~\cite{IK}.
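As an illustration of how eq.~(\ref{TAU}) can be evaluated in practice, the delta function may be replaced by spatial binning and the {\it ensemble} average by an average over simulation snapshots. The following Python sketch is our own schematic estimator, not a published implementation; it assumes a Lennard-Jones pair potential, a cubic box of side {\tt box}, and slab-shaped bins along the first coordinate, all of which are choices made purely for illustration.
\begin{verbatim}
import numpy as np

def local_stress(pos, mom, m, box, nbins, eps=1.0, sigma=1.0):
    """Single-snapshot estimate of the stress tensor of eq. (TAU) on
    slab bins along x. The delta function is replaced by histogram
    binning; eq. (TAU)'s ensemble average would be taken over many
    snapshots. Assumes a Lennard-Jones pair potential."""
    N, D = pos.shape
    tau = np.zeros((nbins, D, D))
    edges = np.linspace(0.0, box, nbins + 1)
    slab_vol = (box ** (D - 1)) * (edges[1] - edges[0])
    bins = np.clip(np.digitize(pos[:, 0], edges) - 1, 0, nbins - 1)
    for i in range(N):
        # kinetic contribution: p_i^a p_i^b / m at the bin of q_i
        tau[bins[i]] += np.outer(mom[i], mom[i]) / m
        # virial contribution: (1/2) sum_{j != i} q_ij^a F_ij^b
        for j in range(N):
            if j == i:
                continue
            qij = pos[i] - pos[j]
            r = np.linalg.norm(qij)
            # pair force on i due to j: F_ij = -(dU/dr) q_ij / r, eq. (F)
            dUdr = 4 * eps * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)
            Fij = -dUdr * qij / r
            tau[bins[i]] += 0.5 * np.outer(qij, Fij)
    return -tau / slab_vol  # overall minus sign as in eq. (TAU)
\end{verbatim}
The division by the slab volume turns the binned sums into densities, so that in the limit of many narrow bins and many snapshots the estimator converges to the local expression~(\ref{TAU}).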
In closing we observe that the approach we have developed allows us to answer the old question of whether a stress tensor obeying eq.~(\ref{EQC}) is unique or not~\cite{UNI}. The question arises because, from a purely geometrical point of view, one could imagine adding to whichever expression of $\tau^{ab}$ an arbitrary divergenceless, symmetric rank-two tensor, while still fulfilling eq.~(\ref{EQC}). However if, as we advocate in this paper, $\tau^{ab}$ is identified as the tensor which multiplies the deformation tensor, $\eta^{ab}$, in the formula which expresses the work done by the system under an infinitesimal local deformation (eq.~(\ref{FEW})), such a freedom does not exist anymore. \section{Conclusions} \label{sec:CONCL} In this note we have given a consistent derivation of the microscopic expression of the stress tensor of a body, which complies with the principles of Thermodynamics and takes properly into account the geometrical constraints existing between the particle displacement vector and the body deformation tensor. In agreement with what is known to happen in the case of a continuum system, we find that the possibility of defining a (local) stress tensor rests on the assumption that the interaction potential between the body elementary constituents (for the rest fully arbitrary) is ``short-range''. The discussion we give is totally general and holds in any {\it ensemble}, whatever boundary conditions are imposed on the system. Remarkably, the whole procedure goes through also in a quantum-mechanical setting~\cite{I}. A consequence of the line of reasoning we have presented above is that there is no room for any ambiguity in the expression of the stress tensor we derive, once $\tau^{ab}$ is identified with the tensor which multiplies the deformation tensor in the formula for the work done by the system under an infinitesimal local deformation. \vspace{.5cm} {\bf Acknowledgments} - We thank E. Presutti for a useful discussion. We also wish to thank E. Tamrod for his interest in our work and the organizers of the SIAM2008 Conference (http://www.siam.org/meetings/ms08/) where this investigation was started.
\section{Methods} \subsection{Solution to the inverse centrality problem.} The set of $N$ linear equations with $K$ variable weights, $\omega_1,\ldots,\omega_K$, in Eq.~\ref{eq1} can be rewritten in matrix form as: \begin{equation}\label{eq:sistema} B \bm{\omega} =\rho \mathbf{c}, \end{equation} where now $B$ is an $N\times K$ matrix of real numbers, and $\bm{\omega}\equiv \{\omega_1,\ldots,\omega_K\}$. Notice that the linear system in Eq.~\ref{eq:sistema} has solutions since the rank of $B$ equals $N<K$ (the equations are decoupled and each of the variables, $\omega_1,\ldots,\omega_K$, appears in one equation only), and the in-degree of all nodes is positive by definition. Hence, there always exists $\bm{\omega} \in \mathbb{R}^K$ such that Eq.~\ref{eq1} is satisfied. It is convenient to rewrite Eq.~\ref{eq:sistema} in a form that emphasises the dependence of matrix $B$ on $\mathbf{c}$. We choose to label the arcs as follows: $(i,l)$, $l=1\ldots k^{in}_i$ denotes the $l$-th arc entering node $i$, where $k^{in}_i$ is the in--degree of node $i$. Likewise, $S_{i,l}$ is the source of arc $(i,l)$, while $\omega_{i,l}$ is the corresponding weight. Using this notation, the $i$-th component of Eq.~\ref{eq:sistema} can be written as: \begin{equation} \sum_{l=1}^{k^{in}_i}\omega_{i,l}c_{S_{i,l}} = \rho c_i \label{eq:eq2} \end{equation} By direct computation, one positive solution of Eq.~\ref{eq:eq2} is given by \begin{equation} \omega_{i,1}=\omega_{i,2} = \cdots =\omega_{i,k^{in}_i} = \frac{\rho c_i}{\sum_{l=1}^{k^{in}_i} c_{S_{i,l}} } \end{equation} where $i=1\ldots N$, and by continuity there are infinitely many solutions such that the $\omega_{i,l}$ are all positive. In particular, if for node $i$ we have $k^{in}_i=1$, then the $i$-th equation of Eq.~\ref{eq:eq2} has a unique solution, while if $ k^{in}_i>1$, there are always infinitely many solutions depending on $k^{in}_i -1$ parameters. Summing up, Eq.~(\ref{eq:sistema}) has only one solution if all the node in-degrees are equal to one, while there are, in general, infinitely many solutions depending on $K-N$ parameters. Notice that $\rho$ can be different from $\rho_0$, meaning that it is also possible to set the value of the largest eigenvalue of the weighted graph. \subsection{Tuning a subset of the graph links.} Here, we show that it is not necessary to fix the weights of all the graph links in order to get an arbitrary centrality vector $\mathbf{c}>0$. In fact, given a subset of links $E'\subseteq E$ containing at least one incoming link for each node, it is sufficient to assign some positive weights $\tilde\omega(\ell')$ to each $\ell'\in E'$, while keeping $\omega(\ell)$ constant $\forall \ell\in E \setminus E'$, for instance all equal to $1$, such that the resulting weighted graph has eigenvector centrality equal to $\mathbf{c}$.
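Before proving this claim, we note that the uniform solution of the previous subsection is straightforward to implement. The following Python sketch is a purely illustrative aid (function names and the toy graph are ours, not released code); it assigns the weights of Eq.~\ref{eq:eq2} and verifies the eigenvector property.
\begin{verbatim}
import numpy as np

def inverse_centrality_weights(in_neighbors, c, rho=1.0):
    """in_neighbors[i] lists the sources of the arcs entering node i.
    Returns the uniform positive solution of Eq. (eq:eq2):
    omega_{i,l} = rho * c_i / sum_l c_{S_{i,l}}."""
    weights = {}
    for i, sources in enumerate(in_neighbors):
        denom = sum(c[s] for s in sources)  # > 0 since c > 0, k_i^in > 0
        for s in sources:
            weights[(s, i)] = rho * c[i] / denom
    return weights

# quick check on a toy 3-node graph
in_neighbors = [[2], [0, 2], [1]]
c = np.array([1.0, 2.0, 0.5])
w = inverse_centrality_weights(in_neighbors, c, rho=1.0)
W = np.zeros((3, 3))
for (s, i), wv in w.items():
    W[i, s] = wv
print(np.allclose(W @ c, c))  # True: c is an eigenvector, eigenvalue rho
\end{verbatim}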
Without loss of generality we can assume that the first $k^{c}_i>0$ incoming links of each node $i$ belong to $E'$, so that the components of Eq.~\ref{eq:eq2} can be written as: \begin{equation} \sum_{l=1}^{k^{c}_i}\omega_{i,l}c_{S_{i,l}} = \rho c_i - \sum_{l=k^{c}_i + 1}^{k^{in}_i}\omega_{i,l}c_{S_{i,l}},\quad i=1\ldots N \label{eq:eq3} \end{equation} Therefore, since $c_i>0$ for each $1\le i\le N$, there is a $\rho_0>0$ such that for every $\rho>\rho_0$ \begin{equation} \rho c_i- \sum_{l=k^{c}_i + 1}^{k^{in}_i}\omega_{i,l}c_{S_{i,l}}>0, \quad i=1\ldots N \end{equation} and hence, by a continuity argument similar to the one above, we can ensure that there are infinitely many positive solutions to Eq.~\ref{eq:eq3}. \subsection{Finding minimum controlling sets.} A \textit{controlling set} of graph $G$ is any set of nodes $C\subseteq V$ such that: \begin{equation} \displaystyle V = C\cup\left(\bigcup_{i\in C}\{j \in V:\enspace e_{ij}\in E\}\right). \label{eq:control_set} \end{equation} This means that, for each node $j$ in the graph, at least one of the two following conditions holds: {\em a}) $j \in C$, or {\em b}) $j$ is pointed to by at least one node in $C$. We use $|C|$ to denote the size of the controlling set, i.e. the number of nodes contained in $C$. Finding the \textit{minimum controlling set} $C^*$ of a graph $G$, i.e. a controlling set having minimal size, is equivalent to computing the so-called {\em domination number} of $G$. The domination number problem is a well known NP-hard problem in graph theory \cite{west}. Therefore, the size of the minimum controlling set can be determined exactly only for graphs with small $N$, such as those in Fig.~\ref{fig:real_social}. To investigate larger graphs we have used two greedy algorithms. The first algorithm, called Top--Down Controller Search (TDCS), works as follows (see also the sketch at the end of this section). We initially set $G_{t=0}=G$. We select the node $i_0$ with the maximum out-degree in $G_{t=0}$, and mark it as \textit{controller node}. Then, all the nodes in the out-neighbourhood of $i_0$ are marked as \textit{controlled} and are removed from $G_{t=0}$, together with $i_0$ itself. In this way, we obtain a new graph $G_{t=1}$, and we store the controller node $i_0$, together with the list of nodes controlled by $i_0$. Notice that removing a generic node $j$ from $G_{t=0}$ also implies that $G_{t=1}$ does not contain any of the links pointing to $j$ or originating from it. The same procedure is iteratively applied to $G_{t=1}$, $G_{t=2}$ and so on, until all the nodes of $G$ are either marked as controller or as controlled nodes. The algorithm produces a set $\overline{C} = \{i_0, i_1, i_2,\ldots\}$, with $|\overline{C}|\ge |C^*|$, which is a controlling set of $G$ by construction. The second algorithm is called Bottom--Up Controller Search (BUCS), and it works as follows. We set $G_{t=0}=G$ and consider the set $M(0)$ containing all the nodes in $G_{t=0}$ with minimum in--degree. For each node $i\in M(0)$, we consider the set of nodes pointing to $i$ and select from this set the node $m_i$ with the maximal out--degree. This node is marked as \textit{controller}. Then we obtain a new graph $G_{t=1}$ by removing from $G_{t=0}$ all the controller nodes $m_i$ for all $i\in M(0)$, together with all the nodes, marked as \textit{controlled}, pointed to by them. The same procedure is iteratively applied to $G_{t=1}$, $G_{t=2}$ and so on, until all the nodes of $G$ are either marked as controller or as controlled nodes.
If a graph $G_{t}$ contains isolated nodes, these are marked as \textit{controller} and removed from $G_{t}$. The algorithm finally produces a set $\overline{C} = \{i_0, i_1, i_2,\ldots\}$ which is a controlling set of $G$ by construction. We have verified that the controlling sets obtained by both TDCS and BUCS for each of the networks considered are much smaller than those obtained by randomly selecting the controlling nodes. Moreover, the set of controller nodes found by TDCS is in general different from that obtained on the same network by BUCS. The sizes of the two controlling sets obtained by the two algorithms also differ. In particular, we have noticed that in assortative (disassortative) networks the controlling set produced by TDCS is smaller (larger) than that produced by BUCS.
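For concreteness, the TDCS greedy loop can be sketched in a few lines of Python. This is a schematic re-implementation based on the description above, not the code used for the experiments; the input representation (a dict mapping each node to the set of nodes it points to) is our own choice.
\begin{verbatim}
def tdcs(out_neighbors):
    """Top-Down Controller Search: repeatedly pick the node with
    maximum out-degree in the residual graph, mark it as controller,
    and remove it together with its out-neighbourhood."""
    residual = {v: set(nbrs) for v, nbrs in out_neighbors.items()}
    controllers = []
    while residual:
        # node with maximum out-degree in the current residual graph
        i0 = max(residual, key=lambda v: len(residual[v]))
        controllers.append(i0)
        removed = residual[i0] | {i0}
        for v in removed:
            residual.pop(v, None)
        # links pointing to removed nodes disappear as well
        for v in residual:
            residual[v] -= removed
    return controllers
\end{verbatim}
Each iteration removes at least one node, so the loop terminates, and the returned list is a controlling set by construction (every removed node is either a controller or pointed to by one).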
\section{Introduction} \label{sec:introduction} The majority of textual web content is supplemented by multimedia information depicted in pictures, animations, audio, or videos -- for a good reason: it can help users to comprehend information more effectively and efficiently. In addition, some kinds of information can be expressed only by text and not by an image (e.g., a date like a person's birthday), or vice versa (e.g., the exact shape of a plant's leaf). Although multimodal information is omnipresent (for example, in web documents, videos, scientific papers, graphic novels), today's search engines and recommender systems do not exploit the full potential of multimodal information yet. When documents of different media types are retrieved to answer a user query, typically the results are displayed separately or sequentially, while semantic cross-modal relations are not exploited and remain hidden. One reason is that the automatic understanding and interpretation of (non-textual) visual or audio sources itself is difficult -- and it is even more difficult to model and understand the interplay of two different modalities (see Figure \ref{fig:divide}). Communication sciences and applied linguistics have been investigating the visual/verbal divide for many years (e.g., Barthes~\cite{barthes1977image}, Martinec and Salway~\cite{Martinec2005}, Bateman~\cite{bateman2017multimodality}). Although the semantic gap was identified as a fundamental problem in multimedia search nearly twenty years ago \cite{smeulders2000content}, insights and taxonomies from the field of visual communication science have been disregarded so far. However, we believe that insights from this field are very useful for multimodal retrieval research since they provide a new and differentiated perspective on image-text relations. \vspace{0.2cm} \begin{SCfigure}[1][!ht] \includegraphics[width=0.18\textwidth]{graphics/nike-gravity.pdf} \hfill \caption{An example of a complex message portrayed by an image-text pair elucidating the semantic gap between the textual information and the image content. (Source:~\cite{hussain2017automatic})} \label{fig:divide} \end{SCfigure} In this paper, we leverage taxonomies from visual communication research and derive a set of eight computable, semantic image-text relations for multimodal indexing and search. These image-text relations are systematically characterized by three metrics. Our contributions can be summarized as follows: (1) \textit{Modeling a set of semantic image-text relations}: Based on previous work in communication sciences, we derive a categorization of \textit{distinctive} semantic image-text classes for multimodal analysis and search. To derive a systematic characterization, we build upon previous work in multimedia retrieval, where the metrics cross-modal mutual information and semantic correlation have been suggested to describe the information gap between image and text \cite{henning2017estimating}. In addition, we introduce a third metric, the status relation, and show that these three metrics allow us to systematically characterize the eight classes of image-text relations. (2) \textit{Training data augmentation}: Since there is no sufficiently large dataset to train a deep learning system to predict the eight image-text classes, we outline how a comprehensive dataset can be automatically collected and augmented for this purpose.
(3) \textit{Automatic prediction of image-text classes}: Utilizing our new training dataset, we present a deep learning system to automatically classify these metrics. Two variations are realized and evaluated: (a) a "conventional" end-to-end approach for direct classification of an image-text class as well as (b) a "cascaded" architecture to estimate the different metrics separately and then infer the classes by combining the results. Experiments are conducted on a demanding, human-annotated testset. The remainder of the paper is organized as follows. Related work in the fields of communication sciences and information retrieval is discussed in Section 2. The derived categorization of image-text classes and their characterization by three dimensions are presented in Section 3. In Section 4, we propose a deep learning system to predict these metrics as well as the resulting image-text classes and describe our approach for automatic data collection and augmentation. Experimental results are presented in Section 5, while Section 6 concludes the paper and outlines areas of future work. \section{Related Work} \label{sec:relatedwork} \subsection{Multimedia information retrieval} Numerous publications in recent years deal with multimodal information in retrieval tasks. The general problem of reducing or bridging the semantic gap~\cite{smeulders2000content} between images and text is the main issue in cross-media retrieval~\cite{qi2018life, balaneshin2018deep, Mithun2018ACMMM, mithun2018learning, xu2018modal, joslyn2018cross}. Fan et al.~\cite{Fan2017} tackle this problem by modeling humans' visual and descriptive senses for an image through a multi-sensory fusion network. They aim to bridge the \textit{cognitive and semantic gap} by improving the comparability of heterogeneous media features and obtain good results for image-to-text and text-to-image retrieval. Liang et al.~\cite{Liang2016} propose a self-paced cross-modal subspace matching method by constructing a multimodal graph that preserves the intra-modality and inter-modality similarity. Another application is targeted by Mazloom et al.~\cite{Mazloom2016}, who extract a set of engagement parameters to predict the popularity of social media posts. This can be leveraged by companies to understand their customers and evaluate marketing campaigns. While the confidence in predicting basic emotions like happiness or sadness can be improved by multimodal features~\cite{xu2017multisentinet}, even more complex semantic concepts like sarcasm~\cite{Schifanella2016} or metaphors~\cite{Shutova2016} can be predicted. This is enabled by evaluating the textual cues in the context of the image, providing a new level of semantic richness. The attention-based text embeddings introduced by Bahdanau et al.~\cite{bahdanau2014neural} analyze textual information under consideration of a previously generated image embedding and improve tasks like document classification~\cite{yang2016hierarchical} and image caption generation~\cite{xu2015show, johnson2016densecap, xie2014cross, lan2017fluency}. Henning and Ewerth~\cite{henning2017estimating} propose two metrics to characterize image-text relations in a general manner: \textit{cross-modal mutual information} and \textit{semantic correlation}. They suggest an autoencoder with multimodal embeddings to learn these relations while minimizing the need for annotated training data. A prerequisite for using heterogeneous modalities in machine learning approaches is the encoding in a joint feature space.
The encoding might depend on the type of modality to encode, the number of training samples available, the type of classification to perform, and the desired interpretability of the models~\cite{baltruvsaitis2018multimodal}. One type of algorithm utilizes \textit{Multiple Kernel Learning}~\cite{bucak2014multiple, gonen2011multiple}. Application areas are multimodal affect recognition~\cite{poria2015deep, jaques2015multi}, event detection~\cite{yeh2012novel}, and Alzheimer's disease classification~\cite{liu2014multiple}. Deep neural networks can also be utilized to model multimodal embeddings. For instance, these systems can be used for the generation of image captions~\cite{karpathy2014deep}; Ramanishka et al.~\cite{Ramanishka2016} exploit audiovisual data and metadata, i.e., a video's domain, to generate coherent video descriptions "in the wild", using convolutional neural networks (CNN, ResNet~\cite{he2016deep}) for encoding visual data. Alternative network architectures are GoogleNet~\cite{szegedy2017inception} or DenseNet~\cite{huang2016densely}. \subsection{Communication sciences} The interpretation of multimodal information and the "visual/verbal divide" have been investigated in the field of visual communication and applied linguistics for many years. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{graphics/martinec-salway-status.png} \vspace{-0.3cm} \caption{Part of Martinec and Salway's taxonomy that distinguishes image-text relations based on status (simplified).} \label{fig:tax_martinec_salway} \end{figure} One direction of research in recent decades has dealt with the assignment of image-text pairs to distinct classes. In a pioneering work, Barthes~\cite{barthes1977image} discusses the respective roles and functions of text and images. He proposes a first taxonomy, which introduces different types of status relations denoting hierarchic relations between the modalities. In case of unequal status, the classes \textit{Illustration} and \textit{Anchorage} are distinguished; otherwise their relation is denoted as \textit{Relay}. Martinec and Salway~\cite{Martinec2005} extend Barthes' taxonomy and further divide the image-text pairs of \textit{equal} rank into a \textit{Complementary} and \textit{Independent} class, indicating that the information content is either intertwined or equivalent in both modalities. They combine it with Halliday's~\cite{halliday2013halliday} logico-semantic relations, which originally have been developed to distinguish text clauses. Martinec and Salway revised these grammatical categories to capture the specific logical relationships between text and image regardless of their \textit{status}. McCloud~\cite{mccloud1993understanding} focuses on comic books, whose particular characteristic is that image and text do not share information by means of depicted or mentioned concepts, although they have a strong semantic connection. McCloud denotes this category as \textit{Interdependent} and argues that 'pictures and words go hand in hand to convey an idea that neither could convey alone'. Other authors mention the case of negative correlations between the mentioned/visually depicted concepts (for instance, N\"oth~\cite{noth1995handbook} or van Leeuwen~\cite{van2005introducing}), denoting them \textit{Contradiction} or \textit{Contrast}, respectively.
Van Leeuwen states that they can be used intentionally, e.g., in magazine advertisements by choosing opposite colors or other formal features to draw attention to certain objects. \section{Semantic Image-Text Relations} \label{sec:image_text_relations} \begin{SCfigure*}[0.2][htbp] \includegraphics[width=0.85\textwidth]{graphics/Classes_Table_large.pdf} \caption{Overview of the proposed image-text classes and their potential use cases.} \label{fig:class_table} \end{SCfigure*} The discussion of related work reveals that the complex cross-modal interplay of image and text has not been systematically modeled and investigated yet from a computer science perspective. In this section, we derive a categorization of classes of semantic image-text relations which can be used for multimedia information retrieval and web search. This categorization is based on previous work in the fields of visual communication (sciences) and information retrieval. However, one drawback of taxonomies in communication science is that their level of detail sometimes makes it difficult to assign image-text pairs to a particular class, as criticized by Bateman \cite{bateman2014text}. First, we evaluate the image-text classes described in communication science literature according to their usefulness for information retrieval. As a point of departure, we consider Martinec and Salway's taxonomy in the status dimension (Fig. \ref{fig:tax_martinec_salway}). This yields the classes of image-text relations of \textit{Illustration}, \textit{Anchorage}, \textit{Complementary}, and \textit{Independent}. We disregard the class \textit{Independent} since it is very uncommon that both modalities exactly describe the same information. Furthermore, we introduce the class \textit{Interdependent} as suggested by McCloud, which in contrast to \textit{Complementary} consists of image-text pairs where the intended meaning cannot be gathered from either of them exclusively. While a number of categorizations do not consider negative semantic correlations at all, N\"oth~\cite{noth1995handbook}, van Leeuwen~\cite{van2005introducing}, and Henning and Ewerth~\cite{henning2017estimating} consider this aspect. We believe that it is important for information retrieval tasks to consider negative correlations as well, for instance, in order to identify less useful multimodal information, mistakes etc. Consequently, we introduce the classes \textit{Contrasting}, \textit{Bad Illustration}, and \textit{Bad Anchorage}, which are the negative counterparts of \textit{Complementary}, \textit{Illustration}, and \textit{Anchorage}. Finally, we consider the case when text and image are \textit{uncorrelated}. While one objective of our work is to derive meaningful, distinctive and comprehensible image-text classes, another contribution is their systematic characterization. For this purpose, we leverage the metrics \textit{cross-modal mutual information} (CMI) and \textit{semantic correlation} (SC)~\cite{henning2017estimating}. However, these two metrics are not sufficient to model a larger set of image-text classes. It stands out that the \textit{status} relation, originally introduced by Barthes \cite{barthes1977image}, is adopted by the majority of taxonomies established in the last four decades (e.g. \cite{Martinec2005, unsworth2007image}), implying that this relation is essential to describe an image-text pair. It portrays how two modalities can relate to one another in a hierarchical way reflecting their relative importance.
Either the text supports the image (\textit{Anchorage}), or the image supports the text (\textit{Illustration}), or both modalities contribute equally to the overall meaning (e.g., \textit{Complementary}, originally denoted by Barthes as \textit{Relay}). This encourages us to extend the two-dimensional feature space of CMI and SC with the \textit{status} dimension (\textit{STAT}). In the next section, we provide some definitions for the three metrics and subsequently infer a categorization of semantic image-text classes from them. Our goal is to reformulate and clarify the interrelations between visual and textual content in order to make them applicable for multimodal indexing and retrieval. An overview of the image-text classes and their mapping to the metrics, as well as possible use cases, is given in Figure \ref{fig:class_table}. \subsection{\textbf{Metrics for image-text relations}} \label{sec:metrics} \textbf{Concepts and entities:} The following definitions are related to concepts and entities in images and text. Generally, plenty of concepts and entities can be found in images, ranging from the main focus of interest (e.g., a person, a certain object, an event, a diagram) to barely visible or background details (e.g., a leaf of grass, a bird in the sky). Normally, the meaning of an image is related to the main objects in the foreground. When assessing relevant information in images, it is reasonable to regard these concepts and entities, which, however, adds a certain level of subjectivity in some cases; most of the time, though, the important entities can be easily determined. \textbf{Cross-modal mutual information (CMI)} \newline Depending on the (fraction of) mutual presence of concepts and entities in both image and text, the cross-modal mutual information ranges from $0$ (no overlap of depicted concepts) to $1$ (concepts in image and text overlap entirely). It is important to point out that CMI ignores a deeper semantic meaning, in contrast to \textit{semantic correlation}. If, for example, a small man with a blue shirt is shown in the image, while the text talks about a tall man with a red sweater, the CMI would still be positive due to the mutual concept "man". But since the description is confusing and hinders interpretation of the multimodal information, the semantic correlation (SC, see below) of this image-text pair would be negative. Image-text pairs with high CMI can be found in image captioning datasets, for instance. The images and their corresponding captions have a descriptive nature, that is, they have explicit representations in both modalities. In contrast, news articles or advertisements often have a rather loose connection to their associated images by means of mutual entities or concepts. The range of cross-modal mutual information (CMI) is $[0,1]$. \textbf{Semantic correlation (SC)} \newline The (intended) meaning of image and text can range from coherent (SC=$1$), over independent (SC=$0$) to contradictory (SC=$-1$). This refers to concepts and entities, descriptions and interpretation of symbols, metaphors, as well as to their relations to one another. Typically, an interpretation requires contextual information, knowledge, or experience and it cannot be derived exclusively from the entities in the text and the objects depicted in the image. Possible values range from $[-1,1]$, where a negative value indicates that the co-occurrence of an image and a text disturbs the comprehension of the multimodal content.
This is the case if a text refers to an object in an image that cannot be found there, or that has different attributes than described in the text. An observer might notice a contradiction and ask herself "Do image and text belong together at all, or were they placed jointly by mistake?". A positive score on the contrary suggests that both modalities share a semantic context or meaning. The third possible option is that there is no semantic correlation between entities in the image and the text, then $SC = 0$. \textbf{Status (STAT)} \newline Status describes the hierarchical relation between an image and text with respect to their relative importance. Either the image is "subordinate to the text" ($stat=T$), implying an exchangeable image which plays the minor role in conveying the overall message of the image-text pair, or the text is "subordinate to the image" ($stat=I$), usually characterizing text with additional information (e.g., a caption) for an image that is the center of attention. An \textit{equal status} ($stat=0$) describes the situation where image and text are equally important for the overall message. Images which are "subordinate to text" (class \textit{Illustration}) 'elucidate' or 'realize' the text. This is the case, if a text describes a general concept and the associated image shows a concrete example of that concept. Examples for \textit{Illustrations} can be found in textbooks and encyclopedias. On the contrary, in the class \textit{Anchorage} the text is "subordinate to the image". This is the case, if the text answers the question "What can be seen in this image?". It is common that direct references to objects in the image can be found and the readers are informed what they are looking at. This type of image-text pair can be found in newspapers or scientific documents, but also in image captioning data sets. The third possible state of a \textit{status relation} is "equal", which describes an image-text pair where both modalities contribute individually to the conveyed information. Also, either part contains details that the other one does not. According to Barthes~\cite{barthes1977image}, this class describes the situation where the information depicted in either modality is part of a more general message and together they elucidate information on a higher level that neither could do alone. \subsection{\textbf{Defining classes of image-text relations}} \label{sec:categorization} In this section, we show how the combination of our three metrics can be naturally mapped to distinctive image-text classes (see also Fig. \ref{fig:class_table}). For this purpose, we simplify the data value space for each dimension. The level of semantic correlation can be modeled by the interval $[-1,1]$. Henning and Ewerth~\cite{henning2017estimating} distinguish five levels of CMI and SC. In this work, we omit these intermediate levels since the general idea of positive, negative, and uncorrelated image-text pairs is sufficient for the task of assigning image-text pairs to distinct classes. Therefore, the possible states of semantic correlation (SC) are: $sc \in \left\{-1, 0, 1\right\}$. For a similar reason, finer levels for CMI are omitted, resulting in two possible states for $cmi \in \left\{0, 1\right\}$, which correspond to \textit{no overlap} and \textit{overlap}. Possible states of status are $stat \in \left\{T, 0, I\right\}$: \textit{image subordinate to text} ($stat=T$), \textit{equal status} ($stat=0$), and \textit{text subordinate to image} ($stat=I$).
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{graphics/taxonomy.pdf} \vspace{-0.3cm} \caption{Our categorization of image-text relations. Discarded subtrees are marked by an \textbf{X} for clarity. Please note that there are no hierarchical relations implied.} \label{fig:categorization} \vspace{-0.2cm} \end{figure} If approached naively, there are $2\times3\times3=18$ possible combinations of SC, CMI and STAT. A closer inspection reveals that (only) eight of these classes match with existing taxonomies in communication sciences, confirming the coherence of our analysis. The remaining ten classes can be discarded since they cannot occur in practice or do not make sense. The reasoning behind this is given after we have defined the eight classes that form the categorization. \textbf{Uncorrelated ($cmi=0, sc=0, stat=0$)}\\ This class contains image-text pairs that do not belong together in an obvious way. They neither share entities and concepts nor is there an interpretation for a semantic correlation (e.g., see Fig.~\ref{fig:example_uncorrelatedinterdependentcomplementary}, left). \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{graphics/example_uncorrelatedinterdependentcomplementary.pdf} \caption{Examples for the \textit{Uncorrelated} (left), \textit{Interdependent} (middle) and \textit{Complementary} (right) classes. (Sources: see Section \ref{sec:train-data-augment})} \label{fig:example_uncorrelatedinterdependentcomplementary} \end{figure} \textbf{Complementary ($cmi=1, sc=1, stat=0$)}\\ The class \textit{Complementary} comprises the classic interplay between visual and textual information, where both of them share information but also provide information that the other one does not. Neither of them is dependent on the other one and their status is equal. It is important to note that the amount of information is not necessarily the same in both modalities. The most significant factor is that the observer is still able to understand the key information provided by either of the modalities alone (Figure~\ref{fig:example_uncorrelatedinterdependentcomplementary}, right). The definitions of the next two classes will clarify that further. \textbf{Interdependent ($cmi=0, sc=1, stat=0$)}\\ This class includes image-text pairs that do not share entities or concepts by means of mutual information, but are related by a semantic context. As a result, their combination conveys a new meaning or interpretation which neither of the modalities could have achieved on its own. Such image-text pairs are prevalent in advertisements, where companies combine eye-catching images with funny slogans supported by metaphors or puns, without actually naming their product (Figure \ref{fig:example_uncorrelatedinterdependentcomplementary}, middle). Another genre that relies heavily on these \textit{interdependent} examples are comics or graphic novels, where speech bubbles and accompanying drawings are used to tell a story. Interdependent information is also prevalent in movies and TV material in the auditory and visual modalities. \textbf{Anchorage ($cmi=1, sc=1, stat=I$)}\\ On the contrary, the \textit{Anchorage} class is generally speaking an image description and acts as a supplement for an image. Barthes states that the role of the text in this class is to fix the interpretation of the visual information as intended by the author of the image-text pair \cite{barthes1977image}. It answers the question "What is it?" in a more or less detailed manner.
This is often necessary since the possible meaning or interpretation of an image can noticeably vary and the caption is provided to pinpoint the author's intention. Therefore, an \textit{Anchorage} can be a simple image caption, but also a longer text that elucidates the hidden meaning of a painting. It is similar to \textit{Complementary}, but the main difference is that in \textit{Anchorage} the text is subordinate to the image. \textbf{Illustration ($cmi=1, sc=1, stat=T$)}\\ The class \textit{Illustration} contains image-text pairs where the visual information is subordinate to the text and therefore has a lower \textit{status}. An instance of this class could be, for example, a text that describes a general concept while the accompanying image depicts a specific example. A distinctive feature of this class is that the image is replaceable by a very different image without rendering the constellation invalid. If the text is a definition of the term "mammal", it does not matter if the image shows an elephant, a mouse, or a dolphin. Each of these examples would be valid in this scenario. In general, the text is not dependent on the image to provide the intended information. \textbf{Contrasting ($cmi=1, sc=-1, stat=0$)} \textbf{Bad Illustration ($cmi=1, sc=-1, stat=T$)} \textbf{Bad Anchorage ($cmi=1, sc=-1, stat=I$)}\\ These three classes are the counterparts of \textit{Complementary, Illustration}, and \textit{Anchorage}: they share their primary features, but have a \textbf{negative SC} (see Fig.~\ref{fig:example_contrasting_bad_illustration_bad_anchorage}). In other words, the transfer of knowledge is impaired due to inconsistencies or contradictions when comparing image and text \cite{henning2017estimating}. In contrast to \textit{uncorrelated} image-text pairs, these classes share information and obviously belong together in a certain way, but particular details or characteristics are contradicting. For instance, a \textit{Bad Illustration} pair could consist of a textual description of a bird, whose most prominent feature is its colorful plumage, while the bird in the image is actually a grey pigeon. This can be confusing and an observer might be unsure if he is looking at the right image. Similarly, contradicting textual counterparts exist for each of these classes. In Section \ref{sec:train-data-augment}, we describe how we generate training samples for these classes. \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{graphics/example_contrasting_bad_illustration_bad_anchorage.pdf} \vspace{-0.3cm} \caption{Examples for the \textit{Contrasting} (left), \textit{Bad Illustration} (middle), and \textit{Bad Anchorage} (right) classes. (Sources: see Section \ref{sec:train-data-augment})} \label{fig:example_contrasting_bad_illustration_bad_anchorage} \vspace{-0.3cm} \end{figure} \subsection{\textbf{Impossible image-text relations}} The eight classes described above form the categorization as shown in Figure \ref{fig:categorization}. The following ten combinations of metrics were discarded, since they do not yield meaningful image-text pairs.\\ \textbf{Cases A: $cmi=0, sc=-1, stat=T,0,I$}\\ These three classes cannot exist: if the shared information is zero, then there is nothing that could contradict the other modality.
As soon as a textual description relates to a visual concept in the image, there is cross-modal mutual information and $CMI > 0$.\\ \textbf{Cases B: $cmi=0, sc=0, stat=T,I$}\\ The metric combination $cmi=0, sc=0, stat=0$ describes the class \textit{Uncorrelated} of image-text pairs which are neither in contextual nor visual relation to one another. Since it is not intuitive that a text is subordinate to an uncorrelated image or vice versa, these two classes are discarded.\\ \textbf{Cases C: $cmi=0, sc=1, stat=T,I$}\\ Image-text pairs in the class \textit{Interdependent} ($cmi=0, sc=1, stat=0$) are characterized by the fact that, even though they do not share any information, they still complement each other by conveying additional or new meaning. Due to the nature of this class a subordination of one modality to the other one is not plausible: neither of the conditions for the states \textit{image subordinate to text} and \textit{text subordinate to image} is fulfilled due to the lack of shared concepts and entities. Therefore, these two classes are discarded.\\ \textbf{Cases D: $cmi=1, sc=0, stat=T,0,I$}\\ As soon as there is an overlap of essential depicted concepts there has to be a minimum of semantic overlap. We consider entities as essential if they contribute to the overall information or meaning of the image-text pair. This excludes trivial background information such as the type of hat a person wears in an audience behind a politician giving a speech. The semantic correlation can be minor, but it would still correspond to $SC=1$ according to the definition above. Therefore, the combination $cmi=1, sc=0$ and the involved possible combinations of \textit{STAT} are discarded. \section{Predicting Image-Text Classes} \label{sec:classifier} In this section, we present our approach to automatically predict the introduced image-text metrics and classes. We propose a deep learning architecture that realizes a multimodal embedding for textual and graphical data. Deep neural networks achieve better results when they are trained with a large amount of data. However, for the addressed task no such dataset exists. Crowdsourcing is an alternative to avoid the time-consuming task of manually annotating training data on our own, but requires significant efforts to maintain the quality of annotations obtained in this way. Therefore, we follow two strategies to create a sufficiently large training set. First, we automatically collect image-text pairs from different open access Web sources. Second, we suggest a method for training data augmentation (Section \ref{sec:train-data-augment}) that allows us to also generate samples for the image-text classes that rarely occur on the Web, for instance, \textit{Bad Illustration}. We suggest two classifiers: a \textbf{"classic"} approach, which simply outputs the most likely image-text class, and a cascaded approach based on classifiers for the three metrics. The motivation for the latter is to divide the problem into three easier classification tasks. Their subsequent \textbf{"cascaded"} execution will still lead us to the desired output of image-text classes according to Figure~\ref{fig:categorization}. The deep learning architecture is explained in Section \ref{sec:cascade-deep}. \subsection{\textbf{Training data augmentation}} \label{sec:train-data-augment} The objective is to acquire a large training dataset of high quality image-text pairs with a minimum effort in manual labor.
On the one hand, there are classes like \textit{Complementary} or \textit{Anchorage} that are available from a multitude of sources and can therefore be easily crawled. Other classes like \textit{Uncorrelated} do not naturally occur on the Web, but can be generated with little effort. On the other hand, there are rare classes like \textit{Contrasting} or \textit{Bad Anchorage}. While they do exist and it is desirable to detect these image-text pairs as well (see Fig.~\ref{fig:class_table}), there is no abundant source of such examples that could be used to train a robust classifier. Only few datasets are publicly available that contain images and corresponding textual information which is not simply based on tags and keywords but also uses cohesive sentences. Two examples are the image captioning dataset MSCOCO~\cite{linMicrosoft2014} as well as the Visual Storytelling dataset (VIST~\cite{huang2016visual}). A large number of examples can be easily taken from these datasets, namely for the classes \textit{Uncorrelated}, \textit{Complementary}, and \textit{Anchorage}. Specifically, the underlying hierarchy of MSCOCO is exploited to ensure that two randomly picked examples are not semantically related to one another; the caption of one sample is then joined with the image of the other one to form \textit{Uncorrelated} samples. In this way, we gathered $60\,000$ \textbf{\textit{uncorrelated}} training samples. The VIST dataset has three types of captions for their five-image-stories. The first one, "Desc-in-Isolation", resembles the generic image-caption dataset and can be used to generate examples for the class \textbf{\textit{Anchorage}}. These short descriptions are similar to MSCOCO captions, but slightly longer, so we decide to use them. Around $62\,000$ examples have been generated this way. The pairs represent this class well, since they include textual descriptions of the visually depicted concepts without any low-level visual concepts or added interpretations. More examples could have been generated similarly, but we have to restrict the level of class imbalance. The second type of VIST captions, "Story-in-Sequence", is used to create \textbf{\textit{Complementary}} samples by concatenating the five captions of a story and pairing them randomly with one of the images of the same story. Using this procedure, we generated $33\,088$ examples. While there are certainly more possible constellations of \textit{complementary} content from a variety of sources, the various types of stories of this dataset give a solid basis. The same argumentation holds for the \textbf{\textit{Interdependent}} class. Admittedly, we had to manually label a set of $1\,007$ entries of Hussain et al.'s Internet Advertisements data set~\cite{hussain2017automatic} to generate these image-text pairs. While they exhibit the right type of image-text relations, the accompanying slogans (in the image) are not annotated separately and optical character recognition does not achieve high accuracy due to ornate fonts etc. Furthermore, some image-text pairs had to be removed, since some slogans specifically mention the product name. This contradicts the condition that there is no overlap between depicted concepts and textual description, i.e., \textit{cmi}$=0$. The \textbf{\textit{Illustration}} class is established by combining one random image for each concept of the ImageNet dataset~\cite{ILSVRC15} with the summary of the corresponding article of the English Wikipedia, in case it exists.
This nicely fits the nature of the class, since the Wikipedia summary often provides a definition including a short overview of a concept. An image of the ImageNet class with the same name as the article should be a replaceable example image of that concept. The three remaining classes \textbf{\textit{Contrasting, Bad Illustration}}, and \textbf{\textit{Bad Anchorage}} occur rarely and are hard to detect automatically. Therefore, it is not possible to automatically crawl a sufficient number of samples. To circumvent this problem, we suggest transforming the respective positive counterparts by replacing around $530$ keywords~\cite{website} (adjectives, directional words, colors) in the textual descriptions of the positive examples with antonyms and opposites to make them less comprehensible. For instance, "tall man standing in front of a green car" is transformed into "small woman standing behind a red car". While this does not completely break the semantic connection between image and text, it surely describes certain attributes incorrectly, which impairs the accurate understanding and subsequently justifies the label \textit{sc}$=-1$. This strategy allows us to transform a substantial amount of the "positive" image-text pairs into their negative counterparts. Finally, for all classes we truncated the text if it exceeded 10 sentences. In total, the dataset consists of $224\,856$ image-text pairs. Tables \ref{tab:dist_classes} and \ref{tab:dist_labels} give an overview of the data distribution, the first sorted by class and the second according to the three metric labels, which were also used in our experiments. \vspace{0.1cm} \begin{table}[htbp] \parbox{0.4\linewidth}{ \centering \begin{tabular}{|l | c |} \hline Class & \# Samples \\ \hline \textbf{Uncorrelated} & 60\,000 \\ \hline \textbf{Interdependent} & 1\,007 \\ \hline \textbf{Complementary} & 33\,088 \\ \hline \textbf{Illustration} & 5\,447 \\ \hline \textbf{Anchorage} & 62\,637 \\ \hline \textbf{Contrasting} & 31\,368 \\ \hline \textbf{Bad Illustration} & 4\,099 \\ \hline \textbf{Bad Anchorage} & 27\,210 \\ \hline \end{tabular} \centering \caption{Distribution of class labels in the generated dataset.} \label{tab:dist_classes} }\hfill \parbox{0.4\linewidth}{ \centering \begin{tabular}{| l | c |} \hline Metric label & \# Samples \\ \hline \textbf{STAT T} & 9\,546 \\ \hline \textbf{STAT 0} & 125\,463 \\ \hline \textbf{STAT I} & 89\,847 \\ \hline \textbf{SC -1} & 62\,677 \\ \hline \textbf{SC 0} & 60\,000 \\ \hline \textbf{SC 1} & 102\,179 \\ \hline \textbf{CMI 0} & 61\,007 \\ \hline \textbf{CMI 1} & 163\,849 \\ \hline \end{tabular} \centering \caption{Distribution of metric labels in the generated dataset.} \label{tab:dist_labels} } \vspace{-0.6cm} \end{table} \subsection{\textbf{Design of the deep classifiers}} \label{sec:cascade-deep} As mentioned above, we introduce two classification approaches: "classic" and "cascade". The advantage of the latter is that it is easier to maintain a good class balance of the samples, while each stage also poses an easier classification problem. For instance, the classes \textit{Contrasting}, \textit{Bad Illustration}, and \textit{Bad Anchorage} are jointly used to teach the neural network what negative semantic correlation looks like. This should make the training process more robust against overfitting and underfitting, but naturally also increases the training and evaluation time by a factor of three.
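The decision logic of the cascaded variant can be sketched as follows. This illustrative helper builds on the hypothetical lookup table of Section~\ref{sec:classifier}; the \texttt{predict} calls are placeholders for the trained metric classifiers, and the choice of the classifier order is motivated below.
\begin{verbatim}
def predict_cascade(image, text, cmi_model, sc_model, stat_model):
    # Evaluate the three metric classifiers and look up the image-text
    # class; invalid metric combinations yield the rejection class
    # "Undefined" (cf. the discussion of results).
    cmi = cmi_model.predict(image, text)    # 0 or 1
    sc = sc_model.predict(image, text)      # -1, 0, or 1
    stat = stat_model.predict(image, text)  # "T", "0", or "I"
    return IMAGE_TEXT_CLASSES.get((cmi, sc, stat), "Undefined")
\end{verbatim}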
Both methods follow the architecture shown in Figure \ref{fig:classifier}, but for "cascade" three networks have to be trained and subsequently applied to predict an image-text class. To encode the input image, the deep residual network "Inception-ResNet-v2"\,\cite{szegedy2017inception} is used, which is pre-trained on the dataset of the ImageNet challenge\,\cite{ILSVRC15}. To embed this model in our system, we remove all fully connected layers and extract the feature maps with an embedding size of $2048$ from the last convolutional layer. The text is encoded by a pre-trained model of the word2vec~\cite{mikolov2013distributed} successor fastText~\cite{Bojanowski2016}, which has the remarkable ability to produce semantically rich feature vectors even for unknown words. This is due to its skip-gram technique, which does not treat words as a whole but as n-grams, that is, as a sum of word parts. Thus, it enables the system to recognize a word or derived phrasings despite typing errors. FastText utilizes an embedding size of $300$ for each word, and we feed the word vectors into a bidirectional GRU (gated recurrent unit) inspired by Yang et al.~\cite{yang2016hierarchical}, which reads the sentence(s) forwards and backwards before concatenating the resulting feature vectors. In addition, an attention mechanism is incorporated through another convolutional layer, which reduces the image encoding to $300$ dimensions, matching the dimensionality of the word representations set by fastText. In this way, it is ensured that the neural network reads the textual information under consideration of the visual features, which forces it to interpret both feature types in unison. The final text embedding has a dimension of $1024$. After concatenating the image features (to get a global feature representation from the image, we apply average pooling to the aforementioned last convolutional layer) and the text features, four consecutive fully connected layers (dimensions: $1024$, $512$, $256$, $128$) form the classification part. The last layer has two outputs for \textit{CMI}, three outputs for \textit{SC} and \textit{STAT}, or eight outputs for the "classic" classifier, respectively. For the actual classification process in the cascade approach, the three resulting models have to be applied sequentially; in principle, the order is arbitrary. We select the order $CMI\Rightarrow SC\Rightarrow STAT$; the evaluations of the three classifiers yield the final assignment to one of the eight image-text classes (Figure~\ref{fig:categorization}). \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{graphics/cnn.pdf} \caption{General structure of the deep learning system with multimodal embedding. The last fully connected layer (FC) has $2$, $3$, or $8$ outputs depending on whether CMI (two levels), SC/STAT (three levels), or all eight image-text classes ("classic" approach) are classified.} \label{fig:classifier} \end{figure}
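To make the data flow of Figure~\ref{fig:classifier} concrete, the embedding architecture can be sketched in TensorFlow/Keras roughly as follows. The feature-map shape, the dense stand-in for the convolutional attention layer, and the GRU size are simplifying assumptions rather than the exact implementation.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS = 1500  # up to 30 sentences x 50 words (cf. Section 6)

def build_model(num_outputs):  # 2 (CMI), 3 (SC/STAT), or 8 ("classic")
    img_map = layers.Input(shape=(8, 8, 2048))    # feature maps (shape assumed)
    words = layers.Input(shape=(MAX_WORDS, 300))  # fastText word vectors

    # global image representation via average pooling
    img_vec = layers.GlobalAveragePooling2D()(img_map)

    # stand-in for the attention mechanism: reduce the image encoding
    # to 300 dimensions and present it alongside every word vector
    img_300 = layers.Dense(300, activation="tanh")(img_vec)
    img_seq = layers.RepeatVector(MAX_WORDS)(img_300)
    text_in = layers.Concatenate()([words, img_seq])

    # bidirectional GRU; concatenated states give the 1024-dim text embedding
    text_vec = layers.Bidirectional(layers.GRU(512))(text_in)

    x = layers.Concatenate()([img_vec, text_vec])
    for units in (1024, 512, 256, 128):
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(num_outputs, activation="softmax")(x)
    return tf.keras.Model(inputs=[img_map, words], outputs=out)
\end{verbatim}
\section{Experimental Evaluation} \label{sec:experiments} The dataset was split into a training and a test set, where the latter was manually labeled to generate high-quality labels. It initially contained $800$ image-text pairs, where for each of the eight classes $100$ examples were taken out of the automatically crawled and augmented data. The remaining $224\,056$ examples were used to train the four different models (three for the "cascade" classifier and one for the "classic" approach) for $100\,000$ iterations each with the TensorFlow framework~\cite{tensorflow2015}.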
The \textit{Adam optimizer} was used with its standard learning rate, and dropout rates of $0.3$ for the image embedding layer and $0.4$ for the text embedding layer were applied. Furthermore, a softmax cross-entropy loss and a batch size of $12$ were used on an NVIDIA Titan X. All images were rescaled to a size of $299\times299$ pixels and Szegedy et al.'s~\cite{szegedy2015going} image preprocessing techniques were applied. This includes random cropping of the image as well as random brightness, saturation, hue, and contrast distortions to avoid overfitting. In addition, we limit the length of the textual information to 50 words per sentence and 30 sentences per image-text pair. All "Inception-ResNet-v2" layers were pre-trained with the ILSVRC (ImageNet Large Scale Visual Recognition Competition) 2010\,\cite{ILSVRC15} dataset to reduce the training effort. The training and test data are publicly available at \url{https://doi.org/10.25835/0010577}. \subsection{Experimental results} \label{sec:results} To assure highly accurate ground-truth data for our test set, we asked three persons of our group (one of them a co-author) to manually annotate the $800$ image-text pairs. Each annotator received an instruction document containing short definitions of the three metrics (Section \ref{sec:metrics}), the categorization in Figure~\ref{fig:categorization}, and one example per image-text class (similar to Figures~\ref{fig:example_uncorrelatedinterdependentcomplementary}-\ref{fig:example_contrasting_bad_illustration_bad_anchorage}). The inter-coder agreement was evaluated using Krippendorff's alpha~\cite{krippendorff1970estimating} and yielded a value of $\alpha = 0.847$ (across all annotators, samples, and classes). A class label was assigned if the majority of annotators agreed on it for a sample. Besides the eight image-text classes, the annotators could also mark a sample as \textit{Unsure}, which denotes that an assignment was not possible. If \textit{Unsure} received the majority of votes, the sample was not considered for the test set. This only applied to two pairs, which reduced the size of the final test set to $798$. \vspace{0.2cm} \begin{table}[htbp] \resizebox{0.4\textwidth}{!}{ \begin{tabular}{| l | c | c | c | c |} \hline Class & Uncorr. & Interdep. & Compl. & Illustration\\ \hline Recall & $69.2\%$ & $97.6\%$ & $83.8\%$ & $83.7\%$ \\ \hline Precision & $98.7\%$ & $96.3\%$ & $88.0\%$ & $80.7\%$ \\ \hline \#Samples & $149$ & $100$ & $106$ & $95$ \\ \hline \hline Class & Anchorage & Contrasting & Bad Illu. & Bad Anch. \\ \hline Recall & $90.3\%$ & $89.0\%$ & $98.6\%$ & $91.9\%$ \\ \hline Precision & $87.3\%$ & $78.3\%$ & $69.0\%$ & $87.0\%$ \\ \hline \#Samples & $95$ & $87$ & $71$ & $95$ \\ \hline \end{tabular}} \caption{Comparison of the automatically generated labels with the annotations of the three annotators and the resulting number of samples per class in the test set.} \label{tab:annotation} \vspace{-0.4cm} \end{table}
\begin{SCtable*} \resizebox{0.78\textwidth}{!}{ {\begin{tabular}{| l | c | c | c | c | c | c | c | c | c |} \hline Class & Uncorrelated & Interdep. & Compl. & Illustration & Anchorage & Contrasting & Bad Illust. & Bad Anch. & Sum \\ \hline Uncorr. & \textbf{67} & 3 & 5 & 23 & 34 & 5 & 11 & 1 & 149\\ \hline Interd. & 0 & \textbf{94} & 0 & 0 & 5 & 0 & 0 & 1 & 100\\ \hline Compl. & 0 & 0 & \textbf{93} & 0 & 4 & 9 & 0 & 0 & 106\\ \hline Illus. & 0 & 0 & 0 & \textbf{84} & 0 & 0 & 11 & 0 & 95\\ \hline Anchor. & 2 & 2 & 0 & 2 & \textbf{83} & 0 & 0 & 6 & 95\\ \hline Contr. & 0 & 0 & 3 & 0 & 0 & \textbf{84} & 0 & 0 & 87\\ \hline Bad Illus. & 0 & 0 & 0 & 2 & 0 & 0 & \textbf{69} & 0 & 71\\ \hline Bad Anch. & 2 & 0 & 0 & 0 & 21 & 1 & 0 & \textbf{71} & 95\\ \hline \hline Precision & $94.4\%$ & $94.9\%$ & $92.1\%$ & $75.7\%$ & $56.5\%$ & $84.8\%$ & $75.8\%$ & $89.9\%$ & -\\ \hline Recall & $45.0\%$ & $94.0\%$ & $87.7\%$ & $88.4\%$ & $87.4\%$ & $96.5\%$ & $97.2\%$ & $74.7\%$ & -\\ \hline \end{tabular}}} \caption{Confusion matrix for the \mbox{"classic"} classifier on the test set of $798$ image-text pairs. The rows show the ground truth, while the columns show the predicted samples.}\label{tab:confusion_matrix_classic} \vspace{-0.2cm} \end{SCtable*} Comparing the human labels with the automatically generated labels allowed us to evaluate the quality of the data acquisition process. Therefore, we computed how well the automatic labels matched the human ground-truth labels (Table \ref{tab:annotation}). The low recall for the class \textit{Uncorrelated} indicates that there were uncorrelated samples in the other data sources that we exploited. The \textit{Bad Illustration} class has the lowest precision and was mostly confused with \textit{Illustration} and \textit{Uncorrelated}, that is, the human annotators considered the automatically "augmented" samples either as still valid or as uncorrelated. \vspace{0.2cm} \begin{table}[H] \resizebox{0.4\textwidth}{!}{ \begin{tabular}{| l | c | c | c || c | c |} \hline Classifier & CMI & SC & STAT & Cascade & Classic \\ \hline \textbf{Ours} & 90.3\% & 84.6\% & 83.8\% &\textbf{74.3\%} & \textbf{80.8\%}\\ \cite{henning2017estimating} & 68.8\% & 49.6\% & - & - & - \\ \hline \end{tabular}} \caption{Test set accuracy of the metric-specific classifiers and the two final classifiers after $75\,000$ iterations.} \label{tab:results_classifiers} \vspace{-0.3cm} \end{table} The (best) results for predicting image-text classes using the "classic" approach are presented in Table~\ref{tab:confusion_matrix_classic}. The overall results of our classifiers in predicting CMI, SC, and STAT as well as the image-text classes are presented in Table \ref{tab:results_classifiers}. Figure~\ref{fig:bar_chart} compares the results of the approaches "classic" and "cascade". The accuracy of the classifiers for CMI, SC, and STAT ranges from $83.8\%$ to $90.3\%$, while the two classification variants for the image-text classes achieved an accuracy of $74.3\%$ (\emph{cascade}) and $80.8\%$ (\emph{classic}). We also compared our method with \cite{henning2017estimating} by mapping their intermediate levels CMI$\,=0,1,2$ to $0$, CMI$\,=3,4$ to $1$, and SC$\,=\pm0.5$ to $\pm1$. \vspace{0.2cm} \begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{graphics/barchart.PNG} \vspace{-0.3cm} \caption{Results for both classifiers.} \label{fig:bar_chart} \vspace{-0.2cm} \end{figure} \subsection{Discussion of results} As shown in Table~\ref{tab:results_classifiers}, the \emph{classic} approach outperformed the \emph{cascade} method by about $6\%$ in terms of accuracy, indicating that a direct prediction of the image-text class is to be preferred over a combination of three separate classifiers. A reason might be that an overall judgment is more accurate than the individual ones, which each consider only one metric. This is also convenient, since an application would only need to train one classifier instead of three.
The class \textit{Uncorrelated} achieved the lowest recall, indicating that both classifiers often detected a connection (either in the SC or the CMI dimension), even though there was none. This might be due to the concept detector contained in Inception-ResNet-v2 focusing on negligible background elements. However, the high precision indicates that whenever the class was predicted, the prediction was almost always correct, in particular for the cascade classifier. The classes with positive SC are mainly confused with their negative counterparts, which is understandable since the difference between a positive and a negative SC is often caused by only a few keywords in the text, while image content, sentence length, and structure are identical. Considering this, the performance is still impressive. Another interesting observation can be reported regarding the cascade approach: the rejection class \textit{Undefined}, which is predicted if an invalid leaf of the categorization (the crosses in Figure~\ref{fig:categorization}) is reached, can be used to judge the quality of our categorization. In total, 10 out of 18 leaves represent such an invalid case, but only $27$ image-text pairs ($3.4\%$ of all test samples) were assigned to it. Thus, the distinction seems to be of high quality. This is due to the good results of the classifiers for the individual metrics (Table \ref{tab:results_classifiers}). \section{Conclusions and Future Work} \label{sec:conclusion} In this paper, we have introduced a set of eight semantic image-text classes and presented an approach to automatically predict them via a deep learning system that utilizes multimodal embeddings. We have leveraged previous research in communication sciences and shown how the image-text classes can be systematically characterized by the three metrics semantic correlation, cross-modal mutual information, and the status relation. Moreover, we have outlined how to gather a large training dataset for the eight classes in an (almost) automatic way by exploiting data augmentation techniques. This allowed us to train a deep learning framework with an appropriate multimodal embedding. The experimental results yielded an accuracy of $77\%$ for predicting the eight image-text classes, which demonstrates the feasibility of the proposed approach. We believe that our categorization and the automatic prediction are a solid basis to enable a multitude of possible applications in fields such as multimodal web content analysis and search, cross-modal retrieval, or search as learning. In the future, we will explore further semantic relations between visual and textual information in order to enhance the understanding of these complex relationships. More diverse datasets and data generation methods should be included such that every possible arrangement of different information sources is covered, e.g., scientific documents, mainstream media, etc. Finally, we will apply our approach to different search and retrieval scenarios. \begin{acks} Part of this work is financially supported by the Leibniz Association, Germany (Leibniz Competition 2018, funding line "Collaborative Excellence", project SALIENT [K68/2017]). \end{acks}
\section{Local functionals and evolutionary vector fields} Let us start with notions from the theory of graded spaces as they are given in Ref.~\cite{dorf}. A {\it grading} in a linear space $L$ is a decomposition of it into a direct sum of subspaces, with a special value of some function $p$ (grading function) assigned to all the elements of any subspace. Below the function $p$ takes its values in the set of all positive multi-indices $J=(j_1,\dots,j_n)$ and so, \[ L=\bigoplus\limits_{J=0}^{\infty} L^{\langle J\rangle}. \] Elements of each subspace are called homogeneous. A bilinear operation $x,y\mapsto x\circ y$, defined on $L$, is said to be {\it compatible with the grading} if the product of any homogeneous elements is also homogeneous, and if \[ p(x\circ y)=p(x)+p(y). \] Now we turn to concrete structures. The space of local functionals $\cal A$ has already been defined in I. Here we will call the expression given in Definition 2.1 of I the {\it canonical form of a local functional}. We formally extend that definition by allowing local functionals to be written as follows \[ F=\sum_{J=0}^{\infty}\int D_J\theta_{\Omega}f^{\langle J\rangle} \biggl(\phi_A(x), D_K\phi_A(x)\biggr)d^nx=\sum\int\theta^{(J)}f^{\langle J\rangle}, \] where in accordance with the previous definition only a finite number of terms is allowed. Here and below we simplify the notation for derivatives of $\theta$ and remove $\Omega$. Of course, any functional of such a form can be transformed to the form used in I through integration by parts \[ F=\int\theta_{\Omega}f= \int\limits_{\Omega}f, \] where \[ f=\sum(-1)^{|J|}D_Jf^{\langle J\rangle}. \] So, the formal integration by parts over the infinite space $\rm R^n$ evidently changes the grading. It will become clear below that in the general situation all bilinear operations are compatible both with the grading and with the formal integration by parts. So, the basic objects (local functionals etc.) are defined as equivalence classes modulo formal divergences (i.e., divergences of expressions containing $\theta$-factors), and the unique decomposition into homogeneous subspaces with a fixed grading function can be made only for representatives of these classes. We call expressions of the form \[ \psi=\sum\int\theta^{(J)} D_K\psi^{\langle J\rangle}_A\frac{\partial}{\partial\phi_A^{(K)}} \] the {\it evolutionary vector fields}. The value of an evolutionary vector field on a local functional is given by the expression \[ \psi F=\sum\int\theta^{(I+J)} D_K\psi^{\langle J\rangle}_A\frac{\partial f^{\langle I \rangle}}{\partial\phi_A^{(K)}}. \] In principle, this formula can be understood as a definition, but we can also interpret it as a consequence of the standard relation \[ \frac{\partial\phi_A(y)}{\partial\phi_B(x)}= \delta (x,y)\delta_{AB} \] and Rule 5.4 of I. It is a straightforward calculation to check that this operation is compatible with the formal integration by parts, i.e. \[ \psi{\rm Div}(f)={\rm Div}(\psi f), \] as it is in the standard formal variational calculus. This relation is, of course, valid for integrands.
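For illustration, consider a single field $\phi$ in one dimension, the local functional $F=\int\theta\,\frac{1}{2}\phi^2$ and the evolutionary vector field with the characteristic $\psi^{\langle 0\rangle}=\phi_x$. The above formula gives \[ \psi F=\int\theta\,\phi\,\phi_x=\int\theta\, D\Bigl(\frac{1}{2}\phi^2\Bigr), \] and the formal integration by parts transforms this into $-\int\theta^{(1)}\frac{1}{2}\phi^2$ modulo a formal divergence, which shows explicitly how the grading is shifted from $\theta$ to $\theta^{(1)}$.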
It is easy to see that the evolutionary vector field with the coefficients \[ \psi^{\langle J\rangle}_A=\sum \biggl( D_L\xi_B^{\langle I\rangle} \frac{\partial\eta_A^{\langle J-I\rangle }}{\partial \phi_B^{(L)}}- D_L\eta_B^{\langle I\rangle} \frac{\partial\xi_A^{\langle J-I\rangle}} {\partial \phi_B^{(L)}}\biggr) \] can be considered as the {\it commutator of the evolutionary vector fields} $\xi$ and $\eta$ \[ \psi F=[\xi,\eta]F=\xi(\eta F)-\eta(\xi F), \] with the Jacobi identity fulfilled for the commutator operation, and so these vector fields form a Lie algebra. \section{Differentials and functional forms} The {\it differential of a local functional} is simply its first variation \[ \mbox{\sf d} F=\sum\int\theta^{(J)} \frac{\partial f^{\langle J\rangle}}{\partial\phi_A^{(K)}}\delta\phi_A^{(K)}, \] where here and below $\delta\phi_A^{(K)}=D_K\delta\phi_A$. It can also be expressed through the Fr\'echet derivative (Definition 2.13 of I) or through the higher Eulerian operators (Definition 2.4 of I) \[ \mbox{\sf d} F=\sum\int\theta^{(J)}{f^{\langle J\rangle}}'(\delta\phi)= \sum\int\theta^{(J)} D_K\biggl( E^K_A(f^{\langle J\rangle})\delta\phi_A\biggr) . \] This differential is a special example of a functional 1-form. A general functional 1-form can be written as \[ \alpha = \sum\int\theta^{(J)}\alpha ^{\langle J\rangle}_{AK} \delta\phi_A^{(K)}. \] Of course, the coefficients are not unique, since we can perform formal integration by parts. Let us call the following expression the {\it canonical form of a functional 1-form} \[ \alpha=\sum\int\theta^{(J)}\alpha^{\langle J\rangle}_A\delta\phi_A. \] Analogously, we can define {\it functional $m$-forms} as integrals, or equivalence classes modulo formal divergences, of vertical forms \[ \alpha =\frac{1}{m!} \sum\int\theta^{(J)}\alpha^{\langle J\rangle}_{A_1K_1,\dots,A_mK_m}\delta \phi_{A_1}^{(K_1)}\wedge\dots\wedge\delta\phi_{A_m}^{(K_m)}. \] We define the {\it pairing} (or the {\it interior product}) of an evolutionary vector field and a 1-form as \begin{equation} \alpha (\xi)=\xi\inprod\alpha=\sum \int\theta^{(I+J)}\alpha ^{\langle J\rangle}_{AK}D_K\xi_A^{\langle I\rangle}. \label{eq:pairing} \end{equation} The interior product of an evolutionary vector field and a functional $m$-form will be \[ \xi\inprod\alpha= \frac{1}{m!}\sum(-1)^{i+1} \int\theta^{(J+I)}\alpha^{\langle J\rangle}_{A_1K_1,\dots,A_mK_m} D_{K_i}\xi_{A_i}^{\langle I\rangle}\delta \phi_{A_1}^{(K_1)}\wedge\dots \] \[ \dots\wedge\delta\phi_{A_{i-1}}^{(K_{i-1})} \wedge\delta\phi_{A_{i+1}}^{(K_{i+1})}\wedge\dots\wedge \delta\phi_{A_m}^{(K_m)}. \] Then the value of an $m$-form on $m$ evolutionary vector fields is defined by the formula \[ \alpha (\xi_1,\dots,\xi_m)=\xi_m\inprod\dots\xi_1\inprod\alpha. \] It can be checked by straightforward calculation that \[ {\rm Div}(\alpha) (\xi_1,\dots,\xi_m)= {\rm Div}(\alpha (\xi_1,\dots,\xi_m)). \] The {\it differential of an $m$-form}, given as \[ \mbox{\sf d}\alpha =\frac{1}{m!} \sum \int\theta^{(J)}\frac{\partial\alpha^{\langle J \rangle}_{A_1K_1,\dots,A_mK_m}} {\partial\phi_A^{(K)}}\delta\phi_A^{(K)}\wedge\delta \phi_{A_1}^{(K_1)}\wedge\dots \wedge\delta\phi_{A_m}^{(K_m)}, \] satisfies the standard properties \[ {\mbox{\sf d}}^2=0 \] and \[ \mbox{\sf d}\alpha(\xi_1,\dots,\xi_{m+1})= \sum\limits_i(-1)^{i+1}\xi_i\alpha(\xi_1,\dots, \hat\xi_i,\dots,\xi_{m+1})+ \] \[ +\sum\limits_{i<j}(-1)^{i+j}\alpha([\xi_i,\xi_j],\xi_1,\dots,\hat\xi_i,\dots, \hat\xi_j,\dots,\xi_{m+1}).
\] The {\it Lie derivative} of a functional form $\alpha$ along an evolutionary vector field $\xi$ can be introduced by the standard formula \[ L_{\xi}\alpha=\xi\inprod\mbox{\sf d}\alpha+\mbox{\sf d} \biggl(\xi\inprod\alpha\biggr). \] \section{Graded differential operators and their adjoints} We call linear matrix differential operators of the form \[ \hat I=\sum_{J\ge 0}\theta^{(J)} \sum_{N=0}^{N_{max}}I^{\langle J\rangle N}_{AB}D_N \] {\it graded differential operators}. Let us call the linear differential operator $\hat I^{\ast}$ the {\it adjoint} of $\hat I$ if for an arbitrary set of smooth functions $f_A$, $g_A$ \[ \sum\limits_{A,B}\int f_A\hat I_{AB}g_B= \sum\limits_{A,B}\int g_A\hat I^{\ast}_{AB}f_B. \] For the coefficients of the adjoint operator we can derive the expression \begin{equation} I^{\ast\langle J\rangle M}_{AB}=\sum\limits_{K=0}^{K_{max}} \sum\limits_{L=0}^{min(K,J)} (-1)^{|K|}{K\choose L}{K-L\choose M} D_{K-L-M}I^{\langle J-L\rangle K}_{BA}.\label{eq:adj} \end{equation} It is easy to check that the relation \[ \hat I(x)\delta(x,y)=\hat I^{\ast}(y)\delta(x,y) \] follows from Rule 4.2 of I. For example, we have \begin{equation} \biggl(\frac{\partial}{\partial x^i}+\frac{\partial}{\partial y^i}\biggr) \delta (x,y)=-\theta^{(i)}\delta (x,y).\label{eq:delta} \end{equation} In one of our previous publications \cite{sol3} we tried to connect the appearance of surface terms in Poisson brackets with the standard manipulations with the $\delta$-function. The ansatz used there for the above simplest example coincided with (\ref{eq:delta}) up to the sign. The reason for the difference lay in the different choice made there in place of Rule 4.2 of I. That ansatz led us to the standard Poisson brackets, which were not appropriate for boundary problems. Operators satisfying the relation \[ \hat I^{\ast}=-\hat I \] will be called {\it skew-adjoint}. With their help it is possible to express 2-forms (and also 2-vectors, to be defined below) in the canonical form \[ \alpha=\frac{1}{2}\sum\limits_{A,B}\int\delta\phi_A\wedge\hat I_{AB} \delta\phi_B. \] It is clear that we can consider representations of functional forms as decompositions over the basis derived as a result of the tensor product of $\delta\phi_A$, with the totally antisymmetric multilinear operators \[ \hat\alpha=\sum\theta^{(J)}\alpha^{\langle J\rangle}_{A_1K_1,\dots,A_mK_m} \biggl( D_{K_1}\cdot,\dots,D_{K_m}\cdot\biggr) \] as coefficients of these decompositions. \section{Multi-vectors, mixed tensors and Schouten-Nijenhuis bracket} Let us introduce the dual basis to $\vert\delta\phi_A\rangle$ by the relation \begin{equation} \left\langle\frac{\delta}{\delta\phi_B(y)},\delta\phi_A(x)\right\rangle =\delta_{AB}\delta(x,y) \label{eq:dual} \end{equation} and construct by means of the tensor product a basis \[ \frac{\delta}{\delta\phi_{B_1}(y)}\otimes\frac{\delta}{\delta\phi_{B_2}(y)} \otimes\dots\otimes\frac{\delta}{\delta\phi_{B_m}(y)}. \] Then, by using the totally antisymmetric multilinear operators described in the previous section, we can define {\it functional $m$-vectors} (or {\it multi-vectors}) \[ \psi=\frac{1}{m!}\sum\int\theta^{(J)} \psi^{\langle J\rangle}_{B_1L_1,\dots,B_mL_m}D_{L_1} \frac{\delta}{\delta\phi_{B_1}}\wedge\dots\wedge D_{L_m} \frac{\delta}{\delta\phi_{B_m}}. \] Here a natural question on the relation between evolutionary vector fields and 1-vectors arises. Evidently, evolutionary vector fields lose their form when being integrated by parts, whereas 1-vectors conserve it.
Let us perform a partial integration in the expression of a general evolutionary vector field \[ \xi=\sum\int\theta^{(J)} D_K\xi^{\langle J\rangle}_A\frac{\partial}{\partial\phi_A^{(K)}} \] by removing $D_K$ from $\xi^{\langle J\rangle}_A$; then we get \[ \xi=\sum \int\xi^{\langle J\rangle}_A\theta^{(J+L)} (-1)^{|K|}{K \choose L}D_{K-L}\frac{\partial}{\partial\phi_A^{(K)}}. \] It is easy to see that by using Rule 5.4 from I in the backward direction we can write \[ \xi=\sum\int\bigl[ \theta^{(J)}\xi_A^{\langle J\rangle}\bigr]\bigl[ \theta^{(L)}(-1)^{|L|}E^L_A\bigr]=\sum\int\theta^{(J)}\xi^{\langle J\rangle}_A \frac{\delta}{\delta\phi_A}, \] where the higher Eulerian operators and the full variational derivative (Definition 5.1 of I) are consequently used. Therefore, we have proved the following statement. {\bf Statement 5.1} {\it There is a one-to-one correspondence between evolutionary vector fields and functional 1-vectors. The coefficients $\xi_A^{\langle J \rangle}$ of the 1-vector in canonical form are equal to the characteristic of the evolutionary vector field.} It is not difficult to show that we can deduce the pairing (interior product) of 1-forms and 1-vectors and that this pairing preserves this identification. Indeed, the definition of the dual basis (\ref{eq:dual}) and Rules 4.2, 5.4 of I permit us to derive that \[ \alpha(\xi)=\xi\inprod\alpha=\sum\int \int\theta^{(I)}(x) \theta^{(J)}(y)\alpha^{\langle I\rangle}_{AK}(x)\xi^{\langle J\rangle }_{BL}(y) \left\langle D_L\frac{\delta}{\delta\phi_B(y)},D_K\delta\phi_A(x)\right \rangle= \] \[ =\sum\int\theta^{(I+J)}D_L\alpha^{\langle I\rangle}_{AK} D_K\xi^{\langle J\rangle}_{AL}= \sum\int\theta^{(I+J)}{\rm Tr}(\alpha^{\langle I\rangle} \xi^{\langle J\rangle}), \] and when the 1-vector is in the canonical form (only the $L=0$ term is nonzero) this result coincides with Eq.(\ref{eq:pairing}). This formula for the pairing will also be exploited below for the interior product of 1-vectors and $m$-forms or of 1-forms and $m$-vectors. Its importance comes from the fact that it is invariant under the formal partial integration both in forms and in vectors, i.e., \[ {\rm Div}(\alpha)(\xi)={\rm Div}(\alpha(\xi))=\alpha({\rm Div}(\xi)). \] Evidently, it is the trace construction for the convolution of differential operators (as coefficients of tensor objects in the proposed basis) that guarantees this invariance. The interior product of a 1-vector with an $m$-form and, analogously, of a 1-form with an $m$-vector is defined as \[ \xi\inprod\alpha=\frac{1}{m!}\sum (-1)^{(i+1)}\int \theta^{(I+J)}D_{K_i}\xi^{\langle I\rangle}_{A_iL}D_L \biggl(\alpha^{\langle J\rangle}_{A_1K_1,\dots, A_mK_m}\delta\phi_{A_1}^{(K_1)}\wedge\dots \] \[ \dots\wedge\delta\phi_{A_{i-1}}^{(K_{i-1})}\wedge \delta\phi_{A_{i+1}}^{(K_{i+1})}\wedge\dots \wedge\delta\phi_{A_m}^{(K_m)}\biggr). \] Then we can also define the value of an $m$-form on $m$ 1-vectors (or, analogously, of an $m$-vector on $m$ 1-forms) \[ \alpha (\xi_1,\dots,\xi_m)= \xi_m\inprod\dots\xi_1\inprod\alpha= \sum\int\theta^{(J+I_1+\dots+I_m)}{\rm Tr} \biggl( \alpha^{\langle J\rangle} \xi_1^{\langle I_1\rangle}\cdots\xi_m^{\langle I_m\rangle}\biggr), \] where each entry of the multilinear operator $\alpha$ acts only on one $\xi$, whereas each derivation of the operator $\xi$ acts on the product of $\alpha$ and all the rest of the $\xi$'s.
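As a simple consistency check, take again a single field $\phi$ in one dimension, the 1-form $\alpha=\mbox{\sf d} F=\int\theta\,\phi\,\delta\phi$ generated by $F=\int\theta\,\frac{1}{2}\phi^2$, and the 1-vector $\xi=\int\theta\,\phi_x\,\frac{\delta}{\delta\phi}$, which by Statement 5.1 corresponds to the evolutionary vector field with the characteristic $\phi_x$. Both Eq.(\ref{eq:pairing}) and the trace formula give \[ \alpha(\xi)=\int\theta\,\phi\,\phi_x=\xi F, \] in accordance with the example given for the evolutionary vector fields above.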
It is possible to define the {\it differential of an $m$-vector} \[ \mbox{\sf d}\psi =\frac{1}{m!}\sum\int\theta^{(J)} \frac{\partial\psi^{\langle J\rangle}_ {A_1K_1,\dots,A_mK_m}}{\partial\phi_B^{(L)}}\delta\phi_B^{(L)}D_{K_1} \frac{\delta}{\delta\phi_{A_1}}\wedge\dots\wedge D_{K_m}\frac{\delta}{\delta\phi_{A_m}}, \] as an example of a mixed ${m \choose 1}$ object. Evidently, ${\mbox{\sf d}}^2\psi=0$. With the help of the previous constructions we can define the {\it Schouten-Nijenhuis bracket} \[ \bigl[ \xi,\eta\bigr]_{SN} =\mbox{\sf d}\xi\inprod\eta + (-1)^{pq}\mbox{\sf d}\eta\inprod\xi \] for two multi-vectors of orders $p$ and $q$. The result of this operation is a $(p+q-1)$-vector, and it is analogous to the Schouten-Nijenhuis bracket in tensor analysis \cite{nij}. Its use in the formal variational calculus is described in Refs.\cite{olv},\cite{dorf}. However, in the cited references this bracket is usually defined for operators. We can recommend Ref.\cite{olv2} as an interesting source for the treatment of the Schouten-Nijenhuis bracket of multi-vectors. Our construction of this bracket guarantees compatibility with the equivalence modulo divergences \[ \bigl[ {\rm Div}(\xi),\eta\bigr]_{SN} ={\rm Div}\bigl[ \xi,\eta\bigr]_{SN}= \bigl[ \xi, {\rm Div}(\eta)\bigr]_{SN}. \] \medskip {\bf Statement 5.2} {\it The Schouten-Nijenhuis bracket of functional 1-vectors coincides, up to a sign, with the commutator of the corresponding evolutionary vector fields.} \medskip {\it Proof.} Without loss of generality, let us take the two 1-vectors in canonical form \[ \xi=\sum\int\theta^{(J)}\xi^{\langle J\rangle}_A \frac{\delta}{\delta\phi_A},\qquad \eta=\sum\int\theta^{(K)}\eta^{\langle K\rangle}_B\frac{\delta}{\delta\phi_B} \] and compute \[ \bigl[ \xi,\eta\bigr]_{SN}=\mbox{\sf d}\xi\inprod\eta -\mbox{\sf d}\eta\inprod\xi. \] We have \[ \mbox{\sf d}\xi=\sum\int\theta^{(J)}{\xi^{\langle J\rangle}_A}'(\delta\phi)\frac{\delta}{\delta \phi_A}=\sum\int\theta^{(J)}\frac{\partial\xi^{\langle J\rangle}_A}{\partial\phi^{(L)} _C}\delta\phi_C^{(L)}\frac{\delta}{\delta\phi_A}, \] and \[ \mbox{\sf d}\xi\inprod\eta=-\sum\int\theta^{(J+K)}\frac{\partial \xi_A^{\langle J\rangle}} {\partial\phi_B^{(L)}}D_L\eta_B^{\langle K\rangle}\frac{\delta}{\delta\phi_A}. \] Therefore, we obtain \[ \bigl[ \xi,\eta\bigr]_{SN}=-\sum\int\theta^{(J+K)}\biggl( D_L\eta_B^{\langle K\rangle}\frac{\partial\xi_A^{\langle J\rangle}} {\partial\phi_B^{(L)}}- D_L\xi_B^{\langle K\rangle}\frac{\partial\eta_A^{\langle J\rangle}} {\partial\phi_B^{(L)}} \biggr) \frac{\delta}{\delta\phi_A}=-[\xi,\eta], \] and the proof is completed. \medskip {\bf Statement 5.3} (Olver's Lemma \cite{olv}) {\it The Schouten-Nijenhuis bracket for two bivectors can be expressed in the form} \begin{equation} \bigl[ \xi,\psi\bigr]_{SN}=-\frac{1}{2}\sum\int\ \xi\wedge\hat I'(\hat K\xi)\wedge\xi -\frac{1}{2}\sum\int\ \xi\wedge\hat K'(\hat I\xi)\wedge\xi, \label{eq:prolong} \end{equation} {\it where the two differential operators $\hat I$, $\hat K$ are the coefficients of the bivectors in their canonical form.} \medskip {\it Proof.} Let us consider the Schouten-Nijenhuis bracket for the two bivectors and without loss of generality take them in the canonical form \[ \chi=\frac{1}{2}\sum\int\theta^{(L)}\xi_A\wedge I^{\langle L\rangle N}_{AB} D_N\xi_B, \] \[ \psi=\frac{1}{2}\sum\int\theta^{(M)}\xi_C\wedge K^{\langle M\rangle P}_{CD} D_P\xi_D, \] where $\xi_A={\delta}/{\delta\phi_A}$ and the operators $\hat I$, $\hat K$ are skew-adjoint.
Then we have \[ \mbox{\sf d}\chi=\frac{1}{2}\sum\int\theta^{(L)}\frac{\partial I^{\langle L\rangle N}_{AB}} {\partial\phi_E^{(J)}}\delta\phi_E^{(J)}\xi_A\wedge D_N\xi_B \] and \[ \mbox{\sf d}\chi\inprod\psi=\frac{1}{4}\sum\int\theta^{(L+M)} \frac{\partial I^{\langle L\rangle N}_{AB}}{\partial\phi_C^{(J)}}D_J \biggl( K^{\langle M\rangle P}_{CD}D_P\xi_D\biggr)\wedge \xi_A\wedge D_N\xi_B- \] \[ -\frac{1}{4}\sum\int\theta^{(L+M)}D_P\biggl( \frac{\partial I^{\langle L\rangle N}_{AB}} {\partial\phi_D^{(J)}}\xi_A\wedge D_N\xi_B\biggr) \wedge D_J(\xi_C K^{\langle M\rangle P}_{CD}). \] Now let us integrate by parts in the second term \[ \mbox{\sf d}\chi\inprod\psi=-\frac{1}{4}\sum\int\theta^{(L+M)}\xi_A\wedge (I^{\langle L\rangle N}_{AB})'\biggl(\hat K^{\langle M\rangle} \xi\biggr)\wedge D_N\xi_B- \] \[ -\frac{1}{4}\sum\int\theta^{(L+M+Q)}(-1)^{|P|}{P\choose Q} \frac{\partial I^{\langle L\rangle N}_{AB}} {\partial\phi_D^{(J)}}\xi_A\wedge D_N\xi_B \wedge D_{J+P-Q}(\xi_C K^{\langle M\rangle P}_{CD}). \] Finally, we change the order of the factors under the wedge product in the second term, make the replacement $M\rightarrow M-Q$, and organize the whole expression in the form \[ \mbox{\sf d}\chi\inprod\psi=-\frac{1}{4}\sum\int\theta^{(L+M)}\xi_A\wedge (I^{\langle L\rangle N}_{AB})'_C\Biggl(\hat K^{\langle M\rangle}_{CD}\xi_D- \] \[ -(-1)^{|P|}{P\choose Q}{P-Q\choose R} D_{P-Q-R}K^{\langle M-Q\rangle P}_{CD}D_R\xi_C \Biggr)\wedge D_N\xi_B. \] Keeping in mind the definition of the adjoint operator (\ref{eq:adj}), we can represent the final result of the calculation as follows, \[ \bigl[ \chi,\psi\bigr]_{SN}=-\frac{1}{2}\sum\int\theta^{(L+M)} \xi\wedge\biggl((\hat I^{\langle L\rangle})'(\hat K^{\langle M\rangle}\xi) -(\hat K^{\langle M\rangle})'(\hat I^{\langle L\rangle}\xi)\biggr)\wedge\xi, \] thus supporting, in this extended formulation, the method proposed in Ref.~\cite{olv} for testing the Jacobi identity (see Section 7). \section{Poisson brackets and Hamiltonian vector fields} Let us call a bivector \[ \Psi=\frac{1}{2}\sum\int\frac{\delta}{\delta\phi_A}\wedge\hat I_{AB} \frac{\delta}{\delta\phi_B}, \] formed with the help of the graded skew-adjoint differential operator \[ \hat I_{AB}=\sum \theta^{(L)}I^{\langle L\rangle N}_{AB}D_N, \] the {\it Poisson bivector} if \[ \bigl[ \Psi,\Psi\bigr]_{SN} =0. \] The operator $\hat I_{AB}$ is then called the {\it Hamiltonian operator}. We call the value of the Poisson bivector on the differentials of two functionals $F$ and $G$ \[ \{ F,G\} = \Psi (\mbox{\sf d} F,\mbox{\sf d} G)=\mbox{\sf d} G\inprod\mbox{\sf d} F\inprod\Psi \] the {\it Poisson bracket} of these functionals. The explicit form of the Poisson brackets can easily be obtained. It depends on the explicit form of the functional differential, which can be changed by partial integration. Of course, all the possible forms are equivalent. Taking the extreme cases, we get an expression through Fr\'echet derivatives \begin{equation} \{ F,G \} = \sum\int\theta^{(J)} {\rm Tr}\biggl( f'_A\hat I^{\langle J\rangle}_{AB} g'_B \biggr) \label{eq:brack1} \end{equation} or through higher Eulerian operators \begin{equation} \{ F,G \} = \sum\int\theta^{(J)} D_{P+Q}\biggl( E^P_A(f)\hat I^{\langle J\rangle}_{AB} E^Q_B(g) \biggr).\label{eq:brack2} \end{equation} \medskip {\bf Theorem 6.1} {\it The Poisson bracket defined above satisfies} Definition 2.3 {\it of} I.
\medskip {\it Proof.} The equivalence of these definitions follows from three facts: 1) from the previous formulas (\ref{eq:brack1}), (\ref{eq:brack2}) it is clear that $\{ F,G \}$ is a local functional, 2) the antisymmetry of $\{ F,G \}$ is evident, and 3) the Jacobi identity is equivalent to the Poisson bivector property (to be proved in Section 7). \medskip The result of the interior product of the differential of a local functional $H$ with the Poisson bivector will (up to the sign) be called the {\it Hamiltonian vector field} (or the {\it Hamiltonian 1-vector}) \[ \hat I\mbox{\sf d} H=-\mbox{\sf d} H\inprod\Psi \] corresponding to the Hamiltonian $H$. Evidently, the standard relations hold \[ \{ F,H\} = \mbox{\sf d} F(\hat I \mbox{\sf d} H)=(\hat I \mbox{\sf d} H)F. \] \medskip {\bf Theorem 6.2} {\it The Hamiltonian vector field corresponding to the Poisson bracket of the functionals $F$ and $H$ coincides up to the sign with the commutator of the Hamiltonian vector fields corresponding to these functionals.} \medskip {\it Proof.} Consider the value of the commutator of the Hamiltonian vector fields $\hat I\mbox{\sf d} F$ and $\hat I\mbox{\sf d} H$ on an arbitrary functional $G$ \[ [\hat I\mbox{\sf d} F, \hat I\mbox{\sf d} H]G=\hat I\mbox{\sf d} F(\hat I\mbox{\sf d} H(G))-\hat I\mbox{\sf d} H(\hat I\mbox{\sf d} F(G)) = \] \[ =\hat I\mbox{\sf d} F(\{G,H\})-\hat I\mbox{\sf d} H(\{G,F\})=\{\{G,H\},F\}-\{\{G,F\},H\}= \] \[ =-\{G,\{F,H\}\}=-\hat I\mbox{\sf d}\{F,H\}(G), \] where we have used the Jacobi identity and the antisymmetry of the Poisson bracket. Due to the arbitrariness of $G$ the proof is completed. \medskip {\it Example 6.3} Let us consider the first structure \[ \{ u(x),u(y)\} =\frac{1}{2}(D_x-D_y)\delta(x,y) \] of the Korteweg-de Vries equation (Example 7.6 of Ref.~\cite{olv}) \[ u_t=u_{xxx} +uu_x. \] We construct the adjoint graded operator to $\theta D$ according to Eq.(\ref{eq:adj}) \[ (\theta D)^{\ast}=-\theta D -D\theta \] and the skew-adjoint operator is \[ \hat I=\frac{1}{2}\biggl(\theta D- (\theta D)^{\ast}\biggr)=\theta D + \frac{1}{2}D\theta. \] The Poisson bivector has the form \[ \Psi=\frac{1}{2}\int\theta\biggl( \frac{\delta}{\delta u}\wedge D\frac{\delta}{\delta u}\biggr). \] The differential of a local functional $H$, which for simplicity is written in the canonical form \[ H=\int\theta h, \] is equal to \[ \mbox{\sf d} H=\int\theta h'(\delta u)=\sum_{k=0}^{\infty} \int\theta^{(k)}(-1)^kE^k(h)\delta u, \] where the Fr\'echet derivative or the higher Eulerian operators can be used. Therefore, the Hamiltonian vector field generated by $H$ is \[ \hat I\mbox{\sf d} H=-\mbox{\sf d} H\inprod\Psi= -\frac{1}{2}\int\theta\biggl[ h'\bigl( D\frac{\delta}{\delta u} \bigr) - Dh'\bigl( \frac{\delta}{\delta u}\bigr)\biggr] , \] or \[ -\frac{1}{2}\sum\int\theta^{(k)}(-1)^k\biggl[ E^k(h)D- DE^k(h)\biggr]\frac{\delta}{\delta u}, \] or also \[ -\frac{1}{2}\sum\int\theta^{(k)}(-1)^kD_i\biggl[ E^k(h)D- DE^k(h)\biggr]\frac{\partial}{\partial u^{(i)}}. \] The value of this vector field on another functional $F$ coincides with the Poisson bracket \[ -\mbox{\sf d} F\inprod\mbox{\sf d} H\inprod\Psi=\{ F,H\}= \frac{1}{2}\sum\int\theta^{(k+l)}(-1)^{k+l} \biggl( E^k(f)DE^l(h)-E^k(h)DE^l(f)\biggr). \] \section{Proof of Jacobi identity} In this section we will prove that the Jacobi identity for the Poisson bracket is fulfilled if and only if the Schouten-Nijenhuis bracket of the corresponding Poisson bivector with itself is equal to zero. This will complete the proof of Theorem 6.1.
Let us use one of the possible forms of the Poisson brackets given in the Appendix of I \[ \{ F,G\} =\frac{1}{2}\sum\int\theta^{(J)}{\rm Tr}\biggl( f'(\hat I^{\langle J\rangle}g') -g'(\hat I^{\langle J\rangle}f')\biggr), \] where the differential operator $\hat I$ is not supposed to be skew-adjoint, for an easier comparison of this proof with that given in I. We remind the reader that in less condensed notations \[ {\rm Tr}\biggl( f'(\hat Ig')\biggr)=\sum{J\choose M}{K\choose L} D_L\frac{\partial f}{\partial\phi_A^{(J)}}D_{J+K-L-M}I^N_{AB} D_{N+M}\frac{\partial g}{\partial\phi_B^{(K)}} \] (in the Appendix of I the indices $M$ and $L$ in the binomial coefficients of the same formula are unfortunately given in the opposite order). We will evaluate the bracket \[ \{\{ F,G\} ,H\} =\frac{1}{2}\sum\int\theta^{(J)}{\rm Tr}\biggl[ {\{ f,g\} }'(\hat I^{\langle J\rangle} h')-h'(\hat I^{\langle J\rangle} {\{ f,g\}}')\biggr], \] where $\{ f,g\}$ denotes the integrand of $\{ F,G\}$. Since the Fr\'echet derivative is a derivation, we have \[ {\{ f,g\} }'=\frac{1}{2}\sum\theta^{(K)}{\rm Tr}\biggl( f''(\hat I^{\langle K\rangle}g',\cdot) +f'\hat I'^{\langle K\rangle}(\cdot)g'+ g''(f'\hat I^{\langle K\rangle},\cdot)-(f\leftrightarrow g) \biggr) \] and \[ {\rm Tr}\biggl[ {\{ f,g\}}'\hat Ih'\biggr]=\frac{1}{2}\biggl[ f''(\hat Ig',\hat Ih')+f'\hat I'(\hat Ih')g'+g''(f'\hat I,\hat Ih')- (f\leftrightarrow g)\biggr]. \] Let us explain that $f''$ denotes the second Fr\'echet derivative, i.e., the symmetric bilinear operator arising in the calculation of the second variation of the local functional $F$ (in the canonical form): \[ f''(\xi,\eta)=\sum\limits_{A,B}\sum\limits_{J,K}\frac{\partial^2f} {\partial\phi_A^{(J)}\partial\phi_B^{(K)}}D_J\xi_AD_K\eta_B. \] When operators are put into the entries of $f''$ under the trace sign, it should be understood that these operators act on everything except their own coefficients, for example, \[ {\rm Tr}\biggl(f''(\hat Ig',\hat Ih')\biggr) =\sum{L\choose P}{L-P\choose Q}{M\choose S} {M-S\choose T}\times \] \[ \times D_{L+M-P-Q-S-T}\frac{\partial^2f}{\partial\phi_A^{(J)} \partial\phi_B^{(K)}} D_{J+T}\biggl( D_P\hat I_{AC}\frac{\partial g} {\partial\phi_C^{(L)}}\biggr) D_{K+Q}\biggl( D_S\hat I_{BD}\frac{\partial h} {\partial\phi_D^{(M)}}\biggr) \] and the expression remains symmetric under permutation of its entries \[ {\rm Tr}\biggl( f''(\hat Ig',\hat Ih')\biggr)= {\rm Tr}\biggl( f''(\hat Ih',\hat Ig')\biggr) . \] When the operator $\hat I$ stands to the right of the Fr\'echet derivative operator $f'$, as in the expression \[ {\rm Tr}\biggl( g''(\hat Ih',f'\hat I)\biggr) , \] it acts on everything except $f'$. Finally, for the Fr\'echet derivative of the operator we have \[ \hat I'(\hat Ih')=\sum\frac{\partial I^K_{AB}}{\partial\phi_C^{(J)}} D_J\biggl( I^L_{CD}D_L\frac{\partial h}{\partial\phi_D^{(M)}}D_M\biggr) D_K. \] Making similar calculations, we get \[ {\rm Tr}\biggl[ h'\hat I{\{ f,g\}}'\biggr]= \frac{1}{2}{\rm Tr}\biggl( f''(h'\hat I,\hat Ig')+f'\hat I'(h'\hat I)g'+ g''(f'\hat I,h'\hat I)-(f\leftrightarrow g)\biggr) \] and therefore \[ \{\{ F,G\} ,H\} =\frac{1}{4}\sum\int\theta^{(J+K)}{\rm Tr}\biggl( f''(\hat I^{\langle J\rangle}g',\hat I^{\langle K\rangle}h')- f''(h'\hat I^{\langle J\rangle},\hat I^{\langle K\rangle}g')- \] \[ -f''(\hat I^{\langle J\rangle}h',g'\hat I^{\langle K\rangle})+ f''(g'\hat I^{\langle J\rangle},h'\hat I^{\langle K\rangle})+ f'\hat I'^{\langle J\rangle}(\hat I^{\langle K\rangle}h'-h' \hat I^{\langle K\rangle})g' -(f\leftrightarrow g) \biggr) .
\] The first four terms, but not the fifth one containing the Fr\'echet derivative of the operator $\hat I$, were already present in our proof for the nonultralocal case given in I (only terms with zero grading were allowed for $\hat I$ there). After a cyclic permutation of $F$, $G$, $H$, all terms with the symmetric operator of the second Fr\'echet derivative are mutually cancelled and \[ \{\{ F,G\} ,H\} + {\rm c.p.}=\frac{1}{4}\int\theta^{(J+K)}{\rm Tr}\biggl( f'\hat I'^{\langle J\rangle}(\hat I^{\langle K\rangle}h'- h'\hat I^{\langle K\rangle })g'- \] \[ -g'\hat I'^{\langle J\rangle} (\hat I^{\langle K\rangle}h'-h'\hat I^{\langle K\rangle})f'+ {\rm c.p.} \biggr), \] where the cyclic permutations of $F$, $G$, and $H$ are abbreviated to ${\rm c.p.}$ When the operator $\hat I$ is given in explicitly skew-adjoint form, all four terms are equal. Taking into account Olver's Lemma (\ref{eq:prolong}) we get \[ \{\{ F,G\} ,H\} + {\rm c.p.} =-\bigl[\hat I,\hat I\bigr]_{SN}(\mbox{\sf d} F,\mbox{\sf d} G,\mbox{\sf d} H), \] thus finishing the proof. \section{Examples of nonultralocal operators} The second structure of the Korteweg-de Vries equation may serve as a counter-example to the hypothesis \cite{coll} that all operators which are Hamiltonian with respect to the standard Poisson brackets should also be Hamiltonian in the new brackets. \medskip {\it Example 8.1} Let us start with the standard expression (Example 7.6 of Ref.~\cite{olv}) \[ \{ u(x),u(y)\} =\biggl(\frac{d^3}{dx^3}+\frac{2}{3}u\frac{d}{dx}+ \frac{1}{3}\frac{du}{dx}\biggr)\delta(x,y) \] and construct the adjoint operator to \[ \hat K=\theta(D_3+\frac{2}{3}uD+\frac{1}{3}Du), \] which is \[ \hat K^{\ast}=-\theta(D_3+\frac{2}{3}uD+\frac{1}{3}Du)-D\theta(3D_2+ \frac{2}{3}u)-3D_2\theta D-D_3\theta. \] Then the skew-adjoint operator \[ \hat I=\frac{1}{2}(\hat K-\hat K^{\ast})= \theta(D_3+\frac{2}{3}uD+\frac{1}{3}Du)+D\theta(\frac{3}{2}D_2+\frac{1}{3}u) +\frac{3}{2}D_2\theta D+\frac{1}{2}D_3\theta \] can be used for forming the bivector \[ \Psi=\frac{1}{2}\int\xi\wedge\hat I\xi, \] where ${\delta}/{\delta u}=\xi$. This bivector has the form \[ \Psi=\frac{1}{2}\int\biggl(\theta\xi\wedge D_3\xi+\frac{3}{2}D\theta\xi \wedge D_2\xi+(\frac{3}{2}D_2\theta+\frac{2}{3}\theta u)\xi\wedge D\xi\biggr). \] Then, evaluating the Schouten-Nijenhuis bracket for the bivector with the help of Statement 5.3 \[ \bigl[\Psi,\Psi\bigr]_{SN}=\int (\frac{2}{3}\theta\xi\wedge D_3\xi \wedge D\xi + D\theta\xi\wedge D_2\xi\wedge D\xi) \] and integrating the first term by parts, we get \[ \bigl[\Psi,\Psi\bigr]_{SN}=\frac{1}{3}\int\theta D\bigl(\xi\wedge D\xi \wedge D_2\xi\bigr). \] Therefore, instead of the Jacobi identity we have \[ \{\{ F,G\},H\}+{\rm c.p.}=-\frac{1}{3}\sum\limits_{i,j,k=0}^{\infty} \int\limits_{\Omega}D_{i+j+k+1}\biggl( E^i(f)DE^j(g)D_2E^k(h)+{\rm c.p.}\biggr)dx. \] So, the second structure of the KdV equation can be Hamiltonian only under special boundary conditions. \medskip {\it Example 8.2} Now let us consider another example, which is also nonultralocal, but where the operator remains Hamiltonian in the new brackets independently of the boundary conditions. The Euler equations for the flow of an ideal fluid can be written \cite{olv} in Hamiltonian form as follows (Example 7.10 of Ref.~\cite{olv}) \[ \frac{\partial{\bf\omega}}{\partial t}={\cal D}\frac{\delta H} {\delta{\bf\omega}}, \] where \[ H=\int\frac{1}{2}\vert {\bf u}\vert^2d^2x,\qquad {\bf\omega}={\bf\nabla} \times{\bf u}.
\] Let us limit our consideration to the 2-dimensional case, where $\bf\omega$ has only one component $\omega$ and \[ {\cal D}={\bf\omega}_xD_y-{\bf\omega}_yD_x, \] where $\omega_i=D_i\omega$, $i=(x,y)$. We can construct the skew-adjoint operator \[ \hat I=\frac{1}{2}\biggl( \theta{\cal D}-(\theta{\cal D})^{\ast}\biggr)= \theta(\omega_xD_y-\omega_yD_x) +\frac{1}{2}(D_y\theta\omega_x-D_x\theta\omega _y), \] and then the bivector \[ \Psi=\frac{1}{2}\int\xi\wedge\hat I\xi= \frac{1}{2}\int\theta(\omega_x\xi\wedge \xi_y-\omega_y\xi\wedge\xi_x), \] where $\xi={\delta}/{\delta\omega}$. Statement 5.3 gives us \[ \bigl[\Psi,\Psi\bigr]_{SN}= \int\Biggl(\theta\biggl[ \omega_x(\xi\wedge\xi_{xy}\wedge\xi_y-\xi\wedge\xi_{yy} \wedge\xi_x)+\omega_y(\xi\wedge\xi_{xy}\wedge\xi_x-\xi\wedge\xi_{xx}\wedge \xi_y)\biggr]+ \] \[ +\biggl[ D_y\theta\omega_x- D_x\theta\omega_y\biggr]\xi\wedge\xi_x\wedge\xi_y\Biggr) \] and after integration by parts the expression can be reduced to zero. \section{Conclusion} We have shown that there is an extension of the standard formal variational calculus which incorporates the real divergences without any specification of the boundary conditions on the boundary of a finite domain. It would be important to understand the relations of this formalism to the constructions of the variational bicomplex \cite{and}. It also seems rather interesting to study whether some physically relevant algebras can be realized with the help of the new Poisson brackets as algebras of local functionals. It is not clear to us at present whether the Hamiltonian equations generated by the new brackets can be solved in some space of functions and what kind of space could be used for this purpose. \vspace{12pt} {\large\bf Acknowledgements} It is a pleasure to thank S.N.Storchak for discussions and A.V.Razumov for answering numerous questions. This work was started during a visit of the author to the International Centre for Theoretical Physics in Trieste; partial support from ICTP is gratefully acknowledged.
\section{Introduction} Since the liberalization of the electricity markets, electricity is traded on electricity spot markets \citep{EPEX2021Documentation,mayer2018electricity}. The auction-based format of the day-ahead markets requires electricity producers and large-scale consumers to specify fixed amounts of electricity they want to buy or sell one day prior to delivery \citep{EPEX2021Documentation}. Thus, renewable electricity producers have to account for the uncertain and non-dispatchable nature of renewable electricity generation from wind and photovoltaics when submitting their bids \citep{perez2012impacts,mayer2018electricity,mitsos2018challenges}. To find profitable solutions, operators often leverage optimization techniques from the process systems engineering (PSE) community \citep{ZHANG2016114,grossmann_2021advanced}. In particular, scheduling optimization identifies cost-optimal operational setpoints and leverages variable electricity prices \citep{schafer2020wavelet,leo2021stochastic}. To address the uncertainty stemming from renewable electricity production and volatile price curves, scheduling problems are often implemented as stochastic programs that include the uncertainty in the problem formulation \citep{conejo2010decision,grossmann_2021advanced}. Typically, stochastic programs are based on scenarios, e.g., possible realizations of renewable production trajectories \citep{conejo2010decision,morales2013integrating,chen2018advances}. The PSE community has been at the forefront of finding solutions to scheduling problems and stochastic programs for decades \citep{grossmann1978optimum,halemane1983optimal,pistikopoulos1995novel,sahinidis2004optimization}. Thus, energy system scheduling problems are solved successfully by the PSE community \citep{ZHANG2016114,schafer2019model,schafer2020wavelet}. Many PSE examples address electricity procurement for power-intensive processes and demand-side management \citep{ZHANG2016114,zhang2016risk, leo2021stochastic}. Examples with an energy focus include \cite{garcia2008stochastic}, who derive a stochastic bidding problem for a wind producer with pumped hydro storage, and \cite{liu2015bidding}, who propose a model to obtain bidding curves for a micro-grid considering distributed generation. In their book, \cite{conejo2010decision} derive a wind producer bidding problem considering both price and production uncertainties. While most works focus on optimization problem formulations, obtaining high-quality scenarios is also critical for operational success. The scenarios for stochastic programming either stem from historical data or from specialized scenario generation methods \citep{conejo2010decision}. \cite{Kaut2003Evaluation} state that different finite scenario sets should consistently give results close to the perfect foresight case and that the optimal objective value should be approximately equal throughout the different scenario sets. Critically, the scenarios must fit the given time horizon, i.e., for a day-ahead bidding schedule optimization, the stochastic problem requires scenarios that span the time frame between 00:00\,am and 11:59\,pm of the following day. Established methods for scenario generation often utilize univariate, i.e., step-by-step, prediction approaches like classical autoregressive models \citep{sharma2013wind} or autoregressive neural networks \citep{vagropoulos2016ann,voss2018Residential}.
As opposed to univariate models, multivariate modeling techniques model a series of time steps in a single prediction step. This makes them particularly suitable for day-ahead operation problems, as the multivariate predictions best capture the correlations throughout the day \citep{ziel2018day} and can be set up to model the distribution of full 24\,h trajectories directly. Prominent multivariate scenario generation models are Gaussian copulas \citep{pinson2009probabilistic,staid2017generating,camal2019scenario} as well as deep generative models like generative adversarial networks (GANs) \citep{chen2018model,jiang2018scenario,wei2019short} and variational autoencoders (VAEs) \citep{zhanga2018optimized}. Despite their widespread application, the training success of both GANs and VAEs is sometimes poor, and their loss functions are difficult to interpret, as they are not directly concerned with the quality of the generated data \citep{salimans2016improved, borji2018pros}. Furthermore, GANs and VAEs often result in a mode collapse, i.e., the models converge to a single feasible scenario instead of describing the true probability distribution \citep{arjovsky2017principled}. Besides GANs and VAEs, normalizing flows are another type of deep generative model \citep{papamakarios2021normalizing}. The major advantage of normalizing flows is their training via direct log-likelihood maximization, which leads to interpretable loss functions and stable convergence \citep{rossi2018mathematical}. In prior works, normalizing flows performed well for scenario generation of residential loads \citep{zhang2019scenario,ge2020modeling} as well as wind and photovoltaic electricity generation \citep{dumas2021deep,cramer2022pricipalcomponentflow}. Many authors argue that their scenario generation approach samples high-quality scenarios \citep{pinson2009probabilistic, chen2018model, zhang2019scenario}. However, a connection of scenario generation to downstream applications in stochastic programming is missing in most contributions. Exceptions are \cite{zhanga2018optimized} and \cite{wei2019short}, who both solve operational problems for wind-solar-hydro hybrid systems. However, their respective VAE and GAN are restricted to unconditional scenario generation, i.e., they sample unspecific scenarios without considering the day-ahead setting and without including forecasts or other available information. For a day-ahead bidding problem, this can potentially lead to suboptimal solutions based on an unrealistic scenario set containing many unlikely scenarios. Meanwhile, conditional scenario generation incorporates forecasts and other available information to specifically tailor the scenarios to the following day, i.e., the conditional scenarios better describe the trends of the following day and maintain a lower spread. Examples of conditional scenario generation are the Gaussian copula approach by \cite{pinson2009probabilistic} and the normalizing flow by \cite{dumas2021probabilistic}, where only \cite{dumas2021probabilistic} solve a stochastic optimization problem using quantiles derived from the conditional normalizing flow.
Herein, we extend our previous work on normalizing flow-based scenario generation \citep{cramer2022pricipalcomponentflow} to perform conditional scenario generation \citep{zhang2019scenario,dumas2021probabilistic} of wind power generation with wind speed forecasts as conditional inputs, i.e., we use the wind speed forecast to generate day-ahead wind power generation scenarios that are specifically tailored to the given day. We then apply the generated scenarios in a stochastic day-ahead wind electricity producer bidding problem based on \cite{garcia2008stochastic} and \cite{conejo2010decision}. We compare the results obtained using the normalizing flow scenarios with unconditional historical scenarios and two other multivariate conditional scenario generation approaches, namely, the well-established Gaussian copula \citep{pinson2009probabilistic} and the recently very popular Wasserstein-GAN \citep{chen2018model}. Our analysis shows that all conditional scenario generation methods result in significantly more profitable decisions compared to the historical data, and that the profits obtained using the normalizing flow scenarios are closest to the perfect foresight solution. Unlike \cite{wei2019short} or \cite{dumas2021probabilistic}, we also perform a statistical investigation of the reliability of the scenarios based on the criteria formulated by \cite{Kaut2003Evaluation}. In particular, we show that normalizing flows result in the lowest variance of objective values for very small scenario sets of only five scenarios. To our knowledge, this is the first work to investigate the reliability of different day-ahead scenario generation models for application in stochastic programming. The remainder of this work is organized as follows: Section~\ref{sec:Cond_NormFlow} details the concept of conditional density modeling using normalizing flows. Then, Section~\ref{sec:Cond_dayaheadScheme} describes the conditional day-ahead scenario generation method and reviews the input-output relation of normalizing flows, Gaussian copulas, and W-GANs. Section~\ref{sec:Cond_CaseStudyData} draws a comparison of historical scenarios and scenarios generated using the three different methods based on the analysis outlined in \cite{pinson2012evaluating} and \cite{mitsos_22_timeseries}. Section~\ref{Sec:Cond_CaseStudy2} introduces the stochastic bidding problem and analyzes the obtained profits and the reliability of the different scenario sets. Finally, Section~\ref{sec:Cond_Conclusion} concludes this work. \section{Conditional density estimation using normalizing flows}\label{sec:Cond_NormFlow} Normalizing flows are data-driven, multivariate probability distribution models that use invertible neural networks $T: \mathbb{R}^D \rightarrow \mathbb{R}^D$ to describe a data probability density function (PDF) as a change of variables of a $D$-dimensional Gaussian distribution \citep{kobyzev2019normalizing, papamakarios2021normalizing}: \begin{equation*} \begin{aligned} \mathbf{x} &= T(\mathbf{z})\\ \mathbf{z} &= T^{-1}(\mathbf{x}) \end{aligned} \end{equation*} Here, $\mathbf{x}\in X\subset \mathbb{R}^D$ are samples of the data and $\mathbf{z}\sim \mathcal{N}(\mathbf{0}_D, \mathbf{I}_D)$ are the corresponding samples from the Gaussian distribution, with $\mathbf{I}_D$ being the $D$-dimensional identity matrix. New data $\mathbf{x}$ is generated by drawing samples $\mathbf{z}$ from the known Gaussian distribution and transforming them via the forward transformation $T(\cdot)$.
Since the transformation between the data and the Gaussian is a change of variables, the PDF of the data can be expressed explicitly via the inverse transformation $T^{-1}(\cdot)$ using the change of variables formula \citep{papamakarios2021normalizing}: \begin{equation} p_X(\mathbf{x}) = \phi(T^{-1}(\mathbf{x})) \left| \det \mathbf{J}_{T^{-1}}(\mathbf{x}) \right| \label{Eq:ChangeOfVariablesInverse} \end{equation} Here, $\mathbf{J}_{T^{-1}}$ is the Jacobian of the inverse transformation $T^{-1}$, and $p_X$ and $\phi$ are the PDFs of the data and the Gaussian, respectively. Intuitively, Equation~\eqref{Eq:ChangeOfVariablesInverse} describes a projection of the data onto the Gaussian and a scaling of the distribution's volume to account for the constant probability mass. If the transformation $T$ is a trainable function, the normalizing flow can be trained via direct log-likelihood maximization using the log of Equation~\eqref{Eq:ChangeOfVariablesInverse}. To describe a conditional PDF $p_{X\vert Y}(\mathbf{x}\vert \mathbf{y})$ with conditional inputs $\mathbf{y}\in Y$, i.e., the joint PDF of $X$ and $Y$ where the realization $\mathbf{y}\in Y$ is known, the transformation $T$ and its inverse $T^{-1}$ must accept the conditional information vector $\mathbf{y}$ in addition to the transformed variables $\mathbf{z}$ and $\mathbf{x}$, respectively \citep{winkler2019learning}: \begin{equation*} \begin{aligned} \mathbf{x} &= T(\mathbf{z}, \mathbf{y})\\ \mathbf{z} &= T^{-1}(\mathbf{x}, \mathbf{y}) \end{aligned} \end{equation*} If $T$ remains differentiable for any fixed value of the conditional inputs $\mathbf{y}$, the likelihood can still be described using the change of variables formula: \begin{equation} p_{X\vert Y}(\mathbf{x}\vert \mathbf{y}) = \phi(T^{-1}(\mathbf{x},\mathbf{y})) \left| \det \mathbf{J}_{T^{-1}}(\mathbf{x},\mathbf{y}) \right| \label{Eq:ChangeOfVariablesConditionalInverse} \end{equation} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/CondRealNVP.pdf} \caption{Example RealNVP architecture containing two conditional affine coupling layers \citep{dinh2016realNVP,winkler2019learning}, with conditioner models $\mathbf{s}^{I}$, $\mathbf{t}^{I}$, $\mathbf{s}^{II}$, and $\mathbf{t}^{II}$, Gaussian sample vector $\mathbf{z}$, data sample vector $\mathbf{x}$, intermediate sample vector $\mathbf{z}^{I}$, conditional input vector $\mathbf{y}$. The indices $1:D/2$ and $D/2+1:D$ refer to the two halves of the data vectors, respectively. The dashed lines indicate the flow of the conditional input data $\mathbf{y}$. } \label{fig:CondRealNVP} \end{figure*} In this work, we employ the real non-volume preserving transformation (RealNVP) \citep{dinh2016realNVP}, which is based on a composition of affine coupling layers. In each coupling layer, one half of the data vector undergoes an affine transformation, which is parameterized via functions of the remaining half of the data vector: \begin{equation} \begin{aligned} \mathbf{x}_{1:D/2} =& \mathbf{z}_{1:D/2} \\ \mathbf{x}_{D/2+1:D} =& \mathbf{z}_{D/2+1:D}\odot \exp(\mathbf{s}(\mathbf{z}_{1:D/2},\mathbf{y})) + \mathbf{t}(\mathbf{z}_{1:D/2},\mathbf{y}) \end{aligned} \label{Eq:RealNVPConditional} \end{equation} Here, $\odot$ denotes element-wise multiplication, the indices $1:D/2$ and $D/2+1:D$ refer to the two halves of the data vectors, respectively, and $\mathbf{s}$ and $\mathbf{t}$ are the so-called conditioner models that are implemented as neural networks. 
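To make the conditional affine coupling layer of Equation~\eqref{Eq:RealNVPConditional} concrete, the following minimal Python sketch implements a single layer with toy, randomly initialized conditioners. The network sizes and the conditional dimension are illustrative assumptions, not the settings used in this work:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, C = 18, 24  # illustrative latent and conditional dimensions

def conditioner(z_half, y, W, b):
    # toy fully connected conditioner; in practice s and t are
    # deeper neural networks
    return np.tanh(np.concatenate([z_half, y]) @ W + b)

# randomly initialized toy parameters of the scale (s) and shift (t) nets
Ws, bs = 0.1 * rng.standard_normal((D // 2 + C, D // 2)), np.zeros(D // 2)
Wt, bt = 0.1 * rng.standard_normal((D // 2 + C, D // 2)), np.zeros(D // 2)

def coupling_forward(z, y):
    # forward pass x = T(z, y) of one conditional affine coupling layer
    z1, z2 = z[:D // 2], z[D // 2:]
    s = conditioner(z1, y, Ws, bs)
    t = conditioner(z1, y, Wt, bt)
    return np.concatenate([z1, z2 * np.exp(s) + t]), s.sum()

def coupling_inverse(x, y):
    # inverse pass z = T^{-1}(x, y); the conditioner reuses x_{1:D/2}
    x1, x2 = x[:D // 2], x[D // 2:]
    s = conditioner(x1, y, Ws, bs)
    t = conditioner(x1, y, Wt, bt)
    return np.concatenate([x1, (x2 - t) * np.exp(-s)])

z = rng.standard_normal(D)
y = rng.standard_normal(C)  # stand-in for a wind speed forecast vector
x, log_det = coupling_forward(z, y)
assert np.allclose(coupling_inverse(x, y), z)  # invertibility check
\end{verbatim}
The returned value \texttt{s.sum()} is the log-determinant of the Jacobian of this layer, which is needed for the likelihood computation discussed next.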
The clever design of the affine coupling layer results in lower triangular Jacobians. Hence, the Jacobian determinant required for the likelihood computation is simply given by the product over the diagonal elements. The log-form Jacobian determinant used for training then is: \begin{equation*} \log \det \mathbf{J}_{\text{RealNVP}}(\mathbf{z}) = \sum_{k = 1}^{D/2} s_k(\mathbf{z}_{1:D/2},\mathbf{y}) \end{equation*} Here, $s_k$ denotes the $k$-th component of the conditioner output $\mathbf{s}$. Large and highly expressive normalizing flows can be built using compositions of Equation~\eqref{Eq:RealNVPConditional} in an alternating manner. Figure~\ref{fig:CondRealNVP} shows an illustrative sketch of an exemplary RealNVP architecture with two conditional affine coupling layers. In \cite{cramer2022pricipalcomponentflow}, we showed that normalizing flows sample uncharacteristically noisy scenarios when applied to sample from the distributions of renewable electricity time series, due to their inherent lower-dimensional manifold structure. To address the issue, we proposed dimensionality reduction based on the principal component analysis (PCA). In this work, we use PCA to reduce the dimensionality of the data $\mathbf{x}$ and the Gaussian samples $\mathbf{z}$. The conditional input vectors $\mathbf{y}$ are not affected by the PCA. For more information on the effects of manifolds we refer to \cite{brehmer2020flows}, \cite{behrmann2021understanding}, and \cite{cramer2022pricipalcomponentflow}. \section{Day-ahead scenario generation}\label{sec:Cond_dayaheadScheme} This work addresses scenario generation with a particular focus on applications in day-ahead scheduling problems. Thus, each scenario describes a possible realization covering the time between 00:00\,am and 11:59\,pm of the following day. In particular, we generate day-ahead wind power generation scenarios and use day-ahead forecasts of wind speeds as conditional inputs to narrow down the range of possible trajectories and make the scenarios specific to the following day. For reference, we include a comparison to historical data, which represents unconditional scenarios, i.e., randomly drawn samples from the full distribution $p_X(\mathbf{x})$ that does not consider the wind speed forecasts. Meanwhile, the scenario generation methods aim to fit models of the full conditional PDF $p_{X\vert Y}(\mathbf{x}\vert \mathbf{y})$ that are valid for every possible wind power realization $\mathbf{x} \in X$ and every possible day-ahead wind speed forecast $\mathbf{y} \in Y$. In the application, the wind speed predictions are known one day prior to the scheduling horizon and the scenario generation models are evaluated for fixed conditional inputs $\mathbf{y}$. Normalizing flows, Gaussian copulas, and W-GANs all employ multivariate modeling approaches, i.e., the models generate full daily trajectories in a vector form \citep{pinson2009probabilistic, ziel2018day, chen2018model}. All models use multivariate Gaussian samples $\mathbf{z}$ and the wind speed forecast vectors, i.e., the conditional information $\mathbf{y}$, as inputs to generate wind power generation scenario vectors $\mathbf{x}$.
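For the normalizing flow, the training objective described in Section~\ref{sec:Cond_NormFlow} reduces to a few lines. The sketch below continues the toy example given there and evaluates the conditional log-likelihood of Equation~\eqref{Eq:ChangeOfVariablesConditionalInverse} for a single coupling layer; a real model composes several layers and sums their log-determinants:
\begin{verbatim}
def log_likelihood(x, y):
    # log p(x|y) = log phi(T^{-1}(x, y)) + log|det J_{T^{-1}}(x, y)|;
    # for the affine coupling layer, log|det J_{T^{-1}}| = -sum(s)
    z = coupling_inverse(x, y)
    s = conditioner(x[:D // 2], y, Ws, bs)
    log_phi = -0.5 * (z @ z) - 0.5 * D * np.log(2.0 * np.pi)
    return log_phi - s.sum()

# training maximizes the mean log-likelihood over the training set,
# e.g., by gradient ascent on the conditioner parameters
print(log_likelihood(x, y))
\end{verbatim}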
For a given fixed wind speed forecast $\mathbf{y} = \text{const.}$, sampling and transforming multiple Gaussian samples $\mathbf{z}$ results in a set of wind power generation scenarios, i.e., the Gaussian acts as a source of randomness to generate sets of scenarios instead of point forecasts: \begin{equation*} \mathbf{x}_i = T(\mathbf{z}_i,\mathbf{y}=\text{const.})\quad \forall i \in 1, \dots, \# \text{Scenarios} \end{equation*} Here, $T(\cdot)$ can be any of the scenario generation models. For more details on the evaluation of the different models, we refer to our supplementary material and the papers by \cite{pinson2009probabilistic} and \cite{chen2018model}. All models generate capacity factor scenarios, i.e., the actual production scaled to installed capacity, of the 50 Hertz transmission grid in the years 2016 to 2020 \citep{DataSource}. The year 2019 is set aside as a test set to avoid complications in the stochastic programming case study due to the unusual prices resulting from the COVID-19 pandemic \citep{narajewski2020changes,badesa2021ancillary}. To avoid including test data in the scenario sets, the unconditional historical scenarios are drawn from the training set. The 15\,min recording interval renders 96-dimensional scenario vectors that fit the 24\,h time horizon of a day-ahead bidding problem. The day-ahead wind speed forecasts have hourly resolution and are obtained from the reanalysis data set ``Land Surface Forcings V5.12.4'' of MERRA-2 \citep{globalmodelingassimilationoffice2015} which is based on previously recorded historical data. We use the predictions at the coordinates 53.0$^\circ$\,N, 13.0$^\circ$\,E, in the center of the 50 Hertz region. Note that due to potential wind speed forecast errors and agglomeration effects in the power generation, there is no direct known functional relationship between the wind speed forecast and the realization of regionally distributed power generation. Due to numerically singular Jacobians and non-invertible transformations \citep{behrmann2021understanding,cramer2022pricipalcomponentflow}, full-space normalizing flows fail to accurately describe the distribution of daily wind time series trajectories residing on lower-dimensional manifolds \citep{cramer2022pricipalcomponentflow}. Therefore, we use PCA \citep{pearson1901pca} to reduce the data dimensionality following our recent contribution \citep{cramer2022pricipalcomponentflow}. We select the number of principal components based on the explained variance ratio, i.e., the amount of information maintained by the PCA \citep{pearson1901pca}. For an explained variance ratio of 99.95\%, we obtain 18 principal components to represent the original 96-dimensional scenario vectors. The adversarial training algorithm for the W-GAN \citep{arjovsky2017wasserstein} did not converge consistently for the considered learning problem. Thus, the results presented below are drawn from the best performing model out of 20 different trained models w.r.t. the metrics outlined in Section~\ref{sec:Cond_CaseStudyData}. For more detailed information on the implementation, we refer to the supplementary material. \section{Conditional wind power scenario generation}\label{sec:Cond_CaseStudyData} We start by analyzing the scenarios without a specific application in mind. To this end, we present some examples, analyze the described distributions, and investigate whether the models can identify the correct daily trends. 
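Before turning to the examples, we summarize the generation pipeline described above, i.e., PCA dimensionality reduction followed by conditional sampling for one fixed forecast, in a compact Python sketch. The training array is a placeholder; with the actual capacity factor data, the stated variance threshold yields the 18 components mentioned above, and \texttt{T} stands for any trained generator acting in the reduced space:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.random((1000, 96))  # placeholder for 96-dim daily profiles

# keep as many principal components as needed to explain 99.95% of the
# variance (18 components for the actual capacity factor data)
pca = PCA(n_components=0.9995)
pca.fit(X_train)
D_red = pca.n_components_

def sample_scenarios(T, y, n_scenarios=100):
    # x_i = T(z_i, y = const.) for i = 1, ..., n_scenarios, where T is
    # any trained generator acting in the PCA-reduced space
    Z = rng.standard_normal((n_scenarios, D_red))
    X_red = np.stack([T(z, y) for z in Z])
    return pca.inverse_transform(X_red)  # back to 96-dim trajectories
\end{verbatim}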
\begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/Example.pdf} \caption{ 20 wind capacity factor scenarios (``Scenarios'') each from the historical scenario set (left), normalizing flow (center-left), Gaussian copula (center-right), and W-GAN (right) in relation to the realized wind capacity factor (``Realization''). The plots for the scenario generation methods include the conditional input (``Wind Speed''). Realization and scenarios refer to the left y-axis, scaled wind speed predictions to the right y-axis. Top: March 29th, 2019, bottom: September 15th, 2019. Data from 50 Hertz region \citep{DataSource}.} \label{fig:Cond_ExampleScenario} \end{figure*} Figure~\ref{fig:Cond_ExampleScenario} shows example scenarios for two randomly selected days of the test year 2019. The left, center-left, center-right, and right columns show historical scenarios and scenarios sampled from the conditional normalizing flow, the Gaussian copula, and the W-GAN, respectively. The historical scenarios are randomly selected from the training set and are, therefore, unspecific to the respective days. Thus, they fail to identify the daily trends and show large discrepancies for both days. For both example days in Figure~\ref{fig:Cond_ExampleScenario}, the normalizing flow identifies and follows the general trend of the realized wind capacity factor. For the presented examples, the realization lies within the span of the scenarios. Similarly, the Gaussian copula also identifies the trend of the realization. However, there are some scenarios with significantly higher or lower capacity factors on both days, i.e., the Gaussian copula appears prone to sampling outliers that do not follow the trend. The W-GAN-generated scenarios fail to identify the trend and, instead, appear tightly agglomerated and only represent the daily average of the realization, which can be observed in the morning hours of the first day and, to a lesser extent, in the afternoon hours of the second day. The failed identification of the trend is likely due to a mode collapse of the W-GAN, which is a frequently observed phenomenon with GANs \citep{arjovsky2017principled}. Mode collapse happens when the adversarial training algorithm converges to a small range of realistic scenarios but fails to identify the true distribution. Note that due to the multivariate modeling approach of generating full daily trajectories, this type of deviation may occur at any time step throughout the day. To gain insight into the quality of the full scenario sets, we analyze whether the scenario generation methods are able to reproduce the probability distributions and the frequency behavior of the actual time series by looking at the full year of 2019 in comparison to the eventual realization. To this end, we look into the marginal PDF \citep{parzen1962estimation}, the quantile distribution in Q-Q plots \citep{chambers2018graphical}, and the power spectral density (PSD) \citep{welch1967use}. For a detailed introduction to the interpretation of PDF and PSD, we refer to our previous work \citep{mitsos_22_timeseries}. Figure~\ref{fig:Cond_PDF_QQ_PSD} shows the marginal PDFs (left), the Q-Q plots (center), and the PSD (right) of the historical and the generated scenarios from the normalizing flow, the Gaussian copula, and the W-GAN in comparison to the realizations in 2019.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/JointDataEval.pdf} \caption{ Distribution and fluctuational behavior of generated wind capacity factor scenarios from historical training data (``Historical''), normalizing flow (``Normalizing Flow''), Gaussian copula (``Copula''), and W-GAN (``W-GAN'') in comparison to historical test data (``Realization'') \citep{DataSource}. Left: marginal probability density function (PDF) estimated using kernel density estimation \citep{parzen1962estimation}, center: quantile-quantile plots (Q-Q-plots) \citep{chambers2018graphical}, right: power spectral density (PSD) estimated using Welch's method \citep{welch1967use}. } \label{fig:Cond_PDF_QQ_PSD} \end{figure*} In Figure~\ref{fig:Cond_PDF_QQ_PSD}, the historical scenarios and the normalizing flow scenarios describe the test set PDF well and show good matches of the quantile distribution in the Q-Q-plot. Meanwhile, the Gaussian copula produces a broader PDF with a much lower peak than the realization, whereas the W-GAN's PDF shows a shift towards higher values. The Q-Q-plot also shows the shift of the W-GAN generated distribution, as there is an offset between the W-GAN's quantile line and the others. The poor distribution match by the Gaussian copula is likely due to the linear quantile regression, which is unable to represent the nonlinear relation between the predicted wind speed and the capacity factor. Furthermore, the copula relies on linear interpolation of quantiles, which can inflate the PDF in the tails and, thus, lead to outlier sampling \citep{pinson2009probabilistic}. The W-GAN can theoretically model any distribution \citep{goodfellow2014generative}. In our analysis, however, the adversarial training algorithm was very difficult to handle with the time series data and often resulted in poor fits. The presented results are the best of 20 training runs in terms of matching the criteria in Figure~\ref{fig:Cond_PDF_QQ_PSD}. Meanwhile, both the Gaussian copula and the normalizing flow with PCA converge consistently and typically yield the presented results after the first training attempt. The Q-Q-plot reveals that all methods yield distributions with longer tails than the realizations, i.e., they produce scenarios with higher capacity factors than the maximum realized capacity factor. The reason is that for days with the highest capacity factor of the year, even higher capacity factors are still feasible, as the realizations never reach the full installed capacity. Also, the logarithmic scale of the PDF plot makes the offset in the tail appear inflated, as it only occurs for the 99th and 100th percentiles. Note that both the Gaussian copula and the W-GAN are restricted to sample from the [0,1] interval via the boundaries of the inverse CDF \citep{pinson2009probabilistic} and the tanh output activation function, respectively. Meanwhile, the normalizing flow has no such restriction and yields some scenarios surpassing 1, which leads to the normalizing flow having the strongest deviation in the Q-Q-plot. Although these scenarios are theoretically infeasible, they have a very low probability and can efficiently be removed in postprocessing. The PSD in Figure~\ref{fig:Cond_PDF_QQ_PSD} shows a good match of the frequency behavior by the historical scenarios, the normalizing flow, and the Gaussian copula. The W-GAN is close to the overall power law of the data, i.e., the slope of the PSD curve, but fails to match the exact frequency behavior.
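The diagnostics of Figure~\ref{fig:Cond_PDF_QQ_PSD} rely on standard estimators and can be reproduced along the following lines. This is a sketch; the sampling rate of four values per hour reflects the 15\,min resolution, so the frequencies are reported in 1/h:
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import welch

def marginal_pdf(scenarios, grid):
    # kernel density estimate of the marginal capacity factor PDF
    return gaussian_kde(scenarios.ravel())(grid)

def mean_psd(scenarios, fs=4.0):
    # Welch PSD per daily trajectory, averaged over all days
    f, pxx = welch(scenarios, fs=fs, axis=-1,
                   nperseg=scenarios.shape[-1])
    return f, pxx.mean(axis=0)

def qq_points(scenarios, realizations, n_q=101):
    # quantile pairs for a Q-Q plot of generated vs. realized values
    q = np.linspace(0.0, 1.0, n_q)
    return np.quantile(scenarios, q), np.quantile(realizations, q)
\end{verbatim}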
In addition to the analysis of the full scenario sets in Figure~\ref{fig:Cond_PDF_QQ_PSD}, we also compute the energy score (ES) for each day in 2019. ES is a quantitative measure for the assessment of multivariate scenario generation models that compares the conditional scenario set with the respective realization \citep{gneiting2008assessing,pinson2012evaluating}: \begin{equation*} \text{ES} = \frac{1}{N_S} \sum_{s=1}^{N_S} \vert\vert \mathbf{x} - \hat{\mathbf{x}}_s \vert\vert_2 - \frac{1}{2{N_S}^2} \sum_{s=1}^{N_S} \sum_{s'=1}^{N_S} \vert\vert \hat{\mathbf{x}}_s - \hat{\mathbf{x}}_{s'} \vert\vert_2 \end{equation*} Here, $\mathbf{x}$ is the realization vector, $\hat{\mathbf{x}}_s$ are the scenario vectors, $N_S$ is the number of scenarios, and $\vert\vert \cdot \vert\vert_2$ is the 2-norm. The energy score is a negatively oriented score, i.e., lower values indicate better results. The two parts of the energy score reward closeness to the realization and diversity of the scenario set, respectively. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/EnergyScore.pdf} \caption{ Energy score (ES) \citep{gneiting2008assessing, pinson2012evaluating} over all days in 2019 (left) and boxplots (right). Historical scenarios (``Historical'') and generated scenarios from normalizing flow (``Normalizing Flow''), Gaussian copula (``Copula''), and Wasserstein-GAN (``W-GAN''). Boxes indicate quartiles and diamonds indicate outliers \citep{waskom2021seaborn}. Note different y-scale for historical ES. } \label{fig:Cond_EnergyScore} \end{figure*} In Figure~\ref{fig:Cond_EnergyScore}, we display the energy score for each day in 2019 as well as boxplots that showcase the overall energy score distributions for the historical data and the three different models. The normalizing flow energy score is lower on average compared to the Gaussian copula and the W-GAN, indicating a better fit of the realizations and more diverse scenarios. The Gaussian copula shows the highest energy score among the conditional methods, which is likely a result of the outliers observed in Figure~\ref{fig:Cond_ExampleScenario}. Furthermore, the normalizing flow leads to the lowest spread of energy scores, indicating consistently good results. Meanwhile, the historical scenarios consistently result in significantly higher energy scores compared to the conditional day-ahead scenario generation methods. By design, the unconditional historical scenarios do not identify the daily trends and are not generated specifically for the respective days. Thus, the deviations from the realizations penalized by the energy score are significant for most days. In conclusion, we find that the conditional normalizing flow presented in Section~\ref{sec:Cond_NormFlow} generates scenarios that match the true distribution of realizations closely, while also providing a diverse set of possible realizations. Furthermore, the normalizing flow outperforms the Gaussian copula and the W-GAN with respect to all important metrics. The historical scenarios describe the overall distribution well, but are not specific to the individual days and, hence, return poor results in day-ahead problem-specific metrics like the energy score. \section{Day-ahead bidding strategy optimization}\label{Sec:Cond_CaseStudy2} We apply the scenarios generated in the previous section in a wind producer bidding problem based on \cite{garcia2008stochastic} and \cite{conejo2010decision}. We first state the problem formulation and then analyze the obtained profits based on the different scenario sets.
Finally, we investigate the reliability of small scenario sets based on the criteria defined by \cite{Kaut2003Evaluation}. \subsection{Wind producer problem formulation} We consider the deterministic equivalent formulation \citep{birge2011introduction} of the stochastic wind producer problem from \cite{garcia2008stochastic} and \cite{conejo2010decision} shown in Figure~\ref{fig:Cond_WindProducer}, which aims to find an optimal bidding schedule for the operator of a wind farm participating in the European Power Exchange (EPEX SPOT) market \citep{EPEX2021Documentation}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Figures/WindProducer.pdf} \caption{ Structural setup of the wind producer problem from \cite{garcia2008stochastic} and \cite{conejo2010decision} with generated electricity $P_{s,t,q}$, (dis-) charging rates $P_{s,t,q}^{\text{in}}$ and $P_{s,t,q}^{\text{out}}$, placed bids $P_t^{D}$, and day-ahead electricity prices $\lambda_t^D$. The indices $s$, $t$, and $q$ indicate scenarios, hourly time intervals, and quarter-hourly time intervals, respectively. } \label{fig:Cond_WindProducer} \end{figure} First, the operator places bids at the day-ahead auction market one day prior to delivery and, thereby, commits to deliver a certain amount of electricity $P^{D}_t$ during the given trading time interval $t$. The revenue made is given by $\lambda^{D}_t P^{D}_t \delta_h$, where $\lambda^{D}_t$ is the day-ahead price and $\delta_h =1\,h$ is the trading interval. As wind electricity generation is stochastic and non-dispatchable \citep{conejo2010decision}, the placed bids may not always be met by the actual production. To balance the difference between the placed bids and the actual production, we allow for a small electricity storage that can store the electricity of up to 15\,min of maximum production. For any remaining production imbalance, we enforce a penalty on the absolute value of the imbalance \citep{garcia2008stochastic}. The full objective then reads \citep{garcia2008stochastic}: \begin{equation} \underset{P^{D}_t}{\max} \sum_{t=1}^{N_T} \left[\lambda^{D}_t P^{D}_t \delta_h - \omega \vert \lambda^{D}_{t} \vert \sum_{s=1}^{N_S}\pi_s \vert \Delta_{s,t} \vert \right] \label{Eq:Cond_WindProducerObjective} \end{equation} Here, $\Delta_{s,t}$ is the imbalance at time point $t$ in scenario $s$. The penalty term is based on the absolute value of the day-ahead price $\vert \lambda^{D}_{t} \vert$ to compensate for possible negative electricity prices \citep{garcia2008stochastic}. The penalty term in Equation~\eqref{Eq:Cond_WindProducerObjective} contains the absolute value operator $\vert \cdot \vert$, leading to a nonlinear problem. However, any positive deviation can be avoided via curtailment of the plant, such that the imbalance only takes negative values in practice, which renders the absolute value operator unnecessary. Thus, the absolute imbalance is substituted by its negative part to obtain a linear problem \citep{conejo2010decision}. The complete linear formulation of the wind producer market participation problem including the electricity storage is shown in Problem~\eqref{Prob:Cond_WindProducerProblem}.
\begin{equation} \tag{WP} \label{Prob:Cond_WindProducerProblem} \begin{aligned} \underset{P^{D}_t, P^{\text{in}}_{s,t,q}, P^{\text{out}}_{s,t,q}}{\max} \quad &\sum_{t=1}^{N_T} \left[\lambda^{D}_t P^{D}_t \delta_h - \omega \vert \lambda^{D}_{t} \vert \sum_{s=1}^{N_S}\pi_s \Delta^{-}_{s,t} \right] \\ \text{s.t.}\quad & \text{SOC}_{s,t,q} = \text{SOC}_{s,t,q-1} + \eta \delta_q P^{\text{in}}_{s,t,q} - \frac{1}{\eta} \delta_q P^{\text{out}}_{s,t,q}, &\forall s \in \mathcal{S}, \forall t\in \mathcal{T}, \forall q\in \mathcal{Q} \\ & \text{SOC}_{s, t=24, q=4} = \text{SOC}_{0}, &\forall s \in \mathcal{S}\\ & \Delta^{-}_{s,t} \geq \delta_h P^{D}_{t} - \delta_q \sum_{q\in \mathcal{Q}} \left(P_{s,t,q} + P^{\text{out}}_{s,t,q} - P^{\text{in}}_{s,t,q} \right), &\forall s \in \mathcal{S}, \forall t\in \mathcal{T} \\ & 0\leq P^{D}_{t} \leq P^{D,\max}, &\forall t\in \mathcal{T}\\ & 0\leq \Delta^{-}_{s,t}, & \forall s \in \mathcal{S}, \forall t\in \mathcal{T} \\ & 0\leq P^{\text{in}}_{s,t,q} \leq P^{\max}, &\forall s \in \mathcal{S}, \forall t\in \mathcal{T}, \forall q\in \mathcal{Q} \\ & 0\leq P^{\text{out}}_{s,t,q} \leq P^{\max}, &\forall s \in \mathcal{S}, \forall t\in \mathcal{T}, \forall q\in \mathcal{Q} \\ & 0\leq \text{SOC}_{s,t,q} \leq \text{SOC}^{\max}, &\forall s \in \mathcal{S}, \forall t\in \mathcal{T}, \forall q\in \mathcal{Q} \\ & \mathcal{S} = \{1,\dots, N_{S} \} \\ & \mathcal{T} = \{1,\dots, N_T\} \\ & \mathcal{Q} = \{1,\dots, 4 \} \end{aligned} \end{equation} The imbalance constraint states that the negative imbalance $\Delta^{-}_{s,t}$ covers any shortfall of the delivered energy, i.e., production plus storage discharge minus storage charging, with respect to the committed bid. Tables~\ref{tab:Cond_Indices}, \ref{tab:Cond_Parameters}, and \ref{tab:Cond_Variables} list the indices, parameters, and variables of Problem~\eqref{Prob:Cond_WindProducerProblem}, respectively. The problem is implemented in pyomo \citep{pyomo}, version 6.2, and solved using gurobi \citep{gurobi}, version 9.5. \begin{table} \centering \caption{Indices in Problem~\eqref{Prob:Cond_WindProducerProblem}.
} \begin{tabularx}{\columnwidth}{lX} \hline Indices & Description \\ \hline $q$ & Quarter-hour interval \\ $s$ & Scenarios \\ $t$ & Hour interval \\ \hline \end{tabularx} \label{tab:Cond_Indices} \end{table} \begin{table} \centering \caption{Parameters in Problem~\eqref{Prob:Cond_WindProducerProblem}.} \begin{tabularx}{\columnwidth}{lXX} \hline Parameter & Description & Value/Unit\\ \hline $\delta_h$ & Trading time interval & 1\,h \\ $\delta_q$ & Production time interval & 15\,min \\ $\eta$ & (Dis-) Charging efficiency & 0.91 \\ $\lambda^{D}_t$ & Day-ahead price & [EUR/MWh] \\ $\omega$ & Penalty factor & 1.5 \\ $\pi_s$ & Probability of scenario $s$ & $1/N_S$ \\ $N_T$ & Number of time steps & 24 \\ $N_S$ & Number of scenarios & [-] \\ $P^{D,\max}$ & Maximum production capacity & 100\,MW \\ $P^{\max}$ & Maximum (dis-) charging rate & 12.5\,MW \\ $P_{s,t,q}$ & Actual production & [MW] \\ $\text{SOC}^{\max}$ & Maximum battery capacity & 25\,MWh \\ $\text{SOC}_{0}$ & Initial battery state of charge & 12.5\,MWh \\ \hline \end{tabularx} \label{tab:Cond_Parameters} \end{table} \begin{table} \centering \caption{Variables in Problem~\eqref{Prob:Cond_WindProducerProblem}.} \begin{tabularx}{\columnwidth}{lXl} \hline Variable & Description & Unit \\ \hline $P^{D}_{t}$ & Bid at day-ahead market & [MW] \\ $P^{\text{in}}_{s,t,q}$ & Charging rate & [MW] \\ $P^{\text{out}}_{s,t,q}$ & Discharging rate & [MW] \\ $\text{SOC}_{s,t,q}$ & Battery state of charge & [MWh] \\ $\Delta^{-}_{s,t}$ & Negative production imbalance & [MWh] \\ \hline \end{tabularx} \label{tab:Cond_Variables} \end{table} Note that in Problem~\eqref{Prob:Cond_WindProducerProblem}, simultaneous charging and discharging of the storage is feasible; however, it does not occur at the optimum due to the losses associated with using the storage. The problem operates on both the trading time scale with hourly intervals and the production time scale with 15\,min intervals. \subsection{Obtained profits} Solving Problem~\eqref{Prob:Cond_WindProducerProblem} yields a fixed schedule of electricity delivery commitments for the day-ahead market $P^{D}_t$. By fixing $P^{D}_t$ and solving Problem~\eqref{Prob:Cond_WindProducerProblem} with the realized electricity production instead of the scenarios, we can compute the actual profits. To obtain statistically relevant results, we solve Problem~\eqref{Prob:Cond_WindProducerProblem} for each day in 2019, each time using 100 historical or generated scenarios. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Figures/Profit.pdf} \caption{ Boxplot of profits obtained in 2019 in Problem~\eqref{Prob:Cond_WindProducerProblem}. Each problem uses 100 scenarios drawn from the historical data (``Historical'') or generated by the normalizing flow (``Normalizing Flow''), the Gaussian copula (``Copula''), and the W-GAN (``W-GAN''), or the realization for the perfect foresight solution (``Realization''), respectively. Boxes indicate quartiles and diamonds indicate outliers \citep{waskom2021seaborn}. } \label{fig:Cond_ProfitBoxplot} \end{figure} Figure~\ref{fig:Cond_ProfitBoxplot} shows box plots of the distribution of profits obtained by using scenarios from the historical data and the three different generation methods in comparison to the perfect foresight problem (Realization). The profits obtained by using the normalizing flow scenarios are the highest, while the Gaussian copula scenarios yield profits between the normalizing flow and the W-GAN.
The profits obtained by using the unconditional historical scenarios are significantly lower than those of all models generating day-ahead scenarios. \begin{table} \centering \caption{ Average, average percentage, and maximum expected value of perfect information (EVPI) \citep{birge2011introduction} of actual profits obtained over all days in 2019 with 100 scenarios each, drawn from the historical data or generated from normalizing flow, Gaussian copula, and W-GAN, respectively. Best results are marked in \textbf{bold} font.} \begin{tabularx}{\columnwidth}{Xrrrr} \hline & Historical & Normalizing Flow & Copula & W-GAN \\ \hline Average EVPI [EUR] & -8381 & \textbf{-1832} & -2658 & -3485 \\ Average EVPI [\%] & -81.5\% & \textbf{-10.9\%} & -16.6\% & -23.0\% \\ Max. EVPI [EUR] & -70234 & \textbf{-8311} & -11017 & -17432 \\ \hline \end{tabularx}\label{tab:Cond_EVPI_Profits} \end{table} Table~\ref{tab:Cond_EVPI_Profits} lists the average and maximum expected value of perfect information (EVPI) \citep{birge2011who}, i.e., the difference between the scenario-based profit and the profit with perfect foresight, in 2019. The average EVPIs show that the normalizing flow scenarios yield solutions that are on average 6 and 12 percentage points closer to the perfect foresight profit compared to the Gaussian copula and the W-GAN, respectively. Meanwhile, the historical scenario profits are over 80\% lower on average than the perfect foresight solutions. The maximum EVPI, i.e., for the worst-performing days, also shows that the normalizing flow scenarios give significantly more profitable results compared to the other generation methods. The higher profits obtained using the normalizing flow scenarios reflect the findings of Section~\ref{sec:Cond_CaseStudyData}, i.e., the normalizing flow identifies the correct trends and also reflects a diverse distribution. Meanwhile, the Gaussian copula shows outliers and does not match the distribution, and the W-GAN even struggles to identify the daily trends. The unconditional historical scenarios do not describe the daily trends and, thus, result in significantly lower profits in the day-ahead scheduling optimization. Hence, the results shown in Figure~\ref{fig:Cond_ProfitBoxplot} and Table~\ref{tab:Cond_EVPI_Profits} confirm that the normalizing flow generates the best scenarios and yields the most profitable bids. \subsection{Reliability} Next, we analyze the reliability of the scenarios generated using the three different methods based on the criteria defined by \cite{Kaut2003Evaluation}, i.e., we consider reliability to indicate whether different small scenario sets consistently yield objectives close to the perfect foresight solution. In particular, we compare the spread of the objective values and the average EVPIs for small scenario sets. The reliability is particularly important, as larger and more complex problems often cannot be solved for a large number of scenarios due to the increasing computational complexity \citep{birge2011introduction,Kaut2003Evaluation}. To analyze the reliability of the scenario generation methods, we solve Problem~\eqref{Prob:Cond_WindProducerProblem} for each day of the year and each scenario generation method 50 times using small scenario sets of only five scenarios each.
\begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Figures/Reliability.pdf} \caption{ Boxplots of the objective function distributions, i.e., expected profits, for 50 iterations with five scenarios from the historical data set (``Historical''), normalizing flow (``Normalizing Flow''), Gaussian copula (``Copula''), and W-GAN (``W-GAN''), respectively. Boxes indicate quartiles and diamonds indicate outliers \citep{waskom2021seaborn}. The perfect foresight objective (``Realization'') is depicted as a black bar, i.e., shows zero variance, as there is only one realization. } \label{fig:Cond_ConsistiencyBoxplot} \end{figure} Figure~\ref{fig:Cond_ConsistiencyBoxplot} shows box plots of the objectives using small scenario sets from the historical data set, normalizing flow, Gaussian copula, and W-GAN, as well as the perfect foresight objective (Realization) for ten randomly selected days from 2019. Note that Figure~\ref{fig:Cond_ConsistiencyBoxplot} shows the expected profit as opposed to the actual profits shown in the previous section. As an indicator of reliability, we look at the range of objectives, i.e., the height of the box plots, that results from the 50 different scenario sets. For the ten randomly selected days shown in Figure~\ref{fig:Cond_ConsistiencyBoxplot}, the normalizing flow scenarios result in the lowest spreads. The Gaussian copula scenarios show significantly larger spreads than both the normalizing flow and the W-GAN scenarios. Meanwhile, the randomly selected historical scenario sets lead to by far the largest spreads. It appears that the outlier scenarios of the Gaussian copula observed in Figure~\ref{fig:Cond_ExampleScenario} weigh more heavily when only a few scenarios are used. As the normalizing flow shows no extreme outliers and identifies the overall trends well, the distributions described by small scenario sets are closer to the true distribution. The normalizing flow shows some outliers in Figure~\ref{fig:Cond_ConsistiencyBoxplot}; however, for the ten presented days, these are few and typically smaller than the spread of the Gaussian copula and W-GAN objectives. \begin{table} \centering \caption{ Average standard deviation (StD), average max-min spread (Spread), and average expected value of perfect information (EVPI) \citep{birge2011introduction} of the objectives of 50 different solutions with 5 scenarios each, drawn from the historical data or generated from normalizing flow, Gaussian copula, and W-GAN, over all days in 2019. Best results are marked in \textbf{bold} font.} \begin{tabularx}{\columnwidth}{Xrrrr} \hline & Historical & Normalizing Flow & Copula & W-GAN \\\hline StD [EUR] & 4936 & \textbf{1317} & 2638 & 1878 \\ Spread [EUR] & 22622 & \textbf{6008} & 12616 & 8374 \\ EVPI [EUR] & -6874 & \textbf{-1488} & -2464 & -2783 \\ \hline \end{tabularx} \label{tab:Cond_Reliability} \end{table} Table~\ref{tab:Cond_Reliability} shows statistics derived over all days in 2019, namely, the average standard deviation, the max-min spread, i.e., the difference between the maximum and minimum objective value, and the average EVPI of the objective. The results in Table~\ref{tab:Cond_Reliability} confirm the observation from Figure~\ref{fig:Cond_ConsistiencyBoxplot} that normalizing flows yield the most reliable scenarios with the lowest standard deviation and lowest spread. The average EVPI shows that the normalizing flow is consistently closest to the perfect foresight objective.
In conclusion, the normalizing flow yields the most reliable decisions among the three considered methods. \section{Conclusion}\label{sec:Cond_Conclusion} The present work considers scenario generation for a day-ahead bidding problem of a wind farm operator participating in the EPEX SPOT market. We utilize a data-driven multivariate scenario generation scheme based on conditional normalizing flows to model the distribution of wind capacity factor trajectories with wind speed predictions as conditional inputs. The generated scenarios are specifically tailored to stochastic optimization problems concerning the time frame between 00:00\,am and 11:59\,pm. We analyze the normalizing flow scenarios in comparison to randomly selected historical data and scenarios generated from other, more established methods, namely, Gaussian copulas and Wasserstein generative adversarial networks (W-GANs), and compare them to the actually realized power generation in 2019. The historical scenarios reflect the overall distribution of realizations well, but fail to identify daily trends and show large spreads independent of the investigated day. Among the conditional scenario generation methods, the normalizing flow scenarios best reflect the realized power generation trends and their distributions while also displaying a diverse set of possible realizations. Meanwhile, the Gaussian copula results in uncharacteristic outliers and the W-GAN struggles to identify the main trends of the realizations. Furthermore, both the Gaussian copula and the W-GAN result in skewed distributions. To assess their value for stochastic programming, i.e., whether they lead to profitable and reliable decisions, the scenarios are applied in a stochastic programming case study that aims to set bids for electricity sales on the day-ahead market. The analysis of the results of all days of 2019 shows that there is a significant advantage of using day-ahead scenarios that are specifically tailored to the investigated day. Using randomly selected historical scenarios results in an average expected value of perfect information (EVPI) of over 80\%, while the conditional scenario generation methods return EVPIs between 10\% and 23\%. The bids placed using the normalizing flow scenarios obtain the highest profits and have the lowest EVPI, i.e., the solutions are closest to the perfect foresight profits. Furthermore, we showed that normalizing flows yield reliable scenarios that result in consistent solutions for small scenario sets. In particular, the normalizing flow scenarios result in the smallest standard deviation, the smallest spread, and the lowest EVPI in the objective values. In conclusion, utilizing conditional, day-specific scenarios in day-ahead scheduling problems leads to significantly more profitable and reliable decisions compared to relying on unconditional historical data. Furthermore, the conditional normalizing flow model yields high-quality scenarios that result in highly profitable and reliable solutions for stochastic programs, in particular, for small scenario sets. Therefore, we argue that normalizing flow scenarios have a high potential for scheduling problems that cannot be solved with a large scenario set. \section*{Acknowledgements} \noindent We would like to thank Marcus Vo{\ss} (Technical University of Berlin, Distributed Artificial Intelligence Laboratory) for his valuable input on Copula methods, scenario evaluation, and the supervision of L. Paeleke.
This work was performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE) and received funding from the Helmholtz Association of German Research Centres. \section{Daily quantiles} Figure~\ref{fig:Cond_Supp_QuantileTrajectories} plots the quantiles of the capacity factor and wind speed distributions over the scenario length of one day. Neither the capacity factor nor the wind speed shows any significant daily trend. The distributions are very broad, motivating the use of conditional inputs. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{Supplement/QuantileTrajectories.pdf} \caption{ Quantile trajectories of wind capacity factor (top) and scaled wind speed (bottom). Quantiles are computed for every time step individually. q0.0 and q1.0 refer to the lowest and the highest values in the data set, respectively. } \label{fig:Cond_Supp_QuantileTrajectories} \end{figure*} \section{Scenario generation model implementation} The normalizing flow and the W-GAN both use Gaussian samples and conditional inputs as direct inputs to their respective ANN structures \citep{chen2018model, zhang2019scenario}. The Gaussian copula uses linear quantile regression based on the conditional inputs to estimate the inverse cumulative distribution function (CDF), which is then used to transform the Gaussian samples \citep{pinson2009probabilistic}. For more information on scenario generation using Gaussian copulas and W-GANs, we refer to \cite{pinson2009probabilistic} and \cite{chen2018model}, respectively. Due to the significant dimensionality reduction from 96 to 18 dimensions, a small RealNVP \citep{dinh2016realNVP} model is sufficient. The employed model uses four affine coupling layers with fully connected conditioner models with two hidden layers of 9 neurons each. The Gaussian copula was implemented using the linear quantile regression in \cite{seabold2010statsmodels}, and the required inverse CDF is estimated using linear interpolation with 20 intervals. The model structures of the generator and critic used for the W-GAN are shown in Table~\ref{tab:Cond_WGAN_Layers}. Both the normalizing flow model and the W-GAN are implemented in TensorFlow 2.5.0~\citep{tensorflow2015}. The PCA is computed using the scikit-learn library \citep{scikitlearn}. \begin{table} \centering \caption{ Layers of generator and critic for W-GAN \citep{arjovsky2017wasserstein}. Attributes of layer types are: Linear (fully connected): Number of nodes, Conv (1D convolutional layer): (Number of filters, filter size, strides, padding), Conv-T (1D convolutional layer transpose): (Number of filters, filter size, strides, padding), and Reshape: (output dim 1, output dim 2, \dots). The W-GAN is trained for 2000 epochs and the sampling dimensionality is 20. } \begin{tabular}{lrl} \hline Generator layers & Attributes & Activation \\ \hline Linear & 96 & ReLU \\ Linear & $96\cdot 12$ & ReLU \\ Reshape & (96,12) & - \\ Conv-T & (12,3,1,1) & ReLU \\ Conv-T & (1,3,1,1) & tanh \\\hline \\ \hline Critic layers & Attributes & Activation \\ \hline Conv & (12,3,1,1) & LeakyReLU \\ Conv & (4,3,1,1) & LeakyReLU \\ Flatten & - & - \\ Linear & 96 & LeakyReLU \\ Linear & 1 & - \\ \hline \end{tabular} \label{tab:Cond_WGAN_Layers} \end{table}
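For illustration, the generator architecture of Table~\ref{tab:Cond_WGAN_Layers} could be assembled in TensorFlow/Keras as sketched below. The concatenation of the 20-dimensional noise with a 24-dimensional hourly wind speed forecast is an assumption about the input wiring; the training code is omitted:
\begin{verbatim}
import tensorflow as tf

noise = tf.keras.Input(shape=(20,))  # 20-dim Gaussian sampling input
cond = tf.keras.Input(shape=(24,))   # assumed hourly wind speed forecast
h = tf.keras.layers.Concatenate()([noise, cond])
h = tf.keras.layers.Dense(96, activation="relu")(h)
h = tf.keras.layers.Dense(96 * 12, activation="relu")(h)
h = tf.keras.layers.Reshape((96, 12))(h)
h = tf.keras.layers.Conv1DTranspose(12, 3, strides=1, padding="same",
                                    activation="relu")(h)
out = tf.keras.layers.Conv1DTranspose(1, 3, strides=1, padding="same",
                                      activation="tanh")(h)
generator = tf.keras.Model([noise, cond], out)  # 96-step trajectory
\end{verbatim}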
\section{Introduction} It is well established that every galaxy is surrounded by a dark halo. This halo consists of dark matter (DM) and its size is much larger than the visible size of the galaxy. The mass of the DM halo is the dominant contribution to the total mass of the galaxy. We assume that DM consists of some still unknown particles that are not affected by the strong and electromagnetic interactions. They interact among themselves and with baryonic matter (BM) gravitationally. A weak interaction is possible; however, a ``mirror'' DM incapable of weak interaction is also acceptable. In principle, it is also possible that DM particles interact with each other via forces unknown to us, which do not act or act very weakly on BM. However, the strength of these interactions must be sufficiently weak, and we neglect them in the present paper. We consider DM particles passing through a dark halo. Their velocities inside a halo are higher than in intergalactic space due to acceleration in the gravitational potential well. The conservation of phase space density (Liouville's theorem) then implies that the density of such DM particles in the halo is lower than outside the halo. Hence, the observed high DM density inside the halo can only be associated with the existence of DM particles that cannot leave the halo. Let us recall that the Universe was very homogeneous during the recombination era; the relative density and temperature fluctuations did not exceed $10^{-4}$. After recombination, fluctuations grow and lead to the formation of a large-scale structure, in particular galaxies. Therefore, the DM particles held within the halos of galaxies must have been either captured or formed there by some unknown reaction. We consider the first of these two possibilities. In this article, we study the capture of DM particles during their flight through the halo surrounding a galaxy. It is clear that for this the particle must reduce its speed sufficiently, so that it cannot leave the galaxy or even reach its boundary. Once inside the dark halo and the galaxy's BM, the DM particle moves in the gravitational field of many moving objects, changing its velocity and direction of motion. It might perform a gravitational maneuver near a massive star and reduce its speed so that its kinetic energy is no longer sufficient to leave the galaxy. But the opposite process is also possible during the passage of a particle, in which a slow captured DM particle acquires speed by a gravitational maneuver and leaves the galaxy. This is just the time reversal of the above process. The accumulation of trapped particles is possible if the capture rate exceeds the ejection rate. This is possible if we do not have symmetry with respect to time inversion and there is some kind of ``arrow of time''. In addition to the general expansion of space, asymmetry can be caused by energy dissipation in the BM, lack of thermal equilibrium in the system, or other reasons. An energy loss mechanism that does not require dissipation by collisions and heat production, which we assume to be negligible for DM, is dynamical friction. It is due to the gravitational interaction of fast DM particles with many other DM particles and bodies, specks of dust, molecules, and atoms made of BM. It is usually associated with the energy loss of a massive, rapidly moving star in a cluster of stars (see \S 5.7 in \cite{Longair}). This process is irreversible.
The transfer of energy from a fast particle to many slow ones occurs under arbitrary initial conditions. But the time-reversed process of transferring energy from many slow particles to one particle, accelerating it to high speed, requires precise adjustment of the initial positions and velocities of each of these particles and is very improbable. Therefore, the probability of the direct process is 100\%, and that of the reverse process is negligible. The energy loss of a particle with mass $m$ moving with velocity $v$ through a medium with matter density $\rho$ is given by~\cite{Longair} \begin{equation}\label{e:Edot} \frac{dE}{dt} = - \frac{4\pi G^2 m^2\rho}{v}\ln\left(\frac{b_{\max}}{b_{\min}}\right) \end{equation} where $G$ is the gravitational constant and $b$ is the impact parameter of the collision. Taking into account that $E =(mv^2)/2$, we find that $dv/dt \propto m$. This process is effective for stars in a globular cluster, but is very weak for DM particles with much smaller masses. The characteristic time for a change in velocity is much larger than the age of the Universe. Therefore, we can neglect this process. This also applies to some other exotic energy loss mechanisms of DM particles such as the radiation of gravitational waves, the transformation of kinetic energy into heat due to the action of tidal forces on celestial bodies, etc. \section{Capture of DM particles} However, we are more interested in the processes that can ensure the capture of DM particles. In our opinion, the following mechanism may be relevant. The velocities of DM particles increase as they enter from intergalactic space into the halo of a galaxy and decrease as they leave it. If during the flight the mass of the galaxy has increased, then slow DM particles are captured by the galaxy, further increasing its mass, while faster particles slow down, transferring part of their energy to the galaxy. Let us consider this mechanism using a simple model that allows us to draw a number of qualitative conclusions. We consider a dark halo with a visible galaxy inside. The first consists mainly of DM while the second consists mainly of BM. Let us assume that both of these components are spherically symmetric, so that the total matter density $\rho$ depends only on the radius $r$. This greatly simplifies the model, but contradicts the results of $N$-body simulations described, e.g., in \S 9.3.3 in \cite{BinneyTremaine}. Spherical symmetry certainly does not apply to BM in spiral galaxies. However, it is reasonable to consider this simplified model, because the dominant part of the mass of the galaxy $M$ is associated with the halo. The halo mass is about 85\% of the total mass for a typical galaxy. We do not consider inhomogeneities in the distribution of matter within a galaxy, such as stars, massive black holes, or spiral arms. The change in density is reduced to its radial dependence, $\rho(r)$. Therefore, a particle moving radially cannot be deflected in any direction. It is clear that the halo does not have a sharp boundary. This does not prevent us from using a reasonable estimate for its radius $R$. We neglect the DM density $\rho(r)$ at $r>R$. We consider some estimate of the size of the halo that is roughly constant in time. This is not a value like $r_{200}$, which changes as galactic halos form (see Chapter 9 of \cite{BinneyTremaine}).
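As a rough numerical check of the claim above that dynamical friction is negligible for DM particles, one can evaluate the characteristic deceleration time $t\sim v/|\dot v|$ implied by Eq.~\eqref{e:Edot}. All parameter values in the sketch below are illustrative assumptions:
\begin{verbatim}
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]
m = 1.8e-25    # assumed DM particle mass (~100 GeV) [kg]
rho = 7e-22    # assumed matter density (~0.01 M_sun/pc^3) [kg/m^3]
v = 2e5        # typical particle speed [m/s]
lnL = 20.0     # assumed Coulomb logarithm ln(b_max/b_min)

# from dE/dt = -4 pi G^2 m^2 rho lnL / v and E = m v^2 / 2 it follows
# that |dv/dt| = 4 pi G^2 m rho lnL / v^2
dvdt = 4 * np.pi * G**2 * m * rho * lnL / v**2
t_char = v / dvdt       # characteristic deceleration time [s]
t_universe = 4.35e17    # age of the Universe [s]
print(f"t_char/t_universe ~ {t_char / t_universe:.0e}")  # ~1e62
\end{verbatim}
The deceleration time exceeds the age of the Universe by dozens of orders of magnitude, confirming that the process can safely be neglected for DM particles.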
Given this ambiguity of the halo boundary, we choose a value $R$ that is not much larger than the maximum size of the halo at any stage of the evolution at which the object can be called a galaxy. Taking into account the ambiguous definition of this quantity, we will use various possible values of the halo radius $R$ in numerical estimates, from underestimated to overestimated. We consider DM particles to be non-relativistic, so we use the formulas of classical mechanics. Naturally, nothing prevents us from taking into account the effects of special relativity, but this does not change the qualitative results of our model. We include Hubble expansion to account for the change in the density of dark matter particles in intergalactic space. Consider first the radial motion of a DM particle. Let its initial speed far away from the galaxy at $r\gg R$ be $v_0$. Then at the boundary of the halo the particle velocity is equal to $v(R)=\sqrt{v_0^2+u^2 }$ with $u^2=2GM/R$, and inside the halo at a distance $r$ from the center it is equal to \begin{equation} v(r) = \left(v_0^2+u^2+8\pi G\int_r^R\frac{dx}{x^2}\int_0^xy^2\rho(y)dy\right)^{1/2} \,. \end{equation} At the same time \begin{equation}\label{e:Mass} M=4\pi\int_0^Ry^2\rho(y)dy \,. \end{equation} At the very center it reaches a maximum value equal to $\sqrt{v_0^2+\alpha u^2}$. The factor $\alpha$ is equal to $1.5$ for a constant density of matter inside the galaxy. It is easy to calculate it for any given density distribution, e.g., for the Navarro-Frenk-White profile~\citep{NFW}. However, for any reasonable density distribution in which the density decreases with distance from the center, the value of $\alpha$ is not much higher than $1.5$ and, for a rough estimate, we can set $\alpha\simeq 1$ and $v(r)\simeq v(R)$ at $ r<R$. If the gravitational field of the galaxy is static, the particle will leave the halo with speed $v(R)$. Far away from the galaxy it will move again with speed $v_0$. There is no particle capture. However, capture is possible if the mass of the galaxy increases during the passage of the particle. Indeed, galaxies continue to grow after their formation. This is due to the accretion of the surrounding BM, the capture of DM particles, and even mergers with other galaxies. The rate of increase in mass exceeds the rate of loss due to radiation and ejection of matter, for example, in the form of jets. As a result, the gravitational potential well formed by the galaxy becomes deeper, and the potential barrier surrounding it becomes higher. The DM particle may not have enough kinetic energy to fly out, and it could be captured by the galaxy. Let the mass of the galaxy, including the halo, be equal to $M$ at the moment the DM particle enters it. Let us denote the time of flight of the particle through the galaxy $\tau$. Then, at the time it exits, the mass of the galaxy will be equal to $M+\dot M\tau$, where $\dot M$ is the average rate of mass increase of the galaxy. The particle capture condition takes the form \begin{equation}\label{e:capt} v_0\leq \sqrt{\frac{2G\dot M\tau}{R}} \,. \end{equation} It is reasonable to expect that the mass increase $\dot M\tau$ does not exceed the mass $M$. Therefore, the maximal initial speed of captured particles is less than $u$, and we can estimate $v(R)\simeq u$, so that \begin{equation}\label{e:tau} \tau \simeq \frac{\ell}{u} \end{equation} for the time of flight of particles with a minimal initial velocity at which they are just not captured by the galaxy.
Here $\ell$ is the length of the path traveled inside the halo. Let us denote the minimal initial velocity of a DM particle which is able to fly through the galaxy and escape from it by $v_p$ (the subscript $p$ indicates passage). For $v_0<v_p$ the particle is captured by the galaxy. If a particle flies through the center we denote its minimal initial velocity by $v_{pc}$ (the subscript $pc$ indicates passage through the center). Within the above approximations we obtain the estimates \begin{eqnarray} v_p &=&\sqrt{\frac{G\dot M\ell}{Ru}}=\left(\frac{G\dot M^2\ell^2}{2RM}\right)^{1/4}\,, \label{e:vp}\\ v_{pc} &=& \sqrt{\frac{2G\dot M}{u}}=\left(\frac{2G\dot M^2R}{M}\right)^{1/4}\,. \label{e:vpc} \end{eqnarray} It may seem that these approximations are too crude, but they are not. We demonstrate this with an example. As is well known, the rotation curves of galaxies are perfectly flat if the density decreases like $\rho(r)=M/(4\pi Rr^2)$ at $r<R$. In this extreme case, the density diverges at the center, so this density profile is usually modified to avoid this divergence. Let us, however, consider the unmodified profile. In the framework of classical mechanics, the velocity of particle motion along the diameter is equal to $v(r)=u\sqrt{1+\ln\frac{R}{r}}$ at $r<R$ if $v_0\ll u$. So, the factor $\alpha$ introduced below eq.~\eqref{e:Mass} is infinite in this case. But we are interested in the time of flight, which is equal to $\tau=2CR/u$ with $C =e\sqrt{\pi} {\rm erfc}(1)\simeq 0.76$. So, the estimate \eqref{e:tau} deviates from the exact value of $\tau$ only by 25\% even for this extreme density profile. We can use the simplest estimate $\dot M\simeq M/T,$ where $T$ is the age of the galaxy, and find with \eqref{e:vpc} \begin{equation} v_{pc}\simeq \sqrt{\frac{2GM\tau}{RT}} =u\sqrt{\frac{\tau}{T}} \,. \label{e:tt} \end{equation} \section{Decrease in the DM particle speed} If a particle escapes from the galaxy, its velocity far from it, $v_1$, is smaller than the initial velocity $v_0$ due to the growth of the galaxy mass, \begin{equation} v_1^2=v_0^2-\frac{2G\dot M\tau}{R} \,. \end{equation} Each particle reduces its speed as it passes through a growing galaxy, transferring the released kinetic energy to the DM and BM inside the galaxy. This deceleration mechanism is similar to the integrated Sachs–Wolfe effect. It works more efficiently in galaxy clusters and poorly in voids, simply because of the difference in the number of galaxies that a particle passes through in the same amount of time. Therefore, we expect the mean kinetic energy of DM particles to be higher in voids than in superclusters. For fast particles with $v_0\gg u$ we can set $v(R)\simeq v_0$. If such particles fly a path of length $\ell$ inside the galaxy, then $\tau\simeq\ell/v_0$ and \begin{equation} v_1\simeq\sqrt{v_0^2-\frac{2G\dot M\ell}{Rv_0 }}\simeq v_0-\frac{G\dot M\ell}{Rv^2_0} \,. \end{equation} The faster the particle, the smaller the loss of speed. Therefore, the initial velocity distribution of particles not only shifts in the direction of decreasing velocities, but this shift depends on $v_0$, changing the shape of the velocity spectrum. Let us estimate the rate of energy loss for a DM particle with mass $m$. During the time $\tau$ of passage through the galaxy, it transfers energy $\frac{mG\dot M\tau}{R}$ to it. In this case, the distance traveled is equal to $\ell\simeq v(R)\tau$.
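For orientation, the scales entering Eqs.~\eqref{e:tau}, \eqref{e:vpc}, and \eqref{e:tt} can be evaluated for Milky-Way-like numbers; all values in the following sketch are illustrative assumptions, in line with the rough character of the model:
\begin{verbatim}
import numpy as np

G = 6.674e-11
M_sun, kpc, Gyr = 1.989e30, 3.086e19, 3.156e16

# illustrative Milky-Way-like parameters (assumptions, not fits)
M = 1e12 * M_sun  # total mass of galaxy plus halo [kg]
R = 200 * kpc     # halo radius [m]
T = 10 * Gyr      # age of the galaxy [s]

u = np.sqrt(2 * G * M / R)   # escape velocity at the halo boundary
tau = 2 * R / u              # central flight time, tau = l/u with l = 2R
v_pc = u * np.sqrt(tau / T)  # capture threshold for central passage

print(f"u    ~ {u / 1e3:.0f} km/s")    # ~210 km/s
print(f"tau  ~ {tau / Gyr:.1f} Gyr")   # ~1.9 Gyr
print(f"v_pc ~ {v_pc / 1e3:.0f} km/s") # ~90 km/s
\end{verbatim}
Hence, for these numbers only particles arriving with initial speeds below roughly $90$\,km/s can be captured during a central passage.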
The rates of energy loss per unit time and per unit path length are \begin{equation} \dot E =\frac{mG\dot M}{R}\, , \qquad \frac{dE}{d\ell}\simeq \frac{mG\dot M}{Rv(R)}\,. \end{equation} The trapped particles transfer all their energy and mass to the galaxy. \section{The rate of increase in the galaxy mass} We have already mentioned various mechanisms for increasing the mass of galaxies. Let us denote the rate of galaxy mass increase due to DM particle capture by $\dot M_{DM}$, the total rate of galaxy mass increase due to all effects by $\dot M$, and the rate of mass increase due to accretion of baryonic matter by $\dot M_b$. We assume the latter to be the net increase, i.e., the rate of mass loss due to ejections and radiation of baryonic matter is subtracted. Obviously \begin{equation}\label{e:Mdotsum} \dot M=\dot M_{DM}+\dot M_b \,. \end{equation} We consider $\dot M_b$ as an external quantity that can change with time and is determined by non-gravitational cooling processes which we do not describe here. Let us determine the value of $\dot M_{DM}$ from the described model as a function of $\dot M_b$. We consider particles in extragalactic space far from galaxies. We assume that their velocities are distributed isotropically in the reference frame of the galaxy, and the number of particles with velocities in the range from $v_0$ to $v_0+dv_0$ in a unit volume is equal to $dN=f(v_0 )dv_0$. The total density of DM particles in extragalactic space is $N=\int_0^\infty f(v_0 )dv_0$. Then the number of particles flying in extragalactic space through an area $dS$ into a solid angle $d\Omega$ in a time $dt$ with velocities in the range from $v_0$ to $v_0+dv_0$ is \begin{equation}\label{e:dN} dn=\frac{v_0}{4\pi} \cos(\phi) f(v_0 )dv_0 dSd\Omega dt \,, \end{equation} where $\phi$ is the angle between the particle velocity direction and the normal to the area. Consider a sphere of radius $R_1\gg R$, surrounding the galaxy and concentric to it. Its surface area is equal to $4\pi R_1^2$. The number of particles passing through it in a time $dt$ with velocities in the range from $v_0$ to $v_0+dv_0$ is given by equation~\eqref{e:dN}. The halo is reached by particles emitted into a solid angle $\Omega \simeq \pi R^2/R_1^2\ll 1$. Let us define the function $k(v_0,\phi )$ which is equal to $1$ if a particle with angle $\phi$ and velocity $v_0$ is trapped and $0$ if it is not trapped. With this we obtain for the rate of increase in the mass of the galaxy by DM particle capture \begin{equation} \dot M_{DM}\simeq 2\pi R_1^2\int\int \sin\phi\cos\phi\, d\phi m f(v_0)v_0 k(v_0,\phi)\,dv_0 \,. \end{equation} Let us evaluate this integral. Since our model is rather a toy model, rough estimates are sufficient. In order for a particle to be captured by a galaxy, it must enter it. This happens if $0\leq\phi \leq\arcsin(\beta ) $ with $\beta =R/R_1\ll 1$. We therefore can set $\sin\phi \simeq\phi$ and $\cos\phi \simeq 1$. Let us also introduce the variable $\xi=\phi /\beta$. If we neglect the curvature of the particle trajectory inside the galaxy (this does not significantly affect the path length for particles flying through), then the path length inside the galaxy is about \begin{equation} \ell=2R\sqrt{1-\phi^2/\beta^2} =2R\sqrt{1-\xi^2} \,. \end{equation} Capture occurs and $k= 1$ if the condition \eqref{e:capt} is met, that is, if \begin{equation} 1-\xi^2>\frac{v_0^4 (v_0^2+u^2 )}{4(G\dot M )^2}\simeq \frac{v_0^4 u^2}{4(G\dot M )^2} \, .
\end{equation} (Remember that the initial velocity of captured particles, $v_0$, is much smaller than the escape velocity $u$.) The particle is captured if the variable $\xi$, which is proportional to the angle $\phi$ of deviation of the particle velocity from the center of the halo, does not exceed the value $\xi_0(v_0)$, where \begin{equation} \xi_0^2 (v_0 )\simeq \max\left(0,1-\frac{v_0^4 u^2}{4(G\dot M )^2 }\right) \,. \end{equation} A particle flying through the very center of the halo is captured if its initial velocity is less than $v_{pc}$ given in \eqref{e:vpc}. In an off-center passage, the particle is captured if \begin{equation} v_0\leq v_p=v_{pc} (1-\xi^2)^{1/4}. \end{equation} If this inequality is not satisfied, then there is no capture and $k=0$. With this we can write \begin{eqnarray} \dot M_{DM} &\simeq& 2\pi R^2 \int_0^{\infty} mf(v_0 ) v_0dv_0 \int_0^{\xi_0} k(v_0,\xi)\xi d\xi \nonumber \\ &=& \pi R^2\int_0^{\infty} \xi_0^2 mf(v_0 ) v_0 dv_0 \nonumber \\ &=& \pi R^2\int_0^{v_{pc}}\left(1-\frac{v_0^4}{v_{pc}^4} \right) mf(v_0 ) v_0 dv_0 \,. \label{e:Mdmdot} \end{eqnarray} The integral on the right-hand side depends on $\dot M$ via $v_{pc}$, and this dependence is highly non-linear. The combination of \eqref{e:Mdotsum} and \eqref{e:Mdmdot} determines $\dot M$ for a given $\dot M_b$. At $\dot M_b=0$ there is the trivial solution $\dot M=0$. Assuming the form of the function $f$, we can obtain the dependence of $\dot M_{DM}$ on $\dot M$. For example, if $f$ is a simple Maxwell-Boltzmann distribution, \begin{equation} f(v)=\frac{4Nv^2}{\sqrt\pi v_{max}^{3}}\exp{\left[-\left(\frac{v}{v_{max}}\right)^2\right]}, \quad v_{max}=\sqrt{\frac{2k_B\Theta}{m}} \end{equation} with temperature $\Theta$ and maximum at $v=v_{max}$ ($k_B$ is the Boltzmann constant), then from \eqref{e:Mdmdot} one obtains \begin{equation} \dot M_{DM}=2\sqrt{\pi} R^2mNv_{max}P(v_{pc}^2/v_{max}^2)\,, \label{e:Max1} \end{equation} where the function $P$ is defined by \begin{equation} P(x)=1-6x^{-2}+2e^{-x}(1+3x^{-1}+3x^{-2}).\label{e:Max2} \end{equation} However, interesting qualitative conclusions can be drawn from general considerations, without assumptions about the velocity distribution of the DM particles (more precisely, their phase space number density $f$ or mass density $mf$). We just assume that the function $f(v_0 )\geq 0$, that it is continuous, that it vanishes at $v_0=0$, reaches a maximum at some value $v_0=v_{max}$, and decreases quickly at high velocities, most likely exponentially in $v_0^2$. This function is proportional to the particle density $N$. We consider the expansion of $f$ in a Taylor series. It includes only even powers of $v_0$. As for the Maxwell distribution, the expansion starts with a quadratic term due to the three independent Cartesian velocity components: \begin{equation} f(v_0 )=\sum_{i=1}^\infty a_i v_0^{2i} = N\sum_{i=1}^\infty \tilde a_i v_0^{2i} \, . \end{equation} The quantities $\tilde a_i= a_i/N$ do not depend on $N$. Therefore, \begin{eqnarray} \dot M _{DM} &\simeq& \pi mR^2\int_0^{v_{pc}}\left(1-\frac{v_0^4}{v_{pc}^4}\right) \sum_{i=1}^\infty a_i v_0^{2i+1} dv_0 \qquad \nonumber \\ &=& \pi mR^2 \sum_{i=1}^\infty \frac{a_i}{(1+i)(3+i)}v_{pc}^{2i+2} \nonumber \\ & =&\dot M \sum_{i=1}^\infty b_i \dot M ^i =\dot M N\sum_{i=1}^\infty \tilde b_i \dot M ^i \,\quad\mbox{with} \label{e:MdotMdmdot}\\ b_i &=& \pi mR^2\left(\frac{2GR}{M}\right)^{(i+1)/2}\frac{a_i }{(1+i)(3+i)}\,. \end{eqnarray} Here we denote $\tilde b_i=b_i/N$. 
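As a consistency check, the closed form \eqref{e:Max2} can be verified numerically. Substituting $s=v_0/v_{max}$ and $x=v_{pc}^2/v_{max}^2$ reduces \eqref{e:Mdmdot} to $P(x)=2\int_0^{\sqrt x}(1-s^4/x^2)\,s^3 e^{-s^2}ds$; a minimal sketch (Python, with numpy and scipy assumed available) compares this quadrature with \eqref{e:Max2}:
\begin{verbatim}
# Sanity check of eqs. (e:Max1)-(e:Max2): compare the closed form P(x)
# with direct numerical quadrature of eq. (e:Mdmdot) for a Maxwell f.
import numpy as np
from scipy.integrate import quad

def P_closed(x):
    return 1 - 6/x**2 + 2*np.exp(-x)*(1 + 3/x + 3/x**2)

def P_quad(x):
    # P(x) = 2 * int_0^sqrt(x) (1 - s^4/x^2) s^3 exp(-s^2) ds,
    # where s = v0/v_max and x = v_pc^2/v_max^2
    val, _ = quad(lambda s: (1 - s**4/x**2)*s**3*np.exp(-s**2),
                  0, np.sqrt(x))
    return 2*val

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, P_closed(x), P_quad(x))   # the two columns agree
\end{verbatim}
We now return to the coefficients $\tilde b_i=b_i/N$ introduced above.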
These quantities help to explicitly extract the dependence on the particle density $N$, which varies significantly over the lifetime of galaxies. If these particles are neither created nor decay, then $N(z)=N(0)(1+z)^3$, and during the existence of a galaxy formed at $z=10$ the density decreases by a factor of 1000 due to the expansion of the Universe. However, due to the considered capture of DM particles, their number density in intergalactic space, $N$, is reduced even further. The quantities $\tilde a_i$ change with time due to changes in the velocity distribution, in particular because of the processes under consideration. However, most likely, the most significant contribution to the time dependence of the parameters $a_i$ is associated with the evolution of $N$. The same can be said about the coefficients $b_i$, although in this case the (weak) dependence of $\tilde b_i$ on time gets additional contributions from the evolution of $M$. Positivity of $f$ for small velocities requires that $b_1>0$. We assume that $b_2<0$, as for the Maxwell-Boltzmann distribution. (We can actually assume that all odd coefficients are positive and all even ones are negative, by analogy with the Maxwell distribution.) Let us try to draw some conclusions based on these basic properties of the distribution function. We start from the case when the captured DM particles have initial velocities much smaller than their mean velocity, so that they are described by the first term of the expansion of $f(v_0)$, \begin{equation}\label{e:nfirst} f(v_0 )\simeq a_1 v_0^2 \,. \end{equation} Let us also assume that the considered galaxy is not very large and the time of passage of particles through it is much less than its age and than the age of the Universe. Then we can approximately set $a_1=$ const. and $M=$ const. during the time of flight. In this case \begin{eqnarray} \dot M -\dot M _b &=& \dot M _{DM}\simeq b_1 \dot M^2=\frac{\pi a_1 R^3 mG}{4M} \dot M ^2 \,, \nonumber\\ \dot M _b &=& \dot M -b_1 \dot M ^2=\frac{1}{4b_1} -b_1\left(\frac{1}{2b_1} -\dot M \right)^2. \label{e:MDM2} \end{eqnarray} For $\dot M _b=0$ this gives us the equation \begin{equation} \dot M \left(1-\frac{\pi}{ 4} a_1 R^3 mG \frac{\dot M}{ M}\right)=0 \,. \label{e:Mdot2} \end{equation} The vanishing condition for the second factor gives us the rough estimate $T\simeq \pi N \tilde a_1 GR^3 m/4$ in \eqref{e:tt}. So, apart from the trivial solution $\dot M =0$, we find only this clearly unphysical solution with $T \propto N$. Thus $T$ tends to $0$ as $N\to 0$, or, in other words, $\dot M_{DM}$ grows indefinitely as $N$ tends to $0$, i.e., when DM becomes less and less abundant. This is meaningless and is a consequence of our approximation, which breaks down when $\dot M$ becomes large. \section{A jump in the particle capture rate} \begin{figure}\begin{center} \includegraphics[width=8cm]{fig2.eps} \end{center} \caption{\label{f:1}Plots of $\dot M(\dot M_b)$ for various functions $f$ and DM densities. Solid curves show stable branches, dashed curves show unstable ones, vertical dotted arrows show jumps in the state of the system. The four panels correspond to the following cases. Panel a) depicts the approximation \eqref{e:nfirst} for low particle velocities. Panels b) and c) show two possibilities for densities above the threshold. Both are s-shaped. They differ in the position of the left boundary of the upper stable branch at point C. 
Panel d) shows the case when the DM particle density is below the threshold value $N_1$ and the curve becomes monotonically increasing.} \end{figure} In order to understand this situation, we apply the methods of catastrophe theory. For this it is important to note that we consider $\dot M_b$ as an external parameter which is determined by accretion and baryonic cooling processes that we do not describe in our model. We want to study the increase of the total mass for a given $\dot M_b$. Fig. \ref{f:1}a shows the dependence of $\dot M$ on $\dot M_b$ according to equation \eqref{e:MDM2}. This fold consists of two parts. The lower half of the parabola AB (solid) corresponds to a stable solution. With an increase in the baryon mass growth rate $\dot M_b$, the DM particle capture rate $\dot M_{DM}$ increases. It is described by \eqref{e:MDM2}. The upper half of the parabola BC (dashed), with a negative slope, corresponds to an unstable solution. The two points A and C where the curve intersects the y-axis correspond to the two solutions of equation \eqref{e:Mdot2}. Of these, only the solution $\dot M=0$ is stable. At a nonzero matter accretion rate $\dot M_b$, particle capture begins. However, the rate of increase in the mass of the dark halo, $\dot M_{DM}$, is smaller than the rate of increase in baryonic matter, $\dot M_b$. They become equal only at the top of the parabola. At point B we have $\dot M_b=\dot M_{DM} =1/(4b_1)\equiv \dot M_{bc}$. From this, two conclusions can be drawn. First, the capture of dark matter requires the accretion of ordinary matter; there is no DM capture without baryonic matter accretion. Secondly, with the ratio of the mass growth rates of baryonic and dark matter described by the curve in Fig.~1a, it is impossible to form a galaxy containing 85\% dark matter. The system~\eqref{e:MDM2} does not have a solution for $\dot M_b>1/(4b_1)$. With a further increase of $\dot M_b$, the state of the system reaches the top of the parabola, after which a sudden regime change begins: the rate of mass growth $\dot M$ and the velocity $v_{pc}$ increase rapidly. We can then no longer consider the particle velocities to be small, and equation \eqref{e:MDM2}, based on the approximation \eqref{e:nfirst}, ceases to adequately describe the process. The accretion rate $\dot M_b= 1/(4N\tilde b_1)$, which is required to lose stability at point B, can be quite small at the time of galaxy formation, when $N$ is very large. To consider larger accretion rates, we take into account the next term in the expansion \eqref{e:MdotMdmdot} and obtain the equation \begin{equation}\label{e:MDMdot2} \dot M_{DM}=\dot M-\dot M_b \simeq b_1 \dot M^2+b_2 \dot M^3,\quad b_2<0. \end{equation} Figures 1b and 1c show the curves ABCDE which can be obtained in this case. They have a characteristic s-shaped form, typical of the fold catastrophe well known in catastrophe theory, see e.g.~\cite{Arnold}. The curve consists of three parts, two of which have a positive slope (AB and CDE) and are stable. BC, with a negative slope, is unstable. When the end of the stable part is reached, a jump from B to D to the other stable branch occurs. We see that the upper stable solution is suitable for our problem, since for it the ratio of the mass growth rates of DM and baryonic matter can be quite large. However, to enter the required regime, the baryon accretion rate must exceed a certain threshold value $\dot M_{bc}\simeq 1/(4N\tilde b_1)$ corresponding to a jump in the capture rate $\dot M_{DM}$. 
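The geometry of this fold is easy to reproduce numerically. In the sketch below (Python; the coefficients $b_1>0$ and $b_2<0$ are illustrative assumptions, not fitted values), the turning points B and C of \eqref{e:MDMdot2} are found from $d\dot M_b/d\dot M=1-2b_1\dot M-3b_2\dot M^2=0$:
\begin{verbatim}
# Minimal sketch of the fold of eq. (e:MDMdot2) with illustrative
# coefficients. B is the local maximum of Mdot_b(Mdot) (jump up),
# C the local minimum (left edge of the upper stable branch).
import numpy as np

b1, b2 = 1.0, -0.15                       # illustrative values, b2 < 0

# turning points: roots of 1 - 2*b1*M - 3*b2*M^2 = 0
B, C = np.sort(np.roots([-3*b2, -2*b1, 1]).real)

def Mdot_b(M):                            # eq. (e:MDMdot2) rearranged
    return M - b1*M**2 - b2*M**3

print("B:", B, "Mdot_b(B) ~ 1/(4 b1) =", Mdot_b(B))
print("C:", C, "Mdot_b(C) =", Mdot_b(C))  # negative here: Fig. 1b case
\end{verbatim}
With these values, $-4b_2/b_1^2=0.6<1$, so $\dot M_b$ at point C comes out negative and the curve has the shape of Fig.~1b, in agreement with the criterion derived below.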
Note also that a kind of hysteresis loop appears in the situation shown in Figure 1c: after jumping to the upper branch, the state of the system on the graph shifts to the left along the upper stable branch if $\dot M_b$ is decreasing. When it reaches the edge of the stable branch at point C, it jumps to the lower branch at point F. With a certain ratio of the coefficients $b_1$ and $b_2$ this formally happens at a negative value of $\dot M_b$, as is the case in panel 1b. It is easy to calculate that point C corresponds to a negative value of $\dot M_b$ if $N>-4\tilde b_2 \tilde b_1^{-2}$ and to a positive value of $\dot M_b$ for $N<-4\tilde b_2 \tilde b_1^{-2}$. Thus, the value of $\dot M_{DM}$ can remain nonzero and large if in the past the galaxy had a value of $\dot M_b$ larger than the critical value and there was a jump, even if later $\dot M_b$ decreases or vanishes. As the particle density $N$ decreases, the left boundary of the upper stable branch, i.e. the point C, crosses the y-axis, and the system can return to the lower stable branch via the CF transition. It is clear that we cannot restrict ourselves to a finite number of terms in the expansion \eqref{e:MdotMdmdot}. So, let us obtain a qualitative form of the dependence of $\dot M$ on $\dot M_b$ without using the series expansion. We introduce the new variable $\eta=v_0/v_{pc}$. With \eqref{e:Mdmdot} we obtain \begin{eqnarray} \label{e:26} \hspace*{-5mm}\dot M_{DM}\!\!\! &\simeq& \pi R^2 mv_{pc}^2F,\\ F \! &=&\int_0^1(1-\eta^4 ) f(\eta v_{pc} )\eta d\eta \nonumber \\ &=&\int_0^1(1-\eta^4 ) f\left( \frac{\eta}{\eta_0} v_{max} \right) \eta d\eta,\quad\eta_0=\frac{v_{max}}{v_{pc}}. \quad \end{eqnarray} The integral $F$ goes over a fixed interval. The integrand is the product of the function $\eta(1-\eta^4)$, which vanishes at both ends of the interval, and the unknown function $f$ whose properties we discussed above. The function $f$ reaches its maximum at $v_0=v_{max}$, i.e., at $\eta=\eta_0$. For small $v_{pc}$ we have $\eta_0\gg 1$. The maximum of $f$ then lies outside the region of integration and the integral is proportional to $\eta_0^{-2}\propto v_{pc}^2$. As a result, we can approximate $f$ by \eqref{e:nfirst}. At large $v_{pc}$ we have $\eta_0\ll 1$ and the maximum of $f$ shifts to the lower boundary of the interval. The integral is then approximately proportional to $\eta_0^{2}\propto v_{pc}^{-2}$, which is compensated by the pre-factor $v_{pc}^2$. A more accurate estimate for this case can be obtained directly from the expression \eqref{e:Mdmdot}, in which the upper limit of integration is replaced by infinity, which introduces a negligible error. As a result, we obtain the asymptotic expression \begin{eqnarray} \dot M_{DM} &\simeq& \pi R^2\int_0^\infty\left(1-\frac{v_0^4}{v_{pc}^4}\right) mf(v_0 ) v_0 dv_0 \nonumber \\ &\simeq&C_1-C_2 v_{pc}^{-4} =C_1-C_3 \dot M^{-2}, \label{e:27} \\ C_1&=& \pi mR^2\int_0^\infty f(v_0 ) v_0 dv_0 >0,\, C_1\propto N, \label{e:28}\\ C_2&=&\pi mR^2 \int_0^\infty f(v_0 ) v_0^5 dv_0 >0,\, C_2\propto N, \\ C_3 &=& \frac{M}{2GR}C_2 \,. \end{eqnarray} At high accretion rates, the DM capture rate saturates. The dependence acquires the asymptotic form $\dot M\rightarrow C_1+\dot M_b$. In Figures 1b,c, the slopes of the curves in the upper right corner are not drawn to scale. Let us also consider the behavior of the function for intermediate values of $\eta_0$. The integral $F$ in \eqref{e:26} is a function of the variable $v_{pc}$ which decreases for both large and small values of the argument. 
Therefore, it reaches a maximum at a certain value of $v_{pc}$ which we call $v_{pcm}$. We have $\frac{\partial F}{\partial v_{pc}}=0$ at $v_{pcm}$. The value of $F$ there is proportional to $f(v_{pcm})$, which, in turn, is proportional to the density of particles in intergalactic space, $N$. The integral $F$ is multiplied by the factor $v_{pc}^2\propto\dot M$. Therefore, for $v_{pc}=v_{pcm}$ we obtain a dependence of the form \begin{equation} \dot M_{DM}\simeq Q(M,R)mN\dot M \label{e:29b} \end{equation} where the function $Q(M,R)$ does not depend on $\dot M$. On the other hand, $\dot M_b=\dot M-\dot M_{DM}$. For small $\dot M$ we have \eqref{e:MDM2} with a positive slope, and for large $\dot M$ we have~\eqref{e:27}, also with a positive slope. At the maximum of $F$ we have \eqref{e:29b} with the slope $1-Q(M,R)mN$, which is negative if the DM particle density $N$ exceeds some critical value. If this happens, we obtain an s-shaped curve $\dot M(\dot M_b)$ like in Figs. 1b and 1c. We apply the theory of catastrophes and find that, with a continuous increase of $\dot M_b$, a jump in $\dot M$ occurs and something like a hysteresis loop can appear. It is clear that the upper stable branch must lie above the point with a negative derivative with respect to $\dot M_b$, which is attained at $v_{pcm}$, where $F$ is maximal. Let us evaluate this maximum. The integrand in $F$ is the product of two functions, each of which has a maximum. The function $f$ reaches its maximum at $v_0=v_{max}$, i.e. at $\eta=\eta_0$. The maximum of the function $\eta (1-\eta^4)$ is achieved at $\eta=\eta_1\approx 0.7$. The integral is maximal if these two maxima roughly agree, hence $\eta_0\approx\eta_1$. In this case, the speed $v_{pc}$ for the upper stable branch is about $v_{pcm}\simeq 1.4v_{max}$. A DM particle with an initial velocity $v_0=v_{max}$ is then captured by the galaxy if its trajectory passes at a distance less than $0.85R$ from the center. This means that a significant fraction of the DM particles is captured as they fly through the galaxy. The ratio of the influx rates of dark and baryonic matter can be quite large. But the jump is impossible if the rate of accretion of baryonic matter has never exceeded some threshold value, in the past or present. This circumstance contributes to the growth of large-scale structure in the Universe. It should be taken into account when studying and numerically modeling this process. However, the jump requires the presence of DM with a density $N$ exceeding a certain threshold value. Taking into account that $N$ decreases with time, both due to the Hubble expansion and because of the capture of particles as discussed in this paper, it can be assumed that at some point the s-shaped curve has turned or will turn into the monotonic dependence shown in Fig. 1d. So the capture process has either weakened significantly in the past, or will weaken in the future. In addition, $v_{pc}$ has decreased or will decrease significantly. Let us also find the positions of the points B and C in figures 1b,c. They are determined by the condition $d\dot M_b/d\dot M=0$. Taking into account \eqref{e:Mdotsum} this gives $d\dot M_{DM}/d\dot M=1$. With \eqref{e:Mdmdot} we can write this condition as \begin{eqnarray} H(v_{pc})\equiv N^{-1}v_{pc}^{-6}\int_0^{v_{pc}}v_0^5f(v_0)dv_0 \hspace{2cm}\nonumber\\=\frac{1}{2\pi R^2mN}\left(\frac{M}{2GR}\right)^{1/2}.\label{v1} \end{eqnarray} The function $H(v_{pc})$ tends to zero as $v_{pc}\to 0$ and as $v_{pc}\to \infty$. 
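For a Maxwell distribution this behaviour is easy to confirm; the sketch below (Python; the normalization of $f$ is the one given above, and $N$ cancels in $H$ by construction) evaluates $H$ on a grid and locates its maximum:
\begin{verbatim}
# Numerical sketch of H(v_pc) from eq. (v1) for a Maxwell f:
# it vanishes at both ends and peaks near v_pc ~ v_max.
import numpy as np
from scipy.integrate import quad

v_max = 1.0                      # work in units of v_max

def f_over_N(v):                 # Maxwell distribution divided by N
    return 4*v**2/(np.sqrt(np.pi)*v_max**3)*np.exp(-(v/v_max)**2)

def H(vpc):
    val, _ = quad(lambda v: v**5*f_over_N(v), 0, vpc)
    return val/vpc**6

vgrid = np.linspace(0.2, 5.0, 25)
Hv = [H(v) for v in vgrid]
print("maximum of H near v_pc =", vgrid[int(np.argmax(Hv))])
\end{verbatim}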
This means that it has a maximum at a certain $v_{pc}=v_1$. It can be estimated that $v_1\approx v_{max}$. We denote \begin{equation} N_1=\frac{1}{2\pi R^2mH(v_1)}\left(\frac{M}{2GR}\right)^{1/2}.\label{N1} \end{equation} At $N<N_1$ we have no solution, and the turning points B and C do not exist. The value $N=N_1$ corresponds to the transition from the s-shaped curve to the monotonic one. At $N>N_1$ we have two solutions of equation \eqref{v1}. The solution with the smaller $v_{pc}$ corresponds to point B, the one with the larger $v_{pc}$ to point C. For small $v_{pc}$ we can use the approximation \eqref{e:nfirst} and recover the earlier estimate for the coordinates of point B, \begin{equation} \dot M_b = \dot M_{bc} \simeq \frac{1}{4N\tilde b_1} =\frac{M}{\pi NR^3 mG\tilde a_1}.\label{e30} \end{equation} When point C intersects the ordinate axis (figure 1b), in addition to \eqref{v1}, the condition \begin{equation} v_{pc}^{-2}\int_0^{v_{pc}}v_0f(v_0)dv_0 =\frac{3}{2\pi R^2m}\left(\frac{M}{2GR}\right)^{1/2}\label{v2} \end{equation} must also be satisfied. We can determine the values of $N$ and $\dot M$ in this case by solving the system of equations (\ref{v1}) and (\ref{v2}). Let us roughly estimate the ratio of the rates of increase in the mass of DM and BM immediately after the jump from B to D. At point B, these rates are approximately equal and, according to \eqref{e30}, they are inversely proportional to the DM mass density $Nm$. At point D, lying on the upper branch, the value of $\dot M_b$ is the same as at point B. The ratio of $\dot M_{DM}$ to $\dot M_b$ at point D is approximately equal to the ratio of $\dot M_{DM}$ at points D and B, which is clearly greater than 1. We can use \eqref{e:27} and set $\dot M_{DM} \simeq C_1 \propto Nm$. So, the ratio of the rates of increase in the mass of DM and BM at point D is proportional to $N^2$ and can be very large at the early stages of galaxy evolution. \begin{figure}\begin{center} \includegraphics[width=8cm]{4formulas_bw.eps} \end{center} \caption{\label{f:2}Dependence of the quantities \eqref{xy}, proportional to the rates of increase in the total and baryonic masses of the galaxy, for the case of the Maxwell distribution of DM particle velocities. The curves from left to right correspond to values of the parameter $\mu$ from \eqref{mu} equal to 10, 7, 5 and 4.} \end{figure} Let us confirm these arguments with the example of the Maxwell-Boltzmann distribution and use eq. \eqref{e:Max1}. Since we are interested in the ratio of mass gain rates, we introduce the dimensionless variables \begin{equation} x=\gamma \dot M_b,\quad y=\gamma \dot M,\quad\gamma=\sqrt{\frac{2GR}{M}}v_{\max}^{-2}. \label{xy} \end{equation} From \eqref{e:Max1} and \eqref{e:Mdotsum} we obtain \begin{eqnarray} x &=& y-\mu P(y) \label{e:xofy} \\ \mu &=&\frac{2R^2mN}{v_{\max}}\sqrt{\frac{2\pi GR}{M}} \label{mu} \\ &\simeq& \! 0.01\frac{mN}{0.3\rho_{c0}/h^2}\frac{100{\rm km}/{\rm s}}{v_{\max}}\left(\frac{R}{{\rm 0.3Mpc}}\right)^{5/2}\!\!\left(\frac{10^{12}M_\odot}{M}\right)^{1/2} \qquad \nonumber \end{eqnarray} where the function $P(y)$ is given in \eqref{e:Max2}. In \eqref{e:xofy} the parameter dependence has been reduced to the single dimensionless parameter $\mu$, which decreases with time, mainly due to the decrease in $N$. Figure \ref{f:2} shows four curves corresponding to the values of this parameter equal to 10, 7, 5 and 4 (from left to right). Unlike the schematic curves of Fig.~\ref{f:1}, they are drawn accurately to scale. At $\mu=4$ the curve is monotonic like in Fig. 
\ref{f:1}d, and this value is slightly below the critical value for the transition to the s-shaped curve. For $\mu=5$ we have a curve similar to that shown in Fig. \ref{f:1}c, and this value is slightly below the value of $\mu$ at which the left edge of the top branch intersects the y-axis. These two special values of $\mu$ are rather close; they differ only by a factor of 1.25. For larger $\mu$ the curve has the form shown in Fig. \ref{f:1}b. The ratio of the rates of mass gain of DM and BM after the jump to the upper branch is approximately 11 at $\mu=5$, 55 at $\mu=7$, and 85 at $\mu=10$. It increases rapidly with increasing $\mu\propto N$. As a result, the galaxy at the stage of intense capture accumulates a lot of dark matter. \section{Qualitative description of the halo formation process}\label{Qd} After analyzing the conclusions obtained from eqs.~\eqref{e:Mdotsum} and \eqref{e:Mdmdot} within the framework of our model, we can describe the process of accumulation of dark matter inside the halo of a galaxy. As a result of the growth of small density fluctuations, regions of increased density emerge, in which galaxies can form. The surrounding matter, both baryonic and dark, begins to fall into them. The infalling BM cools and is captured. In the absence of accretion of baryonic matter, DM particles fly through protogalaxies and are not captured. But the situation changes with an increase in the mass of the baryonic component of the galaxy due to accretion, mergers of galaxies and other processes. A fraction of the DM particles flying into the galaxy at sufficiently low speeds is captured. In Fig. 1b, this is described by the section AB on the lower stable branch of the s-shaped curve. The mass of dark matter inside the galaxy begins to grow, but the rate of its increase is smaller than the rate of increase in the mass of BM. If the rate of increase in the mass of the BM exceeds a certain threshold value, $\dot M_{bc}$, something like a phase transition occurs, with a change in the state of the system. In Fig.~1b, this corresponds to a sharp jump from B to D after reaching the right boundary of the lower stable branch at point B. The simple estimate \eqref{e30} determines the dependence of the relative critical growth rate of baryonic matter $\dot M_{bc}/M$ on the size of the galaxy $R$ and on the mass density of dark matter in the intergalactic space $Nm$ (during the formation of galaxies, this is simply the density of dark matter). The value of $N$ decreases rapidly due to the expansion of the Universe, hence if the jump did not occur during the formation of the galaxy, then it will not occur at later times. The only exception might be the process of merging of galaxies, which can provide a transition to the upper branch due to a sharp temporary increase in the baryonic mass growth rate $\dot M_b$. The critical baryonic mass growth rate required for the jump is smaller for objects with large $R$ (galaxies and their clusters) than for objects with small $R$ (stars, etc.). Therefore, dark halos form around galaxies, but not around stars. After a jump to the upper stable branch, the state of the system corresponds to point D or its vicinity. Let us consider the case where the density $N$ is high enough that $-4b_2/b_1^2<1$, so that the $\dot M(\dot M_b)$ curve has the shape of Fig.~1b. If $\dot M_b$ increases, the system shifts to the right, say, to point E. If $\dot M_b$ decreases, the system shifts to the left along the curve DG. On this branch, $\dot M_{DM}\gg \dot M_b$. 
As a result, we observe galaxies consisting on average of 85\% DM. Note that in case 1b the capture of DM particles continues even if the accretion of baryonic matter ceases at point G. However, not only the value of $\dot M_b$, but also the curve itself changes with time. The dynamics of the change in the curve is associated primarily with the decrease of $N$. The change in the shape of the velocity distribution of the DM particles, i.e., in the coefficients $\tilde b_i$, has a much weaker effect. When the threshold value of the intergalactic DM density $N=-4\tilde b_2/\tilde b_1^2$ is reached, the left boundary of the upper stable branch crosses the y-axis. The curve takes the form shown in Fig. 1c. At point C, the system can jump to point F on the lower branch. In this case, the mass of the dark halo practically stops growing. Without knowledge of the DM velocity distribution, we do not know at which density the BD jump occurred, so we do not know whether the curve is described by graph 1b or 1c at the present time. But the transition from curve 1b to curve 1c is inevitable. In the above description, we assume that it happened later than the jump from B to D. If the state of the system has not descended to the lower branch and intensive capture of DM particles continues due to the high rate of accretion of baryonic matter $\dot M_b$, then with a further decrease in $N$ the $\dot M(\dot M_b)$ curve becomes monotonic, as shown in Fig. 1d, and represents a single stable branch at densities $N$ below the next threshold value $N_1$ given in eq.~\eqref{N1}. This can be considered the end of the stage of intensive capture of DM particles. It is obvious that this transformation occurs later than the crossing of point C through the $y$-axis. \section{Some quantitative estimates}\label{Sq} For an estimate, we use the parameters of our Galaxy. The Milky Way cannot be considered typical, as there are many more dwarf galaxies in the Universe, but it is a good example of a large galaxy. We set $M\approx10^{12} M_{\odot}\approx 2\cdot 10^{42}$ kg and $R\approx10^6$ ly $\approx10^{22}$ m. The last estimate is based on the value $R=292 \pm 61$ kpc \citep{2020MNRAS.496.3929D} and is a rather large value. It is of the order of the average distance between galaxies and slightly less than half the distance to the Andromeda galaxy, M31. For these values the time of flight through the center of the Galaxy exceeds 2 million years even for an ultra-relativistic particle. But we are more interested in slow particles captured by the Galaxy. As mentioned above, their initial speed is smaller, and the speed of passage through the halo is approximately equal to 170 km/s. This gives an upper bound on the time of flight through the Galaxy, $\tau$, in the non-capture case: $\tau\leq4.5\cdot 10^9$ years. There are alternative estimates of $R$. Some of them can be found in the review article by \cite{2013JCAP...07..016N} and in the papers by \cite{2014ApJ...794...59K} and \cite{2012PASJ...64...75S}. If we choose the value $R=200$ kpc $\approx 6.5\cdot 10^5$ ly $\approx6.2\cdot 10^{21}$ m with the same estimate of $M$, we find $u\approx 210$ km/s, $\tau\leq 10^9$ years. If we choose the lower estimate $R=100$ kpc, then $u\approx 300$ km/s and $\tau\leq3.3\cdot10^8$ years. These $\tau$ values are smaller or much smaller than the ages of the Universe and of the Galaxy for all estimates of $R$. 
This confirms the assumption underlying the model that during the passage of a particle that is not captured by the galaxy, the mass of the latter increases, but not by very much. It is clear that this is also true for dwarf galaxies with significantly smaller halo sizes. We can estimate $v_{pc}$ from \eqref{e:tt}, taking $T\simeq 1.3\times 10^{10}$ years, which corresponds to galaxy formation at $z\simeq5$ to $20$. For the Milky Way we obtain $v_{pc}\approx 100\,$km/s for $R=300\,$kpc, $v_{pc}\approx 60\,$km/s for $R=200\,$kpc, and $v_{pc}\approx 50\,$km/s for $R=100\,$kpc. We are more interested in estimating the rate of halo mass increase due to DM capture. Let us assume that the Milky Way has not left the stage of intense capture and apply the formula \eqref{e:27}, more precisely, its limit for large $\dot M$. Using \eqref{e:28} with $\int_0^\infty f(v_0 ) v_0 dv_0 \approx N v_{max}$, we find \begin{eqnarray} \dot M_{DM}\approx C_1\approx \pi R^2 \rho_{DM}v_{max}\nonumber \\ \approx 0.08 \varkappa h^2\left(\frac{R}{200\, {\rm kpc}}\right)^2 \frac{v_{max}}{100\,{\rm km/s}} M_{\odot} \textrm{ per year.} \label{e:29} \end{eqnarray} Here we have denoted the DM mass density in the intergalactic space as $\rho_{DM}$. In our estimation, we took into account that $Nm=\rho_{DM}= 0.25\varkappa\rho_{c}$. It is less than the average density of dark matter in the Universe, which is now approximately 25\% of the critical density $\rho_{c}$, determined by the Hubble constant $H_0=h\,100\,$km/s/Mpc. The coefficient $\varkappa<1$ is introduced to account for this difference, which is caused by the fact that part of the dark matter is accumulated in the halos of galaxies. The product of $\dot M_{DM}$ and the age of the Galaxy is much smaller than the DM mass in our Galaxy. The reason for this is that the rate of mass increase was significantly higher in the early stages of the capture of dark matter particles by the Galaxy. Let us estimate the mass of dark matter $M_{DM}(z_0)$ captured from the time corresponding to the redshift $z_0$ to today. We assume that all this time there was an intense capture of particles and the mass gain is described by equation \eqref{e:29}. The mean density of dark matter in the Universe is proportional to $(1+z)^{3}$. We can set $\varkappa\simeq 1$ for the early stages of galaxy evolution, which account for most of the captured DM. Let us assume that from the beginning of the considered period the galaxy formed a gravitationally bound system whose dimensions did not increase due to the Hubble expansion. It is difficult to estimate by how much $v_{max}$ changes with the expansion of the Universe. On the one hand, the speed of a particle flying far from galaxies and not interacting with other particles remains almost unchanged. On the other hand, an analogy can be drawn with the cooling of an ideal gas during its adiabatic expansion. However, it is doubtful that DM particles would be in a state of thermal equilibrium. Therefore, and for simplicity, we estimate $M_{DM}(z_0)$ assuming the values of $R$ and $v_{max}$ to be approximately constant during the period under consideration, with the change in the capture rate determined mainly by the change in the DM density. Let us denote the current capture rate as $\dot M_{DM}(0)$ and apply the flat $\Lambda$CDM model. 
We obtain \begin{equation} M_{DM}(z_0)=\int \dot M_{DM}(0)(1+z)^{3} dt=W(z_0)\dot M_{DM}(0)H_0^{-1} \end{equation} with \begin{equation} W(z_0)=\int_{0}^{z_0} \frac{(1+z)^2}{\sqrt{\Omega_{\Lambda}+\Omega_m(1+z)^3}} dz. \end{equation} Here $\Omega_{\Lambda}\approx0.7$ and $\Omega_m\approx0.3$ are the density parameters for the cosmological constant and matter. Thus, the mass of dark matter captured by the Galaxy from the moment $z=z_0$ corresponds to that which it would have captured during the time $W(z_0)H_0^{-1}=W(z_0)\cdot 10^{10}/h\,$years if the current rate of capture had been maintained. We can calculate $W(19)\approx114$, $W(24)\approx161$, $W(32.3)\approx250$. We see that, according to our rough estimate, the dark matter that forms the dark halo of the Galaxy can be captured if the process of intense capture begins at $z\simeq20$ or $z\simeq30$. In order to avoid misunderstanding, we emphasize once again that we do not assert that at the present time the Galaxy continues to actively capture DM particles and that its state is now on the upper branch. Almost all DM was captured at the earliest stage of this process. It can be assumed that the process of moving to the lower branch (the fall from C to F in Fig. 1c) has already occurred. We do not know if active capture resumed temporarily during the capture of a single dwarf galaxy (see \cite{2021ApJ...923...92N}). There are galaxies in the Universe more massive than the Milky Way. When evaluating \eqref{e:29}, we assumed that the capture rate is maximal. Differences in the mass of dark matter in galaxies can be related to the sizes of the galaxies and to the moments of the beginning and end of intense capture, i.e., the times of the jump to the upper branch and of the fall back to the lower branch. The rate of matter capture is proportional to $R^2$. From the estimate \eqref{e30} we can assume that for a larger proto-galaxy the jump to the upper stable branch occurred earlier than for a smaller one. The system can descend to the lower branch not earlier than when point C crosses the y-axis and the curve takes the form shown in Fig. 1c. But a significant accretion rate of BM allows it to remain on the upper branch for some time after that and to continue to accumulate dark matter at a significant rate. The accretion rate is clearly larger in a big massive galaxy, all other things being equal. We know galaxies with estimates of $R$ larger than $300$--$400$ kpc. These are NGC 4889, NGC 4874, ESO 306-17, etc. It can be assumed that their large masses are associated, among other things, with a particularly effective capture of dark matter. Another possibility is associated with the merger of two galaxies of comparable mass, continuous merging of galaxies in the cluster potential (``galactic cannibalism''), or early merging during cluster formation. An example of such a merger is the giant interacting elliptical galaxy ESO 146-5 (ESO 146-IG 005) in the center of the cluster Abell 3827. Its total mass is $(2.7 \pm 0.4)\times 10^{13} M_\odot$ within $37 h^{-1}$ kpc according to~\cite{2010ApJ...715L.160C}. This estimate was obtained from strong gravitational lensing. The total halo mass of ESO 146-5 is larger. It is perhaps the most massive galaxy in the nearby universe. In conclusion, if a galaxy or a galaxy cluster is formed from a strong density perturbation and has a larger-than-average size and a high initial rate of baryonic mass increase $\dot M_b$, it can accumulate more DM particles. 
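Returning to the weighting function $W(z_0)$ introduced above: for a flat model with $\Omega_\Lambda+\Omega_m=1$ the integral also has the closed form $W(z_0)=\frac{2}{3\Omega_m}\left[\sqrt{\Omega_\Lambda+\Omega_m(1+z_0)^3}-1\right]$, and both can be checked with a few lines of code (a sketch only; the precise numbers are sensitive to the adopted, here rounded, density parameters):
\begin{verbatim}
# Quadrature sketch for W(z0), cross-checked against the closed form
# valid for a flat model; Om and OL are the rounded values quoted above.
import numpy as np
from scipy.integrate import quad

Om, OL = 0.3, 0.7

def W_quad(z0):
    val, _ = quad(lambda z: (1 + z)**2/np.sqrt(OL + Om*(1 + z)**3),
                  0, z0)
    return val

def W_closed(z0):
    return 2/(3*Om)*(np.sqrt(OL + Om*(1 + z0)**3) - 1)

for z0 in (19, 24, 32.3):
    print(z0, W_quad(z0), W_closed(z0))   # the two columns agree
\end{verbatim}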
\section{Comparison with other approaches} We have considered the capture of particles and the growth of the mass of a galaxy halo in the framework of a simple model. In this model, the initial increase in mass due to the growth of primordial fluctuations is enhanced and turns into a self-sustained process. The analysis of the process is carried out purely analytically, without complex numerical calculations or simulations. Nevertheless, it allows us not only to understand the physical essence of the process, but also to obtain some quantitative estimates. The model can naturally incorporate results from other approaches. Thus, in eq.~\eqref{e:Mdmdot}, one can use the velocity distribution of dark matter particles obtained from the analysis of the linear growth of perturbations, and when estimating the time of flight $\tau$, one of the numerous density profiles obtained from the results of $N$-body simulations can be used. These two approaches and the analysis of the observed rotation curves of galaxies underlie our knowledge of the structure and evolution of cold dark matter halos. The paper by~\cite{2011ASL.....4..297D} describes well our knowledge as of 2011. Note that discussions continue even on the most basic aspects of halo properties. For simplicity, we have considered a spherically symmetric halo. Some simulations predict this~\citep{2004ApJ...601..242M}. The article by~\cite{2004MNRAS.351..643H} states that the halo is an oblate or prolate spheroid with typical axis ratios of 0.6–0.8. An even more asymmetric halo is predicted by the analysis of the growth of linear fluctuations for some initial conditions. In~\cite{2011MNRAS.414.3044V} it is found that halos are triaxial with major-to-minor axial ratios of the order of 3:1. There are also predictions for halos with intermediate deviations from sphericity. The considered mechanism most certainly also works for a non-spherical halo, but the estimates and formulas become much more complicated and have to be treated numerically. We note one aspect that makes analytical models qualitatively more reliable than results from $N$-body simulations. No matter how large the number of particles in a computer simulation, it is many orders of magnitude smaller than the true number of DM particles for any reasonable choice of their mass. If we use $N$ particles when modeling a region of space with a total mass of matter $m_{\rm tot}$, then the ``effective particle mass'' will be equal to $m_{\rm tot}/N$, i.e., at least the mass of a globular cluster or a galaxy. The square of this value enters formula \eqref{e:Edot} and provides strong dynamical friction in this case. The particle dynamics is then different from the dynamics of a significantly larger number of lighter particles. A possible continuation of this work is related to the statistical behaviour of a thermodynamic system which is not in equilibrium. After flying through a galaxy, DM particles are partially captured, and those that escape lose part of their speed. One could try to investigate the influence of this process on the particle velocity distribution by writing down something like a Boltzmann transport equation. \section{Conclusions} \begin{itemize} \item We studied the capture of DM particles flying into a galaxy from the space surrounding it. The capture is associated with an increase in the mass of the galaxy, primarily of its dark halo. 
The kinetic energy of a particle decreases during its passage through the halo and may become insufficient for the particle to leave the halo if the galaxy mass increases sufficiently during the flight. This requires the combined action of two factors. One is an increase in the mass of the baryonic component of the galaxy, and the other is determined by the particle flux. Both have to be sufficiently large for a significant capture of DM particles. Furthermore, both change in time, leading to quick changes in the DM capture rate. Capture occurs precisely for objects of the size of galaxies, but not for much smaller astronomical objects. \item As a result, the particle is captured and begins to move inside the gravitational potential well formed by the gravitational attraction of the galaxy. The capture process is well described by catastrophe theory. Its start is associated with the growth of primordial density fluctuations during the Dark Ages. The ratio of the influx rates of dark and baryonic matter can be quite large. This contributes to the growth of large-scale structure in the Universe. \item The growth rate of the mass of baryonic matter inside the galaxy, for example due to accretion and cooling or due to galaxy cannibalism, must exceed a certain threshold value to enter the required regime. In addition, the density of DM particles in intergalactic space must exceed a certain threshold value to switch to the intensive DM capture mode. Taking into account that the matter density decreases with time, both due to the Hubble expansion and because of the capture of particles discussed in this paper, it can be assumed that the capture process has either weakened significantly in the past, or will do so in the future. \item Particles with a sufficiently high initial velocity can fly through the galaxy, leaving it with a speed reduced due to the action of the mechanism under consideration. The higher the initial velocity, the smaller the loss of both velocity and energy of the particle. As a result, a general decrease in energy and a change in the velocity distribution of the particles occur. This process is more effective for particles in galaxy clusters than in voids. \item A qualitative description of the formation of a dark halo around galaxies is given in Section \ref{Qd}. The process includes several transformations and changes in the state of the system. Particularly strong fluctuations lead to the appearance of large galaxies, often in clusters. Their size and high rates of mass increase ensure the capture of almost all DM particles that enter them. \item Some quantitative estimates of the considered process in the Milky Way galaxy are presented in Section \ref{Sq}. \vspace{1cm} \end{itemize} {\bf Acknowledgement} RD acknowledges support from the Swiss National Science Foundation. SP expresses gratitude to the people and the government of the Swiss Confederation for supporting Ukrainian scientists in wartime. He thanks SwissMAP for funding his visit to the University of Geneva in Spring 2022, and the Département de Physique Théorique for the opportunity to prolong this visit. \bibliographystyle{mnras}
\section{Introduction} \label{sec:intro} With our societies increasingly relying on technology, we now have the critical need to anticipate major malfunctions or even catastrophic events in order to protect civilians. Some of the most significant risks have turned out to be events coming from space \citep{Schrijver2015}. Highly energetic particles can be accelerated at the Sun or by magnetic structures in the interplanetary medium \citep{Reames_2013}, reaching energies that allow them to disrupt satellites, jeopardize astronauts' lives and interact with the Earth's atmosphere, leading to communication blackouts \citep{Bothmer2007}. These events are called Solar Energetic Particle (SEP) events; for more details, see the review by \cite{Reames2021}. Magnetic storms are another type of event, caused by coronal mass ejections (CMEs) interacting with Earth's magnetosphere \citep{Pulkkinen2007} and resulting in currents in the Earth's crust that cause severe electrical damage to installations \citep{Pirjola2005}. Space weather has the mission of anticipating these disruptive events by simulating the chain of causality from the Sun to Earth and issuing forecasts \citep{Temmer2021}. The key to reliable predictions is not only to model the transient phenomena accurately, but also to describe precisely the interplanetary medium in which they propagate and with which they interact before reaching Earth \citep{Shen2022}. Although there are many effects that influence the transients' propagation \citep{Lavraud2014}, they can be linked back to two main physical ingredients. On the one hand, the magnetic field bathes the interplanetary medium, following a complex pattern influenced by the Parker spiral at large scales and fluctuations at small scales \citep{Owens2013}. Its long-term variations are linked to the 11-year cycle of solar activity generated inside the star by the dynamo effect \citep{Brun2017}, while its short-term variations may be linked to the convection at the surface of the star \citep{Fargette2021}. On the other hand, the solar wind fills the interplanetary medium with continuously ejected plasma, and shapes large-scale structures with shock regions caused by the interaction between slow and fast wind (Stream Interaction Regions, SIRs) \citep{McComas2003, McComas2008}. It is only natural that an increasing number of countries are developing frameworks for space weather forecasting: we can cite ENLIL and SWMF for the United States \citep{Odstrcil2003, Toth2012}, SUSANOO for Japan \citep{Shiota2014} and the VSWMC for Europe \citep{Poedts2020_vswmc}. All these frameworks are based on the same principle: since it is impossible to use one model to cover the diversity of scales between the Sun and Earth, the best approach is to couple models dedicated to a specific region and physics. For instance, the VSWMC framework uses photospheric measurements of the solar magnetic field as input, then semi-empirical (WSA) and magnetic (PFSS + SCS) extrapolations from 1 to $21.5\;R_\odot$, and then the heliospheric propagator EUHFORIA to compute physical quantities all the way from 0.1 AU to Earth and beyond (the typical outer boundary condition is set at 2 AU) \citep{Pomoell2018}. The first steps of this chain of models, namely the magnetic map chosen as input and the coronal model used to compute the boundary conditions at 0.1 AU, are thus crucial as they determine the initialization of the rest of the models. 
They are also at the core of the two main physical ingredients that are going to disturb the transients' propagation: the magnetic maps are a direct measurement of the solar activity, and the solar corona is the seat of the acceleration of the solar wind \citep{Cranmer2019}. To better model these sensitive effects, it is planned to use alternative magneto-frictional and MHD coronal models with more physics incorporated within, in order to replace and improve the semi-empirical and potential extrapolations up to 0.1 AU \citep{Poedts2020_euhforia}. Within the MHD models, there are further levels of complexity, such as the number of dimensions which are considered (1D vs. 3D) \citep{Pinto2017, Mikic2018}, or the level of sophistication used to describe the coronal heating (polytropic vs. Alfvén waves) \citep{Perri2018, Reville2020}. There are even models going beyond the fluid approximation by taking into account the multi-species nature of the solar wind \citep{vanderHolst2014, Chhiber2021}. This approach has already proven successful for specific test cases \citep{Samara2021}. The dilemma is that, as we add more and more physics, what we gain in accuracy is lost in speed and robustness. As space weather forecasting requires all three qualities, we have developed a new coronal model to satisfy all these constraints. The COCONUT (COolfluid COroNal UnsTructured) coronal model uses the time-implicit methods of the COOLFluiD framework, which allows it to be up to 35 times faster than typical explicit codes while achieving the same level of accuracy \citep{Perri2022}. It also has the advantage of using unstructured meshes instead of regular grids, which allows it to avoid degeneracy at the poles and thus provides more accuracy in this region. As more and more coronal models become suited for space weather forecasts, another important effort for the community is to come up with metrics to evaluate the quality of the models and thus retain the best parameters for predictions \citep{Lionello2009, Wagner2022, Samara2022, Badman2022}. This paper focuses in particular on the choice of the input magnetic map, as it is the driver of the entire numerical simulation. Many studies have tried to bridge the gap between the various magnetic maps from different observatories, but no general consensus has emerged from these observations \citep{riley2014, virtanen2017}. This comes essentially from the lack of multi-vantage-point observations: for example, no 360-degree view of the Sun has been available at all times since the loss of STEREO-B. New studies suggest that the choice of the input map and its pre-processing can change significantly the description of the coronal structure \citep{Yeates2018}, and thus of the SIRs and CME propagation \citep{Riley2021, Samara2021}. For this reason, more and more studies focus on trying to assess the impact of the choice of the input map on the resulting coronal structure \citep{Petrie2013, Wallace2019, Caplan2021, Li2021}. However, most of these studies rely on PFSS extrapolations to describe the coronal magnetic field, while MHD would be more physical, especially further away from the star \citep{Reville2015b}. MHD studies have started to be conducted, but so far mostly for a few codes, namely MAS and AWSoM \citep{Linker2017, meng2022}. For all magnetic maps, the greatest uncertainty lies at the solar poles, as the viewpoint from Earth and satellites in the ecliptic plane does not allow for precise global measurement. 
Only local observations by Hinode, or soon Solar Orbiter, allow us to retrieve high-resolution information from the solar poles \citep{Tsuneta2008}. There are, however, indirect techniques that can be used, such as microwave imaging observations \citep{Gopalswamy2012} or the Zeeman effect \citep{Ito2010}. This is problematic for global coronal models, as it leads to huge uncertainties in the open solar flux \citep{Riley2019} and therefore to an underestimation of the magnetic field at Earth \citep{Owens2008, Jian2015}. The solar poles have been known to influence greatly the dynamics of the corona, by affecting the IMF strength, the HCS excursions, and the wind speed through the polar coronal holes \citep{Petrie2015}. However, the impact of the modeling of the solar poles on space weather forecasts is still not properly quantified. It is made even more difficult by the fact that most models do not include the solar poles in the heliospheric part \citep{Pomoell2018}, and sometimes not even in the coronal part \citep{Pinto2017}, thus implicitly assuming that the influence of the poles can be neglected. Our goal is to test these assumptions, first for a well-documented case of minimum of activity, that of the $2^{nd}$ of July 2019. The choice of the minimum of activity allows us to focus on the influence of the poles rather than the active regions, which is also made possible by our unstructured-mesh approach allowing the poles to be fully included within the computational domain. The choice of the date allows us to have precise pictures of the solar corona thanks to a total solar eclipse as seen from Earth. This paper is organized as follows. In section \ref{sec:mag_maps}, we give an overview of the magnetic maps which are used as input to our simulations (all 20 maps publicly available for the $2^{nd}$ of July 2019 total solar eclipse), explaining in particular their differences in spectral line selection, resolution and pole-filling techniques. In section \ref{sec:cf}, we then present our numerical model COCONUT, which uses these magnetic maps in order to simulate the solar wind in the corona up to 0.1 AU. We describe the physical as well as the numerical parameters which are used to constrain the simulations. We also discuss the pre-processing of the maps in order to quantify the difference in initialization of the simulations. In section \ref{sec:comp_min}, we analyze the results of the 20 corresponding simulations which have been performed. We use three different sets of observational data available for this date to validate the results: we compare magnetic field lines to white-light images (section \ref{subsec:min_wl}), open and closed magnetic field line distributions to coronal hole detection in EUV (section \ref{subsec:min_ch}) and the position of the Heliospheric Current Sheet (HCS) to the Streamer Belt (SB) white-light reconstruction (section \ref{subsec:min_hcs}). In section \ref{sec:discussion}, we discuss the implications for space weather forecasting. We begin by comparing the resulting magnetic field configuration at 0.1 AU with the typical WSA + PFSS + SCS model currently used for coupling with EUHFORIA (section \ref{subsec:min_space_weather_forecast}). We then assemble all our results into a scoreboard for this event, determining which magnetic map allows our model to fit the observational data best (section \ref{subsec:map_scores}). We focus especially on the pole-filling techniques and their implications for forecasts (section \ref{subsec:poles}). 
Finally, in section \ref{sec:conclusion} we sum up the conclusions of our study and present the perspectives for future work. \section{Description of the magnetic maps} \label{sec:mag_maps} Our simulations are data-driven in the sense that the inner boundary condition for the radial magnetic field $B_r$ is imposed based on a synoptic map derived from solar observations of the photospheric magnetic field. There are also models which are fully data-driven because they use the three components of vector magnetograms as an inner boundary condition, along with the velocity components $V_\theta$ and $V_\varphi$. The number of Dirichlet conditions is then determined by the directions of the characteristic waves going in and out of the photosphere \citep{Wu2006, Yalim2017, Singh2018}. Such methods are more difficult to implement within our unstructured grid and implicit solver, so this remains outside the scope of this study and will be considered for future extensions of the code. For the selected date ($2^{nd}$ of July 2019), we used all publicly available processed synoptic maps from 4 different providers: WSO (Wilcox Solar Observatory), GONG (Global Oscillation Network Group), HMI (Helioseismic and Magnetic Imager) and GONG-ADAPT (Air Force Data Assimilative Photospheric Flux Transport). Links to the corresponding sources for downloading them are given in the acknowledgments section. A summary of their main properties can be found in table \ref{tab:maps}. In this section we explain the differences between these maps, focusing on the observation techniques, the assembly methods and the pole-filling methods. \begin{table}[!t] \centering \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline Provider & Spectral line & Type & Resolution & Units & Y-axis & Pole filling & Time span & CRs \\ \hline \hline WSO & Fe 525 nm & LOS & $73\times30$ & $\mu T$ & $\rm{sin}\theta$ or $\theta$ & None & 1976.3 - & 1642 - \\ GONG & Ni 676.8 nm & Pseudo-radial & $360\times180$ & $G$ & $\rm{sin}\theta$ & Cubic-polynomial fit & 2006.7 - & 2047 - \\ HMI & Fe 617.3 nm & Pseudo-radial & $3600\times1440$ & $G$ & $\rm{sin}\theta$ & None & 2010.4 - & 2096 - \\ GONG-ADAPT & Ni 676.8 nm & Pseudo-radial & $360\times180$ & $G$ & $\theta$ & Flux-transport models & 2007.002 - & 2052 - \\ \hline \end{tabular} \caption{Properties of the synoptic magnetic maps used in this study. For each provider, we specify the observed spectral line, the type of magnetic field, the resolution of the map, the units of the magnetic field, the type of $y$ axis which has been used, the pole-filling technique, the available time span and the corresponding Carrington Rotations (CRs). For the sources of the magnetic maps, please check the acknowledgments section.} \label{tab:maps} \end{table} All the maps were obtained with magnetographs, although the latter use various techniques in different contexts. A first difference is the observed spectral line, as seen in column 2 of table \ref{tab:maps}. At WSO, a Babcock solar magnetograph records the Zeeman polarization in the wings of an absorption line of iron at 5250~\AA\ \citep{Ulrich1992}. It is the longest homogeneous series of observations with the same instrumentation, in use since 1976. GONG uses interferometric techniques in order to measure the opposite states of polarization of the Ni I 6768~\AA\ line, and has been based on 6 stations around the world since 2006. 
HMI is an instrument onboard the SDO satellite (Solar Dynamics Observatory), launched in 2010, which observes the full solar disk in the Fe I absorption line at 6173~\AA. It was calibrated using the MDI instrument (Michelson Doppler Imager) onboard SOHO (Solar and Heliospheric Observatory). It can also record 3D vector magnetograms. Finally, the GONG-ADAPT maps are based on the GONG observations, thus relying as well on the Ni I 6768~\AA\ line. These differences in spectral line technically mean that the maps do not represent the magnetic field at the same height, which can result in slightly different structures. The third column refers to the fact that all observatories measure the line-of-sight (LOS) component of the magnetic field. However, some of them convert this value into a pseudo-radial field under the assumption that the total field is radial. Column 4 shows another important difference between the maps, which is their resolution. WSO is the lowest-resolution device with only a 3-arcmin aperture size, which results in maps of 73 pixels in longitude and 30 pixels in latitude. GONG (and consequently GONG-ADAPT) provides map products with 360 pixels in longitude and 180 pixels in latitude. Finally, HMI has the best resolution thanks to the fact that it is in space, with a 1-arcsecond resolution, and provides high-resolution maps with 3600 pixels in longitude and 1440 pixels in latitude. We also note in column 5 that the units are mostly in Gauss, except for the WSO maps which are in micro-Teslas. Column 6 shows another important geometric parameter, which is the type of $y$ axis used. ``$\theta$'' means that the pixels are in equal steps of latitude, which is the case for GONG-ADAPT between -90 and 90 degrees, and a possible option for WSO between -70 and 70 degrees. ``$\rm{sin}\theta$'' means that the pixels are in equal steps of sine latitude (to account for the fact that the poles are difficult to measure from the point of view of the ecliptic plane), which is the case for GONG and HMI between -1 and 1, and an option for WSO between -14.5/15 and 14.5/15. We should also note that, over the years, various processings have been applied to the data or have been highly recommended. In this study, we took the maps as they were, and chose not to apply any correction. WSO for example had several periods with sensitivity issues, some of them having been recalibrated (between November 2000 and July 2002, and between the $16^{th}$ of December 2016 and the $18^{th}$ of May 2017). There is also a general problem of saturation described in \cite{svalgaard1978} and updated in \cite{svalgaard2006good}. Please note that the difference between GONG and GONG-ADAPT is also mostly some post-processing, as we will explain in the next paragraph. This modification history is not always made public, and thus can produce differences based on the date at which the data have been downloaded and processed. For more details about the instruments, the reader can also refer to the reviews of \cite{riley2014} and \cite{virtanen2017}. \begin{figure} \centering \includegraphics[width=\textwidth]{map_other2.png} \caption{Comparison of synoptic maps for the $2^{nd}$ of July 2019 (CR2219). From top to bottom, and then left to right: WSO, HMI, GONG (mrmqs), GONG (mrnqs), HMI (synchronic), GONG (mrbqs), GONG (mrbqj), and GONG (mrzqs). The first column shows Carrington-frame synoptic maps, while the second column shows maps with longitude converted to the Carrington longitude for CR2219. 
All data are in their original resolution and axis (longitude - sine-latitude). The ranges of the color bars have been set to plus and minus one tenth of the maximum of the field, in order to have positive polarities in red and negative polarities in blue, as well as a good balance between small and large-scale structures.} \label{fig:maps_other} \end{figure} Another important difference to discuss is the way the synoptic maps are assembled, and the very definition of a synoptic map in the first place. A synoptic map means that the full surface of the Sun is covered over 360 degrees of longitude. However, it does not guarantee that all data which were used to create this full view were taken at the same time (this would be called a synchronic map). In reality, most of the maps are assembled using data at different dates, thus producing diachronic maps. For the WSO map, the full-disk images of the Sun are remapped over a month into Carrington longitudes, which means that there is a 27-day difference on average between data at 0 and 360 degrees on the map. The HMI map follows the same idea, except that the better resolution allows averaging 20 magnetograms for each Carrington longitude. More precisely, individual pseudo-radial magnetograms are remapped on a very high-resolution Carrington coordinate grid. For each Carrington longitude, the 20 magnetograms closest in time to the central meridian passage (CMP) (within 2 degrees) for that longitude are selected and averaged. The result is that the effective temporal width of the HMI synoptic map is about three hours. The choice of a constant number of contributing magnetograms makes it possible to minimize the variation of the noise over the entire map. A two-dimensional Gaussian function (whose width is 3 pixels) is then applied to the high-resolution remapped data to reduce the spatial resolution before generating the high-resolution synoptic maps.\footnote{\url{http://jsoc.stanford.edu/HMI/LOS_Synoptic_charts.html}} The HMI daily update synchronic frames provide a more up-to-date version of the synoptic map with the first 120 degrees being replaced by the daily full-disk observation at the corresponding date from the twenty 720s-magnetograms obtained between 10 and 14 UT, which helps reduce the time gap between data and makes it possible to take fast-evolving structures into account. The origin of the frame is adjusted so that the newest data will appear on the left of the 360 degree map. We will refer to this frame as the synchronic frame throughout the rest of this article. It does not mean that the full map is synchronic, but it is chosen so that the central meridian of the given date is always at 60 degrees in longitude from the left leading edge. Within this set of maps, we now describe some subsets in more detail. Within the GONG products, there are 5 different synoptic maps available. Two of them are integral magnetogram synoptic maps, and follow the same idea as described before: the mrmqs and mrnqs maps are built using data from the full Carrington rotation. To derive a map of the full-sun magnetic field, fully calibrated one-minute full-disk photospheric magnetograms from GONG's six sites are used. The first step is that the one-minute images from the GONG network are merged to give continuous minute-by-minute coverage of the field. Then the merged images are remapped into longitude (measured from the central meridian) and sine latitude. 
Next, these remapped images are shifted to the Carrington frame and merged with a weighted sum to form a full-surface picture of the solar magnetic field. Weighting factors take the form of a cosine to the power 4 of the longitude to ensure that measurements taken at a particular Carrington time contribute most to that Carrington longitude in the final synoptic map.\footnote{\url{https://gong.nso.edu/data/dmac_magmap/}} The three others are synchronic-frame magnetogram synoptic maps. This is especially visible when we plot all the maps in figure \ref{fig:maps_other}. The mrbqj products, called the Janus maps, are similar to the HMI synchronic frame maps: the left 60 degrees in longitude, between -60 and 60 degrees in latitude, are updated using classic synoptic information, thus resulting in a composite magnetogram. However, in the case of the mrbqs and mrzqs products, this means that the 60 degrees to the left of the map have not crossed the central meridian, and are thus not updated for the current Carrington rotation. Then, there is another distinction made between the zero-point corrected products (mrzqs, mrnqs) and the standard products (mrbqs, mrbqj, mrmqs): the former have corrections at the poles to provide a better estimate of the global magnetic flux. This is visible in figure \ref{fig:maps_other} where we see the southern pole negative polarity being enhanced for GONG mrzqs and GONG mrnqs. For the GONG-ADAPT map, there are actually 12 realizations produced. The differences lie in the various models used to approximate a synchronic map \citep{Hickmann2015}: here, GONG full-disk magnetograms are processed using forward modeling to account for differential rotation, meridional circulation and supergranulation. Combined with data assimilation, this leads to a model ensemble of 12 realizations at the time of observation. All these different realizations are plotted in figure \ref{fig:maps_adapt} for the $2^{nd}$ of July 2019 in order to show the differences for a minimum of activity. \begin{table}[!t] \centering \begin{tabular}{|c||c|c|c|c|} \hline Name & Full name & Frame & Zero-point correction & Updated data \\ \hline mrmqs & Integral Carrington Rotation Magnetogram Synoptic Map & Carrington & no & no \\ \hline mrnqs & Integral synoptic map & Carrington & yes & no \\ \hline mrbqs & Standard QuickReduce Magnetogram Synoptic Map & Synchronic & no & no \\ \hline mrbqj & Janus QuickReduce Magnetogram Synoptic Map & Synchronic & no & yes \\ \hline mrzqs & Synoptic map & Synchronic & yes & no \\ \hline \end{tabular} \caption{Summary of the properties of the GONG products. For each product, we give the full name and the associated frame. We also specify whether the zero-point correction is applied, and whether updated data are included.} \label{tab:gong_maps} \end{table} To make it easier for the reader, we have summarized the main properties of the various GONG products in Table \ref{tab:gong_maps}. Not all of these products were necessarily designed to be used as inputs for coronal modeling and space weather predictions. The recommended products are the zero-point corrected ones (mrzqs and mrnqs), but for practical reasons, it turns out that some facilities still use the non-corrected synchronic products (mrbqs) \citep{Poedts2020_vswmc}, which makes them still relevant to study. The Janus maps were designed to reproduce more closely sudden changes of magnetic flux in the solar disk facing Earth. 
This makes them more precise, but also possibly more unstable because they are noisier. Finally, the integral maps in Carrington frame were not necessarily designed as an operational product, but they are closer to the HMI map, and we found it interesting to adopt an unbiased approach and test all of these maps for our model. \begin{figure} \centering \includegraphics[width=\textwidth]{map_adapt.png} \caption{Comparison of the 12 GONG-ADAPT realizations for the $2^{nd}$ of July 2019 (CR2219). All data are in their original resolution and axis (longitude - latitude). The ranges of the color bars have been set to plus and minus one tenth of the maximum of the field, in order to have positive polarities in red and negative polarities in blue, as well as a good balance between small and large-scale structures.} \label{fig:maps_adapt} \end{figure} Finally, the maps may use different techniques to fill the solar poles. The solar poles are currently not clearly visible over an extended range of latitudes by any magnetograph, because all magnetographs are located in the ecliptic plane, roughly perpendicular to the solar rotation axis. This will change with Solar Orbiter, which is scheduled to go 30 degrees out of the ecliptic plane around 2025, in order to provide more detailed global pictures of the solar poles with an extended range of accessible latitudes. In the meantime, magnetic maps need to use extrapolation techniques to improve the description of the poles. In the set that we are studying, we can see in table \ref{tab:maps} that the HMI map has no correction for the poles. Neither does the WSO map; since it does not provide data between $-70^\circ$ and $-90^\circ$, or between $70^\circ$ and $90^\circ$, we perform a linear extrapolation to fill these gaps. This means that the WSO map is going to have the least accurate information about the solar poles due to instrument limitations, since all data above 55 degrees of latitude come from only one 3-arcmin pixel. The GONG map performs a cubic-polynomial fit. Finally, GONG-ADAPT has the most sophisticated model, which takes flux transport into account and increases the concentration of the magnetic field at the poles through the modeled meridional circulation. \section{Description of the COCONUT code} \label{sec:cf} COCONUT stands for COolfluid COroNa UnsTructured, and is a 3D MHD coronal model based on a fully implicit solver for Finite Volume Methods (FVM) on unstructured grids. The solver is part of the COOLFluiD framework (Computational Object-Oriented Libraries for Fluid Dynamics) \citep{Lani2005, Lani2006, Kimpe2005, Lani2013}, designed for scientific heterogeneous high-performance computing of multi-physics applications, including astrophysical plasmas \citep{LaniGPU, Laguna2016, Maneva2017, Laguna2018, Asensio2019}. We refer the reader to \cite{Perri2022} for the complete description of the COCONUT code. We will focus here on its main physical and numerical features. 
\subsection{Equations and physical parameters} \label{subsec:cf_physics} We solve the ideal MHD equations in conservation form in Cartesian coordinates (more details are given in \citet{Yalim, LaniGPU}): \begin{equation} \frac{\partial}{\partial t}\left(\begin{array}{c} \rho \\ \rho \vek{v} \\ \vek{B} \\ E \\ \phi \end{array}\right)+\vek{\nabla} \cdot\left(\begin{array}{c} \rho \vek{v} \\ \rho \vek{v} \vek{v}+\tens I\left(p+\frac{1}{2}|\vek{B}|^{2}\right)-\vek{B} \vek{B} \\ \vek{v} \vek{B}-\vek{B} \vek{v}+\tens I \phi \\ \left(E+p+\frac{1}{2}|\vek{B}|^{2}\right) \vek{v}-\vek{B}(\vek{v} \cdot \vek{B}) \\ V_{ref}^{2} \vek{B} \end{array}\right)=\left(\begin{array}{c} 0 \\ \rho \vek{g}\\ 0\\ \rho \vek{g} \cdot \vek{v} \\ 0 \end{array}\right), \end{equation} in which $E$ is the total energy, $\vek{B}$ is the magnetic field, $\vek{v}$ the velocity, $\vek{g}$ the gravitational acceleration, $\rho$ the density, $p$ the thermal gas pressure, and $\phi$ the scalar used for the divergence cleaning described below. The gravitational acceleration is given by $\vek{g}(r) = -(G M_\odot/r^2)\, \hat{\vek{e}}_r$ and the identity dyadic $ \tens I = \hat{\vek{e}}_x \otimes \hat{\vek{e}}_x + \hat{\vek{e}}_y \otimes \hat{\vek{e}}_y + \hat{\vek{e}}_z \otimes \hat{\vek{e}}_z$. Since the ideal MHD equations are scale independent, they are implemented in COOLFluiD in dimensionless form. The following basis set $\{\ell_0,\rho_0,B_0\}$ of code units $Q_0$ is used to adimensionalize any physical quantity $Q$ as $\tilde Q = Q/Q_0$: the unit length $\ell_0 = R_\odot =6.95\times10^{10}\,\rm{cm}$, the unit mass density $\rho_0 = \rho_\odot=1.67\times10^{-16}\,\rm{g\,cm^{-3}}$, and the unit magnetic field $B_0 = 2.2\ \mathrm{G}$, a typical value for the background solar dipole field; all three represent solar surface values. All other code units are composed of combinations of the three base units, such as the unit pressure $P_0 = \rho_0 V_0^2$ and the unit gravitational acceleration $g_0 = V_0^2/\ell_0$ with $V_0 = B_0/\sqrt{\mu_0 \rho_0}$. We use typical solar surface values for the mass density $\rho_\odot = 1.67 \times 10^{-16}\ \mathrm{g/cm^3}$ and the temperature $T_\odot = 1.9 \times 10^6\ \mathrm{K}$ for fixed-value Dirichlet conditions of density and pressure. The pressure at the inner boundary follows from the solar surface temperature by application of the ideal gas law: $P_\odot = 4.15 \times 10^{-2} \, \mathrm{dyn/cm^2}$. \subsection{Numerical methods and boundary conditions} \label{subsec:cf_numerics} The state variables are evolved in time using a one-point and three-point implicit Backward Euler scheme for steady and unsteady cases \citep{Yalim}, respectively, solving the resulting linear system with the Generalized Minimal RESidual (GMRES) method \citep{Saad1986} which is implemented within the PETSc library \citep{petsc-web-page, petsc-user-ref, petsc-efficient}. In order to ensure the divergence constraint $\nabla \cdot \vek B = 0$, we use the Artificial Compressibility Analogy \citep{chorin1997}, which is very similar to the Hyperbolic Divergence Cleaning (HDC) method originally developed by \cite{Dedner2002} and has been shown to perform well with our implicit solver \citep{Yalim}: \begin{equation} \label{eqn:hdc} \pdq \phi t + V_{ref}^2 \nabla \cdot \vek B = 0 \end{equation} which couples the zero-divergence constraint to the induction equation, ensuring that the whole system remains purely hyperbolic. $V_{ref}$ denotes the propagation speed of the numerical divergence error, and is set to $1.0$ in code units. 
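Coming back to the unit system of section \ref{subsec:cf_physics}, the derived code units can be checked numerically with a few lines of Python. The following is only a minimal sketch in SI units, independent of the actual COOLFluiD implementation:

\begin{verbatim}
import numpy as np

# Base code units (solar surface values), converted to SI
l0   = 6.95e8           # unit length: solar radius [m]
rho0 = 1.67e-13         # unit mass density [kg/m^3] (= 1.67e-16 g/cm^3)
B0   = 2.2e-4           # unit magnetic field [T] (= 2.2 G)
mu0  = 4.0e-7 * np.pi   # vacuum permeability [H/m]

# Derived units, built as combinations of the three base units
V0 = B0 / np.sqrt(mu0 * rho0)   # unit velocity (Alfven speed), ~4.8e5 m/s
P0 = rho0 * V0**2               # unit pressure [Pa]
g0 = V0**2 / l0                 # unit gravitational acceleration [m/s^2]

# Any physical quantity Q is then adimensionalized as Q_tilde = Q / Q0
print(f"V0 = {V0:.3e} m/s, P0 = {P0:.3e} Pa, g0 = {g0:.3e} m/s^2")
\end{verbatim}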
The inner velocity is set to 0 at the inner boundary by following the prescription $V_{\{x,y,z\}\mathrm{G}} = - V_{\{x,y,z\}\mathrm{I}}$. This condition allows us to suppress the currents at the solar surface in order to produce a better perfect conductor boundary condition (see \cite{Perri2022} and \cite{Brchnelova2022b} for more details). In order to be able to pass an initial condition for the magnetic field distribution to the MHD solver, we compute a potential field approximation based on a particular magnetic map as inner (i.e. at the solar surface) boundary condition. From the input synoptic map, we derive a Dirichlet condition based on the radial magnetic field: $B_{r\mathrm{G}} = 2 B_{r\mathrm{PF}}\Big|_{\partial \varOmega_\mathrm{i}} - B_{r\mathrm{I}}$. Here and in the following, index~``G'' indicates a value evaluated at a particular ghost cell center, while index ``I'' refers to the corresponding inner cell, adjacent to the ghost cell. The field value at the ghost cell center is assigned such that the exact boundary value is recovered at the cell face bordering the ghost and inner states symmetrically, i.e.\ $B_{r\mathrm{PF}}|_{\partial \varOmega_\mathrm{i}}$ is the arithmetic mean of the quantity in question as evaluated at the ghost and inner cell centers. $\partial \varOmega_\mathrm{i} = \{(r,\vartheta,\varphi)|r=R_\odot\}$ denotes the solar surface boundary and $\partial \varOmega_\mathrm{o}$ the outer spherical shell at $r=21.5\;R_\odot$. Because the other components of the magnetic field are not derived from data, we use simple zero gradient conditions across the inner boundary ($\partial B_\theta/\partial r = \partial B_\varphi/\partial r = 0$). Since the solar wind is already supersonic at $r = 20.0\;R_\odot$, we can extrapolate the spherical field components $r^2 B_r$, $B_\vartheta$, $B_\varphi$, as well as $\rho$, $V_r$, $V_\vartheta$, $V_\varphi$ and $P$ from the outermost cell centers to the ghost cells with a zero gradient. We extrapolate $r^2 B_r$ instead of $B_r$ to comply with the divergence-free constraint for the magnetic field (see \cite{Perri2018} for more details). The mesh used for all simulations is a spherical shell domain defined by $\varOmega = \{(r,\vartheta,\varphi)|R_\odot < r < 21.5\;R_\odot\}$, where the inner and outer boundary conditions are applied at $r = R_\odot$ and $r = 21.5\,R_\odot$, respectively. The surface mesh of a level-6 subdivided geodesic polyhedron (consisting of triangular elements) was generated to represent the inner boundary and then extended radially outwards in layers until the outer boundary was reached, resulting in a 3-D domain consisting of prismatic elements. With 20,480 surface elements, this 6th-level subdivision results in a grid with 3.9M elements. One advantage of this mesh is that it does not produce any polar singularity, contrary to most spherical structured meshes. For more details about the mesh design and its impact on the numerical solution, see \cite{Brchnelova2022a}. 
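For illustration, the ghost-cell prescriptions above can be sketched in a few lines of Python. The array names are hypothetical and the actual implementation in COOLFluiD differs; this is only a minimal sketch of the boundary formulas:

\begin{verbatim}
import numpy as np

def inner_ghost_states(v_inner, br_inner, br_pf_surface):
    """Minimal sketch of the inner-boundary ghost-cell prescriptions.

    v_inner       : (3, n) velocity at the inner cells adjacent to the boundary
    br_inner      : (n,)  radial magnetic field at those inner cells
    br_pf_surface : (n,)  potential-field B_r evaluated at the solar surface
    """
    # Velocity: V_G = -V_I, so the face-averaged velocity vanishes
    v_ghost = -v_inner
    # Radial field: B_rG = 2 B_rPF - B_rI, so the arithmetic mean of ghost
    # and inner values equals the map-derived boundary value at the face
    br_ghost = 2.0 * br_pf_surface - br_inner
    return v_ghost, br_ghost
\end{verbatim}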
\subsection{Discussion about input radial magnetic field} \label{subsec:input_br} Before analyzing the comparison with the observations, we first want to discuss the pre-processing of the synoptic maps, as it will impact the simulation results. There are two main categories of pre-processing applied to synoptic maps for coronal simulations. PFSS-based models tend to use a Gaussian filtering, in combination with a flux-conserving remapping of the map in order to better approximate the poles \citep{Pomoell2018}. This pre-processing is important for this kind of method, since the PFSS and the WSA model usually applied afterwards are very sensitive to the flux distribution and expansion factor. However, for an MHD simulation, we can use another pre-processing: we can perform a scale filtering through a spherical harmonics decomposition, selecting a maximum cut-off degree $\ell_{max}$. This is closer to the techniques used in stellar physics, where the ZDI measurement of the magnetic field usually provides only the first 5 modes \citep{Vidotto2018}. In this study, we have chosen to apply the same pre-processing to all the maps, with an $\ell_{max}$ of 15. This is similar to a space-weather operational set-up: $\ell_{max}=15$ allows us to capture smaller structures like active regions without resolving overly fine structures that would slow down the simulation. In all the following plots, we will divide the maps into three categories that we find more logical to compare. The first category comprises the maps in Carrington frame, which are integral maps. This category concerns WSO, HMI, GONG mrmqs and GONG mrnqs maps. All of these maps are diachronic, meaning that they are constructed by assembling observations at different times, and thus reflect only approximately the state of the solar surface at a given date. The second category comprises the maps in synchronic frames, usually with daily-updated data. This category concerns HMI daily, GONG mrbqs, GONG mrbqj and GONG mrzqs. These maps have a different reference frame, as the 120 degrees in longitude to the left of the map are replaced with the most recently measured disk data (except for the GONG mrbqs product, which however still uses the same frame). Thus, the central meridian of the chosen date is always placed at 60 degrees from the left side of the map. Finally, we set apart the GONG-ADAPT maps, as they are 12 different variations on the same original GONG data, differing only in the parameters of the applied modeling. The selected GONG-ADAPT maps for this study are also in Carrington frame, but they are set apart because they are synchronic maps, contrary to the others which are diachronic. \begin{figure} \centering \gridline{ \fig{std_monthly.png}{0.49\textwidth}{(a) Carrington frame diachronic maps.} \fig{std_daily.png}{0.49\textwidth}{(b) Synchronic frame maps.} } \gridline{\fig{std_adapt.png}{0.49\textwidth}{(c) GONG-ADAPT realizations.}} \caption{Standard deviation for each pixel between the input radial magnetic fields derived from the magnetic maps. The fields have been interpolated to the medium resolution 360x180 for comparison. The first panel shows the standard deviation from Carrington frame diachronic maps, the second one from synchronic frame maps, and the last one from all 12 GONG-ADAPT realizations for the same map. The corresponding input magnetic fields are shown in figure \ref{fig:bc_file_all}.} \label{fig:std_bcfile} \end{figure} All radial magnetic fields which have been used as boundary conditions can be found in the appendix in figure \ref{fig:bc_file_all}. The pre-processing smooths the maps and reduces the differences due to resolution. 
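To illustrate this $\ell_{max}$ filtering, a minimal sketch of the spherical harmonics projection is given below. It assumes an equal-angle latitude--longitude grid (sine-latitude maps would first be remapped) and a simple quadrature for the coefficients; it is a sketch of the technique, not the exact pipeline used in this study:

\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def filter_lmax(br, lmax=15):
    """Project a B_r map onto spherical harmonics up to degree lmax.

    br : 2-D array (n_lat, n_lon) on an equal-angle grid covering
         colatitude (0, pi) and longitude [0, 2*pi).
    """
    n_lat, n_lon = br.shape
    theta = np.pi * (np.arange(n_lat) + 0.5) / n_lat   # colatitude
    phi = 2.0 * np.pi * np.arange(n_lon) / n_lon       # longitude
    pp, tt = np.meshgrid(phi, theta)
    # area element sin(theta) dtheta dphi for the quadrature
    dA = np.sin(tt) * (np.pi / n_lat) * (2.0 * np.pi / n_lon)
    out = np.zeros_like(br, dtype=float)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, colatitude)
            ylm = sph_harm(m, l, pp, tt)
            alm = np.sum(br * np.conj(ylm) * dA)
            out += np.real(alm * ylm)
    return out
\end{verbatim}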
At minimum of activity, the maps are dominated by the dipolar configuration, with a positive polarity at the northern pole going down to 50 degrees in latitude, and a symmetric negative polarity at the southern pole that goes up to $-50$ degrees. Despite the low activity, an active region is visible, interestingly exactly at the Carrington longitude of the date of interest (around 319 degrees). In order to show a more quantitative comparison between the boundary conditions, we display in figure \ref{fig:std_bcfile} the standard deviation as computed for all 3 above-mentioned categories for each pixel, after the input magnetic fields have been interpolated to the medium resolution of 360x180. We have chosen this resolution as it offers a good compromise between the lowest for WSO (73x30) and the highest for HMI data (3600x1440), and also because it is the most common among the chosen maps (GONG and GONG-ADAPT maps already have this resolution). The input field will anyway be interpolated to the unstructured boundary mesh, which is slightly finer, at the beginning of the simulation. This shows that at minimum of activity, the most significant differences between the input $B_r$ maps are located at the poles, and this for all 3 categories: above 60 degrees and below -60 degrees in latitude, Carrington frame diachronic maps have a standard deviation between 1.0 and 1.6, synchronic frame maps between 0.9 and 1.7 and GONG-ADAPT maps between 0.4 and 0.55. We can also note some other sources of differences. For Carrington frame diachronic maps (panel (a)), there is a very good agreement for the edges of the magnetic structures, but an increased deviation at the center of the active region. This is probably due to the difference in saturation and resolution of the various maps, which leads to different amplitudes of the magnetic field in the active region. The synchronic frame maps (panel (b)) also show some stronger deviation in the active region, although it is not where the maximum deviation is reached. The GONG-ADAPT maps (panel (c)) have the lowest standard deviation of the 3 categories, but they exhibit some mild deviation also at the center of the map, which is probably a result of the supergranulation model used and the various parameter values tested. The filling of the poles is thus going to be the main factor explaining the differences observed in the simulations. \section{Comparing synoptic maps for the minimum of activity of July 2nd 2019} \label{sec:comp_min} We have selected the date of $2^{nd}$ of July 2019 because it was the most recent quiet minimum of activity date where we could combine three interesting observations in order to quantify the results of our simulations: a total solar eclipse, visible in South America on that date, provided precise white-light images of the corona; the space observatory SDO took pictures in EUV with its instrument AIA, providing maps of the coronal hole locations; and the space observatory SoHO took white-light pictures with its instrument LASCO, providing an estimate of the streamer belt location. Although the PSP satellite had already been launched by this date, it was not close to the Sun at this precise time, making it difficult to provide in-situ data in the solar corona (its closest perihelia were on $4^{th}$ April and $1^{st}$ September 2019). In this study, we will thus concentrate on remote-sensing comparisons in order to quantify the impact of the choice of the input synoptic map. 
\subsection{Comparison with white-light eclipse images for streamer edges} \label{subsec:min_wl} \begin{figure} \centering \gridline{ \fig{streamers_monthly.png}{0.49\textwidth}{(a) Carrington frame diachronic maps.} \fig{streamers_daily2.png}{0.49\textwidth}{(b) Synchronic frame maps.} } \gridline{\fig{streamers_adapt.png}{0.49\textwidth}{(c) GONG-ADAPT realizations.}} \caption{Comparison of the shape of the meridional streamers with the white-light (WL) eclipse image from $2^{nd}$ of July 2019. The first panel compares the streamers from Carrington frame diachronic maps, the second one from synchronic frame maps, and the last one from all 12 GONG-ADAPT realizations for the same map. The solar disk is highlighted as a red circle as reference. Streamer contours are shown in shades of gray. All streamers have been remapped to the same size ratio using this reference and its conversion to the picture pixels, shown as axes. Credits for the WL eclipse picture: Peter Aniol, Miloslav Druckmüller.} \label{fig:wl_eclipse_streamers} \end{figure} The first comparison we show is the comparison between streamer edges and white-light eclipse images. White-light images are usually records of polarization brightness (pB) formed by Thomson scattering of photospheric light by coronal free electrons in the K corona \citep{Aschwanden2004}. Outside of solar eclipses, white-light images are generated using a coronagraph from a spacecraft (e.g.\ SOHO/LASCO) or from ground-based observatories (e.g.\ COSMO/K-COR). The problem with these techniques is that the occulting disk of the coronagraph extends above 1 solar radius, thus masking some structures. It is during solar eclipses on Earth, when the solar disk is perfectly covered by the Moon, that we can see the shape of the streamers most precisely. For this reason, white-light pictures of eclipses have been traditionally used to constrain coronal models \citep{Mikic1999}. They are extremely useful to determine the shape of the streamers in the corona, as they reveal the underlying magnetic field structure. The white-light image we selected for $2^{nd}$ July 2019 is a composite image (128 pictures) from an open database\footnote{\url{http://www.zam.fme.vutbr.cz/~druck/Eclipse/index.htm}} maintained by Miloslav Druckmüller, which has already been used for other studies \citep{boe2020}. Some procedures have thus been developed to compare directly the magnetic field lines obtained from simulations with white-light pictures \citep{Wagner2022}. This is however limited by the fact that white-light images are 2D projections of the 3D configuration, which makes automatic comparisons challenging. A more quantitative approach relies upon developing a pipeline to produce artificial white-light images from simulations \citep{Mikic2018}. But this approach actually shifts the problem to the modeling of the white-light emission and the filters which are applied as post-processing for selecting the right features. In this study, we suggest another approach that tries to be both robust, so that it can be automated, and simple enough to be implemented for all MHD models. We compute the magnetic field lines in our simulations from $40\times 40$ seeds located on a sphere at $1.01\;R_\odot$. This resolution was chosen as a good compromise between accuracy and speed. Then we select the seeds and corresponding field lines that are in the plane perpendicular to the observer line of sight at the date of the event. From these we can extract the largest closed magnetic field line, which corresponds then to the edge of the streamers as seen from the Earth. We can finally superpose these edges on the white-light images, by projecting the field lines in the 2D plane and adjusting them to the size of the picture (the reference is the radius of the solar disk, from which we derive the conversion between physical and pixel sizes). The entire procedure is completely automatic and operated by Python scripts. 
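As a sketch of this procedure, the seed selection, closed-line extraction and plane-of-sky projection could look as follows. Here \texttt{trace\_field\_line} is a hypothetical stand-in for any field-line integrator operating on the simulation output, and the thresholds are illustrative only:

\begin{verbatim}
import numpy as np

def largest_closed_loop(trace_field_line, los, n=40, r_seed=1.01):
    """Sketch of the streamer-edge extraction for one simulation.

    trace_field_line : hypothetical integrator returning an (m, 3) array
                       of positions (in R_sun) along the line through a seed
    los              : unit vector of the observer line of sight
                       (assumed not aligned with the z axis here)
    """
    # n x n seeds on a sphere at r_seed
    th = np.linspace(0.0, np.pi, n)
    ph = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    tt, pp = np.meshgrid(th, ph)
    seeds = r_seed * np.stack([np.sin(tt) * np.cos(pp),
                               np.sin(tt) * np.sin(pp),
                               np.cos(tt)], axis=-1).reshape(-1, 3)
    # keep only seeds close to the plane-of-sky (perpendicular to the LOS)
    seeds = seeds[np.abs(seeds @ los) < 0.05 * r_seed]
    best, best_apex = None, 0.0
    for s in seeds:
        line = trace_field_line(s)
        r = np.linalg.norm(line, axis=1)
        closed = r[0] < 1.05 and r[-1] < 1.05   # both ends near the surface
        if closed and r.max() > best_apex:
            best, best_apex = line, r.max()
    # project the largest closed line onto the plane-of-sky
    e1 = np.cross(los, [0.0, 0.0, 1.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(los, e1)
    return best @ np.stack([e1, e2], axis=1)    # (m, 2), in R_sun
\end{verbatim}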
The results are shown in figure \ref{fig:wl_eclipse_streamers}. As stated before, we divide the simulations into 3 categories based on the frame of the maps (Carrington frame diachronic, synchronic frame and GONG-ADAPT realizations). For each subgroup, we show the white-light image in gray scale in the background to enhance the features. On top of it, we show the solar disk edge as a red circle. This feature is important because it is detected automatically using hysteresis thresholding, and used to adjust the size of the streamers from the simulation to the eclipse picture. Finally, we plot the streamer edges extracted from each simulation in shades of gray. We note that for this date, the streamers are remarkably large, as shown by the white-light image. We can distinguish by eye one streamer on the left, and two streamers on the right that overlap, which probably means that they are not located at the same longitude. At the poles, we can clearly see open magnetic field lines that are almost vertical. This is typical of a minimum of activity configuration. The size of the streamers and the complexity of the structures visible between 1 and 1.5 solar radii indicate that they may be overarching pseudo-streamers rather than helmet streamers \citep{Wang2007}. These structures are still the most relevant as they indicate the limit between closed and open magnetic field lines. Inside each subgroup, we can already see a wide variety of results. For the Carrington frame diachronic maps, the HMI and GONG mrnqs runs yield very good results, but the two other simulations are completely off. The WSO streamers are far too thin, while the GONG mrmqs streamers are shifted upwards to a position that no longer matches the white-light image. This is not surprising, because the GONG mrnqs is supposed to be more accurate than the GONG mrmqs thanks to its zero-point correction. For the synchronic frame maps, the best result is given by the HMI run, although the left streamer is too big (5 solar radii instead of 3.5). Between the GONG cases, the best result is given by GONG mrzqs, although the left streamer is too small and shifted too far downwards. The difference between GONG mrbqs and GONG mrbqj is minimal, with only the right streamers having a slightly better size with GONG mrbqs. This is what we would have expected, since the GONG mrzqs is the most accurate and physical map. It is however surprising that our model performs less efficiently with the synchronic frame maps than with the Carrington frame diachronic maps, which have a larger asynchronicity in the data. For the GONG-ADAPT runs, there is a greater diversity in the results than what could have been expected based on the standard deviation study, with the left streamer edge ranging from 2.5 to 3.5 solar radii, and the right streamer from 2 to 4 solar radii. The overall agreement is still very good, although it is clearly visible that some realizations yield better simulations than others. 
All results are summed up in a more quantitative way in table \ref{tab:metrics} (see section \ref{subsec:map_scores} for the corresponding discussion). \subsection{Comparison with EUV images for coronal hole boundaries} \label{subsec:min_ch} \begin{figure} \centering \gridline{ \fig{ch_monthly.png}{0.49\textwidth}{(a) Carrington frame diachronic maps.} \fig{ch_daily2.png}{0.49\textwidth}{(b) Synchronic frame maps.} } \gridline{\fig{ch_adapt.png}{0.49\textwidth}{(c) GONG-ADAPT realizations.}} \caption{Comparison of the contours of the coronal holes (CH) with the EUV synoptic map from Carrington rotation 2219 from SDO/AIA (channel 195). The first panel compares the coronal hole contours from Carrington frame diachronic maps, the second one from synchronic frame maps and the last one from all 12 GONG-ADAPT realizations for the same map. Coronal hole contours from simulations are shown in shades of gray.} \label{fig:euv_ch} \end{figure} The second physical quantity we use for comparison is the EUV emission at 195 \AA, which is the wavelength recommended to automatically extract coronal hole boundaries \citep{Wagner2022, Badman2022}. Coronal holes are dimmings in the EUV emission, which correspond to regions of open magnetic field lines associated with cooler plasma \citep{Cranmer2009}. The synoptic map we use is from the official SDO/AIA website and consists of a reconstruction of the full solar disk based on daily data, following the same principle as the HMI magnetic maps. It has also been remapped to latitude coordinates, which can create some artifacts at the poles due to the line of sight constraints. Again, artificial EUV emissions can be generated from simulations to provide an accurate comparison \citep{Lionello2009, Parenti2022}. The polytropic approximation we use for the coronal heating does not allow us to use such techniques, but we have access to the information about the open magnetic field lines in the simulation. We then proceed to find the boundaries between closed and open field lines at the surface of the star, using a sphere of 400x200 seed points at $1.01\;R_\odot$. We follow the field lines to see if they reach the end of the computational domain at $20\;R_\odot$: if they do, they are open field lines; if not, they are closed field lines. This allows us to retrieve contours of the open field line regions at the surface of the star, which we can directly compare with the coronal hole synoptic map. This is not completely a direct comparison, as the EUV emission is mapped at the level of the solar surface, while the wind simulations start at the lower corona above the transition region; however, we do not have measurements at this height, and we assume that the change of structure of the coronal holes is minimal over this interval. Similar comparisons have been performed in previous studies with positive results \citep{Badman2022}. We plot the results in figure \ref{fig:euv_ch}. For each subgroup of maps, we over-plot the contours obtained from the various simulations on the synoptic EUV map. At the chosen date, there are mostly polar coronal holes, appearing dark, and also several fainter equatorial coronal holes at 220, 270 and 330 degrees in longitude. The contours from the simulations have to match as closely as possible the contours of these dark regions. 
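In the same spirit as before, the open/closed classification described above can be sketched as follows, again with a hypothetical \texttt{trace\_field\_line} integrator standing in for the actual tracing code:

\begin{verbatim}
import numpy as np

def coronal_hole_map(trace_field_line, n_lon=400, n_lat=200,
                     r_seed=1.01, r_out=20.0):
    """Sketch of the open/closed field-line classification.

    Returns a boolean (n_lat, n_lon) map, True where the field is open,
    i.e. where the traced line reaches the outer edge of the domain.
    """
    lons = 2.0 * np.pi * (np.arange(n_lon) + 0.5) / n_lon
    lats = np.pi * (np.arange(n_lat) + 0.5) / n_lat - np.pi / 2.0
    open_map = np.zeros((n_lat, n_lon), dtype=bool)
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            seed = r_seed * np.array([np.cos(lat) * np.cos(lon),
                                      np.cos(lat) * np.sin(lon),
                                      np.sin(lat)])
            line = trace_field_line(seed)
            open_map[i, j] = np.linalg.norm(line, axis=1).max() >= r_out
    return open_map
\end{verbatim}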
For the Carrington frame diachronic maps, we can see that most of the simulations cover the northern coronal hole reasonably well, except for the WSO map which has an incursion towards the equator at 270 degrees in longitude, which is not visible in the EUV data. The HMI and GONG mrnqs simulations capture the southern and equatorial coronal holes well, but the WSO and GONG mrmqs both overestimate them. Once again, this is not surprising due to the fact that GONG mrnqs is the corrected version of the GONG mrmqs map. For the synchronic frame maps, we observe that the northern and equatorial coronal holes are well captured. The best results for the southern coronal hole are given by the HMI simulation, while all the GONG simulations tend to overestimate it. We can however see the effect of the correction in the GONG mrzqs map, since it is the only GONG map not to exhibit closed field lines at the southern pole. For the GONG-ADAPT simulations, there is little to no disagreement between the different realizations, although the southern coronal hole is still the one with the most differences. The agreement is very good for both polar coronal holes, but all realizations completely miss the equatorial coronal holes, which is surprising given the accuracy of the models used and the fact that other maps capture them with the same pre-processing. All results are summed up in a more quantitative way in table \ref{tab:metrics} (see section \ref{subsec:map_scores} for the corresponding discussion). \subsection{Comparison with white-light coronagraphs for streamer belt} \label{subsec:min_hcs} \begin{figure} \centering \gridline{ \fig{hcs_monthly2_5rs.png}{0.49\textwidth}{(a) Carrington frame diachronic maps.} \fig{hcs_daily2_5rs.png}{0.49\textwidth}{(b) Synchronic frame maps.} } \gridline{\fig{hcs_adapt2_5rs.png}{0.49\textwidth}{(c) GONG-ADAPT realizations.}} \caption{Comparison of the shape of the streamer belt (SB) with the white-light synoptic maps from 2nd of July 2019 from SoHO/LASCO/C2. The first panel compares the current sheets from Carrington frame diachronic maps, the second one from synchronic frame maps and the last one from all 12 GONG-ADAPT realizations for the same map. The SMB line inferred from observations is in yellow and dashed line, while the current sheet inferred from simulations is in shades of grays. Credits for the SMB maps: Nicolas Poirier (IRAP).} \label{fig:hcs_wl_5rs} \end{figure} The last comparison with observational data we want to make is the comparison between the white-light streamer belt and the heliospheric current sheet. The coronagraph LASCO C2 aboard SoHO captures white-light images between 1.5 and 6 solar radii. These data can then be assembled as a synoptic map over a Carrington rotation to give an estimate of the streamer belt (SB), which can be assumed to host the heliospheric current sheet (HCS) and thus act as a proxy for it at around $5\;R_\odot$ \citep{Poirier2021}. From the simulations, it is easy to directly extract the HCS, as it is the separation between the positive and negative polarity of the radial magnetic field in the computational domain. Once again, this method has already been used in previous studies with positive results \citep{Badman2022}. We plot the results in figure \ref{fig:hcs_wl_5rs}. The background shows the white-light synoptic maps in gray scale, with the SB highlighted with a yellow dashed line. 
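The HCS extraction itself reduces to locating, for each longitude, the latitude at which $B_r$ changes sign on a spherical shell. A minimal sketch, with hypothetical array names:

\begin{verbatim}
import numpy as np

def hcs_latitude(br_shell, lats):
    """Sketch of the HCS extraction on a spherical shell (e.g. 5 R_sun).

    br_shell : (n_lat, n_lon) radial field interpolated on the shell
    lats     : (n_lat,) latitudes in degrees, increasing
    Returns, per longitude, the latitude of the first sign change of B_r.
    """
    n_lat, n_lon = br_shell.shape
    hcs = np.full(n_lon, np.nan)
    for j in range(n_lon):
        s = np.sign(br_shell[:, j])
        idx = np.where(s[:-1] * s[1:] < 0)[0]
        if idx.size:
            i = idx[0]
            # linear interpolation of the zero crossing in latitude
            f = br_shell[i, j] / (br_shell[i, j] - br_shell[i + 1, j])
            hcs[j] = lats[i] + f * (lats[i + 1] - lats[i])
    return hcs
\end{verbatim}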
Because we are looking at a minimum of activity, the HCS is very flat as the current sheet is almost horizontal, with a slight deviation between 250 and 330 degrees in longitude that is due to the active region discussed before. The HCS extracted from simulations is plotted as a line in gray scale. For the Carrington frame diachronic runs, we see once again that the HMI and GONG mrnqs simulations yield the best result, although the gap between 250 and 330 degrees in longitude seems more difficult to capture, most probably because of the active region located at this exact spot. The WSO and GONG mrmqs simulations show a shift upwards compared to the actual SB, and the WSO simulation shows the biggest deviation between 300 and 360 degrees in longitude. For the synchronic frame maps, most of the simulations agree very well, with just a slight overestimation of the SB by the GONG mrbqj simulation. For the GONG-ADAPT realizations, there is also little to no variation between all the various simulations, which all capture the SB quite well. The better agreement between simulations can be explained by the fact that this quantity is observed at $5\;R_\odot$, a distance at which the magnetic field is more uniform. All results are summed up in a more quantitative way in table \ref{tab:metrics} (see section \ref{subsec:map_scores} for the corresponding discussion). \section{Discussion for space-weather applications} \label{sec:discussion} \subsection{Assessing the impact for space weather forecasting} \label{subsec:min_space_weather_forecast} \begin{figure} \centering \gridline{ \fig{hcs_monthly.png}{0.49\textwidth}{(a) Carrington frame diachronic maps.} \fig{hcs_daily3.png}{0.49\textwidth}{(b) Synchronic frame maps.} } \gridline{\fig{hcs_adapt.png}{0.49\textwidth}{(c) GONG-ADAPT realizations.}} \caption{Comparison between the typical HCS extrapolated by a PFSS+SCS method at 0.1 AU and the ones extracted from our MHD simulations. The first panel compares the HCS from Carrington frame diachronic maps, the second one from synchronic frame maps and the last one from all 12 GONG-ADAPT realizations for the same map. The background color shows the radial magnetic field $B_r$ polarity for the extrapolation (red for positive, blue for negative). For panels (a) and (c), the PFSS is based on a GONG-ADAPT map to provide a Carrington-rotation frame. For panel (b), it is based on a GONG mrbqs map to provide a synchronic frame.} \label{fig:hcs_wsa_01au} \end{figure} In an operational set-up for space weather forecasting, the coronal part of the model chain is useful for providing the physical quantities at around $20\;R_\odot$ to heliospheric propagators that can compute them all the way to Earth. Currently, in operational environments the coronal part is handled through semi-empirical extrapolations, such as the WSA model combined with PFSS and SCS for the magnetic field part \citep{Pomoell2018}. This is due to the fact that current MHD models are too slow to be used in an operational context, although it has been demonstrated on numerous occasions that they are more accurate \citep{Samara2021}. This is a limitation that our code does not have, thanks to its implicit solving method \citep{Perri2022}. It is then interesting to ask what differences we would observe if we were to couple our MHD model to EUHFORIA, for example, and to examine the modifications at this interface. 
As we use a polytropic version of the code for now, it is not meaningful to perform the coupling all the way to Earth, because we already know it will not compare well with in-situ measurements at L1. However, we can already compare to typical forecasts. Our velocity, density and temperature are also going to be limited by the polytropic assumption, so for the moment the best quantity to compare is the radial magnetic field $B_r$. We plot the results in figure \ref{fig:hcs_wsa_01au}. The background color shows the radial magnetic field $B_r$ extrapolated at 0.1 AU by PFSS+SCS. The positive polarity is shown in red, the negative polarity in blue, and the HCS is located at the border between the two. For panels (a) and (c), the PFSS extrapolation is based on realization 1 from GONG-ADAPT to provide the right frame. At the moment, the forecasting models do not offer other maps to work with. For panel (b), it is based on a GONG mrbqs map to have the synchronic frame. We over-plot the HCS extracted from our MHD simulations around 0.1 AU for comparison. We can see that compared to the HCS at $5\;R_\odot$, the HCS at 0.1 AU is not very different, as the global geometry of the magnetic field is already fixed at this distance. We can however see a significant deviation from the HCS of the WSA model. This is surprising for synchronic frame maps, since they are based on exactly the same GONG mrbqs map; only the model changes, from semi-empirical to MHD. We can see that the gap around the active region is accentuated for the PFSS extrapolation. For the GONG-ADAPT realizations, the MHD model also tends to reduce the north-south variations and flatten the HCS. This is important for space weather forecasts, as a difference of several tens of degrees at 0.1 AU will increase even further and become even more significant at 1 AU. It is well known that a southwards inclined IMF $B_z$ for CMEs leads to more geo-effective intense magnetic storms, which means that these differences will have a significant impact on forecasts at Earth \citep{Balan2014cme}. \subsection{Which map should we choose?} \label{subsec:map_scores} \begin{table}[!t] \centering \begin{tabular}{|c||c|c|c|c|} \hline Map & Streamers ratio & Polar CH ratio & Eq. CH ratio & SB deviation \\ \hline \hline WSO & \colorbox{red}{left: 28.0\%}, \colorbox{red}{right: 24.0\%} & \colorbox{red}{North: 72.8\%}, \colorbox{yellow}{South: 33.7\%} & \colorbox{orange}{10.7\%} & \colorbox{red}{$\delta_{max}=30.8\degree$}, \colorbox{orange}{$\delta_{mean}=9.22\degree$} \\ \hline HMI & \colorbox{lime}{left: 84.2\%}, right: 74.7\% & North: 86.1\%, South: 40.6\% & \colorbox{green}{37.4\%} & $\delta_{max}=17.5\degree$, $\delta_{mean}=4.88\degree$ \\ \hline GONG (mrmqs) & left: 54.4\%, \colorbox{yellow}{right: 37.7\%} & North: 87.1\%, \colorbox{red}{South: 23.9\%} & 8.8\% & \colorbox{orange}{$\delta_{max}=27.9\degree$}, \colorbox{red}{$\delta_{mean}=11.9\degree$} \\ \hline GONG (mrnqs) & left: 74.9\%, right: 65.6\% & North: 86.2\%, South: 42.0\% & \colorbox{teal}{26.2\%} & $\delta_{max}=19.1\degree$, $\delta_{mean}=4.98\degree$ \\ \hline HMI (sync.) 
& left: 66.2\% & right: 70.1\% & North: 86.3\%, South: 40.1\% & \colorbox{lime}{65.5\%} & $\delta_{max}=16.1\degree$, \colorbox{lime}{$\delta_{mean}=4.30\degree$} \\ \hline GONG (mrbqs) & \colorbox{yellow}{left: 39.1\%}, right: 41.6\% & \colorbox{yellow}{North: 80.3\%}, South: 33.9\% & 11.6\% & \colorbox{yellow}{$\delta_{max}=23.9\degree$}, \colorbox{yellow}{$\delta_{mean}=7.35\degree$} \\ \hline GONG (mrbqj) & left: 47.3\%, \colorbox{orange}{right: 32.8\%} & \colorbox{orange}{North: 79.2\%}, \colorbox{orange}{South: 32.5\%} & \colorbox{yellow}{11.4\%} & $\delta_{max}=20.5\degree$, $\delta_{mean}=6.43\degree$ \\ \hline GONG (mrzqs) & \colorbox{orange}{left: 29.1\%}, right: 53.6\% & North: 85.2\%, South: 39.2\% & 20.4\% & $\delta_{max}=19.7\degree$, \colorbox{green}{$\delta_{mean}=4.66\degree$} \\ \hline ADAPT (1) & left: 64.3\%, right: 77.8\% & \colorbox{green}{North: 88.1\%}, South: 44.4\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.5\degree$, $\delta_{mean}=5.36\degree$ \\ \hline ADAPT (2) & left: 61.7\%, right: 77.1\% & North: 87.9\%, South: 44.1\% & \colorbox{red}{0.0\%} & $\delta_{max}=9.99\degree$, $\delta_{mean}=5.60\degree$ \\ \hline ADAPT (3) & left: 69.4\%, right: 72.4\% & \colorbox{lime}{North: 88.3\%}, South: 44.0\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.5\degree$, $\delta_{mean}=5.57\degree$ \\ \hline ADAPT (4) & \colorbox{teal}{left: 77.0\%}, \colorbox{teal}{right: 85.5\%} & North: 87.9\%, South: 43.9\% & \colorbox{red}{0.0\%} & \colorbox{teal}{$\delta_{max}=9.69\degree$}, \colorbox{teal}{$\delta_{mean}=4.76\degree$} \\ \hline ADAPT (5) & left: 61.4\%, right: 79.5\% & North: 87.8\%, South: 44.5\% & \colorbox{red}{0.0\%} & $\delta_{max}=9.84\degree$, $\delta_{mean}=5.09\degree$ \\ \hline ADAPT (6) & left: 66.3\%, right: 78.1\% & North: 87.5\%, South: 44.1\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.0\degree$, $\delta_{mean}=5.84\degree$ \\ \hline ADAPT (7) & left: 72.1\%, right: 78.5\% & North: 87.2\%, South: 43.6\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.4\degree$, $\delta_{mean}=6.20\degree$ \\ \hline ADAPT (8) & left: 61.9\%, \colorbox{lime}{right: 87.9\%} & North: 87.4\%, \colorbox{lime}{South: 45.3\%} & \colorbox{red}{0.0\%} & \colorbox{green}{$\delta_{max}=9.63\degree$}, $\delta_{mean}=5.75\degree$ \\ \hline ADAPT (9) & left: 75.4\%, right: 77.6\% & North: 87.7\%, South: 43.4\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.3\degree$, $\delta_{mean}=5.91\degree$ \\ \hline ADAPT (10) & left: 61.3\%, right: 80.5\% & \colorbox{teal}{North: 88.0\%}, \colorbox{green}{South: 44.9\%} & \colorbox{red}{0.0\%} & \colorbox{lime}{$\delta_{max}=9.39\degree$}, $\delta_{mean}=4.99\degree$ \\ \hline ADAPT (11) & \colorbox{green}{left: 80.0\%}, right: 64.1\% & \colorbox{green}{North: 88.1\%}, \colorbox{teal}{South: 44.7\%} & \colorbox{red}{0.0\%} & $\delta_{max}=10.4\degree$, $\delta_{mean}=5.73\degree$ \\ \hline ADAPT (12) & left: 76.1\%, \colorbox{green}{right: 85.8\%} & North: 87.9\%, South: 44.5\% & \colorbox{red}{0.0\%} & $\delta_{max}=10.0\degree$, $\delta_{mean}=5.52\degree$ \\ \hline \end{tabular} \caption{Summary of the comparison between COCONUT MHD simulations of July $2^{nd}$ 2019 based on various magnetic maps and the available observational data. We use 3 quantitative metrics to evaluate the maps: we compute the percentage of overlap between the streamers' edges, the percentage of coverage of the polar and equatorial coronal holes, and the mean and maximum angle of deviation between the SB and the HCS. For more details on the metrics, see appendix \ref{app:metrics}. 
The best result for each metric is highlighted in \colorbox{lime}{lime}, the second best in \colorbox{green}{green} and the third best in \colorbox{teal}{deep green}. The worst result for each metric is highlighted in \colorbox{red}{red}, the second worst in \colorbox{orange}{orange} and the third worst in \colorbox{yellow}{yellow}. The sync. abbreviation stands for ``synchronic frame''.} \label{tab:metrics} \end{table} Based on the previous comparisons, we have summarized all our results in table \ref{tab:metrics} to create a scoreboard of all the studied maps for this given date with the COCONUT model. In order to be more quantitative, we have used 3 metrics based on the comparisons described in the previous section. First, in order to better compare the streamers' edges, we have computed the percentage of overlap between the observations and the simulations. From the white-light eclipse picture, we have extracted a visual estimation of the streamers' edges in the plane perpendicular to the observer's line of sight. This method is of course limited by the fact that the white-light picture without post-processing offers only a 2D projection of the 3D structure of the streamers. We have then identified all the points that are inside the selected contour as belonging to the streamer, and have plotted the defined surface along with the streamer from the simulation. We then compute the ratio between the number of pixels that belong to both streamers (the one from the eclipse picture and the one from the simulation) and the number of pixels within the bigger streamer of the two. This way of computing the ratio allows us not to give a perfect score to simulated streamers that are bigger than the observed streamer and would otherwise fully include it. The corresponding maps for the computation of this ratio can be found in figure \ref{fig:streamers_comp_all}, which provides a visual representation. Then, for the coronal holes comparison, we have used a similar technique of area ratio. We have extracted the pixels that belong to the coronal holes from the EUV synoptic map by applying the EZSEG algorithm developed by Predictive Science Inc. \citep{Caplan2016ApJ}. The software is available as part of the EUV2CHM Matlab package from the Predictive Science Inc. website.\footnote{\url{https://www.predsci.com/chd/}} We have converted the algorithm to Python in order to use it directly in our pipeline for the EUV synoptic map. This algorithm uses an initial intensity threshold to acquire coronal hole locations in an EUV image, and then uses an area-growing technique to define connected regions. This continues until a second intensity threshold is reached, or the condition for connectivity is not met. The dual thresholds and connectivity conditions (essentially the number of consecutive pixels) are defined on input. We experimented with the optimal input parameters, and found that for this map the best result was obtained with a connectivity of 3 neighbors, a first threshold at 20 and a second threshold at 35. The coronal holes for the simulations were determined, as described before, by using seeds for the field lines and checking whether the field lines would reach the outer boundary of the computational domain. We then computed the ratio of the number of pixels common to both coronal hole detections to the number of pixels of the coronal holes from the simulation. That way, this percentage represents how accurate the coronal hole from the simulation is. 
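The overlap metric itself is straightforward; a minimal sketch is given below. For the streamers, we normalize by the larger of the two structures, while for the coronal holes the denominator is instead the pixel count of the simulated holes, as described above:

\begin{verbatim}
import numpy as np

def overlap_ratio(mask_obs, mask_sim):
    """Sketch of the overlap metric used for the streamers.

    mask_obs, mask_sim : boolean pixel masks of the observed and
                         simulated structures on the same grid
    """
    inter = np.count_nonzero(mask_obs & mask_sim)
    # normalizing by the larger structure prevents an oversized simulated
    # structure that fully contains the observed one from scoring 100%
    denom = max(np.count_nonzero(mask_obs), np.count_nonzero(mask_sim))
    return 100.0 * inter / denom if denom else 0.0
\end{verbatim}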
We have separated polar and equatorial coronal holes by defining the equatorial region as being between -40 and 40 degrees in latitude. The corresponding maps for the computation of this ratio can be found in figure \ref{fig:ch_comp_all}, which provides a visual representation. Finally, we compute the deviation of the HCS from the SB. In order to do so, we interpolate the two lines at the same resolution, and compute for each longitude the difference in latitude in degrees. We then process the results to compute the maximum and mean deviation for each map. Table \ref{tab:metrics} thus gives an overview of the quality of the maps for this specific date in combination with the current standard set-up of the COCONUT model (described in section \ref{sec:cf}). What we see is that the GONG-ADAPT runs yield very good results for the streamers, the polar coronal holes and the HCS, but completely fail to capture the equatorial coronal holes. This may be due to the fact that the coronal holes were quite small, but other simulations with different maps managed to capture them with good accuracy with the same pre-processing. This means that this pre-processing does not work well for our use of the GONG-ADAPT maps with the COCONUT model, and should thus be adapted for these maps. This is important information for forecasts, as equatorial coronal holes are often the sources of high-speed streams that are going to reach Earth and can cause mild space weather events. The other category of runs that score well are the ones based on the HMI maps, both Carrington frame diachronic and synchronic frame. Contrary to the GONG-ADAPT simulations, they have a high score for the equatorial coronal holes, and they manage to score highly in almost all the metrics. For the GONG runs, the results are overall rather unsatisfactory, especially for the GONG mrbqs and mrbqj, which is surprising because these are the most used maps for forecasting. This table gives us some useful guidelines for using the COCONUT code for space-weather applications in the most efficient way. From the table, it seems clear that the only acceptable synchronic frame map we could use with COCONUT is the GONG mrzqs. The same holds for the Carrington frame diachronic maps: the zero-point correction of the GONG mrnqs map really improves the quality of the simulation. Finally, the WSO runs score the worst in almost all the metrics, and are thus not recommended for use as-is with our code. They can however be adjusted with a more elaborate and custom pre-processing, but it is not clear whether this is applicable to space weather forecasting \citep{Samara2021}. To conclude, the runs that agree with most of the metrics are the ones based on the GONG-ADAPT maps, although they may require additional pre-processing in order to better treat the equatorial coronal holes. The second-best choices, which score well on average, are the HMI simulations, both Carrington frame diachronic and synchronic frame. This may actually be the best choice for operational predictions with COCONUT, and yet to date few data centers have tested HMI maps with other models in operational set-ups. Instead, the second choice is usually the GONG mrbqs map. For our model, it scores relatively badly (third-worst). A better choice for us would be the GONG mrzqs for the synchronic frame maps and the GONG mrnqs for the Carrington frame diachronic maps. 
To date, not all forecasting centers use the zero-point corrected GONG maps, which appear to be better suited since they were designed to provide better results for the solar poles. These conclusions are of course tied to the date and model that we used, and would need a more extensive statistical study to be generalized. It is however likely that for the same approximations (ideal MHD and polytropic heating) and similar boundary conditions, other models would find similar results. It would also be interesting to see if the same conclusion holds for a maximum of activity configuration, which would probably show even more disparities between the maps \citep{Yeates2018}. Finally, this is based on remote-sensing coronal validation, and should also be confronted with in-situ heliospheric metrics to have a complete view of the impact for space weather forecasting, but this requires a better description of the coronal heating that we leave for future work. \subsection{Do solar poles matter for space weather?} \label{subsec:poles} \begin{figure} \centering \gridline{ \fig{visu_1d_monthly.png}{0.49\textwidth}{(a) Carrington frame diachronic original maps.} \fig{visu_1d_monthly_bcfile.png}{0.49\textwidth}{(b) Carrington frame diachronic pre-processed maps.} } \gridline{ \fig{visu_1d_daily.png}{0.49\textwidth}{(c) Synchronic frame original maps.} \fig{visu_1d_daily_bcfile.png}{0.49\textwidth}{(d) Synchronic frame pre-processed maps.} } \gridline{ \fig{visu_1d_adapt.png}{0.49\textwidth}{(e) GONG-ADAPT original maps.} \fig{visu_1d_adapt_bcfile.png}{0.49\textwidth}{(f) GONG-ADAPT pre-processed maps.} } \caption{Comparison of 1D cuts of the radial magnetic field $B_r$ at the longitude of the event ($2^{nd}$ of July 2019). On the left column (panels (a), (c), (e)), the cuts are made through the original magnetic maps. On the right column (panels (b), (d), (f)), the cuts are made through the pre-processed maps used as input for the simulations. The first row (panels (a) and (b)) shows the Carrington frame diachronic maps, the second row (panels (c) and (d)) shows the synchronic frame maps, the last row (panels (e) and (f)) the GONG-ADAPT realizations.} \label{fig:1d_cut_maps} \end{figure} The other point we want to stress is the question of the role of the solar poles in space weather forecasts. This is important because most space weather models actually remove the solar poles, arguing that they are not relevant for forecasts at the Earth. However, it has been shown previously that the HCS location, for example, is very sensitive to the value of the polar field \citep{svalgaard1978}, and it is an important feature for space weather forecasts due to its possible interaction with CMEs \citep{Lavraud2014}. It is undeniable that saving precious computational time can help; however, it is essential to quantify the impact of this decision. It may be justified for heliospheric propagators, since the polar boundary condition has little impact on the structures at Earth, but it is much more difficult to be sure for coronal models. That is why we want to focus specifically on this point in our study. We have shown in figure \ref{fig:std_bcfile} that the poles are actually the largest source of differences between all the various maps at the selected date. To show these differences in a more quantitative way in figure \ref{fig:1d_cut_maps}, we perform a 1D cut through all the maps at the Carrington longitude of the date we have chosen, which is around 315 degrees (panels (a), (c) and (e)). We also show the same cut after the pre-processing, to show what is actually used in the simulation (panels (b), (d) and (f)). 
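The cut itself is a one-liner once the maps share a common longitude convention; a minimal sketch, with hypothetical array names:

\begin{verbatim}
import numpy as np

def cut_at_longitude(br_map, lon_deg=315.0):
    """Sketch of the 1-D latitudinal cut through a synoptic map.

    br_map : (n_lat, n_lon) map covering 0--360 degrees in longitude
    """
    n_lon = br_map.shape[1]
    j = int(round(lon_deg / 360.0 * n_lon)) % n_lon
    return br_map[:, j]   # B_r profile along latitude at this longitude
\end{verbatim}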
We also show the same cut after the pre-processing, to show what is actually used in the simulation (panels (b), (d) and (f)); a minimal sketch of this extraction is given below. The main difference is the amplitude of the magnetic field: before the pre-processing, the amplitudes range from -50 to 35 G, while afterwards they range from -9 to 7.5 G. The resolution is also affected, as the pre-processing cuts off the smallest spatial structures. The first row (panels (a) and (b)) shows the Carrington frame diachronic maps, the second row (panels (c) and (d)) shows the synchronic frame maps, and the last row (panels (e) and (f)) the GONG-ADAPT realizations. It is already visible from the original maps that the poles exhibit significant differences, but this is even more dominant after the pre-processing. It is then clear that the maps with which we obtain the best results are the ones that gather more magnetic field at the poles: the GONG-ADAPT maps (panel (f)) because of their flux-transport model, and the HMI maps (orange line in panel (b) and blue line in panel (d)), probably thanks to their high resolution. The GONG zero-corrected products also show some decent magnetic field at the poles (red line in panels (b) and (d)), which probably explains their good scores as well. Bad scores can also be related to a bad assessment of the polarity at the poles: both WSO and GONG mrmqs (blue and green lines in panel (b)) have extremely inaccurate extrapolations of the poles, with GONG mrmqs even having the wrong polarity at the southern pole, which explains why they get the worst scores. Too much magnetic field at the poles in combination with the numerical diffusion of our model may however lead to underestimating the equatorial regions, as we have seen that the GONG-ADAPT runs completely miss the equatorial coronal holes in a typical operational set-up (see figures \ref{fig:euv_ch} and \ref{fig:ch_comp_all}, and table \ref{tab:metrics}). We have shown in section \ref{subsec:min_hcs} that, depending on the input map, our simulations exhibited different shifts of the HCS. Since we have also shown in section \ref{subsec:input_br} that the biggest source of difference between the input maps was the treatment of the solar poles, we can assume that it is an important factor to explain this shift. It is also expected from \cite{svalgaard1978} that the magnetic field at the solar poles is going to impact the HCS, causing a shift of several degrees that can completely change its location at 1 AU with respect to Earth and hence change the geo-effectiveness and intensity of space weather events. Most of the differences between the maps we selected at minimum of activity also came exclusively from the poles, and this had very visible effects on the organization of the corona. In particular, the flux accumulation of the GONG-ADAPT map seems to cause our model, in this standard operational set-up, to miss the equatorial coronal holes, contrary to other input maps; these holes are sources of high-speed streams that hit the Earth and trigger space weather events. This reinforces the importance of the ongoing mission Solar Orbiter, which will be the first imager to capture a global view of the solar poles, hence helping to fill and calibrate the maps more accurately. By combining resolution and accuracy, the respective advantages of HMI and GONG-ADAPT could be brought together to produce maps able to yield reliable simulations for forecasts.
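For reference, such a cut can be extracted in a few lines, assuming the map is stored as a 2D array in (latitude, longitude) with an accompanying longitude grid in degrees (names are illustrative):
\begin{verbatim}
import numpy as np

def cut_at_longitude(br_map, lon_grid, lon_deg=315.0):
    """Return the 1D latitudinal profile of Br at the map column
    closest to the requested Carrington longitude."""
    diff = np.abs(((lon_grid - lon_deg) + 180.0) % 360.0 - 180.0)
    return br_map[:, np.argmin(diff)]
\end{verbatim}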
\section{Conclusion} \label{sec:conclusion} We have tested the impact of the choice of the input magnetic map on the results of our coronal solar wind simulations using our new MHD implicit code COCONUT. To this end, we have selected a strategic date ($2^{nd}$ of July 2019) at minimum of activity in order to focus on the influence of the solar poles. This choice is recent enough to be well documented and coincides with a total solar eclipse on Earth, allowing for precise observations of the coronal structures at that time. We have gathered all 20 publicly available magnetic maps for this date from 4 different providers (WSO, HMI, GONG and GONG-ADAPT), spanning various resolutions and pole-filling techniques. We have pre-processed all maps the same way, with a spherical harmonics cut-off at $\ell_{max}=15$, which would be a standard pre-processing for operational space weather forecasting. In order to assess the quality of the resulting simulations, we have used three validation techniques with three different remote-sensing observations: we have estimated the magnetic field configuration (especially the shape and size of the streamers) from white-light total solar eclipse images, the open magnetic field line repartition from EUV maps from SDO/AIA and the position of the HCS using white-light images from SoHO/LASCO/C2. We have also computed automatic metrics in order to evaluate the quality of these comparisons. What we have seen is that our model performs decently when using input from most maps, and allows for a comfortable visual comparison. However, we have obtained quite different results depending on the choice of the map, which shows that even at minimum of activity (i.e. even for quiet configurations) the input data has a strong impact. The quality of estimation for the streamers varies from 24\% to 85\%, with an average quality of about 60\%. Coronal hole estimation varies from 24\% to 88\% for the polar coronal holes (with an average of 80\% for the northern coronal hole, and 40\% for the southern coronal hole), and from 0\% to 65\% for the equatorial ones, as some simulations completely fail to reproduce them. The HCS deviation from the SB estimate ranges on average from 4 to 12 degrees. We have used these results to provide guidelines for using our model for space weather applications, which could probably be extended to other models with similar approximations (ideal MHD and polytropic heating) and boundary conditions. We can already estimate that a similar deviation of the HCS would be observed at 0.1 AU, which means that the input boundary condition for heliospheric propagators would definitely be affected. We have also assembled a scoreboard of the performances of our model for each map, which shows that with our model we should not use GONG mrbqs maps as they yield poor results. Instead, a better alternative would be the zero-corrected products such as GONG mrzqs and GONG mrnqs. Runs with GONG-ADAPT products perform very well, except for the equatorial coronal holes, which are not reproduced at all. This could be a major issue for the inclusion of SIRs in the forecast. In the end, the best runs are actually the ones based on the HMI products, which should then become standard inputs for our model when used in space weather frameworks.
We have linked these differences to differences in resolution, but also in the treatment of the solar poles, as the flux-transport model of GONG-ADAPT is probably responsible for not reproducing the equatorial coronal holes in this operational set-up. This shows that the solar poles are needed to model accurately the first 20 solar radii and thus cannot be neglected without loss of information. This also highlights the importance of the ongoing Solar Orbiter mission, which will provide more images of the solar poles in order to hopefully unify all these magnetic field measurements. Of course, this study is just the first step towards better quantifying the requirements for space weather forecasts. It has proven that our model COCONUT is robust enough to take as input a large variety of maps, and has allowed us to identify the best maps to use to initialize it and provide inputs for space weather forecasts, but there is still the need to see if these results can be generalized. We have studied only one minimum of activity; more cases would be needed to reach a conclusion for all minima. Another interesting point is whether these results still hold for maximum of activity cases: we actually expect the results to potentially vary a lot, since it is not the poles anymore that drive the simulations, but rather the active regions, so resolution and saturation effects would probably become more important. It is also not clear if these results hold for other numerical codes, although the previous comparison we did with Wind-Predict suggests that at least for polytropic models we should find similar results \citep{Perri2022}. We will of course keep improving our model to be able to include more physics: the next key points are the improvement of the modeling of the coronal heating in order to obtain a bimodal distribution of the solar wind, as well as a multi-fluid treatment to be able to include a realistic transition region up to the chromosphere. Both these treatments will help include structures such as SIRs, and thus enable in-situ comparisons through coupling with heliospheric propagators such as EUHFORIA. In the end, we hope to be able to prove that our new coronal model not only helps to improve space weather forecasts of the wind structures, but also of the transients propagating through this description of the interplanetary medium. \begin{acknowledgements} The authors would like to thank Nicolas Poirier for providing the white-light SMB maps, and Jasmina Magdaleni\'c for useful discussions. This work has been supported by the AFOSR basic research initiative project FA9550-18-1-0093. This project has also received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No.~870405 (EUHFORIA 2.0) and the ESA project "Heliospheric modelling techniques" (Contract No. 4000133080/20/NL/CRS). F.Z.\ is supported by a postdoctoral mandate from KU Leuven Internal Funds (PDMT1/21/028). These results were also obtained in the framework of the projects C14/19/089 (C1 project Internal Funds KU Leuven), G.0D07.19N (FWO-Vlaanderen), SIDC Data Exploitation (ESA Prodex-12), and Belspo projects BR/165/A2/CCSOM and B2/191/P1/SWiM. The resources and services used in this work were provided by the VSC (Flemish Supercomputer Centre), funded by the Research Foundation - Flanders (FWO) and the Flemish Government. Wilcox Solar Observatory data used in this study was obtained via the web site \url{http://wso.stanford.edu} courtesy of J.T.\ Hoeksema.
The Wilcox Solar Observatory is currently supported by NASA. Data were acquired by GONG instruments operated by NISP/NSO/AURA/NSF with contribution from NOAA. HMI data are courtesy of the Joint Science Operations Center (JSOC) Science Data Processing team at Stanford University. This work utilizes data produced collaboratively between AFRL/ADAPT and NSO/NISP. \\ Data used in this study was obtained from the following websites: \\ WSO: \url{http://wso.stanford.edu/synopticl.html} \\ GONG: \url{https://gong2.nso.edu/archive/patch.pl?menutype=z} \\ HMI: \url{http://jsoc.stanford.edu/HMI/LOS_Synoptic_charts.html} \\ GONG-ADAPT: \url{https://gong.nso.edu/adapt/maps/} \end{acknowledgements}
\section{Algorithm Design and Bounds}\label{sec: algo} Here we describe our algorithms for constructing a data structure for k-nearest neighbors, querying the structure, and batch updating it. \textbf{Data structure.} The data structure we use for nearest neighbor searching is a kd-tree whose splitting rule uses the Morton ordering; this is what we refer to as the zd-tree. Since the Morton ordering is just the interleaving of the bits of each coordinate, the tree is built by letting the root represent the entire bounding box, and splitting the points into child nodes at level $i$ based on whether the bit at place $i$ is 0 or 1. In three dimensions, our tree is almost equivalent to an oct-tree in which every three levels of our tree correspond to one level of the oct-tree; however, the leaves can be at different levels. Each internal node of the tree stores the two opposing corners defining its bounding box, its two children, and its parent. Each leaf node stores its two opposing corners, its parent, and the set of points it contains. We bound the number of points in a leaf by a constant, and a leaf can be empty. Note that every point covered by the root bounding box is included in exactly one leaf node. \textbf{Construction.} Before the zd-tree can be built, we preprocess the input. First, motivated by Chan~\cite{chan2006minimalist}, and as required for our bounds (the proof of Theorem~\ref{thm: querytime}), we select a random shift for each coordinate, and shift all the coordinates by this amount. This shift is kept throughout. We then sort the points by the Morton order. This can use Chan's comparison function, which leads to an $O(n\log n)$ work sort, but, as we describe in Section~\ref{subsec: theory}, it can be replaced by a linear-work radix sort with span $O(n^\epsilon)$~\cite{jaja1992parallel} when assuming a bounded expansion constant. In this case the number of bits needed for the Morton order can be bounded by $O(\log n)$. After shifting and sorting we apply a divide-and-conquer algorithm (Algorithm~\ref{algo: treebuild}) to build the zd-tree. The algorithm recurses at each level of the tree on the two sides of the cut for the given bit of the Morton ordering. Importantly, finding the cut in the routine \texttt{splitUsingBit} only requires a binary search since the points are sorted by Morton order. This implies that when the tree is sufficiently shallow (guaranteed by bounded ratio) the work to build the tree is only linear, and the parallel depth is low. Even if the tree is completely imbalanced, the work is at most $O(n \log n)$. \paragraph{Downward search algorithm.} Our downward search algorithm is detailed in Algorithm~\ref{algo: naivesearch}. The algorithm maintains a current set of $k$ nearest neighbors, which starts empty and is improved over time by inserting closer points. In our pseudocode, we use $N$ to represent the nearest neighbor candidate set. The downward search works as follows: let $r$ be the distance from $p$ to the furthest element in $N$ if $N$ contains at least $k$ elements, or infinity otherwise. Now search vertex $v$ only if the bounding box for $v$ intersects the ball of radius $r$ around $p$. This is determined by the \texttt{withinBox} function. If the node is a leaf, iterate through the points it contains and update the set of nearest neighbors if necessary. If it is not a leaf, recurse on its children, searching first the child whose center is closest to the query point $p$.
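As a concrete illustration, the following minimal Python sketch (a sequential sketch under our assumptions, not the parallel implementation; the constants and helper names are ours) computes Morton keys by bit interleaving and builds a zd-tree by recursively splitting the sorted points on successive bits. As in \texttt{splitUsingBit}, each cut is found by a binary search:
\begin{verbatim}
NBITS = 16      # bits per coordinate; O(log n) suffices under bounded ratio
DIM = 3
LEAF_SIZE = 16  # constant bound on the number of points per leaf

def morton_key(pt):
    """Interleave the bits of the (randomly shifted, non-negative
    integer) coordinates, most significant bits first."""
    key = 0
    for b in range(NBITS - 1, -1, -1):
        for c in range(DIM):
            key = (key << 1) | ((pt[c] >> b) & 1)
    return key

class Node:
    def __init__(self, points=None, left=None, right=None):
        self.points = points                # non-None only at leaves
        self.left, self.right = left, right
        self.parent = None

def build(keyed, bit):
    """keyed: list of (key, point) pairs sorted by key; bit: index of
    the current split bit.  Since the points are sorted by Morton
    order, the cut position is found by a binary search."""
    if len(keyed) <= LEAF_SIZE or bit < 0:
        return Node(points=[p for _, p in keyed])
    lo, hi = 0, len(keyed)
    while lo < hi:                          # first index with the bit set
        mid = (lo + hi) // 2
        if (keyed[mid][0] >> bit) & 1:
            hi = mid
        else:
            lo = mid + 1
    if lo in (0, len(keyed)):               # empty cut: go to the next bit
        return build(keyed, bit - 1)
    node = Node(left=build(keyed[:lo], bit - 1),
                right=build(keyed[lo:], bit - 1))
    node.left.parent = node.right.parent = node
    return node

def annotate_boxes(node):
    """Attach the two opposing box corners (node.lo, node.hi) bottom-up."""
    if node.points is not None:             # leaf (assumed non-empty here)
        node.lo = tuple(min(c) for c in zip(*node.points))
        node.hi = tuple(max(c) for c in zip(*node.points))
    else:
        annotate_boxes(node.left)
        annotate_boxes(node.right)
        node.lo = tuple(map(min, node.left.lo, node.right.lo))
        node.hi = tuple(map(max, node.left.hi, node.right.hi))

def build_zdtree(points):
    keyed = sorted((morton_key(p), p) for p in points)
    root = build(keyed, DIM * NBITS - 1)
    annotate_boxes(root)
    return root
\end{verbatim}
Note that this sketch handles an empty cut by simply moving on to the next bit, matching the convention adopted in Section~\ref{subsec: theory}.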
Our \textbf{root-based} algorithm simply starts at the root of the zd-tree with an empty $N$ and applies \texttt{searchDown}, but we also use \texttt{searchDown} in our upward algorithm. \paragraph{Upward search algorithm.} Our upward search algorithm is detailed in Algorithm~\ref{algo:searchup}. It always starts at the leaf in the tree containing the point $p$ and works its way up the tree. The idea is that in general, only a small part of the tree needs to be examined. It uses the downward search as a subroutine. As in the downward search, it maintains a priority queue $N$, initially empty, of the current estimate of the nearest $k$ neighbors, which is improved over time, helping to prune further search. The algorithm starts at the leaf by adding any points in the leaf to $N$. Then, as with the downward version, let $r$ be the distance between $p$ and the $k$-th nearest neighbor in $N$, or infinity if there are not yet $k$ neighbors in $N$. Now search the parent of the current node if and only if the ball of radius $r$ around $p$ extends outside the bounding box of the current node; otherwise, no point outside the current node can be closer than those already in $N$. This can use the same \texttt{withinBox} as used in the downward algorithm, but with a negative $r$. When searching the parent we search the parent's other child using the downward algorithm. Finding the leaf to which a point $p$ belongs, which is needed here, depends on whether we are generating a $k$-nearest neighbor graph or using the structure for dynamic searches for points not in the set. In the first case we know the leaf, since each point is in a leaf. Therefore, to generate a k-nearest neighbor graph we need only build the tree and then run \texttt{searchUp} on each point in each leaf. We refer to this as the \textbf{leaf-based} version of our algorithm. In the second case we have to search down the tree from the root to find the location of the leaf. This can use the bits of the Morton ordering to decide left or right. We refer to this as the \textbf{bit-based} version, in contrast to the \textbf{root-based} version described above. \textbf{Batch-dynamic updates.} The tree data structure naturally lends itself to the possibility of dynamic insertions and deletions. Insertion of a new point $q$ into a zd-tree $T$ is conceptually simple: locate the leaf of $T$ into which $q$ should be inserted; then either add $q$ to the sequence of points contained in the leaf, or, if $q$ would cause the number of points in the leaf to exceed some cutoff, split the leaf into two children. This concept can be refined to a parallel batch-dynamic algorithm, which takes a set of points and recurses in parallel down the right and left children of the root. One small subtlety is that, to avoid cases where an insertion might require a rebuild of the entire tree, we require that a bounding box containing all future data be specified before building the initial tree. As with building the tree from scratch, a batch-dynamic insert starts by using the random shift to offset the points and sorting the points to be added by their Morton order. We then apply the recursive algorithm shown in Algorithm~\ref{algo: batchdynrec}. Deletions use an almost identical algorithm.
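Continuing the sketch above (all helper names are ours, and the boxes are those attached by \texttt{annotate\_boxes}), the pruning tests and both searches can be sketched as follows; a bounded max-heap keeps the current $k$ candidates:
\begin{verbatim}
import heapq, math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ball_intersects_box(p, r, lo, hi):
    """True iff the ball of radius r around p meets the box [lo, hi]."""
    d2 = sum(max(l - c, 0, c - h) ** 2 for c, l, h in zip(p, lo, hi))
    return d2 <= r * r

def ball_inside_box(p, r, lo, hi):
    """True iff the ball of radius r around p lies entirely in the box."""
    return all(l + r <= c <= h - r for c, l, h in zip(p, lo, hi))

def radius(N, k):                 # current k-th nearest distance
    return -N[0][0] if len(N) >= k else math.inf

def update(N, k, p, q):           # max-heap of the k best candidates
    heapq.heappush(N, (-dist(p, q), q))
    if len(N) > k:
        heapq.heappop(N)          # drop the furthest candidate

def center(node):
    return tuple((l + h) / 2 for l, h in zip(node.lo, node.hi))

def search_down(node, p, k, N):
    if not ball_intersects_box(p, radius(N, k), node.lo, node.hi):
        return                    # prune: this box cannot improve N
    if node.points is not None:   # leaf: scan its points
        for q in node.points:
            update(N, k, p, q)
    else:                         # visit the closer child first
        for child in sorted((node.left, node.right),
                            key=lambda c: dist(p, center(c))):
            search_down(child, p, k, N)

def search_up(leaf, p, k):
    N = []
    for q in leaf.points:         # seed N with the leaf's own points
        update(N, k, p, q)
    node = leaf
    while node.parent is not None and \
          not ball_inside_box(p, radius(N, k), node.lo, node.hi):
        parent = node.parent
        sibling = parent.left if node is parent.right else parent.right
        search_down(sibling, p, k, N)
        node = parent
    return sorted((-d, q) for d, q in N)   # (distance, point) pairs
\end{verbatim}
For a dynamic query on a point $q$ not in the set, the bit-based variant would first locate $q$'s leaf using the bits of its Morton key and then call \texttt{search\_up}.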
\input{Algorithms/algo-treebuild} \input{Algorithms/algo-naivesearch} \input{Algorithms/algo-searchup} \input{Algorithms/algo-batchdynrec} \subsection{Theoretical Results.}\label{subsec: theory} In this section, we give theoretical results on the performance of our algorithms under the assumption of a bounded expansion constant. The results in the rest of the section assume that the point set $P$ has bounded expansion constant $\gamma \geq 2$ as well as bounded ratio, and they assume that the dimension $d = O(1)$. We also require that every coordinate of every point is unique. This is a fair assumption to make in the context of nearest neighbors, since every point set that does not have this property can be transformed into one that does, in which each point retains the same nearest neighbors. The proofs presented in this section assume that $X$ is a bounding cube, but the full version of the paper contains proofs of the same results where $X$ is any convex region. Where not otherwise specified, we use $B$ to denote the bounding box of the randomly shifted point set; note that the side lengths of $B$ can be at most twice the side lengths of the smallest bounding box containing $X$. \input{figure-theoryfig} Our first theorem concerns the height and build time of a zd-tree on a point set with bounded ratio. \begin{theorem}\label{thm: treebuild} For a point set $P$ of size $n$ with bounded ratio, the zd-tree can be built using $O(n)$ work with $O(n^{\epsilon})$ span, and the resulting tree has height $O(\log n)$. \end{theorem} \begin{proof} For the tree depth and work bound, we need to show that the longest path in the tree has length $O(\log n)$. The bounding cube of $X$ has side length within a constant factor of $d_{\max}$, and it must be divided until the two points at distance $d_{\min}$ from each other are in separate cubes. Since $d_{\max}/d_{\min} = \text{poly}(n)$, $d_{\max}$ must be halved $O(\log n)$ times to reach $d_{\min}$. Thus the tree has $O(\log n)$ depth. For the sorting claim, we wish to show that a parallel radix sort can be used. For the radix sort to require $O(n)$ work, we need to guarantee that only $O(\log n)$ bits are required to sort the dataset. This follows directly from the fact that the tree has depth $O(\log n)$. \end{proof} Now, we work towards the theorem on the expected time required to query the k-nearest neighbors of a point $p$. As with the result of Connor and Kumar~\cite{connor2010knn}, the $O(k \log k)$ bound only applies when searching for the nearest neighbors of a point already in the tree; for a dynamic query, finding the leaf to start from requires $O(\log n)$ time. The following proofs assume without loss of generality that a leaf of the zd-tree contains no more than $k$ points. Furthermore, we assume that the tree is a true quad-tree, meaning that each internal node has $2^d$ children; thus we use the term ``quad-tree box'' to refer to the box belonging to a tree node. This will slightly simplify the analysis, and the assumed performance can only be worse than that of the actual algorithm. \begin{theorem}\label{thm: querytime} For a zd-tree representing a point set $P$ of size $n$ with bounded expansion, finding the k-nearest neighbors of a point $p \in P$ requires expected $O(k \log k)$ work. \end{theorem} The proof of the theorem separates the work into two parts: the work of searching through the points in a leaf of the zd-tree, and the work of traversing the zd-tree to get to those leaves. The first lemma concerns the former.
\begin{lemma}\label{lem: numboxes} When searching for the k-nearest neighbors of a point $p$, $O(k)$ candidate points will be considered, resulting in $O(k \log k)$ work to evaluate all the candidate points. \end{lemma} \begin{proof} The multiplicative factor of $\log k$ comes from the fact that a priority queue is used to store the nearest neighbors, so an $O(\log k)$ cost is incurred each time the priority queue is updated. Consider the leaf $L$ of the tree that would contain $p$. The initial approximation is found by walking up from $L$ to the first ancestor that has more than $k$ descendants, then adding those descendants to the priority queue. The ancestor's bounding box $B$ must contain $O(k)$ points by the bounded expansion property, since one of its children contains fewer than $k$ points. Now, let $r$ be the side length of $B$. Our algorithm will search a leaf only if the box belonging to that leaf overlaps with $\bbox(p, r)$. Those neighbors are contained within radius $r$ of the quad-tree box containing our initial guess. The search algorithm evaluates every point within radius $r$ of $p$ as a candidate point. If $B$ is expanded twice, all the candidate points will be contained in the resulting box; call this box of candidates $Q$. If $k' = O(k)$ points are in $B$, the expansion condition guarantees that at most $\gamma^2 k' = O(k)$ points are in $Q$. However, the search algorithm does not directly search each point in $Q$; rather, it searches every leaf whose bounding box overlaps with $Q$. This means that if the bounding box of a leaf $L'$ were to fall partially inside $Q$ and partially outside, all the points in $L'$ would be counted. Since $B$ is a quad-tree box, this could only occur if the leaf $L'$ were to have a bounding box with radius larger than $r$. Since the radius is larger than $r$ and $d=O(1)$, there can only be a constant number of such leaves. Since each leaf has at most $k$ points, the original bound still holds. \end{proof} Now, we move on to considering the expected work required to traverse the zd-tree to access the leaves. The probability calculation given here can also be found in~\cite{connor2010knn}. \begin{lemma}\label{lem: traversalcost} When traversing the zd-tree, the expected number of tree edges traversed to find all the nearest neighbors is $O(k)$. \end{lemma} \begin{proof} The worst case for the cost of the search algorithm is when the search space described in Lemma~\ref{lem: numboxes} is only contained in the bounding box containing all of $P$, as illustrated in Figure~\ref{fig: theoryfig}(a). First, we show that the length of the traversal is bounded by the longest path between two leaves in the search space; all other searches can be charged to the $O(k)$ points that are searched. We will use the box $B$ to refer to the search space. The largest cut of $B$ divides $B$ into $2^d$ quad-tree boxes. Each of these boxes must be contained within a quad-tree box $Q_i$ of at most twice the side length of $B$. Thus, traversing every leaf in $Q_i$ incurs cost at most $O(k)$; since $d$ is constant, the claim follows. All that remains is to compute the expectation of the length of the longest path in the zd-tree. Without loss of generality, assume that the search space is a box with side length $2^h$. Then the probability that the search space is contained within a box of side length $2^{h+j}$ is $\left(\frac{2^j-1}{2^j}\right)^d$, since the box must have its upper left corner in one of $2^j-1$ out of the $2^j$ grid squares along each dimension.
Thus, due to the random shift, the probability that the search space is NOT contained within a box of side length $2^{h+j}$ is $1-\left(1-\frac{1}{2^j}\right)^d$. From the perspective of traversing the zd-tree, this is the event that the path between two leaves in the search space has length greater than $j$. Thus the expectation can be upper bounded by the following summation, which charges a cost of one for each box the search space is not contained in: \begin{align*} \sum_{j=1}^\infty \left(1 - \left(1-\frac{1}{2^j} \right)^d \right) = O(1) \end{align*} and the result follows (for example, the sum evaluates to exactly $1$, $5/3$, and $15/7$ for $d=1,2,3$, respectively, a constant for any fixed $d$). \end{proof} Lemmas~\ref{lem: numboxes} and~\ref{lem: traversalcost} together complete the proof of Theorem~\ref{thm: querytime}. Now we move on to batch-dynamic updates. In a weight-balanced tree, the argument for the desired $O(k \log (n/k))$ bound would be as follows: when a batch of points is inserted into the tree, the work required to insert them into the first $\log k$ levels can be no more than the work that would be required to build them into their own tree. Thus insertion into the first $O(\log k)$ levels of the tree uses $O(k)$ work, and the work bound on an insertion is $O(k(\log n - \log k)) = O(k\log (n/k))$. Hence the goal of Theorem~\ref{thm: batchupdate} is to show that in addition to the traditional notion of balance, the zd-tree also obeys some notion of weight balance---that is, that each split of a point set must produce two halves where each contains a constant fraction of the points. Unfortunately, this is not strictly true, since a set of points with bounded expansion may, for example, have only one element with a 1 at the largest bit if that element is very close to the rest of the elements. However, we will be able to show a slightly weaker notion than weight balance: that enough nodes in the tree are weight-balanced that the same work bound still applies. One more piece of terminology is needed before the theorem statement. When building the zd-tree by successively splitting the bounding box of the input, a split may have no points in $X$ on one side. During the tree building phase, we face a choice regarding empty cuts: when the algorithm makes an empty cut, we could either fork off two child nodes where one is a leaf containing no points, or we could simply not fork off an empty node, and divide the other node using the next bit. Since it is strictly cheaper, we choose the latter. Thus the following analysis will not deal with empty cuts; in particular, Lemmas~\ref{lem: exteriorcuts} and~\ref{lem: slicedensity} do not consider empty cuts in their analysis, since empty cuts do not affect the length of paths in the tree. \begin{theorem}\label{thm: batchupdate} Let $T$ be a pruned zd-tree representing point set $P$, and let $Q$ be a point set of size $k$, such that $|P|+|Q|=n$. Then if $P \cup Q$ and $Q$ both have bounded expansion and bounded ratio in the same hypercube $X$, $Q$ can be inserted into $T$ in $O(k\log(n/k))$ work and $O(k^{\epsilon} + \polylog(n))$ span. \end{theorem} When $X$, inside its bounding box $B$, is being recursively divided using Algorithm~\ref{algo: treebuild}, it will be useful to separate the divisions, or cuts, into several categories. A cut along some dimension divides a sub-cube $S$ of $B$ in two with cutting plane $\ell$. The cut is either \textbf{empty}, meaning that $\ell$ does not touch any points in $X$; or it is \textbf{exterior}, meaning it touches the boundary of $X$; or it is \textbf{interior}, meaning that within $S$, $\ell$ does not touch any points on the boundary of $X$.
See Figure~\ref{fig: theoryfig}(c) for an illustration. The first step towards Theorem~\ref{thm: batchupdate} is to show that interior cuts are weight balanced. \begin{lemma}\label{lem: interiorcuts} One out of every $d$ interior cuts must be weight balanced; that is, it must split its bounding box into sets of size $\alpha n$ and $(1-\alpha)n$ for constant $\alpha \in (0,1)$. \end{lemma} \begin{proof} Consider a point $p \in P$. As the zd-tree is built, the point set $P$ is split into smaller pieces along each dimension. Call a split \textbf{unbalanced} if it splits $P$ into pieces of size $n_1, n_2$ such that one of $n_1, n_2 < \frac{1}{2(1+\gamma)}n$. Refer to a split as ``involving'' $p$ if it splits a box containing $p$. We will show that after $d-1$ unbalanced splits involving $p$, the next split must be balanced. Assume for contradiction that there are $d$ consecutive unbalanced splits involving $p$. Let $B$ be the quad-tree box containing $p$ after those splits, and let the length of $B$'s longest side be $2^j$. By the assumption that $d$ consecutive unbalanced splits have already happened, there must be some side of $B$ where the most recent split on that side was of a region with maximum side length $2^{j+1}$; call the hyperrectangle resulting from that split $Q$. Let $q \in B$ be the unique closest point to $Q$, as shown in Figure~\ref{fig: theoryfig}(b). Then, consider any $x \in Q$ such that $B_q = \bbox(x, 2^j)$ contains $q$ and no other point in $B$, and overlaps only $B$ and $Q$. Due to its proximity to $B$, $\bbox(x, 2^{j+1})$ must completely contain $B$. By our assumption, $B_q$ contains at most $\frac{1}{2(\gamma+1)}n+1$ points, since it contains one point from $B$ and otherwise only points from $Q$, which has fewer than $\frac{1}{2(\gamma+1)}n$ points by our assumption. The box $B$ contains at least $\left( 1 - \frac{1}{2(1+\gamma)}\right)^d n$ points, so since $d \geq 2$, the expansion constant is violated and we reach a contradiction. \end{proof} While interior cuts are easily shown to be balanced, the same argument does not hold for a sequence of exterior cuts, since Lemma~\ref{lem: interiorcuts} relies on being able to choose a certain point $x$ as the center of a box, and this point might not be included in $X$ if some of the $d$ unbalanced cuts were exterior. Since an arbitrarily long sequence of nodes formed from exterior cuts might not be weight-balanced, we take a different approach: showing that even if we have to pay the maximum possible cost for each unbalanced path, the number of points on such an unbalanced path is small enough that the overall bound is unchanged. The first step towards this goal is to quantify, for a given point $p \in P$, how many exterior cuts involving $p$ must be made before the first interior cut involving $p$. \begin{lemma}\label{lem: exteriorcuts} Normalize the side length of $B$ to $n$. Then for every point $p \in P$, let $f(p)$ denote the minimum distance from $p$ to the boundary of $X$, perpendicular to some side of the bounding box $B$. Then $O\left(\log \frac{n}{f(p)}\right)$ cuts will be made before the next cut involving $p$ is interior. \end{lemma} \begin{proof} A cut involving $p$ is guaranteed to be interior when the quad-tree box containing $p$ has radius less than $f(p)$. Since the radius of the quad-tree box is halved every $d$ cuts and the side length of $B$ is $n$, the aforementioned condition is met after $d \cdot \log(n/f(p))$ cuts.
\end{proof} Lemma~\ref{lem: exteriorcuts} gives us a way to bound the number of exterior cuts along the path to $p$. The next step is to bound the \textit{number} of points that can be a given distance or closer to the boundary of one of the faces. \begin{lemma}\label{lem: slicedensity} Normalize the side length of $B$ to $n$. Let $S$ be a subset of $B$ formed by cutting $B$ parallel to one of its faces at distance $r$ away from the face. Then at most a $\left(\frac{\gamma^2}{\gamma^2+1} \right)^{\log (n/r)}$ fraction of the total points in $B$ are contained in $S$. \end{lemma} \begin{proof} One way of upper bounding the number of points in $S$ is as follows: $S$ can be formed by dividing the bounding box in half along one dimension $\log(n/r)$ times. On each division, at most how many points can be in the resulting rectangle? Consider the first cut, which divides the bounding cube in half; the goal is to maximize the number of points in one half without violating the expansion constant. The ``sparse'' half can be separated into $2^{d-1}$ hypercubes of radius $n/2$, each of which is expanded twice before the whole space $X$ is encompassed. Note that it is optimal for all the ``sparse'' sub-cubes to contain the same number of points, since each must be able to expand a box around its center. The following equation solves for the largest fraction $f$ of the total points that can be in the dense half without violating the expansion constant: \begin{align*} \gamma^2 \frac{1-f}{2^{d-1}-1} \geq f + (2^{d-1}-2)\frac{1-f}{2^{d-1}-1} \\ \implies f \leq \frac{\gamma^2-2^{d-1}+2}{\gamma^2+1}. \end{align*} The following cuts need to be upper bounded in a slightly different way, since they are cutting a hyperrectangle with $d-1$ sides of length $n$ and one side of length $s$. Thus the region can be decomposed into $2n/s$ boxes of side length $s/2$. Half the boxes will receive the maximum number of points possible; both halves will evenly distribute their points across the boxes. Each ``sparse'' hypercube of side length $s/2$ must be expanded twice before it contains a cube $C$ of side length $s$ that is completely contained within the region. For that cube $C$, the same equations as before can be written to bound the number of points in the dense half of $C$ to a $\frac{\gamma^2-2^{d-1}+2}{\gamma^2+1}$ fraction of the total points in $C$. The fraction of the total points remaining is then upper bounded by $\left(\frac{\gamma^2-2^{d-1}+2}{\gamma^2+1} \right)^{m}$, where $m$ is the total number of cuts that have occurred, assuming that the maximum possible fraction is on one side each time. The overall bound follows. \end{proof} Now, we can put all these pieces together to prove the theorem. \begin{proof}[Proof of Theorem~\ref{thm: batchupdate}] The sorting cost bound follows directly from Theorem~\ref{thm: treebuild}. For the update bound, we know that over all weight-balanced paths in the tree, the cost of inserting the $k$ points down those paths is $O(k \log (n/k))$. Thus our task is to account for the paths in the tree that are not weight-balanced. In the worst case, for every non-weight-balanced path of length $\ell$, we incur an $O(\ell)$ cost for each point that traverses it. We will show that the number of points in $Q$ that are distributed among the unbalanced parts of the tree is small enough that the overall bound is unchanged. Consider one face $S$ of $X$.
For a point $p$ at a given distance $r$ away from the boundary of $X$ perpendicular to $S$, the path traveled from the root to $p$ can encounter $O(\log (n/r))$ unbalanced nodes. Consider all points at a distance $r$ or closer to $S$. Lemma~\ref{lem: slicedensity} shows that there are at most $k \cdot\left(\frac{\gamma^2}{\gamma^2+1} \right)^{\log (n/r)}$ such points. The cost incurred for each such point is also $\log (n/r)$. Thus, the maximum cost for all the points at distance $r$ or closer is $k\left(\frac{\gamma^2}{\gamma^2+1} \right)^{\log (n/r)} \log (n/r)$. Renaming $\log (n/r)$ as the variable $x$ and integrating over values of $x$ from $0$ to $\log n$ gives: \begin{align*} k \int_0^{\log n} \left(\frac{\gamma^2}{\gamma^2+1}\right)^x x \; dx \\ < k \int_0^{\infty} \left(\frac{\gamma^2}{\gamma^2+1}\right)^x x \; dx \\ = O(k), \end{align*} where the last step uses the elementary fact that $\int_0^{\infty} a^x x \; dx = 1/\ln^2(1/a)$ for any constant $0 < a < 1$, which here is a constant depending only on $\gamma$. Since the number of faces is constant, the overall bound follows. This shows that even in the worst case, where every exterior cut is unbalanced and the maximum number of points are distributed in nodes formed from exterior cuts, the extra work incurred does not change the overall bound. \end{proof} We conclude the section with a note on the tradeoff between work and span. The versions of the theorems given in this section have a span that is greater than polylogarithmic due to the radix sort. If the algorithms used a comparison sort, they could run in polylogarithmic span at the cost of needing $O(n\log n)$ work for the sort. \section{Other Nearest Neighbor Implementations}\label{apdx: otherimpls} For the purpose of comparison we use three existing implementations of nearest neighbor search: CGAL~\cite{alliez2016cgal}, STANN~\cite{connor2010knn}, and Chan~\cite{chan2006minimalist}. Here we describe some performance issues with their code, and some modifications we made to improve its performance. Furthermore, we give a brief explanation of why we do not benchmark Delaunay triangulation based methods, since those are sometimes used for computing nearest neighbors in low dimensions. Firstly, the existing implementations for finding Delaunay triangulations were very slow: on 10 million points, the ParlayLib built-in function takes a few seconds, compared to less than 0.05 seconds to build the kd-tree. Secondly, the literature we found on Delaunay triangulations focused on the 2D case~\cite{birn2010simple}, while our experiments focused on the 3D case. \paragraph{Chan.} Chan's code was fully sequential, so we needed to parallelize it. Conceptually this is relatively straightforward, since the algorithm just requires using a parallel sort instead of a sequential one, and then running the queries in parallel. Indeed, the first step of using a parallel sort was easy, and we just replaced the C++ STL sort with the ParlayLib sort. The second step required some work since the code was not thread safe. However, once modified to use the parallel loop from ParlayLib, the code achieves very good speedup---about 75-fold speedup on 72 cores with 144 threads (see Section~\ref{sec: experiments} for the full details). Chan's algorithm only describes how to search from the root, and correspondingly his code only searches from the root. There seems to be no inherent reason that it would not be possible to start at the leaves when generating a nearest neighbor graph, but we did not implement such a variant. We note that even the root-based implementation of our code is significantly faster than Chan's.
Chan's code uses arrays to store the points and therefore does not support dynamic updates. In his paper he mentions that his algorithm could support dynamic updates by storing the points in a balanced binary search tree in Morton order, but this would require completely rewriting his code. Experiments with Chan's code only deal with the $k=1$ case, since his implementation does not provide support for higher $k$. \paragraph{STANN.} STANN includes both a k-nearest neighbor graph (KNNG) function and a $k$-nearest neighbor (KNN) function. The first finds the $k$ nearest neighbors among a set of points, and the second supports a function to build a tree and a separate function to query a point for its $k$-nearest neighbors. They supply a parallel version of KNNG, which was parallelized with OpenMP, and only a sequential version of KNN. In our initial tests we were able to get performance on the parallel KNNG that closely matches what they report. However, their algorithm did not scale well beyond 16 threads (their numbers agree with this). The main issue is that the algorithm left some components sequential, including the Morton order sort and the initialization of various standard template library (STL) vectors. For a larger number of cores this became the bottleneck. We therefore updated their code to use ParlayLib, using a parallel sort and replacing uses of STL vectors with ParlayLib sequences, which are initialized in parallel. We also made a couple of other optimizations, including changing the size of the base case of the recursive query from 4 to 10, and using vectors instead of priority queues to store the nearest neighbors for a point when $k$ is small. These changes made a significant improvement in performance, especially at a larger number of cores, as indicated in Figure~\ref{fig: STANNvParlay}. The STANN KNN code was fully sequential. We therefore parallelized this as well, which required much the same changes as we made to the parallel code (using a parallel sort, making all loops parallel, and using ParlayLib sequences instead of STL vectors). We also run the queries in parallel, which required some minor changes to make their code thread safe. As with Chan's algorithm, STANN stores the points in an array (an STL vector) and therefore does not support dynamic updates. \paragraph{CGAL.} CGAL implements a parallel version of their $k$-nearest neighbor code using the threading building blocks (TBB)~\cite{TBB}. We use their code directly with no modifications. We note that, as with the original version of STANN KNNG, their code does not scale well past 16 or so threads. We looked into this and there are several reasons. Perhaps most fundamentally, although in their recursive routine for building the kd-tree they invoke the two recursive calls in parallel, they do the splitting within each node completely sequentially. At the root of the tree this means they do linear work completely sequentially. Indeed, from a theoretical point of view their algorithm does a total of $O(n \log n)$ work (assuming uniform input) and has $O(n)$ span, meaning it only has $O(\log n)$ parallelism. Fixing this problem would require a major rewrite of their code. A second issue is that they allocate their tree nodes by pushing onto the back of a TBB concurrent vector. Although this is thread safe, it requires a lock and becomes a bottleneck on a large number of threads. A similar issue appears to be true in the query.
In particular, although the code appears to be thread safe, giving correct answers when run in parallel, there seems to be contention that slows all the threads down when many threads are used. This is often caused by some form of memory allocation, as with the tree build, but in this case we were not able to track down the source of the problem. Due to the particularly bad performance beyond 36 threads (all of which are on one chip), we only report numbers up to 36 threads. Furthermore, since we observed wildly varying times with higher $k$, we only included times for $k<10$ in our experiments. \section{Dynamic Queries}\label{apdx: dynqueries} In this appendix, we provide data and commentary on our experimental results for dynamic queries. The results are presented graphically in Figure~\ref{fig: dynqueries}. \paragraph{Varying $k$, work efficiency.} (Figure~\ref{fig: dynqueries}(a) and (b).) All the algorithms we tested had similar performance for work efficiency and scalability in the number of nearest neighbors as they did in the k-nearest neighbor graph building case. \paragraph{Scaling size of dataset.} (Figure~\ref{fig: dynqueries}(c).) Our algorithms are particularly robust compared to others when scaling the size of the dataset. One thing to notice in the relevant panel (c) of Figure~\ref{fig: dynqueries} is that, with the exception of CGAL, all algorithms experience a local minimum when the size of the dataset reaches $10^5$. While the times of Chan, ParlayKNN, and CGAL begin to slowly increase after this point, our algorithms' times do not. The dataset of size $10^5$ is approximately where we expect cache misses to start affecting the performance; as explained in Section~\ref{sec: other_impls}, pre-sorting the data for dynamic queries helps alleviate this problem. \paragraph{Other datasets.} Since the bit-based and root-based algorithms performed very similarly on the random distribution for dynamic queries, we refer to Figure~\ref{fig: nondynqueries}(f) for evidence that the bit-based algorithm outperforms the root-based one for some distributions. \input{figure-dynquries} \section{Full Proof of Theorem~\ref{thm: batchupdate}}\label{apdx: fulltheory} Here we extend the results in the main body to the case where $X$ is a general convex space rather than a bounding cube. This is only a concern for Theorem~\ref{thm: batchupdate}; Theorems~\ref{thm: treebuild} and~\ref{thm: querytime} did not actually use the assumption on $X$. Theorem~\ref{thm: batchupdate} works by giving an upper bound on the number of points in an update that can be inserted into an unbalanced path of length $\ell$. To generalize it to arbitrary convex spaces, we need to show that the upper bound (a) still holds when the bounding box $B$ of $X$ is not a hypercube, and (b) still holds when $X$ is not a hypercube or hyperrectangle. The following lemma concerns the former property. \begin{lemma}\label{lem: squaremax} The number of points in $X$ which can be part of an unbalanced path of length $\ell$ in $T$ is maximized when the bounding box $B$ of $X$ is a hypercube. \end{lemma} \begin{proof} This follows from noting that the proof of Theorem~\ref{thm: treebuild} treated a split of $X$ as a split of an oct- or quad-tree---that is, each split is a $d$-dimensional split of a bounding cube into $2^d$ boxes. Thus, if the longest side of the bounding hyperrectangle $B$ of $X$ is normalized to $n$ and another side of $B$ has length $r \ll n$, there will only be $\log r$ nonempty cuts of $X$ perpendicular to that side.
Thus the length of an unbalanced path in $T$ will always be dominated by the length of the longest side, and $B$ being a bounding rectangle can only result in fewer points in an update traversing an unbalanced chain than if $B$ were a hypercube. \end{proof} The second piece needed to apply Theorem~\ref{thm: batchupdate} to general convex spaces $X$ is to verify that the number of points near the boundary of $X$ is still upper bounded when $X$ is not a hyperrectangle. \begin{lemma}\label{lem: volmax} Let $X$ be convex with bounded expansion. Consider a subset of $X$ with volume $V$ such that $V$ is a fraction $f$ of $X$'s volume. Then the subset contains at most $\left(\frac{\gamma^2}{\gamma^2+1} \right)^{\log (1/f)}n$ points. \end{lemma} \begin{proof} $O(\log (1/f))$ divisions of $X$ in half are needed to produce a shape of volume $V$. To maximize the number of points in the subset, we need to minimize the number of points that can be in the ``sparse'' half of each cut. The number of points in the sparse half is dictated by how many individual boxes can be packed into the sparse area and then expanded twice to reach the boundary of the dense area. Thus, the number of points in the dense area is maximized when there is only one such box, i.e. when $d=1$. This implies that we can upper bound the number of points in the subset by its upper bound when $d=1$, i.e. a $\left( \frac{\gamma^2}{\gamma^2+1} \right)^{\log (1/f)}$ fraction of the total points. This bound will be sufficient to finish the proof of Theorem~\ref{thm: batchupdate}. \end{proof} Now we are ready to prove the extended version of Theorem~\ref{thm: batchupdate}. \begin{proof}[Proof of Theorem~\ref{thm: batchupdate}] Fix a bounding box $B$. Now, for all convex shapes with bounding box $B$ and more sides than a hyperrectangle, the surface-area-to-volume ratio decreases from that of a hyperrectangle. Thus, for a given distance to the boundary of $X$, the volume in $X$ that can be at that distance or closer is only smaller when $X$ has more sides than a hyperrectangle. Since Lemma~\ref{lem: volmax} tells us that the upper bound given in Lemma~\ref{lem: slicedensity} applies no matter how the volume is arranged, the number of points in an update that can travel down an unbalanced path is no more than it would be if $X$ were a hyperrectangle. Thus, we can take the number of points (and their corresponding cost) in an update to a hyperrectangle as an upper bound for this case. However, triangles, tetrahedra, and their higher-dimensional counterparts have a higher surface-area-to-volume ratio than the hyperrectangle. Since the dimension is constant and the bounding box $B$ is assumed to be the smallest possible, the surface-area-to-volume ratio of the tetrahedron is only a constant factor larger than that of the hypercube, and thus the result still holds. \end{proof} \section{Conclusion}\label{sec: conclusion} In this work, we presented the zd-tree, a data structure for k-nearest neighbors that combines the ideas of kd-trees and Morton ordering and supports batch-dynamic updates. We showed that the zd-tree is both theoretically efficient and fast in practice, performing well even on datasets which do not have bounded ratio or bounded expansion. One future experimental direction is to experiment with higher-dimensional datasets, or with using our algorithms as a sub-step in calculating nearest neighbors in high dimensions.
Another direction worth exploring is to use the zd-tree or a similar data structure for other problems in low-dimensional geometry, such as closest pair or n-body interactions. \section{Experiments}\label{sec: experiments} In this section, we provide experimental results which show that 1) our algorithms perform well under many types of scaling and across different architectures, and 2) our algorithms outperform every implementation we test against. \subsection{Experimental Setup.}\label{subsec: experimentsetup} \paragraph{Machines.} We ran most of our experiments on a 72-core Dell R930 with 4x Intel(\textregistered) Xeon(\textregistered) E7-8867 v4 (18 cores, 2.4GHz and 45MB L3 cache), and 1 TB of memory. With hyperthreading the total number of threads is 144. To check whether the results were robust across machines, we also ran one set of experiments on a 4-socket AMD machine with 32 physical cores in total, each running at 2.4 GHz, with 2-way hyperthreading, a 6MB L3 cache per socket, and 200 GB of main memory. \paragraph{Test data.} After testing our algorithms on some real-world image data, we discovered, similarly to Connor and Kumar~\cite{connor2010knn}, that uniformly random points perform very similarly to real-world datasets. To facilitate testing at various sizes we therefore use a few distributions of random points in 2 and 3 dimensions, on sizes up to 100 million points. The point sets we used are listed in Figure~\ref{fig: databarchart}. The 2D and 3DinCube datasets are points picked uniformly at random in a square and cube, respectively. The points in the 3DonSphere dataset are selected on the 2D surface of a sphere in 3 space. This is meant to represent various graphics applications where the point sets are on a 2D surface embedded in 3D. The 3Dplummer distribution uses the Plummer model~\cite{AarsethHW74}, which is based on the study of galaxies, and is highly dense at the center, becoming very sparse on the outside. The 2Dkuzmin distribution is a similarly skewed distribution in two dimensions. As can be seen from the various statistics, the Plummer and Kuzmin distributions are significantly more skewed than the others. Indeed, these distributions do not have a bounded expansion constant. The performance is slower due to the fact that a few points are extremely far away from most of the points, which are in a dense cluster at the center. This causes the tree to be unbalanced and the searches for the nearest neighbors of these far points to be expensive. However, the overall time is hardly affected by the skewed distribution for our leaf- and bit-based implementations (Figure~\ref{fig: databarchart}), showing that the algorithms are robust under quite skewed distributions. As expected, the depth of the trees for the uniform distributions in a cube is just the logarithm of the number of points. \paragraph{Algorithms tested.} We ran three classes of experiments: (1) generating a $k$-nearest neighbor graph on a set of $n$ points, (2) building a $k$-nearest neighbor query structure on $n$ points followed by dynamic queries on a different set of $n$ points, and (3) batch insertions for a total of $n$ points (after the insertion). The results from (2) can be found in the full version of our paper; they are not significantly different from building the k-nearest neighbor graph.
Altogether we tested 9 variants of the algorithms: our parallelized version of Chan's algorithm, the CGAL algorithm, four variants of STANN (KNN, KNNG, parlayKNN and parlayKNNG), and three variants of our algorithm (leaf-based, root-based, and bit-based). The parlayKNN and parlayKNNG are our modified versions of Connor and Kumar's algorithms. Since our modified versions are always significantly faster, we only report numbers for our versions. For all implementations, we sort by Morton order before querying. This is to ensure that all algorithms get the same benefit of locality in the tree when querying. The experiments on batch insertion only use our algorithm, since the others do not support dynamic updates. \subsection{Leaf vs. Root Based.}\label{subsec: leafvroot} In Figure~\ref{fig: databarchart}, we show the performance of our three search algorithms for finding the k-nearest neighbor graph on varying datasets with $k=1$. The same figure also reports a different measurement: the average and maximum number of nodes visited during a query. One takeaway from the figure is that even though the leaf-based method takes $O(n)$ work and the bit-based method takes $O(n \log n)$ work, the former is only slightly faster. This is because the constant in the $O(k \log k)$ term for a search from the leaf is much larger than the constant in the $O(\log n)$ search from the root. Another takeaway is that for the Kuzmin and Plummer distributions, starting from the leaf rather than using the root-based algorithm (Algorithm~\ref{algo: naivesearch}) makes an enormous difference. \subsection{$k$-Nearest Neighbor Graphs.}\label{subsec: knng} The results of our experiments for generating the $k$-nearest neighbor graph can be found in Figure~\ref{fig: nondynqueries}. \paragraph{Varying dataset size.} (Figure~\ref{fig: nondynqueries}(a) and (b).) We measured the total time per point (that is, to build the tree and perform the query) by dividing the total time to build and search by the number of points. As discussed in Section~\ref{sec: other_impls}, we took measures to limit the number of cache misses where possible. \paragraph{Work efficiency.} (Figure~\ref{fig: nondynqueries}(c) and (d).) Experiments showed that our algorithms performed significantly less work than our competitors as the number of threads increased to 144. To show that we maintain work efficiency on different architectures, we also ran the same experiments on a 32-core AMD machine. \paragraph{Varying $k$.} (Figure~\ref{fig: nondynqueries}(e).) Our results on varying numbers of neighbors show that our algorithms remain fast and scalable. \paragraph{Tree building.} (Figure~\ref{fig: nondynqueries}(f).) To illustrate that the tree-building step itself is efficient (except in the case of CGAL, as explained in Section~\ref{subsec: other_impls}), we show the time required to build the data structure for the 3DinCube and 3Dplummer distributions. \subsection{Dynamic Updates.} \label{subsec: dynupdates} We test the efficiency of our batch-dynamic updates by measuring the time required per update as the number of updates in the batch increases. Figure~\ref{fig: dynamicupdates} shows the time taken per point as the size of the batch increases, with both insertions and deletions shown. The figure shows a drastic change in time, spanning almost four orders of magnitude: from $10^{-4}$ seconds for a single update to $10^{-8}$ seconds per update for a batch of 5 million.
The first period of decrease as the batch size increases to $10^4$ or $10^5$ can be explained by parallelism---this is the point at which the parallel sort and the parallel recursion down the tree begin to save significant time. The fact that the time continues to decrease even after the size grows large enough to see the full effects of parallelism can be attributed to the work efficiency of the batch-dynamic updates, as shown in Theorem~\ref{thm: batchupdate}. \subsection{Code availability.} Our implementation is part of the publicly available Problem-Based Benchmark Suite~\cite{shun2012pbbs}. \section{Introduction}\label{sec: intro} Computing nearest neighbors is one of the most fundamental problems in computer science, with applications in diverse areas ranging from graphics~\cite{clarenz2004finite, mitra2004estimating, pauly2003shape, pajarola2005stream} to AI~\cite{javier2012fast} to as far afield as particle physics~\cite{salam2006jet}. Research on nearest neighbors can be roughly divided into two areas: one area focuses on computing approximate nearest neighbors in high dimensions, primarily with clustering as an application. The second focuses on exact (or closer to exact) nearest neighbors in lower dimensions, with tasks such as surface reconstruction~\cite{alexa2001point, fleishman2005robust} as a prominent application. This work focuses on the latter category. The most common method of computing nearest neighbors in low dimensions is via a kd-tree~\cite{bentley1975multidimensional}, a tree which keeps the entire bounding box of the point set at its root, and whose children represent progressively smaller enclosed bounding boxes. Kd-trees have many applications in point-based graphics, and have been the data structure of choice for many graphics practitioners~\cite{pajarola2005stream}, even though other methods have better worst-case guarantees. One of the best kd-tree implementations is that of Arya et al.~\cite{arya1994knn}, which has been used widely by researchers~\cite{clarenz2004finite, mitra2004estimating, pauly2003shape}. Another commonly used library of kd-trees is that of the Computational Geometry Algorithms Library (CGAL)~\cite{tangelder2020spatial}. Recent work on kd-trees has focused on better theoretical guarantees~\cite{ram2019revisiting} and on better performance in high dimensions~\cite{chan2019fast}. Another approach for computing nearest neighbors uses space-filling curves known as the Morton ordering, z-ordering, or Lebesgue ordering (henceforth Morton ordering). Recursing by splitting the Morton ordering roughly splits space, making it possible to effectively search for nearest neighbors. Two nearest neighbor algorithms that make use of the Morton ordering are Chan's minimalist nearest neighbor algorithm~\cite{chan2006minimalist}, and Connor and Kumar's k-nearest neighbor graph algorithm~\cite{connor2010knn}. Other approaches to computing nearest neighbors include well-separated decompositions~\cite{callahan1995decomposition} and Delaunay triangulation~\cite{birn2010simple}. Some important considerations when choosing a k-nearest neighbors algorithm are how it performs (theoretically as well as practically), whether it runs efficiently in parallel (since today's machines have multiple processors), what kinds of point sets it handles, and whether it supports dynamic updates (since in many applications point sets change over time~\cite{singh2021fresh}).
Vaidya~\cite{Vaidya86} and Callahan and Kosaraju~\cite{callahan1995decomposition} give strong bounds for general point sets, computing all nearest neighbors in $O(n \log n)$ time using variants of kd-trees. Chan improved this to $O(n)$ time if the ratio of the largest distance to the smallest is polynomially bounded~\cite{chan2008well}. However, these results are limited to static point sets and have not yet been shown to be practical. Connor and Kumar give bounds under the assumption of bounded expansion constants~\cite{connor2010knn} for a practical algorithm they implement. There has also been significant interest in parallel algorithms for the problem. This includes implementations based on MapReduce~\cite{agarwal2016parallel}, for GPUs~\cite{hu2015massively}, the STANN library~\cite{connor2010knn}, and an implementation in CGAL~\cite{alliez2016cgal}. Although in principle kd-trees should be able to support dynamic updates, we know of no libraries that efficiently support them, and there are few interesting theoretical bounds for the problem in low dimensions. When considering parallelism and updates together, one should be interested in batches of updates that can be processed in parallel. In this paper, we present a technique that combines the ideas of kd-trees and Morton ordering to achieve efficient algorithms for k-nearest neighbors in bounded dimension. Some guiding intuition for such a combination is that Morton-based algorithms tend to have quick preprocessing (since only a sort is required) and slower queries; on the other hand, tree-based algorithms can have slower building times, but their additional structure leads to faster queries. Thus, combining these approaches may allow us to achieve the advantages of both. In particular, we present a k-nearest neighbor algorithm that hybridizes the kd-tree and Morton order approaches by using a kd-tree whose splitting rule is based on the Morton ordering; we call this tree the \textbf{zd-tree}. We also present what is to our knowledge the first parallel batch-dynamic update algorithm for a k-nearest neighbor data structure. We prove the following theoretical results in the context of point sets with bounded expansion constant and bounded ratio, two reasonable and broadly used assumptions when computing nearest neighbors~\cite{karger2002finding,kazana2013enumeration,segoufin2017constant, gago2009bounded,beygelzimer2006cover,connor2010knn,anagnostopoulos2015low,anagnostopoulos2018randomized, chan2008well, arya1994knn}. The first result concerns the work and span required to build the zd-tree: \begin{oneshot}{Theorem~\ref{thm: treebuild}} For a point set $P$ of size $n$ with bounded ratio, the zd-tree can be built using $O(n)$ work with $O(n^{\epsilon})$ span, and the resulting tree has height $O(\log n)$. \end{oneshot} The second result bounds the work for a k-nearest neighbor query on the zd-tree. \begin{oneshot}{Theorem~\ref{thm: querytime}} For a zd-tree representing a point set $P$ of size $n$ with bounded expansion, finding the k-nearest neighbors of a point $p \in P$ requires expected $O(k \log k)$ work. \end{oneshot} These two theorems together imply a linear-work algorithm for finding the k-nearest neighbors among a set of points (i.e. the k-nearest neighbor graph). They also imply that for a point $q \not \in P$, finding the nearest neighbors requires $O(\log n + k \log k)$ work. The third result bounds the work and span for batches of updates.
\begin{oneshot}{Theorem~\ref{thm: batchupdate}} Let $T$ be a pruned zd-tree representing point set $P$, and let $Q$ be a point set of size $k$, such that $|P|+|Q|=n$. Then if $P \cup Q$ and $Q$ both have bounded expansion and bounded ratio in the same hypercube $X$, $Q$ can be inserted into $T$ in $O(k\log(n/k))$ work and $O(k^{\epsilon} + \polylog(n))$ span. \end{oneshot} In addition to the theoretical contributions, we implement both our nearest neighbor searching algorithm and the batch-dynamic updates described above, and we measure our nearest neighbor searching algorithm against a large number of competitors. Our algorithms are optimized for parallelism: in addition to presenting a thread-safe data structure so that queries can be conducted in parallel, we use parallelism when recursively building or updating our kd-tree. A snapshot of our practical results can be found in Figure~\ref{fig: bythreadall}, which compares the work needed to preprocess and query a point set across our implementation and competitors. \input{figure-bythreadall} Our experimental results show the following: \begin{enumerate} \item Our k-nearest neighbor algorithms achieve \textbf{high parallelism.} Using our basic algorithm to query all nearest neighbors of a 3D dataset with 10 million points achieves 75-fold speedup on a 72-core Dell R930 with 144 hyper-threads. \item Our algorithms are \textbf{fast}. Our algorithm's speed is robust across all the measures which we tested---adversarial datasets, varying $k$, varying the size of the dataset, and varying the number of threads. In most cases, it beats its competitors by close to an order of magnitude. \item Our batch-dynamic updates \textbf{drastically decrease the cost per insertion}. An insertion of one point into a tree of $5,000,000$ points takes about $10^{-5}$ seconds, while an insertion of $100,000$ points takes $10^{-7}$ seconds per point. \end{enumerate} \subsection{Preliminaries.}\label{subsec: prelims} For the special case where we are given a point set $P$ and wish to calculate the k-nearest neighbors of all points in $P$, we refer to the result as the \textbf{k-nearest neighbor graph} of $P$. A query of a point not in $P$ is a \textbf{dynamic query}, and a query of a point in the tree is sometimes referred to as a \textbf{non-dynamic query}. Similarly, adding a point to or deleting a point from the tree is referred to as a \textbf{dynamic update}, which is \textbf{batch-dynamic} if the updates are processed in batches rather than one at a time. \paragraph{Kd-trees.} Many nearest neighbor algorithms use a kd-tree as the data structure to query for nearest neighbors. Given a set of points $P$ in a $d$-dimensional bounding box, a kd-tree splits the data into two smaller bounding boxes at every level of the tree. Designing a kd-tree requires making a choice of \textbf{splitting rule}---that is, how the bounding box will be divided. One common splitting rule is to divide the bounding box of points along its largest dimension; another is to divide the space such that equal numbers of points are on each side. A variant of the kd-tree is the \textit{\{quad,oct\}-tree}, where each internal node of the tree has $2^d$ equal-sized children in $d$-dimensional space. \paragraph{Morton ordering.} A common tool used in designing k-nearest neighbor algorithms is the Morton ordering.
For a set of points whose coordinates are $d$-vectors of integers $(x_1, x_2, \ldots, x_d)$, the Morton ordering is calculated by taking each integer coordinate in binary form, interleaving the bits of the coordinates to create one integer per point, and then sorting by the resulting interleaved integers. Nearest neighbor algorithms take advantage of the following property of the Morton ordering: given any two points $p$ and $q$, all points in the rectangle with $p$ and $q$ as corners must fall between $p$ and $q$ in the Morton ordering. This allows pruning regions of the ordering. \paragraph{Bounded expansion constant.} Several previous works for nearest neighbors in metric spaces have assumed a bounded expansion constant~\cite{karger2002finding, kazana2013enumeration, segoufin2017constant, gago2009bounded,beygelzimer2006cover,connor2010knn,anagnostopoulos2015low, anagnostopoulos2018randomized}, which roughly requires that the density of points in the metric space does not change rapidly. In the context of Euclidean space we use the following definition. Given a point $p_i$ and a positive real $r$, let $\bbox(p_i, r)$ denote the box centered at $p_i$ with radius $r$ (that is, half the side length), and let $|\bbox(p_i, r)|$ denote the number of points of $P$ it contains. \begin{definition}\label{def: expansion} Given a point set $P$ contained in a bounded Euclidean space $X$, $P$ has \textbf{expansion constant} $\gamma$ if for all $x \in X$ and all positive real $r$, if $|\bbox(x, r)| = k$ for any $k > 1$ then \begin{align*} |\bbox(x, 2r)| \leq \gamma k. \end{align*} The expansion constant is referred to as \textbf{bounded} if $\gamma = O(1)$. \end{definition} \paragraph{Bounded ratio.} Another property that will be needed to prove some of our theorems is that of the point set having bounded ratio. This is a commonly used property in problems such as nearest neighbors and closest pair~\cite{arya1994knn, chan2008well}. \begin{definition}\label{def: ratio} Given a point set $P$ of size $n$, let $d_{\max}$ denote the maximum distance between any two points in the set, and let $d_{\min}$ denote the minimum distance between any two points in the set. Then $P$ has \textbf{bounded ratio} if \begin{align*} \frac{d_{\max}}{d_{\min}} = \text{poly}(n). \end{align*} \end{definition} \paragraph{Model of computation.} Our results for the parallel algorithms are given in the binary-fork-join model~\cite{BlellochF0020}. In this model, a process can fork two child processes, which work in parallel; when both complete, the parent process continues. Costs are measured in terms of the work (total number of instructions across all processes) and the span or depth (longest dependence path among processes). Any algorithm in the binary forking model with $W$ work and $S$ span can be implemented on a CRCW PRAM with $P$ processors in $O(W/P + S)$ time with high probability~\cite{ABP01,blumofe1999scheduling}, so the results here are also valid on the PRAM, maintaining work efficiency.
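As an illustration of this model, the following is a minimal sketch (our own, not code from any library discussed below) of how a divide-and-conquer tree build maps onto binary fork-join using the \texttt{par\_do} primitive from ParlayLib; the node type and splitting rule here are hypothetical placeholders.
\begin{verbatim}
#include <parlay/parallel.h>  // provides parlay::par_do

// Hypothetical node type; a real zd-tree node would also store its
// bounding box and, at the leaves, the points themselves.
struct Node { Node *left = nullptr, *right = nullptr; };

// Builds a tree over positions [lo, hi). Each call forks two child
// tasks, so the work is linear in the number of positions while the
// span is proportional to the height of the recursion.
Node* build(long lo, long hi) {
  Node* node = new Node();
  if (hi - lo <= 1) return node;   // leaf: nothing left to fork
  long mid = lo + (hi - lo) / 2;   // stand-in for a real splitting rule
  parlay::par_do(                  // binary fork; joins before returning
      [&] { node->left  = build(lo, mid); },
      [&] { node->right = build(mid, hi); });
  return node;
}
\end{verbatim}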
\subsection{Related Work.}\label{subsec: relatedwork} Arya and Mount's k-nearest neighbor implementation~\cite{arya1994knn} is commonly referred to as the state-of-the-art sequential k-nearest neighbor algorithm for low dimensions. Their implementation uses a kd-tree known as a \textit{balanced box decomposition (BBD) tree}, whose splitting rule attempts to get the best of both commonly used splitting rules---that is, splitting a bounding box into approximately equal areas which also have approximately equal numbers of points. They show that using a BBD tree, an approximate k-nearest neighbor query takes $O(\frac{k}{\epsilon} \log n)$ work. The BBD tree theoretically supports insertions, but their given implementation does not. In~\cite{connor2010knn}, Connor and Kumar parallelize Arya and Mount's all-nearest-neighbors implementation and show that their own algorithm produces faster results, so we compare against Connor and Kumar's implementation rather than Arya and Mount's. One nearest neighbor algorithm which uses the Morton ordering instead of a kd-tree is Chan's ``minimalist" nearest neighbors algorithm~\cite{chan2006minimalist}, which has a theoretical guarantee of $O(n \log n)$ expected preprocessing time and $O(\frac{1}{\epsilon} \log n)$ expected time per query for approximate nearest neighbors. The algorithm is notable for both its simple proof and strikingly minimalist implementation, whose sequential version requires fewer than 100 lines of code in C++. Chan's algorithm first randomly shifts the coordinates of each point, then sorts the points using the Morton ordering. The algorithm then uses an implicit tree, recursively dividing the sorted points and visiting every implicit vertex which is within some radius of the query point. An adversarial case for this algorithm is when some query point $q$ is in the right half of the sorted data and its nearest neighbor $p$ is in the left half, causing the algorithm to search a large number of vertices. The random shift helps avoid this case in expectation. The k-nearest neighbor implementation that most closely matches ours---in that it is tailored for parallelism and for exact nearest neighbor searching in low dimensions---is Connor and Kumar's STANN (Simple Threaded Approximate Nearest Neighbor) library~\cite{connor2010knn}. Their algorithm makes several improvements on Chan's algorithm, especially for the case of computing the k-nearest neighbor graph. Their main improvement is to search from the leaf of the implicit tree rather than the root, which allows for the possibility of searching only $O(k)$ implicit nodes instead of at least $O(\log n)$ (the latter being a best-case scenario where the tree is perfectly balanced). Indeed, they show that if the input point set has bounded expansion constant, building their data structure uses $O(n \log n)$ work and their nearest neighbor queries use expected $O(k \log k)$ work. Their algorithm only works for static point sets and, as our experiments show, is not as fast as ours. Another well-known tree used for computing nearest neighbors is Callahan and Kosaraju's well-separated pair decomposition~\cite{callahan1995decomposition}. For $n$ points, they can build their tree (similar to a kd-tree) in $O(n \log n)$ work and polylogarithmic span (in parallel). Based on the tree, they can build the decomposition and find nearest neighbors in $O(n)$ work and polylogarithmic span. The approach is only described for the static case. Another approach is to use the Delaunay triangulation of the set of points~\cite{birn2010simple}. Although this seems to work reasonably well in two dimensions, in three and higher dimensions it can be very expensive. Beyond a bounded expansion constant, another common geometric assumption used for finding nearest neighbors or the closest pair is a bound on the ratio of the furthest pair to the closest pair in a dataset~\cite{arya1994knn, clarkson1994algorithm, clarkson1999nearest, erickson2003nice,chan2008well}. \section*{Acknowledgments} We thank the anonymous referees for their comments and suggestions.
This research was supported by NSF grants CCF-1901381, CCF-1910030, and CCF-1919223, and the NSF GRFP. \small \bibliographystyle{abbrv} \section{Implementation Details}\label{sec: other_impls} In this section we give more details on the practical implementation of both our algorithms and the other algorithms we use to benchmark our code. \subsection{Our Algorithms.}\label{subsec: our_algs} We implemented our algorithms in C++ using the parallel primitives from ParlayLib~\cite{parlay20}. Our search implementation closely matches the algorithms shown in Section~\ref{sec: algo}, so here we focus mostly on implementation details and other optimizations. \paragraph{Numerical details.} We work with double-precision floats, which we round to 64-bit integers for building the tree and computing the Morton ordering. \paragraph{Miscellaneous optimizations.} We mention a few optimizations that made significant differences in our runtime. Whenever possible, we used squared distances instead of Euclidean distances in our computations, which made our code about 10\% faster. To store the current k-nearest neighbors when traversing the tree, we use a vector for small $k$ and the C++ STL priority queue for larger $k$. The overhead of the vector is significantly smaller for small $k$, but its linear (instead of logarithmic) cost dominates for $k > 40$ or so. A third optimization was to sort the sequence of queries by their Morton ordering so that nearby queries in this order access nearby nodes in the tree, thus reducing cache misses. The savings from reducing cache misses more than compensates for the cost of the sort, in some cases decreasing runtime by a factor of two. This is useful even when querying in parallel, since the parallel scheduler processes chunks of the iteration space on the same core. \subsection{Other Implementations.}\label{subsec: other_impls} For the purpose of comparison we use three existing implementations of nearest neighbor search: CGAL~\cite{alliez2016cgal}, STANN~\cite{connor2010knn}, and Chan~\cite{chan2006minimalist}. Here we describe some performance issues with their code, and some modifications we made to improve its performance and ensure a fair comparison. An extended version of this discussion can be found in the full version of this paper. \paragraph{Chan.} Chan's code was fully sequential, so we needed to parallelize it. Conceptually this is relatively straightforward, since the algorithm just requires using a parallel sort instead of a sequential one and then running the queries in parallel. Chan's code only searches from the root of his implicit tree. We note that the root-based implementation of our code is significantly faster than Chan's. \paragraph{STANN.} STANN includes both a k-nearest neighbor graph (KNNG) function and a k-nearest neighbor (KNN) function. The first finds the $k$ nearest neighbors among a set of points; the second supplies a function to build a tree and a separate function to query a point for its $k$ nearest neighbors. They supply a parallel version of KNNG, parallelized with OpenMP, and only a sequential version of KNN. Their algorithm did not scale well beyond 16 threads, since it left some components sequential. We therefore updated their code to use the parallel primitives and built-in functions from ParlayLib~\cite{parlay20}; this drastically improved their performance. \paragraph{CGAL.} CGAL implements a parallel version of their k-nearest neighbor code using the threading building blocks (TBB)~\cite{TBB}.
We use their code directly with no modifications. We note that their code does not scale well past 16 or so threads. Furthermore, although the code appears to be thread safe, there seems to be contention when there are many threads, thereby slowing them all down. Due to the particularly bad performance beyond 36 threads (which are all on one chip), we only report numbers up to 36 threads. Furthermore, since we observed wildly varying times with higher $k$, we only included times for $k<10$ in our experiments. \section{Preliminaries}\label{sec: prelims} Here we provide a few preliminaries.
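As one such preliminary, the following is a minimal sketch of the standard bit-interleaving computation behind the Morton ordering used throughout. It is our own illustration rather than code taken from any of the implementations discussed above, and it assumes the coordinates have already been rounded to non-negative 21-bit integers (in our implementation, double-precision coordinates are first rounded to integers, as noted in Section~\ref{subsec: our_algs}).
\begin{verbatim}
#include <cstdint>

// Spreads the low 21 bits of x three positions apart, leaving room
// for the bits of two other coordinates (the usual magic-mask trick).
static uint64_t spread3(uint64_t x) {
  x &= 0x1FFFFF;  // keep 21 bits
  x = (x | (x << 32)) & 0x1F00000000FFFFULL;
  x = (x | (x << 16)) & 0x1F0000FF0000FFULL;
  x = (x | (x << 8))  & 0x100F00F00F00F00FULL;
  x = (x | (x << 4))  & 0x10C30C30C30C30C3ULL;
  x = (x | (x << 2))  & 0x1249249249249249ULL;
  return x;
}

// Morton (z-order) key of a 3D point: interleave the bits of x, y, z.
// Sorting points by this key produces the Morton ordering.
uint64_t morton3(uint32_t x, uint32_t y, uint32_t z) {
  return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
}
\end{verbatim}
Sorting points by \texttt{morton3} of their coordinates yields the Morton ordering; the pruning property stated in Section~\ref{subsec: prelims} (all points in the rectangle spanned by $p$ and $q$ fall between $p$ and $q$ in this order) is what makes the sorted sequence searchable.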
\section{Introduction} \label{sec_intro} Uniaxial extension is the dominant flow in many industrial processes and, therefore, accurate measurements of the extensional rheological properties of polymer melts are very important. From a more fundamental point of view, reliable measurements of the rheological properties in extension are crucial for validating existing theoretical models and suggesting new approaches. In spite of the universally recognized need for reliable elongational measurements of polymer melts, the development of extensional rheometric equipment has progressed slowly during the past three decades. A reliable design of an extensional rheometer has met several practical difficulties, and one of the toughest is to generate a homogeneous extensional flow. Several techniques to measure the elongational properties of polymer melts have been proposed: the Rheometrics Melt Extensiometer (RME) by Meissner \cite{meissner1}, the supporting oil bath design by M\"{u}nstedt \cite{m4}, and the SER by Sentmanat \cite{sentamat1, sentamat2}. A comprehensive review of these different approaches to extensional rheology of polymer melts is beyond the scope of this investigation and can be found in \cite{schweizer} and, more recently, in \cite{handbook}. We note that for each of these approaches the homogeneity of deformation states is crucial for reliably assessing the elongational properties of the material. Though previously recognized by most experimentalists, it is our belief that this issue did not receive the proper attention. Only very recently has the true danger of sample non-uniformity during elongation been made explicit, by measuring both the stresses and the strain locally and showing that in the case of strongly non-uniform samples the classical extensional measurements become completely unreliable, \cite{localelongational1}. In \cite{localelongational1} it has been demonstrated, by combining traditional integral viscosity measurements with local viscosity measurements based on in-situ measurements of the sample diameter, that geometric non-uniformities of the sample under elongation typically result in completely unreliable viscosity data. Moreover, it has been shown that even initially homogeneous (perfectly cylindrical) samples lose their uniformity at high enough Hencky strains; thus, the impact of sample non-homogeneity on the viscosity measurements is always an issue to worry about during extensional tests. Also very recently, a full numerical simulation of the extension process in a SER rheometer demonstrated that the loss of sample homogeneity during deformation leads to a strong strain localization along the sample, which ultimately translates into unreliable measurements of the transient elongational viscosity \cite{hassager2009, hassager2009PRL}. Here a more elaborate version of the method proposed in \cite{localelongational1} is employed to investigate a long-standing problem in the extensional rheology of polymer melts: the viscosity (or tensile stress) overshoot observed during the uniaxial extension of some polymer melts at a constant rate of deformation. Since the early days of extensional rheometry it has been observed that some strain hardening materials under uniaxial extension display a clear maximum in the transient extensional viscosity right before (typically within less than a Hencky strain unit) the physical rupture of the sample \cite{m2, meissnerovershoot, meissner1, nielsen1}.
There is only one experimental paper we are aware of, \cite{rasmussen}, which also presents an extended plateau (over several Hencky strain units prior to the physical rupture of the sample) after the stress maximum. However, the authors of this study present no experimental evidence on the homogeneity of the sample during the elongation process. This local maximum in the tensile stress, followed or not by a plateau, has been coined \textit{``viscosity overshoot''}. The existence of a true viscosity overshoot is important from both a practical and a fundamental point of view. In many processing and industrial settings it is important to know whether a true steady state behaviour can be reached under extension at a constant rate and, if not, to understand how this fact influences the physical rupture of the material. From a theoretical point of view, in our opinion, this phenomenon is not yet fully understood. The POM-POM model for branched polymer melts \cite{mcleish} and the molecular stress function (MSF) model \cite{wagner4} predict a monotone increase of the transient extensional viscosity. Other theoretical works, however, are able to predict an overshoot in viscosity, \cite{wagnerovershoot, wagner3}. Even more worrying, recent theoretical models seem to be able to fit both a maximum and a steady state of the transient extensional viscosity \cite{wagner3}. This simply means that the phenomenology behind the stress maximum/overshoot remains elusive. The implications of the stress overshoot phenomenon during uniaxial extension are, in our opinion, even more important. Based on the observation of the stress overshoot phenomenon in both uniaxial extension of polymer melts \cite{wang1, wang3} and startup shear of entangled polymer solutions \cite{wang4}, a universality claim, \textit{``Entangled liquids are solids''}, has very recently been formulated \cite{wang2}. Though we do understand how important and appealing a universal behaviour is, and we do accept that some similarities between polymer melts under elongation and entangled solutions in startup shear may exist, we believe that the claim above should still be considered very cautiously, at least for the reasons below: \begin{enumerate} \item The term ``overshoot'' implies, to our best understanding, a local maximum followed by a plateau corresponding to lower stress values. Whereas such a plateau has been observed for entangled solutions \cite{wang4}, the data concerning polymer melts presented in \cite{wang1, wang3} display only a maximum but no plateau: the sample breaks just after the maximum. A true overshoot behaviour (a maximum followed by a plateau) has been observed in \cite{rasmussen}, but at much higher Hencky strains. \item The stress overshoot during the uniaxial extension of entangled polymer melts reported in \cite{wang1, wang3} refers to the so-called ``engineering stress'', which is not a real stress but just the tensile force normalized by a constant (the initial cross-sectional area of the sample under investigation). The (physical) true stress, which is calculated by dividing the tensile force by the actual cross section of the sample (so that, for a uniform incompressible sample, $\sigma_{true}=\sigma_{eng}\,e^{\epsilon_H}$), does not always exhibit an overshoot behaviour, and when it does (e.g. for strain hardening materials at high enough rates of deformation) this occurs at significantly larger Hencky strains.
\end{enumerate} In view of the remarks above, we believe that before an analogy is drawn between the stress maximum (and only rarely a true stress overshoot, \cite{rasmussen}) observed for polymer melts under extension and the stress overshoot observed for entangled polymer solutions in startup shear, a deeper understanding of each phenomenon is needed. As the elongational viscosity overshoot phenomenon has been observed at high Hencky strains, prior to the physical rupture of the sample, a proper understanding of this phenomenon might also shed light on different mechanisms of failure during elongation, which remains an elusive goal \cite{denn2003, denn2004}. \section{Description of the experiments} \label{sec_experimental} \subsection{Experimental apparatus and techniques} \label{subsec_apparatus} The experiments have been conducted with a M\"{u}nstedt type extensional rheometer built in-house, which is illustrated in Fig. \ref{f1}(a). A detailed description of this device can be found elsewhere, \cite{m1}. The specimen \textbf{S} under investigation is clamped between the plates $\mathbf{P_{1}}$ and $\mathbf{P_{2}}$ of the rheometer and immersed in a silicone oil bath \textbf{C} to minimize gravity and buoyancy effects, Fig. \ref{f1}(a). \begin{figure*} \begin{center} \centering \includegraphics[width=12cm]{f1.eps} \caption{(a) Schematic view of the experimental apparatus: \textbf{C}- oil bath, $\mathbf{P_{1}}$ and $\mathbf{P_{2}}$ - top and bottom plates of the rheometer, \textbf{S}- the sample under investigation, \textbf{M}- AC servo motor, \textbf{D}- the control drive of the rheometer, $\mathbf{PC_{1,2}}$ - personal computers, \textbf{TL}- telecentric lens, \textbf{CCD}- video camera. (b) Sample illumination and imaging: $\mathbf{LS_{1}}$ and $\mathbf{LS_{2}}$- linear light sources, \textbf{S}- the sample under investigation. (c) Example of a telecentric sample image corresponding to $\epsilon_{H}=2.7$. The field of view was actually larger but the image has been cropped for clarity reasons. (d) Principle of the local measurements of the extensional viscosity. The vertical dotted lines represent the contour of an ideal uniform sample.} \label{f1} \end{center} \end{figure*} While the bottom plate $\bf{P_2}$ is stationary, the top plate $\bf{P_1}$ is moved vertically by an AC servo motor \textbf{M}, controlled by an analogue-to-digital converter installed on the computer $\bf{PC_1}$. The sample is illuminated from behind by two linear light sources $\bf{LS_{1}}$ and $\bf{LS_{2}}$ arranged as shown in the schematic top view presented in Fig. \ref{f1}(b). The idea behind the back-light illumination arrangement is to obtain a maximum of brightness only on the edges of the sample, and thus to allow accurate identification of the sample edges and a reliable measurement of its diameter. A major difficulty in imaging a considerably elongated sample comes from the high aspect ratio (height to width) of the corresponding field of view, which during extensional experiments at large Hencky strains may be as large as $1:50$. If a regular entocentric lens (with the entrance pupil located inside the lens) is used, both the resolution and the level of geometrical distortion are unsatisfactory for high accuracy measurements of the sample diameter. Additionally, corresponding to large Hencky strains, both the frame brightness and the degree of focusing become uneven across the field of view if the sample is imaged in divergent light.
To circumvent these problems, we use in our study a high resolution telecentric lens with the entrance pupil located at infinity (VisionMes 225/11/0.1, Carl Zeiss), which images the sample in parallel light and delivers frames with very uniform brightness, free of distortions (geometrical aberrations), perspective errors and edge position uncertainties. A typical image of the sample is presented in Fig. \ref{f1}(c). Images of the sample under elongation are acquired in real time using a high resolution (3000 by 1400 pixels full frame, which translates into roughly $60~\mu m$ spatial resolution) low noise camera (Pixelink, from Edmund Optics) at a speed of $3$ frames per second. The video camera is installed on a second computer, $\bf{PC_2}$. The image acquisition is digitally synchronized with the rheometer via a transistor-transistor logic ($TTL$) trigger signal sent by the rheometer drive \textbf{D} to the camera. \subsection{Materials and their rheological properties} \label{subsec_materials} The material used in this study is a low-density polyethylene from LyondellBasell with the trade name Lupolen 1840 D. Several molecular and rheological characteristics of the material are summarized in Table \ref{tabel_rheo}. LDPE 1840 D has a branched molecular structure which has been systematically characterized by Nordmaier and co-workers \cite{nordmaier1, nordmaier2}. It has a broad molar mass distribution with a rather large molar mass, $M_w$, and a pronounced high molar mass tail. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\mathbf{M_w~(kg/mol)}$ & $\mathbf{M_w/M_n}$ & $\mathbf{\eta_0~(Pa\,s)}$ & $\mathbf{J_e^0~(10^{-4}Pa^{-1})}$ & $\mathbf{\lambda~(s)}$ \\ \hline \hline 377 & 18 & 833000 & 13.5 & 1100 \\ \hline \end{tabular} \end{center} \caption{Molecular and rheological characteristics of Lupolen 1840 D at $T=140 ^\circ C$.}\label{tabel_rheo} \end{table} The influence of the broadly distributed molar mass and the branched molecular structure of the material on its rheological properties in shear has recently been investigated experimentally \cite{resch1}. Due to the broadly distributed molar mass and the degree of chain branching, the maximum relaxation time of the material, $\lambda$, is quite high. It is calculated as $\lambda=J_e^0 \cdot \eta_0$, where $J_e^0$ and $\eta_0$ are the steady-state recoverable compliance and the zero shear viscosity, respectively. \subsection{Preparation of the samples}\label{subsec_preparation} For elongational measurements in the M\"{u}nstedt rheometer, cylindrical specimens were used. First, a strand was extruded through a capillary at $190~^\circ C$ using a piston extrusion machine. The diameter of the die was $D=4.6~mm$, its length $L=18.4~mm$, and the apparent shear stress applied was $43.7~kPa$. These extrusion conditions give rise to a strand with a diameter of about $8~mm$ after annealing. The strand was extruded into a vessel containing an ethanol-water mixture ($90/10~vol.~\%$) in order to ensure a homogeneous strand diameter along the axis of extrusion. This procedure leads to specimens with a relative deviation of the diameter smaller than $2\%$. After extrusion the strand was annealed in a silicone oil bath at $150~^\circ C$ for $20~min$. This step ensures a complete relaxation of the thermal stresses accumulated within the sample, which is necessary in order to suppress any stress history effects and obtain reliable and accurate rheological data.
The initial diameter and length of each sample were $D_0=8~mm$ and $L_0=5~mm$, respectively. The specimen's surface was etched by air plasma at room temperature in order to increase its surface energy. The samples were glued to aluminium clamps using a two-component epoxy resin adhesive, Technicoll $8266/67$. These clamps serve to fix the specimen to the pulling rod and the force transducer of the rheometer. Finally, the specimens were kept in an oven at $80~^\circ C$ for $2$ hours for a complete curing of the glue. \subsection{Data analysis} \label{subsec_anlysis} The first step of our data analysis procedure was to interpolate both the image sequence and the data acquired by the M\"{u}nstedt rheometer onto a common time axis, so that a direct comparison between the integral viscosity measurements and the shape of the sample under deformation can be made. Prior to analysis, each image has been compensated for non-uniform brightness using a standard adaptive histogram equalization algorithm implemented under Matlab. By identifying the edges of the sample in each image, the distribution of diameters along the actual length of the sample is measured. This allows the calculation of the stress distribution along the sample. The true tensile stress corresponding to each deformation state is defined by the mean of the stress distribution, and the error bars are defined by the root mean square (rms) deviation of the stress distribution. \section{Results} \subsection{On the relation between a maximum in the tensile stress and sample uniformity}\label{subsec_analytical_condition} Previous theoretical (\cite{wagnerovershoot, wagner3}) and experimental (\cite{rasmussen}) studies aimed high by attempting to explain a stress maximum in terms of the molecular scale dynamics during extension, but rarely questioned the reliability of the existing extensional data (particularly in relation to the sample homogeneity during extension). We would like to address in the following a more modest question which is decisive for a fundamental understanding of the elongational behaviour of polymer melts and its theoretical description: \textit{Is a maximum of the transient tensile stress compatible with a uniform deformation process?} By a uniform deformation we understand a uniaxial deformation at a constant rate, $\dot{\epsilon}$, for which the diameter of the sample, $D(t)$, is constant along the actual length of the sample $L(t)$ (see Fig. \ref{f1}(d)) and, based on the incompressibility condition, is given by $D_u(t)=D_0~e^{-\dot{\epsilon}t/2}$. The transient stress $\sigma(t)$ during the extension of a real sample with a coordinate-dependent diameter $D(z,t)$, Fig. \ref{f1}(d), can be written as an average of the local stresses along the direction of extension, $z$: \begin{equation} \sigma(t)=\frac{4 F(t)}{\pi L(t)} \int_0^{L(t)} {\frac{dz}{D^2(z,t)}}\label{eq_stressaverage} \end{equation} Here $F(t)$ stands for the tensile force, which is coordinate-independent.
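For completeness, we make explicit the manipulation used in the next step. Differentiating Eq. \ref{eq_stressaverage} with respect to time (the product rule for the $1/L(t)$ prefactor and Leibniz's rule for the integral with its moving upper limit) gives, suppressing the time arguments for brevity,
\[ \frac{d\sigma}{dt}=\frac{4}{\pi}\left[-\frac{1}{L^2}\frac{dL}{dt}\int_0^{L}\frac{F}{D^2(z,t)}\,dz+\frac{1}{L}\frac{F}{D^2(L,t)}\frac{dL}{dt}+\frac{1}{L}\int_0^{L}\frac{\partial}{\partial t}\left(\frac{F}{D^2(z,t)}\right)dz\right]. \]
Setting this derivative to zero and using $\frac{1}{L}\int_0^{L}\frac{F}{D^2(z,t)}\,dz=\frac{\pi\sigma}{4}$ yields the uniformity condition derived next.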
By a simple algebraic manipulation using Leibniz's theorem, one can show that, corresponding to a stress maximum ($\frac{d\sigma (t)}{dt}=0$), the following condition should be fulfilled: \begin{equation} \left\lbrace \frac{\pi \sigma(t)}{4} - \frac{F(t)}{D^2[L(t),t]} \right\rbrace \frac{dL(t)}{dt} =\int_0^{L(t)} \frac{\partial}{\partial t} \left( \frac{F(t)}{D^2[z,t]} \right)dz\label{eq_condition} \end{equation} One can notice that the curly bracket on the left hand side of the equation above is nothing but a measure of the sample uniformity. Indeed, if one imposes the condition that the sample preserves its cylindrical shape at all times ($D[z,t]=D[0,t]=D[L(t),t]=D_u(t), \forall z\in [0, L(t)]$), the left hand side of the equation above vanishes and the stress maximum condition reduces to $\frac{d}{dt} \left( \frac{F(t)}{D_u^2(t)} \right)=0$. Thus, if the sample is assumed to deform uniformly (the curly bracket on the left hand side of Eq. \ref{eq_condition} vanishes), then corresponding to a stress maximum the tensile force should scale exponentially, $F(t) \propto D_u^2(t) \propto e^{-\dot{\epsilon}t}$. However, we point out that the tensile force may scale nearly exponentially at very small rates of deformation (note that $dL(t)/dt \propto \dot{\epsilon}$) even in the case that the deformation process is non-uniform (the curly bracket in Eq. \ref{eq_condition} is non-zero). To conclude, if the deformation process is assumed to be homogeneous and a local maximum in stress is attained, then, in the neighbourhood of the stress maximum, the tensile force should scale as an exponentially decaying function with a rate equal to the rate of elongation, $\dot{\epsilon}$. The validity of this analytical condition for a maximum in stress is discussed in the next subsection, \ref{subsec_extensionregimes}. \subsection{The transient tensile force and tensile stress in different regimes of extension}\label{subsec_extensionregimes} Integral measurements of the transient elongational viscosity are presented in Fig. \ref{moreetas}. The integral viscosity is obtained by measuring the transient tensile force, $F(t)$, under the assumption that the diameter of the sample is independent of the vertical coordinate, $D(z,t)=D_u(t)$. Each data set has been acquired until the physical rupture of the sample occurred. Except for the linear range of deformation, $\epsilon_H <1$, the shape of the transient elongational viscosity curve depends considerably on the (constant) rate at which the material is deformed, $\dot{\epsilon}$. Thus, depending on the rate of deformation, the integral transient viscosity may display either a clear maximum (curves 1-4, Fig. \ref{moreetas}) or a monotonic increase (curve 5, Fig. \ref{moreetas}). As clearly suggested by Eq. \ref{eq_condition} and the discussion presented in Sec. \ref{subsec_analytical_condition}, in order to understand the physical reasons underlying the viscosity maximum visible for curves (1-4), one has to focus not only on the tensile stress but on the tensile force as well. \begin{figure*}[h] \begin{center} \centering \includegraphics[width=10cm]{f2.eps} \caption{Transient elongational viscosities at various rates of deformation: (1)- $\dot{\epsilon}= 0.002~s^{-1}$, (2)- $\dot{\epsilon}=0.015~s^{-1}$, (3)- $\dot{\epsilon}=0.02~s^{-1}$, (4)- $\dot{\epsilon}= 0.025~s^{-1}$, (5)- $\dot{\epsilon}=0.09~s^{-1}$. Each data set has been acquired until the physical rupture of the sample occurred.} \label{moreetas} \end{center} \end{figure*} In Fig.
\ref{forces}, the transient tensile forces and stresses measured for three different values of the Weissenberg number, $Wi$, are presented. The Weissenberg number is defined as $Wi=\dot \epsilon \cdot \lambda$. Corresponding to $Wi=1.1$, the tensile stress displays a broad maximum, Fig. \ref{forces}(a). In order to connect the emergence of the stress maximum with the discussion presented in Sec. \ref{subsec_analytical_condition}, we need to discuss the homogeneity of the sample around the stress maximum. The force maximum visible in Fig. \ref{forces}(a) (which corresponds to a shoulder in the transient tensile stress) represents the onset of a primary non-uniformity of the specimen, as predicted by the Consid\`{e}re criterion, \cite{considere}. Such a geometric non-uniformity of the sample (initially localized near the plates $\mathbf{P_{1,2}}$ of the rheometer) occurs in most extensional experiments and is related to the rigid boundary conditions near the clamping points of the sample under investigation. Thus, the emergence of this effect depends little on the molecular structure of the material: it can be observed for rubbers, for linear polymer melts, and even during cold drawing experiments (Ref. \cite{strobl} and the references therein). With increasing time the tensile stress reaches a maximum around $\epsilon_H \approx 3$. It is interesting to note that in the neighbourhood of the stress maximum the tensile force scales exponentially, in agreement with the derivation presented in Sec. \ref{subsec_analytical_condition}. This fact deserves a brief discussion. As shown in Sec. \ref{subsec_analytical_condition}, a nearly exponential scaling of the tensile force around the stress maximum can be found either when the sample deforms uniformly or when the rates of deformation are very small. Both these situations make the left hand side of Eq. \ref{eq_condition} very small, allowing one to solve it for a nearly exponential tensile force. Corresponding to the Hencky strain where the local stress maximum is observed, the deformation of the sample is not homogeneous, as illustrated in the inset of Fig. \ref{forces}(a). Therefore, the nearly exponential scaling of the tensile force is due solely to the smallness of the deformation rate, $\dot \epsilon =0.001~s^{-1}$. This broad stress maximum observed at very low $Wi$ should not be confused with the viscosity overshoot phenomenon, which was observed in a faster regime of stretching where significant strain hardening effects were present, \cite{wagnerovershoot, rasmussen}. \begin{figure*} \begin{center} \centering \includegraphics[width=14cm]{f3.eps} \caption{Transient tensile forces and stresses corresponding to different regimes of extension: (a) $\dot{\epsilon} =0.001~s^{-1}$ ($Wi=1.1$), (b) $\dot{\epsilon} =0.05~s^{-1}$ ($Wi=55$), (c) $\dot{\epsilon} =0.3~s^{-1}$ ($Wi=330$). The inset in panel (a) displays the image of the sample corresponding to the stress maximum.} \label{forces} \end{center} \end{figure*} A stress maximum is clearly observed at the larger rate of deformation $\dot{\epsilon} =0.05~s^{-1}$ ($Wi=55$), Fig. \ref{forces}(b). We note that we do not observe a true stress overshoot, in the sense that the local stress maximum is not followed by a plateau. Based on the derivation presented in Sec. \ref{subsec_analytical_condition}, one can easily conclude that, corresponding to the local maximum of the tensile stress, the deformation is inhomogeneous. Indeed, as shown in Sec.
\ref{subsec_analytical_condition}, if a homogeneous deformation is assumed then, corresponding to the stress maximum, the tensile force should decay exponentially. This is clearly not the case for the data presented in Fig. \ref{forces}(b). Corresponding to $\dot{\epsilon} =0.3~s^{-1}$ ($Wi=330$), a local maximum of the tensile stress is no longer observed: the sample breaks before the tensile stress reaches either a maximum or a steady state. In order to get a more complete picture of how the shape of the transient tensile force/stress is influenced by the forcing conditions, and to identify the deformation regime where a stress maximum is observed, measurements similar to those presented in Fig. \ref{forces}(a-c) were performed in a wide range of Weissenberg numbers, spanning nearly three decades. \begin{figure*} \begin{center} \centering \includegraphics[width=10cm]{f4.eps} \caption{Dependence of the Hencky strain corresponding to the stress maximum (squares) and to the physical rupture of the sample (circles) on the Weissenberg number, $Wi$. The vertical dotted lines delineate the extension regimes (I), (II) and (III).} \label{phasediagram} \end{center} \end{figure*} The results of these measurements are summarized in Fig. \ref{phasediagram}, which presents the $Wi$ dependence of the Hencky strains corresponding to a maximum in the tensile stress (the squares) and to the physical rupture of the sample (the circles). The Hencky strains presented in Fig. \ref{phasediagram} have been identified using integral measurements of the transient tensile force and stress, similar to those presented in Fig. \ref{forces}. Based on a careful inspection of the dependencies $F=F(t)$ and $\sigma=\sigma(t)$, three different regimes of extension can be distinguished, Fig. \ref{phasediagram}. For $Wi \leq 10$ the Hencky strains corresponding to a stress maximum and to the physical rupture of the sample are practically independent of $Wi$. In the deformation regime (I), the maximum of the tensile stress and the physical rupture of the sample each occur at a nearly constant Hencky strain, and they are separated by roughly $0.8$ Hencky strain units. We once more emphasize that the broad stress maximum observed within this regime can be explained as a result of an inhomogeneous deformation process at small deformation rates and is not necessarily related to the molecular structure of the material \footnote{Such a broad stress maximum has been observed at the \textit{Institute of Polymer Materials} for polystyrene melts and for several polymer blends as well, during elongational experiments at low rates of deformation.}. As the rate of deformation is increased ($Wi>10$), a second deformation regime is observed. The Hencky strains corresponding to a stress maximum and to the physical rupture of the sample depend significantly on the Weissenberg number, and they get progressively closer to each other as $Wi$ is increased. This finding suggests that the emergence of a stress maximum and the physical rupture of the sample are interconnected phenomena. An argument supporting this hypothesis is that, corresponding to the lower bound of the second deformation regime (II), the time scale of the flow (estimated here as $\tau_f=1/\dot{\epsilon} \approx 100~s$) is significantly smaller than the largest relaxation time of the material, $\lambda$, given in Table \ref{tabel_rheo} as $1100~s$.
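For concreteness, at the lower bound of regime (II) one has $Wi \approx 10$, so that
\[ \dot{\epsilon}=\frac{Wi}{\lambda}\approx\frac{10}{1100~s}\approx 9\times 10^{-3}~s^{-1}, \qquad \tau_f=\frac{1}{\dot{\epsilon}}\approx 110~s \ll \lambda=1100~s. \]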
Thus, the deformation state corresponding to a maximum in the tensile stress is likely to be "remembered" until the physical rupture of the sample occurs. Within the deformation regime (II), the transient elongational viscosity displays a local maximum which narrows as the rate of deformation increases. Ultimately, if the Weissenberg number is increased even further, $Wi>150$, a third deformation regime, (III), is observed. Within this deformation regime, a local maximum of the tensile stress is no longer observed, and the Hencky strain corresponding to the physical rupture of the sample becomes practically independent of $Wi$. A more detailed characterization of the deformation regimes (I-III) in connection with different failure mechanisms will be presented elsewhere. This paper is mostly dedicated to the deformation regime (II), with a particular focus on the maximum of the transient tensile stress or extensional viscosity, respectively. \subsection{Integral versus local measurements of the tensile stress} Integral stress measurements rely heavily on the homogeneity of the deformation because they use the assumption that the diameter of the sample is coordinate-independent, $D(z,t)=D_u(t)$. As opposed to this, the local stress measurements using Eq. \ref{eq_stressaverage} can properly account for the deviations from a homogeneous deformation process. In the following a direct comparison between integral and local measurements of the tensile stress is presented. \begin{figure*} \begin{center} \centering \includegraphics[width=7cm]{f5.eps} \caption{Comparison between the integral (full lines) and the locally measured elongational viscosity in each of the deformation regimes presented in Fig. \ref{phasediagram}: squares - $Wi= 2.2$ (regime I), circles - $Wi=16.5$ (regime II), triangles - $Wi=300$ (regime III). The error bars are defined by the root mean square deviation of the local stresses along the sample. The inset presents a magnified view of the data acquired in regime II. The vertical arrows indicate the physical rupture of the sample.} \label{eta} \end{center} \end{figure*} Results of such a comparison corresponding to each of the deformation regimes previously discussed are presented in Fig. \ref{eta}. One can note that, regardless of the value of $Wi$, the integral transient viscosity lies systematically above the locally measured one, even in the linear range of deformation, $\epsilon_{H} < 1$. It has recently been shown that this systematic overestimation of the transient extensional viscosity in the linear range is related to the large retardation times of LDPE 1840 D, \cite{localelongational2}. In the nonlinear range, the differences between integral and local measurements of the transient elongational viscosity seem to increase with the rate of deformation. Corresponding to the second deformation regime (II) ($Wi=16.5$), where the integral transient elongational viscosity displays a maximum, the locally measured elongational viscosity has no maximum but seems to reach a plateau, as clearly visible in the inset of Fig. \ref{eta}. This result has been systematically reproduced over the entire second regime of deformation (data not shown here), suggesting that the maximum of the transient elongational viscosity might not be a true rheological feature of the material but merely an artefact related to the experimental procedure. In the range of high $Wi$ (regime III), neither a maximum nor a plateau of the transient elongational viscosity is observed.
Within this regime, the integral and the locally measured viscosities agree qualitatively, though they are quantitatively different. \subsection{Geometric non-uniformity of the sample and its relation with the stress maximum} In Fig. \ref{frames} we display images of the sample corresponding to each of the deformation regimes presented in Fig. \ref{phasediagram}, at several Hencky strains. The images corresponding to the highest Hencky strains (the last column in Fig. \ref{frames}) are the last images acquired prior to the physical rupture of the sample. The images presented in Fig. \ref{frames} have been rescaled in order to enhance the clarity of the presentation. However, this does not alter the main message concerning the geometric uniformity of the sample. \begin{figure*} \begin{center} \centering \includegraphics[width=15cm,height=11cm]{f6.eps} \caption{Sequence of specimen images under deformation at different $Wi$. The image rows (from top to bottom) correspond to: $Wi= 2.2$ (regime I), $Wi=16.5$ (regime II), $Wi=99$ (regime III). The aspect ratio of each image has been modified in order to enhance the clarity. The dotted squares indicate the location of the necks. The Hencky strains are indicated at the top of each image.} \label{frames} \end{center} \end{figure*} Within the first regime of deformation (I) (first row from the top in Fig. \ref{frames}), the shape of the sample deviates strongly from a cylindrical one. The onset of these geometric non-uniformities (the primary neck extended over the entire length of the sample) occurs at low Hencky strains ($\epsilon_{H} \leq 1$) and, according to the Consid\`{e}re criterion, \cite{considere}, is related to the local maximum in the tensile force observed in Fig. \ref{forces}(a). In a range of high Hencky strains ($\epsilon_H \approx 3.5$) prior to the physical rupture of the sample, secondary necks develop in the proximity of the midpoint of the sample, as visible in row (I), panels (e-g) of Fig. \ref{frames}. The exact location of these secondary necks is not reproducible in subsequent experiments. In regime (II) of deformation (second row from the top in Fig. \ref{frames}), the geometric inhomogeneity of the sample becomes even more pronounced than in regime (I): above the onset of the primary necking, the diameter of the sample is non-constant over the entire length of the sample. Just after a local maximum in the viscosity is observed at $\epsilon_{H} \approx 3.3$, a secondary neck emerges slightly below the center point of the sample, second row, panel (a), Fig. \ref{frames}. A magnified view of these necks is presented in Fig. \ref{necks}. \begin{figure*} \begin{center} \centering \includegraphics[width=16cm]{f7.eps} \caption{Magnified views of the necks highlighted in Fig. \ref{frames} corresponding to regime II (second row).} \label{necks} \end{center} \end{figure*} As the Hencky strain increases, the secondary neck becomes sharper (its local diameter decreases rapidly) and moves slowly along the sample. Another localized neck is formed at $\epsilon_{H} \approx 3.59$, and this ultimately leads to the physical rupture of the sample in a finite time. The monotonic increase of the error bars of the true viscosity measurements within the regimes (I, II) at large Hencky strains, Fig. \ref{eta}, can now be easily explained as a result of a systematic increase of sample inhomogeneity due to the emergence of secondary necks.
The emergence of secondary necks can also explain the discrepancy between the integral and the local transient extensional viscosity observed within the second regime of deformation (the circles in Fig. \ref{eta} and the inset). Indeed, after the secondary necks are formed along the sample, the integral viscosity measurement, which uses a position-independent value of the sample diameter, $D_{u}(t)=D_0 \exp\left( -\epsilon_H /2\right)$, systematically overestimates the actual average sample diameter, $D(t)= \langle D(z,t) \rangle_z$. As a consequence, above the onset of the secondary necking, the integral transient elongational viscosity decreases and a viscosity maximum is observed. On the other hand, if the emergence of the secondary necks is accounted for by averaging the stresses along the actual length of the sample, no decrease of viscosity is observed and a rather convincing steady state seems to be reached instead (the inset in Fig. \ref{eta}). These experimental findings suggest that the long debated maximum of the transient extensional viscosity does not reflect true rheological features of the material and is solely related to a severe inhomogeneity of deformation states due to the emergence of secondary necks along the sample. This conclusion is consistent with the discussion presented in Sec. \ref{subsec_analytical_condition}: if the viscosity maximum emerged as a true rheological feature (that is, in the absence of geometric inhomogeneities) then, corresponding to this maximum, the tensile force should scale exponentially, which, as already discussed above, is not the case within regime (II). Finally, we turn our attention to the evolution of the sample inhomogeneity during measurements of the transient elongational viscosity in regime (III). Within this deformation regime, the overall homogeneity of the sample is better than within the regimes (I), (II), though curvature effects are visible in the proximity of the plates of the rheometer, row (III), Fig. \ref{frames}. In spite of the better sample homogeneity (no secondary necks are observed in this deformation regime), however, the differences between local and integral measurements of the viscosity are significant (the triangles, Fig. \ref{eta}). This fact deserves a brief explanation. As recently shown in \cite{localelongational2}, the relative difference between local and integral measurements of the tensile stress is given by: \begin{equation} \frac{\sigma(t)-\sigma_u(t) }{\sigma_u(t)} =4L^{-1}(t) \int_0^{L(t)} \frac{\delta(z,t)[D_u(t)+\delta(z,t)]}{[D_u(t)+2\delta(z,t)]^2}dz \label{eq_error} \end{equation} where $\delta(z,t)=\frac{D(z,t)-D_u(t)}{2}$ quantifies the deviation of the sample shape from the ideal cylindrical form and $\sigma_u(t)=\frac{4F(t)}{\pi D^2_u(t)}$ is the integral stress. Assuming $\xi=\delta(z,t)/D_u(t)<1$, it can easily be shown that, neglecting terms of order $\xi^2$, $\left | \frac{\sigma(t)-\sigma_u(t) }{\sigma_u(t)} \right | \approx 4\xi \propto \exp\left( \dot{\epsilon}t/2\right)$, since $\xi \propto 1/D_u(t)$ for a fixed absolute deviation $\delta$. This explains the increase of the relative stress error with the rate of deformation at a fixed time instant observed in Fig. \ref{eta}. \subsection{Comparison with results from literature} In the following, a comparison of our experimental findings with experimental work performed by others is presented. The recent work by Rasmussen et al., \cite{rasmussen}, presents a detailed experimental observation of a true viscosity overshoot (i.e. a maximum in the extensional viscosity followed by an extended plateau).
Whereas in our experiments a local maximum of the integral elongational viscosity was found during each experiment conducted in regime II, a plateau following such a maximum has never been observed. In order to clarify the reasons underlying this discrepancy, we first point out several similarities and differences between our experiments and those reported in \cite{rasmussen}. The material used in both experiments was the same, namely Lupolen 1840 D, and the temperatures during the experiments were quite comparable: $T=140 ^{\circ} C$ for our experiments and $T=130 ^{\circ} C$ for the experiments reported in \cite{rasmussen}. Within a $10$ degree difference in temperature, we do not expect a significant change in the qualitative behaviour of the transient extensional viscosity. To check this, additional integral viscosity measurements (these findings will be published elsewhere) have been conducted in a wide range of temperatures (from $T=130 ^{\circ} C$ to $T=190 ^{\circ} C$) and a wide range of deformation rates (from $0.001~s^{-1}$ up to $0.3~s^{-1}$), but whenever a viscosity maximum was present it simply led to the physical rupture of the sample without any hint of a plateau regime. Therefore, we rule out the temperature as a decisive factor for the emergence of the viscosity plateau. There are, however, several other differences between the two approaches compared here. Whereas during our experiments the sample under investigation was immersed in an oil bath in order to minimize the buoyancy effects, the experiments presented in \cite{rasmussen} were performed in air and a correction for the gravity effects was employed, \cite{szabo,szabo1}. However, as this correction only subtracts the weight of the sample from the measured tensile force, it cannot be responsible for the ``true overshoot" behaviour observed by Rasmussen et al. To sum up the arguments above, we believe that the disagreement between our experimental findings and those reported by Rasmussen et al. has little or nothing to do with the preparation of the samples, the operating temperature and the design of the extensional apparatus. In a last attempt to understand this discrepancy, we compared our data analysis procedure (see the description in Sec. \ref{subsec_anlysis}) with the procedure used by Rasmussen et al., \cite{rasmussen}. There exists a fundamental difference between the two approaches. Whereas we have defined the Hencky strain using the actual length of the sample, $\epsilon_H(t)=\ln \left [ \frac{L(t)}{L_0} \right]$, and measured it accordingly by monitoring the position of the top plate of the rheometer, Rasmussen et al. have defined it as $\epsilon^{*}_{H}(t)=-2\ln \left [ \frac{D_{mid}(t)}{D_0} \right]$, using the middle plane diameter of the sample, $D_{mid}(t)$. It is obvious that in the case of uniaxial extension at a constant rate of deformation (in time and along the entire sample), the two ways of calculating the Hencky strain are entirely equivalent. In the case of the experiments presented in this paper, however, one clearly deals with a geometrically non-uniform deformation process which ultimately translates into a strong deviation from the idealized uniaxial case. This experimental fact is illustrated in Fig. \ref{frames}, where one can clearly see that, within the second deformation regime (the second row from the top), the sample is far from being cylindrical when a viscosity maximum is observed.
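Indeed, the equivalence of the two definitions rests entirely on the uniformity of the sample: for a perfectly cylindrical, incompressible specimen, volume conservation gives
\[ \frac{\pi}{4}D^2(t)L(t)=\frac{\pi}{4}D_0^2 L_0 \quad \Rightarrow \quad \epsilon_H(t)=\ln \left[ \frac{L(t)}{L_0} \right]=-2\ln \left[ \frac{D(t)}{D_0} \right]=\epsilon^{*}_{H}(t), \]
a chain of equalities which breaks down as soon as the diameter depends on $z$.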
The impact of the geometric non-uniformity of the sample on the kinematics of the deformation process is illustrated in Fig. \ref{comparison1}. \begin{figure*} \begin{center} \centering \includegraphics[width=14cm]{f8.eps} \caption{(a) Time dependence of the minimum diameter of the sample (circles), the average diameter (squares) and the maximum diameter (triangles). (b) Time dependence of the Hencky strain (calculated using the minimum diameter of the sample, $D_{min}$), $\epsilon^{*}_{H}$. The full line is the Hencky strain measured using the actual length of the sample, $\epsilon_{H}$. The data were acquired at a constant rate of deformation, $\dot{\epsilon}=0.015~s^{-1}$.} \label{comparison1} \end{center} \end{figure*} Above the onset of the primary non-uniformity of the sample (the first maximum of the tensile force, which corresponds here to $t \approx 24 s$), the strain becomes strongly localized along the sample. This can be clearly noticed in Fig. \ref{comparison1} (a) where the time dependencies of the minimum sample diameter $D_{min}$, the averaged (along the actual length of the sample) diameter $D_{av}$ and the maximum sample diameter $D_{max}$ are displayed together with the diameter corresponding to a uniform deformation at constant rate (the full line). \begin{figure*} \begin{center} \centering \includegraphics[width=8cm]{f9.eps} \caption{Comparison between the transient elongational viscosity measured by our averaging method (squares) and the transient elongational viscosity obtained following the procedure of Rasmussen et al., \cite{rasmussen} (circles).} \label{comparison2} \end{center} \end{figure*} The comparison between the Hencky strains $\epsilon_H$ and $\epsilon^*_{H}$ is presented in Fig. \ref{comparison1} (b). At late stages of the deformation process (after the viscosity overshoot is observed) the local slope $\frac{dD_{min}(t)}{dt}$ increases drastically, suggesting that the highest rate of material deformation corresponds to the necked region of the sample. During our experiments, the neck is roughly located around the middle of the sample (though sometimes additional necks may emerge in other places), which is precisely the point where Rasmussen et al. measure the diameter of the sample, \cite{rasmussen}. Although in Ref. \cite{rasmussen} a feedback mechanism has been employed to ensure a constant rate of decay of the mid-sample diameter, the assessment of the transient elongational viscosity remains conceptually problematic, because it implies combining an integral quantity (the measured tensile force, which reflects the response of the \textit{entire} sample under deformation) with two locally measured (around the neck!) kinematic quantities: the strain and the rate of deformation. In order to clearly illustrate this, in the following we analyse our data using the procedure described in Ref. \cite{rasmussen}. The result of such an analysis is presented in Fig. \ref{comparison2} (circles) together with the true extensional viscosity obtained by our method (squares). The extensional viscosity $\mu_{1}^+$ is defined as $\mu_{1}^+(t)=\frac{4F(t)}{\pi \left ( D_{mid}(t) \right )^2 \frac{d \left (\epsilon^{*}_{H}(t) \right)}{dt} }$. One can clearly see that the two data analysis procedures applied to the same raw data (namely the same force signal and the same sequence of sample images) yield strikingly different results. Whereas our procedure hints at a plateau of the transient elongational viscosity, the procedure employed in Ref.
\cite{rasmussen} leads to a clear viscosity overshoot behaviour: a viscosity plateau following the viscosity maximum is now visible up to $\epsilon^{*}_H =5$. As a conclusion, the discrepancy between our transient viscosity measurements and the results presented by Rasmussen et al. originates in the differences between the two approaches: whereas we have used an integral definition for the Hencky strain and averaged the tensile stress along the sample, Ref. \cite{rasmussen} used local values for both the Hencky strain and the stress. \section{Conclusions} A systematic investigation of the long debated ``viscosity overshoot'' during the uniaxial extension of a strain hardening polymer melt was presented. The mathematical condition for the tensile stress to have a local maximum is presented in Sec. \ref{subsec_analytical_condition}, using no other assumptions except the differentiability of both the tensile force and the tensile stress. According to Eq. \ref{eq_condition}, a local stress maximum may be observed during a homogeneous deformation process only if the tensile force scales exponentially around this maximum. If the deformation process is not homogeneous, a stress maximum and an exponential scaling of the tensile force may still be observed if the rates of deformation are small. These theoretical considerations are tested experimentally in various deformation regimes. Depending on the magnitude of the Weissenberg number, we identify three distinct deformation regimes. At low $Wi$ (regime (I), Fig. \ref{phasediagram}), the integral tensile stress displays a broad maximum, Fig. \ref{forces}(a). In the neighbourhood of the stress maximum, the tensile force decays nearly exponentially with a rate set by the deformation rate, $\dot{\epsilon}$. As within this regime the deformation is inhomogeneous (Fig. \ref{frames}, row I), this nearly exponential scaling can only be explained, according to Eq. \ref{eq_condition}, by the smallness of the deformation rate. The stress maximum observed in regime (I) should not be confused with the viscosity overshoot phenomenon, which was observed at substantially larger $Wi$, \cite{wagnerovershoot, rasmussen}. We observe such a viscosity maximum for intermediate values of $Wi$, in regime (II), Fig. \ref{phasediagram}. This maximum is clearly not consistent with a homogeneous deformation process, because in the neighbourhood of this maximum the tensile force does not scale exponentially, Fig. \ref{forces} (b). As suggested by the convergence of the stress maximum and physical rupture lines (the Hencky strains corresponding to the physical rupture of the sample) visible in Fig. \ref{phasediagram} within regime (II), the two phenomena are interconnected: the viscosity maximum is just a precursor of the physical rupture of the sample. Indeed, real time imaging of the sample confirms that right above the stress maximum, secondary necks develop along the sample leading to the sample's rupture, Fig. \ref{frames}, row II. Based on the images of the sample, we measure the true tensile stress by averaging the local stresses along the actual length of the sample. Whereas in regime (II) the integral elongational viscosity displays a clear maximum, the true viscosity measurements (which properly account for the presence of necks along the sample) indicate a plateau instead. Therefore we conclude that the viscosity maximum is merely an experimental artefact introduced by the strong geometric inhomogeneity of the sample.
In the fast stretching limit (regime (III), Fig. \ref{phasediagram}), the homogeneity of the sample is better preserved (Fig. \ref{frames}, row III) and no viscosity maximum is observed, Fig. \ref{forces}(c). Finally, our experimental findings are compared with a recent experimental investigation of the viscosity overshoot phenomenon by Rasmussen et al., \cite{rasmussen}. The discrepancy between the true extensional viscosity measurements presented in this paper and the results presented in Ref. \cite{rasmussen} is explained by differences in the data analysis procedure. As clearly illustrated in Fig. \ref{comparison2}, using the same procedure as in \cite{rasmussen}, one can qualitatively reproduce a viscosity overshoot behaviour as well. As a final conclusion, neither a local maximum of the transient elongational viscosity nor a true viscosity overshoot behaviour is, according to our study, a real rheological feature; both emerge as artefacts due to the strong geometric non-uniformity of the sample at high Hencky strains. The main issue responsible for these artefacts is the geometric inhomogeneity of the sample, which becomes critical when secondary necks are formed. Existing experimental work on the extensional rheology of melts in the non-linear range should be reconsidered, particularly in relation to the inhomogeneity of the sample deformation. Theoretical work should take these findings into account. \subsection*{Acknowledgements} T. B. and Z.S. gratefully acknowledge the financial support from the German Research Foundation (grants $MU 1336/6-4$ and $STA 1096/1-1$, respectively). T. B. and Z.S. thank Mrs. Magdalena Papp for her assistance during some of the experiments presented in this study. One of us (T.B.) thanks Alfred Frey for valuable technical advice, assistance with the M\"{u}nstedt rheometer, and for the implementation of the digital trigger for the camera. \bibliographystyle{apalike}
\section*{INTRODUCTION} Reactions between chemically active molecules in condensed matter systems are typically controlled by two factors: the diffusive search of the species for each other \cite{Calef83,Weiss86,Shoup82,Lindenberg} and the intrinsic reactivity $\kappa$ associated with the probability that a reaction indeed occurs when the particles collide with each other \cite{Hanggi90}. For chemical reactions involving sufficiently high concentrations of particles, which are initially uniformly distributed in the container or reactor such that encounters between reactive species occur more or less uniformly in time, theories based on mean effective reaction rates provide an adequate description of the reaction kinetics \cite{Calef83,Weiss86,Shoup82}---apart from some singular and well-known reaction schemes which exhibit anomalous, fluctuation-induced kinetics under special physical conditions (see, for instance, \cite{Lindenberg,Krapivsky,Oshanin89a,Oshanin89b,Yuste08,Grebenkov10a,Grebenkov10b}). Since the seminal works by Smoluchowski \cite{Smoluchowski1917} and Collins and Kimball \cite{Collins49}, a vast number of theoretical advances have scrutinised the combined effect of both rate-controlling factors on the mean effective rates, providing a comprehensive understanding of this effect \cite{Calef83,Weiss86,Shoup82,Lindenberg,Sapoval94,Grebenkov06,Holcman13,Gudowska17}. In particular, the mean reaction time is the sum of two time scales corresponding to the inverse diffusion coefficient and the inverse intrinsic reactivity (see equation \eqref{eq:MFPT}), such that the influences of diffusion control and (chemical) rate control are separable \cite{Collins49}. For many biochemical reactions, however, the reactive species do not exist in sufficiently abundant amounts to give rise to smooth concentration levels. In contrast, only small numbers of biomolecules, released at certain prescribed positions, are often involved in the reaction process. Indeed, in systems such as the well studied Lac and phage lambda repressor proteins only few to few tens of molecules are typically present in a living biological cell, corresponding to nanomolar concentrations. The starting positions of biomolecules can either be rather close to the target or relatively far away. Particularly in the context of the rapid search hypothesis of gene expression it was shown that the geometric distance between two genes communicating with each other via signalling proteins is typically kept short by design in biological cells \cite{Kolesov07}, guaranteeing higher-than-average concentrations of proteins around the target in conjunction with fast and reliable signalling \cite{Pulkkinen13}. Quite generically, many intracellular processes of signalling, regulation, infection, immune reactions, metabolism, or transmitter release in neurons are triggered by the arrival of one or a few biomolecules to a small spatially localised region \cite{Alberts,Snustad}. In such cases it becomes inappropriate to rely on mean rates, and one needs to know the whole distribution of random reaction times, also called the first passage times to a reaction event. In the absence of a large number of molecules, reaction times become strongly defocused, such that the mean reaction time is no longer representative and the most probable reaction time becomes relevant.
We note that even for perfect reactions that occur immediately upon the first encounter between two particles and thus have infinitely large intrinsic reactivity, the mean and the most probable first passage times can differ by orders of magnitude \cite{Godec16a,Godec16b}, and two first passage events in the same system may be dramatically disparate \cite{Mejia11,Mattos12,Mattos14}. For such effectively few-body reactions, most of the available theoretical effort has been concentrated on the analysis of perfect reactions and hence on the impact of diffusion control only \cite{Redner,Benichou10,Benichou14}. In particular, in \cite{Benichou10} it was argued that for perfect reactions the reaction time density (RTD) can be accurately modelled as \begin{equation} \label{benichou} H(t)\approx q\delta(t)+(1-q)\exp(-t/t_{\mathrm{mean}})/t_{\mathrm{mean}}, \end{equation} where $t_{\mathrm{mean}}$ is the mean first-passage time (MFPT) and $q$ is the contribution of trajectories that arrive at the target site immediately. Conversely, fluctuations of the cycle completion time for enzymatic reactions, in the absence of any diffusion stage, have been quantified through the coefficient of variation, $\gamma$, of the corresponding distribution function of these times \cite{Moffitt10}. A few other works \cite{Shoup81,Hippel89,Zon05,Grebenkov17a,Grebenkov17b} analysed the combined effect of both rate-controlling factors, but solely for the mean reaction time. These works have shown that the effect of the intrinsic reactivity is certainly significant and most likely even the dominant factor. The question of the combined influence of both factors on the full distribution of reaction times has only been addressed very recently \cite{Grebenkov2018}, with the focus on the target search kinetics in cylindrical geometries. However, the results of \cite{Grebenkov2018} rely on the so-called self-consistent approximation \cite{Shoup81} and, moreover, have a somewhat cumbersome and thus less practical form. Hence, it is highly desirable to consider particular yet generic examples for which the RTD can be calculated exactly and the results can be presented in a lucid, compact and easy-to-use form revealing numerous insightful features well beyond the simple approximation in equation \eqref{benichou}. This is clearly an appealing problem of utmost significance for a conceptual understanding of the kinetics of biochemical reactions. We here focus on the conceptually and practically relevant question of the influence of the intrinsic chemical reactivity and the initial position of the reacting particles on the form of the full distribution of reaction times. We demonstrate that when the reactivity is finite and no longer guarantees immediate reaction on mutual encounter, the defocusing of reaction times is strongly enhanced. Remarkably, an extended plateau of the reaction time distribution emerges due to this reaction control, such that the reaction times turn out to be equally probable over several orders of magnitude. A direct consequence of the defocusing is that the contributions of diffusion and rate effects are no longer separable---to distinguish from the classical concepts of diffusion and kinetic control, we will talk about geometry (initial distance) control and reaction (intrinsic reaction rate) control, keeping in mind that the latter not only specifies the dominant rate-controlling factor for the MFPT, but affects the shape of the full RTD.
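To illustrate the defocusing already present in the simple model \eqref{benichou}, the following toy sampler (our own Python sketch, with arbitrary parameter values) draws reaction times from the two-term density and compares their mean and median:
\begin{verbatim}
# Toy sampler for equation (benichou): with probability q the reaction
# is "immediate", otherwise the time is exponential with mean t_mean.
# Parameter values are arbitrary, for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
q, t_mean, n = 0.2, 1.0, 100_000
t = rng.exponential(t_mean, size=n)
t[rng.random(n) < q] = 0.0     # immediate arrivals

print(t.mean())                # close to (1 - q) * t_mean
print(np.median(t))            # substantially below the mean
\end{verbatim}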
An exact solution for the RTD provides us with a unique opportunity to derive explicit formulae, for arbitrary initial conditions and arbitrary values of the intrinsic reaction constant $\kappa$, for several characteristic properties of the distribution such as its precise functional forms in different asymptotic regimes, the corresponding crossover times between these kinetic regimes, and also the reaction depths corresponding to these time scales. \section*{RESULTS} \subsection*{Mathematical model} We consider a model involving a pair of reactive molecules: a partially absorbing, immobile target site of radius $\rho$ within a bounded domain of radius $R$ limited by an impenetrable boundary, and a molecule, initially placed at some prescribed position and diffusing with diffusivity $D$. Once the diffusing particle hits the surface of the target site, it reacts with (binds to) the latter with a finite, intrinsic reaction rate $\kappa$. The reflecting outer boundary can mimic an impenetrable cell membrane, the reaction container's surface, or be an effective virtual frontier of the ``zone of influence'' of the target molecule, separating it from other remotely located target molecules. \begin{figure*} \includegraphics[width=18cm]{figure1.eps} \caption{ Reaction control. Reaction time density $H(r,t)$ for a reaction on an inner target of radius $\rho/R=0.01$, with starting point {\bf (a)} $r/R=0.2$ and {\bf (b)} $r/R=0.02$ for four progressively decreasing (from top to bottom) values of the dimensionless reactivity $\kappa' = \kappa R/D$ indicated in the plot. Note that $\kappa'$ includes $R$ and $D$ such that smaller values of $\kappa'$ can also be achieved at a fixed $\kappa$ upon lowering $R$ or by increasing the values of $D$. The coloured vertical arrows indicate the mean reaction times for these cases. The vertical black dashed line indicates the crossover time $t_c=2(R-\rho)^2/(D\pi^2)$ above which the contributions of higher-order Laplacian eigenmodes become negligible. This characteristic time marks the end of the hump-like region (L\'evy-Smirnov region specific to an unbounded system, see below and the Method section for more details) and indicates the crossover to a plateau region with equiprobable realisations of the reaction times. This plateau region spans a considerable window of reaction times, especially for lower reactivity values. Thin coloured lines show the reaction time density $H_\infty(r,t)$ from equation \eqref{eq:Ht_Rinf} for the unbounded case ($R\to\infty$). Length and time units are fixed by setting $R=1$ and $R^2/D=1$. Note the extremely broad range of relevant reaction times (the horizontal axis) spanning over 12 orders of magnitude for panel {\bf (b)}. Coloured bar-codes {\bf (c,d)} indicate the cumulative depths corresponding to the four considered values of $\kappa'$ in decreasing order from top to bottom. Each bar-code is split into ten regions of alternating brightness, representing ten $10\%$-quantiles of the distribution (e.g., the first dark blue region of the top bar-code in panel {\bf (c)} indicates that $10\%$ of reaction events occur till $Dt/R^2 \simeq 1$).} \label{fig:Ht} \end{figure*} Assuming that the domain has a spherical shape and placing the target at the origin of this domain renders the model exactly solvable.
We note that although such a geometrical setup is simplified as compared to realistic situations (e.g., the target site is not necessarily located at the centre of the domain \cite{Benichou10,Benichou14} or may be attached to some structure which partially screens it \cite{Grebenkov17b,Grebenkov2018}), this model captures explicitly two essential ingredients of the reaction process: the diffusive search for the target site and its finite intrinsic reactivity. Importantly, the fact that the model is exactly solvable permits us to unveil some generic features of the full RTD without resorting to any approximation. The probability density function $H(r,t)$ of the reaction time $t$ for a particle released a radial distance $r-\rho$ away from the spherical target of radius $\rho$ is calculated using standard tools \cite{Redner,Gardiner,Carslaw}: one first finds the survival probability $S(r,t)$ of a diffusing particle in a radially symmetric situation subject to the zero-current boundary condition on the outer boundary of the domain, and the ``radiation'', or partially reflecting, boundary condition \cite{Weiss86,Calef83,Shoup82} \begin{equation} \label{bc} D\left.\frac{\partial S(r,t)}{\partial r}\right|_{r=\rho}=\kappa S(\rho,t), \end{equation} imposed on the surface of the target site. The proportionality factor $\kappa$ in equation \eqref{bc} is an intrinsic rate constant (of dimension length/time) whose value shows how readily the particle reacts with the target site upon encounter. When $\kappa=0$ no reaction occurs, while the limit $\kappa=\infty$ corresponds to a perfect reaction, in which the particle reacts with the target site upon the first encounter. These limiting cases therefore correspond to perfectly reflecting or absorbing boundaries, respectively. The RTD $H(r,t)$ is obtained as the negative time derivative of $S(r,t)$ and is valid for arbitrary values of the system parameters. Details of these calculations are presented at the beginning of the Method section. \begin{figure*} \includegraphics[width=18cm]{figure2.eps} \caption{ Geometry control. Reaction time density $H(r,t)$ for a reaction on an inner target of radius $\rho/R = 0.01$, for the different initial radii $r$ indicated in the panels ($r$ increasing from top to bottom). The values of the reactivity are \textbf{(a)} $\kappa = \infty$ (perfectly reactive) and \textbf{(b)} $\kappa R/D =1$ (partially reactive). The coloured vertical arrows indicate the mean reaction times for these cases (note that some arrows coincide). The vertical black dashed line indicates the crossover time $t_c = 2(R-\rho)^2/(D\pi^2)$ from the hump-like L\'evy-Smirnov region to a plateau-like one. Thin coloured lines show the reaction time density $H_\infty(r,t)$ from equation \eqref{eq:Ht_Rinf} for the unbounded case ($R\to\infty$). The length and time units are fixed by setting $R=1$ and $R^2/D = 1$. Clearly the positions of the most likely reaction times are geometry-controlled by the initial distance to the target. Not surprisingly, for the largest initial distance the solution for the unbounded case underestimates the RTD hump. Note the extremely broad range of relevant reaction times (horizontal axis) spanning over 12 orders of magnitude in panel {\bf (b)}. Coloured bar-codes {\bf (c,d)} indicate the cumulative depths corresponding to the four considered values of $r/R$ in increasing order from top to bottom. Each bar-code is split into ten regions of alternating brightness, representing ten $10\%$-quantiles of the distribution.
In spite of distinctions in the probability densities in panel {\bf (b)}, the corresponding cumulative distributions are close to each other and result in very similar reaction depths.} \label{newfig} \end{figure*} \begin{figure*} \includegraphics[width=18cm]{figure3.eps} \caption{ Reaction versus geometry control. Impact of the finite reactivity (reaction control, {\bf a}) and of the distance to the target (geometry control, {\bf b}) onto the reaction time density shown as a ``heat map'', in which the value of the reaction time density (in arbitrary units) is determined by the colour code. Blue and white lines respectively show the mean and the most probable reaction times, which differ by orders of magnitude. The grey vertical line indicates the crossover time $t_c$, which depends neither on the reactivity nor on the distance. {\bf (a)} when the reactivity $\kappa$ decreases (with $r/R = 0.2$ being fixed), the distribution becomes much broader and extends towards longer reaction times. {\bf (b)} when the distance to the target decreases (with $\kappa R/D = 1$ fixed), the most probable reaction time shifts to the left, whereas the mean reaction time remains constant.} \label{fig:Ht_heatmap_kappa} \end{figure*} \subsection*{Structure of the full distribution of reaction times} The typical shapes of the reaction time density $H(r,t)$ are shown in figure \ref{fig:Ht} for two different release radii $r$ and different values of the dimensionless reactivity $\kappa R/D$. Note that the parameter $\kappa R/D$ represents a combined effect of two factors: based on the definition of the standard chemical constant $K_{\mathrm{on}}=4\pi\rho^2\kappa$ for a forward reaction and the definition of the so-called Smoluchowski constant $K_S=4\pi D\rho$, we see that $\kappa R/D=(K_{\mathrm{on}}/K_S)(R/\rho)$ and, hence, this is the ratio of the chemical rate constant and the Smoluchowski rate constant, multiplied by the ratio of the sizes of the domain and of the target site. We notice that $H(r,t)$ has a much richer structure than the previously proposed simple form \eqref{benichou}. The RTD consists of four distinct time domains seen in figures \ref{fig:Ht}, \ref{newfig}, and \ref{fig:Ht_heatmap_kappa}: first, a sharp exponential cut-off at short reaction times terminating at the most probable time $t_{\rm mp}$; second, a region spanning from the most probable reaction time to the crossover time $t_c$ in which $H(r,t)$ shows a slow power-law decrease; third, an extended plateau region beyond $t_c$ which stretches up to the mean reaction time $t_{\rm mean}$; and fourth, an ultimate long-time exponential cut-off. The shape of the RTD for varying reactivities, highlighting the geometry-controlled L{\'e}vy-Smirnov hump and the reaction-controlled plateau region, is our central result. In order to get a deeper understanding of the time scales involved in the reaction process, we also introduce and analyse in the Method section the forms of two complementary characteristic times: the harmonic mean reaction time $t_{\rm harm} = 1/\langle 1/t\rangle$ and the typical reaction time $t_{\mathrm{typ}}= t_0 \exp(\langle\ln t/t_0\rangle)$, where the angular brackets denote averaging with respect to the RTD depicted in figures \ref{fig:Ht} and \ref{newfig}, and $t_0$ is an arbitrary time scale. Since the logarithm is a slowly varying function, its average value is dominated by the most frequent values of $t$, while anomalously large/small values corresponding to rare events provide a negligible contribution.
Such an averaged value is widely used to estimate a typical behaviour in diverse situations \cite{Evans11,Dean14}. \subsection*{Three characteristic time scales} The most probable reaction time, corresponding to the very pronounced maximum, can be calculated explicitly (see the Method section) and has the approximate form \begin{equation} \label{most} t_{\mathrm{mp}}\approx(r-\rho)^2/(6D). \end{equation} Interestingly, this simple estimate, which depends only on the diffusion coefficient and the initial distance to the target site, appears to be very robust: $t_{\rm mp}$ indeed shows very little variation with the reactivity $\kappa$, as one may infer from figures \ref{fig:Ht} and \ref{fig:Ht_heatmap_kappa}. In the Method section, we show that when $\kappa$ decreases from infinity to zero, the value of $t_{\mathrm{mp}}$ varies only by a factor of $3$. This characteristic time is always strongly skewed towards the left tail of the distribution, that is, to short reaction times: $t_{\mathrm{mp}}$ in fact corresponds to particles moving relatively directly from their starting point to the target followed by an immediate reaction, and thus generalises the concept of direct, purely geometry-controlled trajectories \cite{Godec16a} to systems with reaction control. Note that expression (\ref{most}) is different from the diffusion-controlled additive contribution proportional to $1/D$ in the mean reaction time (\ref{eq:MFPT}). The second characteristic time scale is the crossover time $t_c$ from the hump-like L\'evy-Smirnov region specific to an unbounded system, to the plateau region. Hence, $t_c$ can be interpreted as the time at which a molecule starts to feel the confinement. This can be nicely discerned from a comparison with the density $H_\infty(r,t)$ for the unbounded case (figure \ref{newfig}). Thus, reaction times beyond $t_c$ correspond to indirect trajectories \cite{Godec16a}. From the result \begin{equation} \label{tc} t_c\approx2(R-\rho)^2/(\pi^2D) \end{equation} obtained in the Method section, we see that $t_c$ is independent of the starting point and of the reactivity $\kappa$, being entirely dominated by the diffusivity and the difference between the sizes of the domain $R$ and of the target. Writing $t_{\rm mp}/t_c = \pi^2(r-\rho)^2/[12(R-\rho)^2]$, one realises that the crossover time can be comparable to the most probable time (such that the hump-like region shrinks), but may also become much larger than the latter when $r$ is close to $\rho$, as happens, e.g., when proteins are produced in the close vicinity of a first gene activated at $t=0$. In this case, of course, the hump-like region will be most pronounced (figure \ref{newfig}). Finally, the onset of the right exponential shoulder at long reaction times coincides with the mean reaction time, as indicated by the arrows in figures \ref{fig:Ht} and \ref{newfig}. The latter is obtained from the Laplace transformed distribution (see the Method section) and is given by the exact formula \begin{equation} \label{eq:MFPT} t_{\mathrm{mean}}=\frac{(r-\rho)(2R^3-\rho r(r+\rho))}{6Dr\rho}+\frac{R^3- \rho^3}{3\kappa\rho^2}, \end{equation} which can be thought of as an analogue of the celebrated Collins-Kimball relation for the apparent reaction rate \cite{Collins49}. The first term in equation \eqref{eq:MFPT} is the standard MFPT to a perfectly reactive target and corresponds to the classical notion of diffusion-controlled rate.
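To make the separation of these scales tangible, the following small numerical sketch (our own illustration; all parameter values are assumed, with units fixed as in the figures by $R=1$ and $R^2/D=1$) evaluates the three characteristic times of equations \eqref{most}, \eqref{tc} and \eqref{eq:MFPT}:
\begin{verbatim}
# Sketch (assumed parameters): the three characteristic time scales
# of equations (most), (tc) and (eq:MFPT).
import numpy as np

def time_scales(r, rho, R, D, kappa):
    t_mp = (r - rho) ** 2 / (6.0 * D)                     # most probable
    t_c = 2.0 * (R - rho) ** 2 / (np.pi ** 2 * D)         # crossover
    t_mean = ((r - rho) * (2 * R ** 3 - rho * r * (r + rho))
              / (6.0 * D * r * rho)
              + (R ** 3 - rho ** 3) / (3.0 * kappa * rho ** 2))  # exact MFPT
    return t_mp, t_c, t_mean

print(time_scales(r=0.2, rho=0.01, R=1.0, D=1.0, kappa=1.0))
\end{verbatim}
For $r/R=0.2$, $\rho/R=0.01$ and $\kappa R/D=1$ this yields $t_{\rm mp}\approx 6\cdot 10^{-3}$, $t_c\approx 0.2$ and $t_{\rm mean}\approx 3.4\cdot 10^{3}$, i.e., the three scales are indeed separated by orders of magnitude.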
The additional contribution to $t_{\mathrm{mean}}$ proportional to $\kappa^{-1}$ accounts for the imperfect reaction with finite reactivity, independent of the particle's starting point. When $t_{\mathrm{mean}}$ is a unique time scale characterising the reaction kinetics exhaustively well, as happens for reactions with sufficiently high concentrations of reactants, one can indeed distinguish between diffusion or kinetic control. In contradistinction, for reactions with nanomolar concentrations of reactive species, the other time scales $t_{\mathrm{mp}}$ and $t_c$ are equally important and no clear-cut separation between diffusion and kinetic control can be made. In the Method section, we also present an explicit exact expression for the variance of the first reaction time, which permits us to determine the coefficient of variation of the RTD and hence to quantify its broadness. \subsection*{Geometry versus reaction control} We emphasise that even for perfect reactions, for which $\kappa=\infty$, the mean reaction time is orders of magnitude longer than the most probable reaction time. For imperfect reactions (finite $\kappa$ values) the mean reaction time becomes even longer, and diverges as $1/\kappa$ when $\kappa\to 0$. The fact that the most probable reaction time is very weakly dependent on $\kappa$ renders the difference between the most probable and the mean reaction times all the more severe for finite $\kappa$. Another remarkable and so far unnoticed feature is that a pronounced plateau develops beyond $t_c$, reflecting an emergent regime of reaction-control. This plateau exists even for $\kappa=\infty$ (figure \ref{fig:Ht}) and becomes increasingly longer with decreasing reactivity $\kappa$, implying that over several decades the values of the reaction time become equally probable. Mathematically speaking, this plateau appears due to the fact that the smallest eigenvalue of the boundary value problem---the only eigenvalue with an appreciable dependence on $\kappa$---disentangles from the remaining eigenvalues. This point is discussed in more detail in the Method section. Physically, the emergence of the plateau implies that the first passage process to the reaction event becomes even more defocused with decreasing $\kappa$, i.e., that the spread of possible reaction times increases significantly. The long spread of reaction times within this plateau region is a consequence of geometrically defocused trajectories exploring the boundary of the reaction volume, reinforced by the necessary multiple collisions with the target before a final reaction event due to the reaction-control with finite reactivity. An important consequence of the existence of the extended plateau region is that all positive moments of $H(r,t)$, not only the mean reaction time, will be dominated by integration over this region. In other words, the resulting RTD is a concerted effect of geometry-control and reaction-control. In figure \ref{newfig} we analyse the effect of the initial distance to the surface of the target site for both perfect and imperfect reactions. The exponential shoulder at long reaction times almost coincides for all cases, especially when the reactivity is finite. This part of the reaction time distribution is dominated by trajectories that equilibrate in the volume before the eventual reaction (indirect trajectories \cite{Godec16a}). In contrast, we see a strong variation of the most likely reaction time.
The exponential cut-off at short reaction times and the position of the maximum of the distribution are geometry-controlled, as can be anticipated from the L{\'e}vy-Smirnov form for the unbounded problem (see the Method section): direct trajectories from the initial position to the target need a minimum travel time. For increasing initial distance the most likely reaction time thus moves to longer times, and the relative contribution of the geometry-controlled fraction of direct trajectories becomes less relevant: instead, the particles almost fully equilibrate in the confined volume until they finally react with the target. This reaction-control effect is accentuated for decreasing reactivity. We stress that for biological applications both cases are relevant: shorter initial distances, for instance, are involved when proteins are produced around a first gene activated at time $t=0$ and these proteins then need to move to a close-by second gene, here represented by the inner target. This scenario is very similar to the one discussed in reference \cite{Pulkkinen13} as an example for the rapid search hypothesis \cite{Kolesov07}. Longer initial distances are relevant when a molecular signal passes the cellular membrane or is produced around a cytoplasmic plasmid, and when these molecules then need to diffuse to the nucleoid region in a bacterial cell or pass the nuclear membrane in an eukaryotic cell. Figure \ref{fig:Ht_heatmap_kappa} summarises the effects of the finite reactivity and of the distance to the target onto the reaction time distribution in the form of a ``heat map''. \subsection*{Short- and long-time behaviour} We now turn to the discussion of the short- and long-time tails of $H(r,t)$. The long-time behaviour of the density $H(r,t)$ is determined by the smallest eigenvalue $\lambda_0$ of the Laplace operator. For the spherical domain, one can accurately compute this eigenvalue by solving the trigonometric equation (see the Method section). When both the target and its reactivity are small, one gets $\lambda_0\approx\kappa S_\rho/(DV)$, where the surface area $S_\rho=4\pi\rho^2$ of the target and the volume of the domain $V\approx4\pi R^3/3$ are introduced. According to equation \eqref{eq:MFPT}, in this limit $t_{\rm mean} \approx1/(D\lambda_0)$, i.e., the mean reaction time is dominated by multiple returns to the target until the reaction occurs. As the target shrinks ($\rho$ vanishes), the smallest eigenvalue tends to zero. In turn, the other eigenvalues $\lambda_n$, corresponding to rotation-invariant eigenfunctions of the Laplace operator in the spherical domain, are bounded from below: $\lambda_n>\pi^2n^2/R^2$ for $n=1,2,\ldots$. As a consequence, there is an intermediate range of times, $1/(D\lambda_1)\ll t\ll1/(D\lambda_0)$, for which the contribution of all higher-order eigenmodes vanishes, that is, $e^{-Dt\lambda_n}\ll 1$, whereas the contribution of the lowest eigenmode is almost constant in time, $e^{-Dt\lambda_0}\approx 1$. This is precisely the reason why the intermediate, plateau-like region emerges, see figure \ref{fig:Ht}. Note that this region extends over an increasing range of time scales when either the reactivity $\kappa$ or the target radius $\rho$ decreases, or both do. Note also that this intermediate regime corresponds approximately to an exponential law which is often evoked in the context of the first passage statistics to small targets, see, for instance, references \cite{Benichou10,Meyer11,Isaacson13}.
While the smallest eigenvalue determines the plateau and the ultimate exponential cut-off, the short-time behaviour of the reaction time density $H(r,t)$ is determined by the other eigenmodes. Since the limit of a small target ($\rho \ll R$) can alternatively be seen as the limit of a large domain size, one can use the density $H_\infty(r,t)$ for diffusion in the exterior of a target, which was first derived by Collins and Kimball \cite{Collins49}, \begin{eqnarray} \label{eq:Ht_Rinf} && H_\infty(r,t)=\frac{\kappa}{r}\exp\biggl(-\frac{(r-\rho)^2}{4Dt}\biggr) \biggl\{\frac{\rho}{\sqrt{\pi Dt}}\\ \nonumber &&-\biggl(1+\frac{\kappa\rho}{D}\biggr)\erfcx\biggl(\frac{r-\rho}{\sqrt{4Dt}}+\biggl(1+\frac{\kappa \rho}{D}\biggr) \frac{\sqrt{Dt}}{\rho}\biggr) \biggr\}, \end{eqnarray} where $\erfcx(x)=e^{x^2} \erfc(x)$ is the scaled complementary error function (its derivation is reproduced in the Method section). As demonstrated in figure \ref{fig:Ht}, equation \eqref{eq:Ht_Rinf} fully captures the geometry-controlled part of the reaction time distribution. In the limit of a perfectly absorbing target, $\kappa\to\infty$, this expression reduces to \begin{equation} \label{eq:Ht_Rinf_kinf} H_\infty(r,t)=\frac{\rho}{r} \, \frac{r-\rho}{\sqrt{4\pi Dt^3}} \, \exp\biggl(-\frac{(r-\rho)^2}{4Dt}\biggr) , \end{equation} whose normalisation $\rho/r\le1$ reflects the transient nature of diffusion in three dimensions. One can easily check that the maximum of this L{\'e}vy-Smirnov-type density is given exactly by equation \eqref{most}, as intuitively expected. \subsection*{Approximate form of the full distribution} Combining the short- and long-time contributions, we arrive at the following approximate formula for the reaction time density \begin{equation} \label{eq:Ht_app} H(r,t)\approx H_{\infty}(r,t)+ (1-q)\frac{e^{-t/t_{\rm mean}}}{t_{\rm mean}} \,, \end{equation} where $t_{\rm mean}\approx1/(D\lambda_0)$ and \begin{equation} q = \int\limits_0^\infty dt \, H_\infty(r,t) = \frac{\rho/r}{1+D/(\kappa\rho)} < 1 \end{equation} is the hitting probability of the target. The correct normalisation of $H(r,t)$ is ensured by the prefactor in front of the second term. Result \eqref{eq:Ht_app} is substantially more general than the simple form \eqref{benichou} suggested in \cite{Benichou10}. The form \eqref{eq:Ht_app} not only extends expression \eqref{benichou} to the partially reactive case, i.e., to arbitrary finite values of $\kappa$, but also emphasises and provides an explicit form for the contribution from the hump-like region around $t_{\mathrm{mp}}$, which is most relevant for reactions in which the molecule starts close to the target. \begin{figure*} \includegraphics[width=18cm]{figure4.eps} \caption{ Explicit approximation for the reaction time density $H(r,t)$. It is evaluated for a reaction with an inner target of radius $\rho/R=0.01$ with starting point {\bf (a)} $r/R=0.2$ and {\bf (b)} $r/R=0.02$, and four values of the dimensionless reactivity $\kappa' = \kappa R/D$ (decreasing from top to bottom). The coloured vertical arrows indicate the respective mean reaction times. The black vertical dashed line shows the crossover time $t_c=2(R-\rho)^2/(D\pi^2)$ above which the contributions of higher-order Laplacian eigenmodes become small. Thin black lines show the approximation \eqref{eq:Ht_app} of the RTD, which very nicely captures the main features of the exact density.
Length and time units are fixed by setting $R=1$ and $R^2/D=1$.} \label{fig:Ht_app} \end{figure*} Figure \ref{fig:Ht_app} illustrates the quality of this approximation, showing that it becomes most accurate when the target radius $\rho$ or the reactivity $\kappa$ is small. One observes that it accurately captures the maximum, the plateau, and the exponential cut-off of the reaction time distribution. In turn, the transition between the maximum and the plateau region is less sharp than in the exact form. A minor inaccuracy of the approximation \eqref{eq:Ht_app} is that it reaches a constant---set by the second term---in the short-time limit, while the exact distribution vanishes as $t\to0$. This feature can simply be removed by multiplying the second term by a Heaviside step function $\Theta(t-t_c)$ and re-evaluating the normalisation constant. But even in its present form, approximation \eqref{eq:Ht_app} provides a remarkably good insight into the behaviour of the first passage dynamics and can thus be used as an efficient and easy-to-handle fit formula for data analysis or for explicit analytical derivations of follow-up processes. \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline $r/R$ & region & $\kappa'=\infty$ & $\kappa'=10$ & $\kappa'=1$ & $\kappa'=0.1$\\ \hline & hump-like & 3.8 & 0.34 & 0.04 & 0.004 \\ 0.2 & plateau-like & 59.4 & 62.9 & 63.2 & 63.2 \\ & exponential tail & 36.8 & 36.8 & 36.8 & 36.8 \\ \hline & hump-like & 49.4 & 4.4 & 0.5 & 0.05 \\ 0.02 & plateau-like & 20.0 & 58.8 & 62.7 & 63.15 \\ & exponential tail & 30.6 & 36.8 & 36.8 & 36.8 \\ \hline \end{tabular} \end{center} \caption{ Impact of the target reactivity and proximity onto the reaction depth. Relative weights (in per cent) of the three characteristic regions of the reaction time density for $\rho/R=0.01$: the hump-like region around the most probable reaction time $t_{\rm mp}$, extending from $0$ till $t_c = 2(R-\rho)^2/(\pi^2 D)$ (and thus merging the two subregions discussed in the text: the exponential tail left of $t_{\rm mp}$ and the power-law decay right of $t_{\rm mp}$); the plateau-like region stretching from $t_c$ to the mean reaction time $t_{\rm mean}$; and the exponential tail which persists beyond $t=t_{\mathrm{mean}}$. Two starting points $r/R$ and four values of the dimensionless reactivity $\kappa'=\kappa R/D$ are used, corresponding to figure \ref{fig:Ht}.} \label{tab:weights} \end{table} \subsection*{Reaction depth} Lastly, we point out that the contributions of the four different regimes separated by the time scales $t_{\mathrm{mp}}$, $t_c$, and $t_{\mathrm{mean}}$ can be further quantified by the corresponding reaction depths, defining which fraction of trajectories has reacted up to a given time. We thus focus now on the cumulative distribution function of reaction times \begin{equation} \label{rdepth} F_r(t)=\int^t_0dt'H(r,t')=1-S(r,t), \end{equation} with the evident property $F_r(\infty)=1$ in a bounded domain in which $H(r,t)$ is normalised; it shows explicitly which fraction of trajectories has reacted up to time $t$. The reaction depth is illustrated in the Method section. Table \ref{tab:weights} summarises the values of the reaction depths of the three characteristic regions of the RTD: the hump-like region around $t_{\mathrm{mp}}$, the plateau region, and the exponential tail.
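As a concrete illustration of equation \eqref{eq:Ht_app}, the following sketch (our own illustrative Python code; \texttt{scipy.special.erfcx} is the scaled complementary error function used above, and all parameter values are assumptions) evaluates the approximate density; reaction depths such as $F_r(t)$ can then be compared against the quantiles of Table \ref{tab:weights} by numerical integration:
\begin{verbatim}
# Sketch of the approximation (eq:Ht_app): the free-space density
# (eq:Ht_Rinf) plus the plateau/exponential term. Parameters assumed.
import numpy as np
from scipy.special import erfcx

def H_inf(t, r, rho, D, kappa):
    """Collins-Kimball density for the unbounded domain, eq. (eq:Ht_Rinf)."""
    a = (r - rho) / np.sqrt(4.0 * D * t)
    b = (1.0 + kappa * rho / D) * np.sqrt(D * t) / rho
    return (kappa / r) * np.exp(-a ** 2) * (
        rho / np.sqrt(np.pi * D * t)
        - (1.0 + kappa * rho / D) * erfcx(a + b))

def H_approx(t, r, rho, R, D, kappa):
    q = (rho / r) / (1.0 + D / (kappa * rho))   # hitting probability
    t_mean = ((r - rho) * (2 * R ** 3 - rho * r * (r + rho))
              / (6 * D * r * rho)
              + (R ** 3 - rho ** 3) / (3 * kappa * rho ** 2))
    return H_inf(t, r, rho, D, kappa) + (1.0 - q) * np.exp(-t / t_mean) / t_mean

t = np.logspace(-6, 5, 400)
print(H_approx(t, r=0.2, rho=0.01, R=1.0, D=1.0, kappa=1.0)[:3])
\end{verbatim}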
We realise that for $r/R = 0.2$ the smallest fraction of the reaction events happens within the hump-like region: it is of the order of just $4\%$ for perfect reactions, and this fraction rapidly diminishes upon a decrease of $\kappa$. In turn, a much larger fraction of the reaction events is collected within the final exponential region. It is typically of the order of almost $37\%$, independently of the value of $\kappa$, meaning that for such a value of the ratio $r/R$ roughly one third of all realisations remain unreacted at time $t = t_{\rm mean}$. However, most realisations of the reaction events occur within the plateau-like regime: it amounts to roughly $59\%$ for perfect reactions, and becomes even larger for smaller values of $\kappa$. The situation becomes different for a smaller release radius: $r/R=0.02$. Here, for perfect reactions the majority of trajectories ($49\%$, such that $t_c$ is close to the median time) react within the hump-like region, while the plateau region and the final exponential tail contribute only $20\%$ and $30\%$, respectively. Upon lowering $\kappa$, the hump-like region is no longer representative, and more reaction events take place during the exponential tail ($\sim 37\%$) and the plateau-like regions ($\sim 63\%$), respectively. In conclusion, the plateau region appears to be the most important part of the RTD which contributes most to the overall number of reaction events, except for the case $r/R \ll 1$ and $\kappa R/D\gg1$, for which the hump-like region becomes the dominant one. Concurrently, this plateau is the region of the strongest defocusing effect, in particular for increased reaction-control. \section*{DISCUSSION} \label{sec:discussion} Many molecular signalling processes in living biological cells operate at minute concentrations. Similarly, in vitro experiments tracking the motion of colloidal particles employ only a few particles. Individual first passage events in such situations are defocused, that is, possible reaction times are spread over a vast range spanning orders of magnitude. In particular, this implies that any pair of reaction events will be characterised by highly disparate reaction times. The quantitative description of the reaction time to a target in this scenario therefore cannot simply be based on the mean reaction time. As we showed, the resulting broad distribution of reaction times is due to a conspiracy between geometry-control and reaction-control effects which cannot be disentangled. We analysed this phenomenon in detail for a generic spherical geometry, concentrating on several main features. (i) The reaction time density consists of four regions with distinct asymptotic behaviour. (ii) These time regions are separated by three characteristic time scales, which means that there is no unique time scale characterising the kinetic behaviour exhaustively well and the reaction times are defocused. In consequence, the textbook notions of diffusion versus kinetic control, which are appropriate for reactions operating at abundant concentrations, are not applicable in our case. We explicitly determined these time scales and also the associated reaction depths. (iii) A finite reactivity broadens an intermediate regime characterised by an extended plateau region. We showed that the plateau emerges due to a time scale separation of the lowest and the next eigenvalues of the diffusion-controlling Laplace operator.
The fundamental parameter we found to quantify this intermediate regime is the reaction-control represented by the dimensionless reactivity $\kappa R/D$. A majority of the reaction events occur within this region, except for the case $r/R \ll 1$ and $\kappa R/D \gg 1$. In turn, for perfect reactions with a reactant starting very close to the target site, the most important part of the RTD is the hump-like region, which contributes almost $50$ per cent of the reaction events. (iv) The geometry control of the initial particle-to-target distance strongly affects the position and the amplitude of the maximum of the reaction time distribution and thus the most likely reaction time. (v) We came up with a simple and thus practical approximate formula for the full reaction time distribution. In particular, we demonstrated that this approximation captures both the most likely and the mean reaction times. While the derivation relied on the rotational symmetry of the considered geometric domain, this approximation is expected to be valid in more complex confinements, as long as the target site is far enough from the surrounding outer boundary. Our main conclusion is that reaction-control with finite reactivity leads to even stronger reaction time defocusing, stressing the necessity to know the full RTD. This conclusion will serve as a benchmark for the behaviour in geometrically more complex situations \cite{Benichou14} when, e.g., the target site is on the wall or bound to some geometrical structure within the domain, and a fully analytical solution is impossible. \section*{METHOD} \subsection*{Exact distribution of reaction times} We consider a diffusion process in a three-dimensional domain $\Omega= \{\x\in\R^3:\rho<\|\x\|<R\}$ between two concentric spheres -- a small target and a bounding surface of radii $\rho$ and $R$, respectively. Although the solution of the underlying diffusion problem is well known \cite{Redner,Carslaw}, we rederive it here for completeness and to highlight several practical points discussed in the main text. In fact, the Laplace transformed probability density function $\tilde H(\x,p)$ satisfies the modified Helmholtz equation \begin{equation} \label{eq:Helmholtz} (p-D\Delta)\tilde{H}(\x,p)=0\qquad(\x\in\Omega), \end{equation} subject to the boundary conditions \begin{subequations} \begin{eqnarray} &&\bigl(\partial_n\tilde{H}(\x,p)\bigr)|_{\|\x\|=R}=0,\\[0.2cm] &&\biggl(\frac{D}{\kappa}\partial_n\tilde{H}(\x,p)+\tilde{H}(\x,p)\biggr)|_{\|\x\| =\rho}=1. \end{eqnarray} \end{subequations} Here $\Delta$ is the Laplace operator, $D$ is the diffusion coefficient, $\kappa$ is the intrinsic reactivity, and $\partial_n$ is the normal derivative directed outward from the domain $\Omega$. The rotational symmetry of the domain reduces the partial differential equation \eqref{eq:Helmholtz} to an ordinary differential equation with respect to the radial coordinate $r$, \begin{subequations} \begin{eqnarray} &&\tilde{H}''(r,p)+\frac{2}{r}\tilde{H}'(r,p)-\frac{p}{D}\tilde{H}(r,p)=0,\\[0.2cm] &&\tilde{H}'(R,p)=0,\\[0.2cm] &&\biggl(\tilde{H}(r,p)-\frac{D}{\kappa}\tilde{H}'(r,p)\biggr)_{r=\rho}=1, \end{eqnarray} \end{subequations} where primes denote derivatives with respect to $r$. The solution of this equation is \begin{equation} \label{eq:Hp} \tilde{H}(r,p)=\frac{g(r)}{g(\rho)-g'(\rho)\frac{D}{\kappa}} \,, \end{equation} where \begin{equation} \label{eq:gr} g(r)=\frac{R\sqrt{p/D}\cosh\xi-\sinh\xi}{r\sqrt{p/D}} \,, \end{equation} with $\xi=(R-r)\sqrt{p/D}$.
It follows that \begin{equation} \label{eq:dgr} g'(r)=\frac{(1-Rr p/D)\sinh\xi-\xi\cosh\xi}{r^2\sqrt{p/D}} \,. \end{equation} The mean reaction time is obtained from the Laplace-transformed density as \begin{equation} t_{\rm mean} = -\lim\limits_{p\to 0}\frac{\partial}{\partial p}\tilde H(r,p) \,, \end{equation} from which equation \eqref{eq:MFPT} follows. In the limit $R\to \infty$, equations \eqref{eq:Hp}, \eqref{eq:gr}, and \eqref{eq:dgr} yield \begin{equation} \label{eq:Htilde_inf} \tilde{H}_\infty(r,p)=\frac{(\rho/r) \, e^{-(r-\rho)\sqrt{p/D}}}{1+\bigl(1+ \rho \sqrt{p/D}\bigr) D/(\kappa \rho)} \,. \end{equation} Due to the transient character of three-dimensional diffusion, the related distribution is not normalised to unity, but $\tilde{H}_\infty(r,p=0)=(\rho/r)/(1 +D/(\kappa\rho))<1$ is the probability of reacting with the target before escaping to infinity. The inverse Laplace transform yields equation \eqref{eq:Ht_Rinf}. Using the relation $\tilde{S}_\infty(r,p) = (1 - \tilde{H}_\infty(r,p))/p$ and equation \eqref{eq:Htilde_inf}, one can also compute the survival probability $S_\infty(r,t)$ in the time domain \begin{eqnarray} \nonumber && S_\infty(r,t) = 1 - \frac{\rho \exp\bigl(-\frac{(r-\rho)^2}{4Dt}\bigr)}{r(1 + D/(\kappa\rho))} \biggl\{ \erfcx\biggl(\frac{r-\rho}{\sqrt{4Dt}}\biggr) \\ \label{eq:S_inf} && - \erfcx\biggl(\frac{r-\rho}{\sqrt{4Dt}} + \left(1 + \frac{\kappa \rho}{D}\right) \frac{\sqrt{Dt}}{\rho}\biggr)\biggr\} \,. \end{eqnarray} The Laplace inversion of equation \eqref{eq:Hp} can be performed by identifying the poles of the function $\tilde{H}(r,p)$ in the complex plane $p\in\C$, that is, by finding the zeros of the function \begin{equation} F(p)=g(\rho)-\frac{D}{\kappa}g'(\rho). \end{equation} For convenience, we introduce the dimensionless Laplace variable $s=(R-\rho)^2p/D$, so that \begin{align} \nonumber && F(p) = \frac{1}{\rho^2\sqrt{s}}\biggl(\bigl(\rho R+\mu(R-\rho)^2\bigr)\sqrt{s}\cosh \sqrt{s}\\ &&-\bigl(\rho(R-\rho)+\mu(R-\rho)^2-\mu R\rho s\bigr)\sinh\sqrt{s}\biggr), \end{align} where we defined the dimensionless ``dilatoriness'' parameter $\mu$ as \begin{equation} \label{mu} \mu=\frac{D}{\kappa(R-\rho)} \,. \end{equation} The perfectly reactive target with $\kappa=\infty$ corresponds to $\mu=0$. In other words, for high reactivity $\kappa$ the value of the dilatoriness $\mu$ is small and reactions are more likely to occur on first encounter, and vice versa. Note that a fully reflecting target with $\kappa=0$ is excluded from our analysis because the reaction time would be infinite. In other words, we always consider $0\leq\mu<\infty$. The solutions of the equation $F(p)=0$ lie on the negative real axis. Setting $s=- \alpha^2$, one gets the trigonometric equation \begin{equation} \label{eq:eq_alpha} \tan\alpha=\frac{\alpha\bigl(\rho R+\mu(R-\rho)^2\bigr)}{\rho(R-\rho)+\mu(R- \rho)^2+\mu R\rho\alpha^2} \,. \end{equation} This equation has infinitely many positive solutions that we denote as $\alpha_n$, with $n=0,1,2,\ldots$. Since the function on the right-hand side has the slope $\frac{\rho R+\mu(R-\rho)^2}{\rho(R-\rho)+\mu(R-\rho)^2}>1$ near $\alpha=0$, the smallest solution $\alpha_0$ lies in the interval $(0,\pi/2)$. More generally, the $n$th solution lies in the interval $(\pi n,\pi(n+1/2))$ and tends, for any fixed $\kappa$, to the left boundary of the interval as $n \to\infty$. Note that $\alpha =0$ (or $p=0$) is not a pole of the function $\tilde H(r,p)$. Once the poles are identified, we determine the residues by taking the derivative of $F(p)$ at the poles.
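In practice, the roots $\alpha_n$ are easily localised numerically, since each root is bracketed by the intervals given above; a minimal sketch (our own illustrative Python code, with assumed parameter values) reads:
\begin{verbatim}
# Numerical sketch (not from the paper): roots alpha_n of equation
# (eq:eq_alpha), bracketed on (0, pi/2) and (pi*n, pi*(n + 1/2)).
import numpy as np
from scipy.optimize import brentq

def alpha_roots(rho, R, mu, n_max):
    d = R - rho
    def f(a):   # tan(alpha) - rhs, rewritten to avoid the tan() poles
        return (np.sin(a) * (rho * d + mu * d ** 2 + mu * R * rho * a ** 2)
                - np.cos(a) * a * (rho * R + mu * d ** 2))
    eps = 1e-9
    roots = [brentq(f, eps, np.pi / 2 - eps)]
    for n in range(1, n_max):
        roots.append(brentq(f, np.pi * n + eps, np.pi * (n + 0.5) - eps))
    return np.array(roots)

print(alpha_roots(rho=0.01, R=1.0, mu=1.0, n_max=5))
\end{verbatim}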
Applying the theorem of residues to compute the inverse Laplace transform, we finally deduce the exact expression for the probability density $H(r,t)$ of the reaction time for a particle starting at a distance $r - \rho$ from the target, \begin{equation} \label{eq:Ht} H(r,t) = \sum\limits_{n=0}^\infty u_n(r) \, e^{- Dt \lambda_n} , \end{equation} with \begin{eqnarray} \lambda_n &=& \alpha_n^2/(R-\rho)^2 , \\ \label{eq:un} u_n(r) &=& c_n \, \frac{D}{(R-\rho)^2} \\ \nonumber &\times& \frac{R\alpha_n\cos\bigl(\alpha_n\frac{R-r}{R-\rho}\bigr)-(R-\rho)\sin \bigl(\alpha_n\frac{R-r}{R-\rho}\bigr)}{r\alpha_n}, \end{eqnarray} where the expansion coefficients $c_n$ are given explicitly by the residues as \begin{equation} \label{cn} c_n=\frac{2\rho^2\alpha_n^2}{(\rho R+\mu(R^2+\rho^2))\alpha_n\sin\alpha_n+\rho( \mu R\alpha_n^2-\rho)\cos\alpha_n} \,. \end{equation} \subsection*{Long-time behaviour of the RTD} When either the target radius $\rho$ is small or the dilatoriness parameter $\mu$ is large, the slope of the right-hand side of equation \eqref{eq:eq_alpha} is close to unity and thus the smallest eigenvalue $\alpha_0$ is close to zero. Expanding both sides of equation \eqref{eq:eq_alpha} into Taylor series, one finds the first-order approximation \begin{eqnarray} \alpha_0&\simeq&\frac{\rho}{\sqrt{\rho(R-\rho)+\mu(R-\rho)^2}}\\ \nonumber &\times&\biggl(\frac13+\mu R\rho\frac{\rho R+\mu(R-\rho)^2}{(\rho(R-\rho)+ \mu(R-\rho)^2)^2}\biggr)^{-1/2}+\ldots. \end{eqnarray} In particular, for small target radius, $\rho\to0$, at fixed dilatoriness $\mu$ we see that $\alpha_0\simeq\sqrt{3}(\rho/R)\mu^{-1/2}$. In turn, when $\mu\to \infty$ with fixed $\rho$, \begin{equation} \alpha_0\simeq\frac{\sqrt{3}\rho}{\sqrt{R^2+R\rho+\rho^2}} \,\mu^{-1/2}. \end{equation} In both cases $\alpha_0$ is proportional to $\rho$ and inversely proportional to $\sqrt{\mu}$. As a consequence, the smallest eigenvalue, which sets the slowest decay time, behaves as \begin{equation} \begin{split} \lambda_0 &\simeq \frac{3\kappa \rho^2}{D(R-\rho)(R^2+R\rho+\rho^2)} \simeq \frac{3\kappa \rho^2}{D R^3} \approx \frac{\kappa S_\rho}{DV} \,, \end{split} \end{equation} where in the intermediate approximation we ignored terms of order $\rho/R$ and higher, and we introduced the surface area $S_\rho=4\pi\rho^2$ of the target and the volume of the domain $V\approx4\pi R^3/3$. We also note that the approximation $c_0\approx3(\rho/R)^2/(\mu+3\rho/(2R))$ holds for $\rho\ll R$, and thus $c_0/\alpha_0^2\simeq1/(1+3\rho/(2\mu R))$, i.e., it is close to unity as long as the dilatoriness $\mu$ is not too small. Therefore the survival probability can be accurately approximated as $S(r,t) \simeq\exp(-Dt\alpha_0^2/R^2)$ for intermediate and large times. In this case the median reaction time becomes \begin{equation} t_{\rm median}\approx\frac{R^2\ln2}{D\alpha_0^2}\simeq\frac{\mu R^4\ln 2}{3D\rho^2}\simeq \frac{R^3\ln 2}{3\kappa\rho^2} , \end{equation} from which the relation $t_{\rm median}\approx t_{\rm mean}\ln2$ follows. This median value is close to the mean reaction time, which in the limit $\rho\ll R$ has the dominant behaviour $R^3/(3\kappa\rho^2)$ according to equation \eqref{eq:MFPT}. In turn, the most probable reaction time, which is determined by the higher-order eigenmodes, is orders of magnitude smaller. This behaviour is, however, only present for weakly reactive targets. In contrast, the median time for perfect reactions is usually close to the crossover time $t_c$, while $t_{\mathrm{mean}}$ is orders of magnitude larger.
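Putting the pieces together, the spectral sum \eqref{eq:Ht} can be evaluated directly from the roots computed in the previous sketch; the following illustrative code (our own, with the truncation level and all parameter values as assumptions) does this:
\begin{verbatim}
# Sketch of the spectral sum (eq:Ht), built from the roots alpha_n of
# the previous sketch; the truncation n_max is an assumption, adequate
# for times well above the most probable reaction time.
import numpy as np

def H_exact(t, r, rho, R, D, kappa, alphas):
    d = R - rho
    mu = D / (kappa * d)                 # dilatoriness, equation (mu)
    H = np.zeros_like(t)
    for a in alphas:
        c = (2 * rho ** 2 * a ** 2 /
             ((rho * R + mu * (R ** 2 + rho ** 2)) * a * np.sin(a)
              + rho * (mu * R * a ** 2 - rho) * np.cos(a)))
        u = (c * D / d ** 2
             * (R * a * np.cos(a * (R - r) / d)
                - d * np.sin(a * (R - r) / d)) / (r * a))
        H += u * np.exp(-D * t * (a / d) ** 2)
    return H

# usage (values assumed): mu = 1 corresponds to kappa = D/(R - rho)
# alphas = alpha_roots(rho=0.01, R=1.0, mu=1.0, n_max=200)
# H = H_exact(np.logspace(-6, 4, 200), 0.2, 0.01, 1.0, 1.0, 1.0/0.99, alphas)
\end{verbatim}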
\subsection*{Most probable reaction time} One may deduce from figure \ref{fig:Ht} that the region around the most probable reaction time is well described by the function in \eqref{eq:Ht_Rinf}, which corresponds to the solution in the limit $R \to \infty$. Hence, the most probable reaction time $t_{\rm mp}$ can be obtained with good accuracy by merely differentiating this function with respect to $t$ and setting the result equal to zero: \begin{equation} \label{tt} t_{\rm mp} = \frac{(r - \rho)^2}{6 D} z^2 \,, \end{equation} where $z$ is defined implicitly as the solution of the following, rather complicated transcendental equation \begin{align} & \beta^2 z^4 - 3(1 + \beta) z^2 + 9 \nonumber \\ & - \sqrt{\pi/6}\, \beta^3 z^5 \erfcx\biggl(\frac{\sqrt{3/2}}{z} + \frac{\beta z}{\sqrt{6}}\biggr) = 0, \label{eq:tmp_eq} \end{align} where $\erfcx(x)$ is the scaled complementary error function, and \begin{equation} \label{eq:beta} \beta = \frac{r-\rho}{\rho} \biggl(1 + \frac{\kappa \rho}{D} \biggr) \,. \end{equation} We denote the solution of this equation as $z_\beta$. When $\beta$ tends to $0$, a Taylor expansion of the left-hand side of \eqref{eq:tmp_eq} yields $9 - 3z^2 + O(\beta)$, from which $z_0 = \sqrt{3}$. In the opposite limit $\beta\to\infty$, one uses the asymptotic behaviour of the function $\erfcx(x)$ to get \begin{equation} \label{eq:zbeta_large} z_\beta \simeq 1 + \frac{3}{2\beta} + O(\beta^{-2}) . \end{equation} With some technical effort, one can prove that $z_\beta$ is a monotonically decreasing function of $\beta$ (see Fig. \ref{fig:zbeta}). We conclude that $z_\beta$ is bounded between $\sqrt{3}$ and $1$, so that the most probable time $t_{\rm mp}$ lies between $(r - \rho)^2/(6D)$ (for $\kappa \rho \gg 1$) and $(r-\rho)^2/(2D)$ (for $\kappa \rho \ll 1$). In other words, the most probable reaction time shows a remarkably weak dependence on the reactivity $\kappa$, as illustrated by Fig. \ref{fig:zbeta}. \begin{figure} \begin{center} \includegraphics[width=8.8cm]{figure5.eps} \end{center} \caption{ Weak dependence of the most probable reaction time on reactivity. The numerical solution $z_\beta$ of equation \eqref{eq:tmp_eq} as a function of $\beta$ (solid line) and its large-$\beta$ asymptotic behaviour \eqref{eq:zbeta_large} shown by the dashed line. } \label{fig:zbeta} \end{figure}
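Since $z_\beta$ is known to lie between $1$ and $\sqrt{3}$, equation \eqref{eq:tmp_eq} can be solved numerically by a derivative-free iteration started from the large-$\beta$ asymptotics \eqref{eq:zbeta_large}, capped at the upper bound $\sqrt{3}$. The following sketch (with our own function names; the scaled complementary error function is available as scipy.special.erfcx) illustrates this:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx
from scipy.optimize import root_scalar

def z_of_beta(beta):
    """Solve equation (eq:tmp_eq) for z, given beta > 0."""
    f = lambda z: (beta**2 * z**4 - 3*(1 + beta)*z**2 + 9
                   - np.sqrt(np.pi/6) * beta**3 * z**5
                     * erfcx(np.sqrt(1.5)/z + beta*z/np.sqrt(6)))
    z0 = min(np.sqrt(3.0), 1.0 + 1.5/beta)   # asymptotic initial guess
    return root_scalar(f, x0=z0, x1=0.99*z0, method='secant').root

def t_mp(r, rho, D, kappa):
    """Most probable reaction time (tt), with beta from (eq:beta)."""
    beta = (r - rho)/rho * (1.0 + kappa*rho/D)
    return (r - rho)**2 / (6.0*D) * z_of_beta(beta)**2
\end{verbatim}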
\subsection*{Moments of the reaction time} As we have already remarked in the main text, the positive moments of the RTD of an arbitrary order are dominated by the integration over the plateau-like region, such that their values appear close to the onset of the crossover to the final region -- the exponential decay of the RTD. The exact values of the positive moments of the random reaction time $\tau$ can be accessed directly by a mere differentiation of $\tilde{H}(r,p)$ with respect to the Laplace parameter $p$ and subsequently taking the limit $p = 0$: \begin{equation} \langle \tau^k \rangle = (-1)^k \lim\limits_{p\to 0} \frac{\partial^k \tilde{H}(r,p)}{\partial p^k} \,. \end{equation} For instance, a lengthy but straightforward calculation yields the exact formula for the variance of the reaction time: \begin{align} \label{eq:variance} & \langle \tau^2 \rangle - \langle \tau \rangle^2 = \frac{1}{90 D^2 r^2 \rho^4} \biggl\{ 10r^2(R^3-\rho^3)^2 (D/\kappa)^2 \\ \nonumber &+ 4\rho r^2 (5R^3 + 6R^2\rho + 3R\rho^2 + \rho^3)(R-\rho)^3 (D/\kappa) \\ \nonumber &+ \rho^2(r-\rho)\bigl(2R^3(5R^3\rho + 5R^3 r + 10 r^2\rho^2 - 18R^2 r\rho) \\ \nonumber & - \rho^2 r^2(\rho+r)(r^2+\rho^2) \bigr)\biggr\} , \end{align} from which one also gets the coefficient of variation, $\gamma = \sqrt{\langle \tau^2 \rangle - \langle \tau \rangle^2}/\langle \tau\rangle$, which characterises fluctuations of the random reaction time $\tau$ around its mean, i.e., the effective broadness of the reaction time density. As compared to Ref. \cite{Moffitt10}, the expressions \eqref{eq:MFPT} and \eqref{eq:variance} permit us to quantify the effect of both rate-controlling factors. For a perfectly reactive target, the coefficient of variation diverges as the starting point $r$ approaches $\rho$; in particular, one gets \begin{equation} \label{eq:gamma_perfect} \gamma^2 \simeq \frac{2\rho}{r - \rho} + O(1), \end{equation} when the target is small or the confining domain is large ($\rho \ll R$). In turn, for a partially reactive target, the squared coefficient of variation is finite in the limit $r\to\rho$ and for a small target reads \begin{equation} \label{eq:gamma_partial} \gamma^2 \simeq 1 + \frac{2\rho \kappa}{D} \,. \end{equation} The coefficient of variation $\gamma$ in equations \eqref{eq:gamma_perfect} and \eqref{eq:gamma_partial} exceeds $1$, allowing one to classify this distribution as broad, according to the standard terminology in statistics \cite{Mejia11,Mattos12,Mattos14}. In both cases, the asymptotic behaviour of $\gamma$ does not depend on the size of the confining domain, $R$. We turn next to the negative-order moments of the RTD, which are clearly dominated by the region close to the origin and hence probe the left tail of the distribution. The computation of the negative-order moments $\langle \tau^{-\nu} \rangle$ (with $\nu > 0$) involves the integral \begin{equation} \label{eq:tmean_harm0} \langle \tau^{-\nu} \rangle = \int\limits_0^\infty dt \, t^{-\nu} \, H(r,t) = \frac{1}{\Gamma(\nu)} \int\limits_0^\infty dp \, p^{\nu-1} \, \tilde{H}(r,p) . \end{equation} Although this integral is expressed in terms of the explicitly known Laplace transform $\tilde{H}(r,p)$ from equation \eqref{eq:Hp}, its analytical evaluation does not seem to be feasible. In turn, the integral takes a more tractable form in the limit $R\to\infty$ corresponding to diffusion in the exterior of a partially reactive target of radius $\rho$. Due to the transient character of diffusion in three dimensions, the probability density $H_\infty(r,t)$ is not normalised to $1$ as the molecule can escape to infinity. The integral of the density $H_\infty(r,t)$ thus yields the probability of reacting at the target: \begin{equation} \label{eq:q} q = \tilde{H}_\infty(r,p=0) = \frac{\rho/r}{1 + D/(\kappa \rho)} \,. \end{equation} The negative-order moments of the renormalised density $H_\infty(r,t)/q$ are \begin{equation} \label{eq:tau_mean} \langle \tau^{-\nu} \rangle_n = \frac{2}{\Gamma(\nu)} \biggl(\frac{D}{(r-\rho)^2}\biggr)^\nu \int\limits_0^\infty dz \, \frac{z^{2\nu-1} e^{-z}}{1+ z/\beta} \,, \end{equation} where $\beta$ was defined in \eqref{eq:beta}.
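The integral in \eqref{eq:tau_mean} is straightforward to evaluate by quadrature. A minimal sketch (our own names; for $0<\nu<1/2$ the integrand has an integrable singularity at $z=0$ that may need extra care) reads:
\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def neg_moment(nu, r, rho, D, kappa):
    """<tau^{-nu}>_n from (eq:tau_mean) for the exterior problem."""
    beta = (r - rho)/rho * (1.0 + kappa*rho/D)   # (eq:beta)
    integrand = lambda z: z**(2*nu - 1) * np.exp(-z) / (1.0 + z/beta)
    val, _ = quad(integrand, 0.0, np.inf)
    # as kappa (hence beta) grows, val tends to Gamma(2*nu)
    return 2.0/gamma(nu) * (D/(r - rho)**2)**nu * val
\end{verbatim}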
In the limit $\kappa \to \infty$, one finds \begin{equation} \label{eq:tau_mean_kinf} \langle \tau^{-\nu} \rangle_n = \frac{2}{\Gamma(\nu)} \biggl(\frac{D}{(r-\rho)^2}\biggr)^\nu \Gamma(2\nu) . \end{equation} While the mean reaction time diverges for the exterior problem, the negative-order moments are well defined and can thus characterise the reaction process. In particular, the harmonic mean reaction time, defined as \begin{equation} t_{\rm harm} = \frac{1}{\langle \tau^{-1} \rangle_n} \,, \end{equation} is deduced from \eqref{eq:tau_mean} for $\nu = 1$: \begin{equation} \label{eq:tharm} t_{\rm harm} = \frac{(r-\rho)^2}{2D} \, \beta^{-1}\biggl(1 - \beta e^{\beta} \Ei_1(\beta)\biggr)^{-1} \,, \end{equation} where $\Ei_1(z) = \int\nolimits_1^\infty dx \,e^{-zx}/x$ is the exponential integral. The dependence of the harmonic mean on the reactivity $\kappa$ is fully captured via $\beta$. In the limit $\kappa \to \infty$, this mean approaches \begin{equation} \label{eq:tharm_kinf} t_{\rm harm} = \frac{(r-\rho)^2}{2D} \,, \end{equation} and is thus of the order of the most probable time, representing the relevant time scale of the problem. In the opposite limit $\kappa \to 0$, $\beta$ approaches a constant, and the harmonic mean reaction time also reaches a constant. One can check that $t_{\rm harm}$ monotonically decreases as $\beta$ (or $\kappa$) grows. Figure \ref{fig:tharm} illustrates by dashed lines the behaviour of the function in \eqref{eq:tharm}, in particular, its approach to the limiting expression \eqref{eq:tharm_kinf} as $\kappa$ increases. One can appreciate a very weak dependence of the harmonic mean reaction time for the exterior problem on the reactivity $\kappa$. We also show the harmonic mean reaction time in the concentric domain, obtained by a numerical integration in equation \eqref{eq:tmean_harm0} with $\nu = 1$. This mean significantly depends on $\kappa$ and behaves as $1/\kappa$ for small $\kappa$. Given that the probability density $H(r,t)$ for the concentric domain can be accurately approximated by $H_\infty(r,t)$ at small times (see equation \eqref{eq:Ht_app}), the harmonic mean reaction time for the concentric domain can be approximated by the expression in \eqref{eq:tharm}, divided by the reaction probability $q$. This approximation, shown by solid lines, turns out to be remarkably accurate when the target radius $\rho$ is small as compared to the radius $R$ of the confining domain. We can also conclude that the significant variations of $t_{\rm harm}$ with $\kappa$ for the concentric domain come from those of $q$ with $\kappa$. \begin{figure*} \begin{center} \includegraphics[width=18cm]{figure6.eps} \end{center} \caption{ The harmonic mean reaction time, $t_{\rm harm}$, as a function of the dimensionless reactivity, $\kappa' = \kappa R/D$. {\bf (a)} An inner target has a radius $\rho/R = 0.1$ (blue curves) or $\rho/R = 0.01$ (red curves), and the release radius $r/R = 0.2$. Symbols show the results for the concentric domain, obtained by a numerical evaluation of the integral in equation \eqref{eq:tmean_harm0} with $\nu = 1$; dashed lines present the relation \eqref{eq:tharm} for the exterior problem; solid lines indicate the relation \eqref{eq:tharm} divided by the reaction probability $q$ from \eqref{eq:q}. The length and time units are fixed by setting $R=1$ and $R^2/D = 1$. {\bf (b)} The same but for $\rho/R = 0.01$ and two values of $r/R$: $0.2$ (blue curves) and $0.02$ (red curves).
} \label{fig:tharm} \end{figure*} Finally, we consider the time scale \begin{equation} t_{\mathrm{typ}} = t_0 \exp(\langle \ln(t/t_0) \rangle) \end{equation} (where $t_0$ is an arbitrary time scale), based on the mean logarithm of the reaction time -- an important characteristic of the reaction process, which emphasizes the typical values of $t$, i.e., values observed in most experiments. Indeed, the logarithm is a slowly-varying function and its average is supported by the most frequently encountered values of $t$, with the rare anomalously long or short reaction times being effectively filtered out. The estimates based on $t_{\mathrm{typ}}$ are widely used in the analysis of stochastic reaction-diffusion or transport processes in random environments (see, e.g., Refs. \cite{Evans11,Dean14} and references therein). Such an averaged value can be formally computed as \begin{eqnarray} && \langle \ln(\tau/t_0) \rangle = \sum\limits_{n=0}^\infty u_n(r) \int\limits_0^\infty dt \, \ln(t/t_0) e^{-Dt\lambda_n} \\ \nonumber &&~~~= -\sum\limits_{n=0}^\infty u_n(r) \frac{\gamma + \ln(Dt_0\lambda_n)}{D\lambda_n} \\ \nonumber &&~~~= \biggl(\ln \frac{(R-\rho)^2}{Dt_0}\biggr) - \gamma - \frac{(R-\rho)^2}{D} \sum\limits_{n=0}^\infty u_n(r) \frac{\ln \alpha_n^2}{\alpha_n^2} \,, \end{eqnarray} where $\gamma \approx 0.5772\ldots$ is the Euler constant, from which \begin{equation} \label{eq:tlog} t_{\rm typ} = \frac{(R-\rho)^2}{D} \exp\biggl(- \gamma - \frac{(R-\rho)^2}{D} \sum\limits_{n=0}^\infty u_n(r) \frac{\ln \alpha_n^2}{\alpha_n^2} \biggr), \end{equation} where $u_n(r)$ are given by \eqref{eq:un}. To get a more explicit dependence on the initial radius $r$, one can again consider the exterior problem ($R = \infty$). Rewriting equation \eqref{eq:tau_mean} as \begin{eqnarray} \langle \tau^{-\nu} \rangle_n &=& \biggl(\frac{D}{(r-\rho)^2}\biggr)^\nu \frac{2\Gamma(2\nu)}{\Gamma(\nu)} \\ \nonumber &\times& \biggl(1 - \frac{1}{\beta \Gamma(2\nu)} \int\limits_0^\infty dz \, \frac{z^{2\nu} e^{-z}}{1+ z/\beta} \biggr) \,, \end{eqnarray} and expanding it into a Taylor series as $\nu \to 0$, one gets \begin{equation} \langle \ln (\tau/t_0) \rangle_n = \biggl\{\ln \biggl(\frac{(r-\rho)^2}{D t_0}\biggr) + \gamma + 2e^{\beta} \Ei_1(\beta)\biggr\}, \end{equation} where the expectation is computed with respect to the renormalised density $H_\infty(r,t)/q$. We thus obtain the logarithmic mean time \begin{equation} \label{eq:Tlog} t_{\rm typ} = \frac{(r-\rho)^2}{D} \exp\bigl(\gamma + 2e^{\beta} \Ei_1(\beta)\bigr) \,. \end{equation} In the limit $\kappa\to\infty$, $e^{\beta} \Ei_1(\beta)$ vanishes as $1/\beta$, so that for a perfectly reactive target one gets \begin{equation} \label{eq:Tlog_kinf} t_{\rm typ} = \frac{(r-\rho)^2}{D} \, e^{\gamma}, \end{equation} which signifies that in the limit $\kappa = \infty$ the logarithmic mean time is comparable to the most probable reaction time $t_{\rm mp}$. Figure \ref{fig:tlog} shows the logarithmic mean reaction time, $t_{\rm typ}$, as a function of the dimensionless reactivity $\kappa R/D$. As for the harmonic mean in Fig. \ref{fig:tharm}, the results for a bounded concentric domain ($R = 1$) and for the exterior problem ($R = \infty$) differ significantly. The particular definition of the logarithmic time does not allow one to easily renormalise $t_{\rm typ}$ for the exterior domain to get an approximation for the bounded domain.
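Both exterior means admit a direct numerical evaluation, since $\Ei_1$ coincides with the exponential integral implemented as scipy.special.exp1. The following sketch (with our own function name) collects equations \eqref{eq:tharm} and \eqref{eq:Tlog}:
\begin{verbatim}
import numpy as np
from scipy.special import exp1

def exterior_means(r, rho, D, kappa):
    """Harmonic mean (eq:tharm) and logarithmic mean (eq:Tlog)
    for the exterior problem (R = infinity)."""
    beta = (r - rho)/rho * (1.0 + kappa*rho/D)     # (eq:beta)
    E = np.exp(beta) * exp1(beta)   # e^beta * Ei_1(beta); overflows
                                    # for beta larger than about 700
    t_harm = (r - rho)**2/(2.0*D) / (beta*(1.0 - beta*E))
    t_typ = (r - rho)**2/D * np.exp(np.euler_gamma + 2.0*E)
    return t_harm, t_typ
\end{verbatim}
In the limit of large $\kappa$ (large $\beta$), $\beta e^{\beta}\Ei_1(\beta)\to1$ and $e^{\beta}\Ei_1(\beta)\to0$, so the snippet reproduces the limiting expressions \eqref{eq:tharm_kinf} and \eqref{eq:Tlog_kinf}.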
\begin{figure} \begin{center} \includegraphics[width=8.8cm]{figure7.eps} \end{center} \caption{ The logarithmic mean reaction time, $t_{\rm typ}$, as a function of the dimensionless reactivity, $\kappa R/D$. $t_{\rm typ}$ is evaluated for an inner target of radius $\rho/R = 0.1$ (blue curves) or $\rho/R = 0.01$ (red curves), for the initial radius $r/R = 0.2$. Lines show the results for the concentric domain from equation \eqref{eq:tlog}, whereas symbols present the relation \eqref{eq:Tlog} for the exterior problem. The length and time units are fixed by setting $R=1$ and $R^2/D = 1$. } \label{fig:tlog} \end{figure} Finally, Fig. \ref{fig:Tall} compares several mean reaction times for the concentric domain. One can see that the behaviour of the median, the harmonic and the logarithmic means resembles that of the conventional (arithmetic) mean FPT. In particular, all these means behave as $1/\kappa$ at small $\kappa$, indicating that the reaction is limited by the kinetics. Only the most probable FPT exhibits a very different behaviour and shows almost no dependence on the reactivity $\kappa$, as discussed above. \begin{figure*} \begin{center} \includegraphics[width=18cm]{figure8.eps} \end{center} \caption{ Comparison of several means of the reaction time. They are evaluated as functions of the normalised reactivity, $\kappa R/D$, for an inner target of radius $\rho/R = 0.01$ and the starting point $r/R = 0.2$ {\bf (a)} and $r/R = 0.02$ {\bf (b)}. The length and time units are fixed by setting $R=1$ and $R^2/D = 1$. } \label{fig:Tall} \end{figure*} \subsection*{Reaction depth} The reaction depth \eqref{rdepth} is shown in figure \ref{fig:ht}. Note first that the reaction depths corresponding to the shortest characteristic time $t_{\rm mp}$ are evidently the shortest, amounting to only about $4\%$ for perfect reactions and $r$ close to $\rho$. For finite $\kappa$ or for starting points further away from the target, the reaction depth $F_r(t_{\rm mp})$ diminishes. In turn, in all cases the reaction depth connected to the intermediate plateau is dominant, increasingly so due to the reaction control at lower reactivities. \begin{figure*} \begin{center} \includegraphics[width=18cm]{figure9.eps} \end{center} \caption{ Cumulative distribution function of reaction times, $F_r(t)$. It is evaluated for the reaction on an inner target of radius $\rho/R=0.01$, with the starting point {\bf (a)} $r/R=0.2$ and {\bf (b)} $r/R=0.02$ and varying reactivity $\kappa$. Symbols indicate the relevant characteristic times: most probable time $t_{\rm mp}$ (asterisks), harmonic mean $t_{\rm harm}$ (squares), logarithmic mean $t_{\rm typ}$ (diamonds), median $t_{\rm median}$ (triangles), and mean time (circles). Note that some most probable times are not seen at this scale.} \label{fig:ht} \end{figure*} \section*{Data availability statement} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \section*{Code availability statement} All figures have been prepared by means of Matlab software. The plotted quantities have been computed from the explicit formulas provided in the paper, using custom Matlab routines. While the explicit form makes these numerical computations straightforward, the custom routines are available from the corresponding author upon request.
\section*{Background} Somatic human cells are diploid, that is, they contain 22 pairs of homologous chromosomes and a pair of sex chromosomes, one copy inherited from each parent. In order to fully characterize the genome of an individual, the reconstruction of the two distinct copies of each chromosome, called haplotypes, is essential \cite{levy2007}. The process of inferring the full haplotype information related to a cell is known as haplotyping, which consists in assigning all heterozygous Single Nucleotide Polymorphisms (SNPs) to exactly one of the two chromosome copies. SNPs are one of the most studied genetic variations, since they play a fundamental role in many medical applications, such as drug-design or disease susceptibility studies, as well as in characterizing the effects of SNPs on the expression of phenotypic traits \cite{hirschhorn2005}. This information can be valuable in several contexts, including linkage analysis, association studies, population genetics and clinical genetics \cite{snyder2015}. Obviously, the complete set of SNPs of an individual (i.e., his/her haplotypes) is generally more informative than analyzing single SNPs, especially in the study of complex disease susceptibility. Since a direct experimental reconstruction of haplotypes still requires huge sequencing efforts and is not cost-effective \cite{kuleshov2014}, computational approaches are extensively used to solve this problem. In particular, two classes of methods exist for haplotype phasing \cite{snyder2015}. The first class consists of statistical methods that try to infer haplotypes from genotypes sampled in a population. These data, combined with datasets describing the frequency by which the SNPs are usually correlated in different populations, can be used to reconstruct the haplotypes of an individual. The second class of methods directly leverages sequencing data: in this case, the main goal is to partition the entire set of reads into two sub-sets, exploiting the partial overlap among them in order to ultimately reconstruct the corresponding two different haplotypes of a diploid organism \cite{patterson2015}. The effectiveness of these methods was limited by the length of the reads produced by second-generation sequencing technologies, which might not be long enough to span a relevant number of SNP positions. This results in the reconstruction of short haplotype blocks \cite{zhang2002, daly2001}, since reads do not cover adjacent SNP positions adequately, hindering the possibility of reconstructing the full haplotypes. However, in recent years the development of new sequencing technologies paved the way to the advent of the third-generation of sequencing platforms, namely PacBio RS II (Pacific Biosciences of California Inc., Menlo Park, CA, USA) \cite{rhoads2015,roberts2013} and Oxford Nanopore MinION (Oxford Nanopore Ltd., Oxford, United Kingdom) \cite{jain2015}, which are able to produce reads covering several hundreds of kilobases and spanning different SNP loci at once. Unfortunately, the increased length comes at the cost of a decreased accuracy with respect to short and precise second-generation sequencing technologies, like NovaSeq (Illumina Inc., San Diego, CA, USA) \cite{quail2008}; thus, in order to obtain reliable data, the read coverage should be increased. Among the computational methods for haplotype assembly, the Minimum Error Correction (MEC) is one of the most successful approaches.
This problem consists in computing the two haplotypes that partition the sequencing reads into two disjoint sets with the least number of corrections to the SNP values \cite{wang2005}. Unfortunately, MEC was proven to be NP-hard \cite{lippert2002}. A weighted variant of MEC, named weighted MEC (wMEC), was then proposed in \cite{greenberg2004}: the weights represent the confidence for the presence of a sequencing error, while the correction process takes into account the weight associated with each SNP value of a read. These weights generally correspond to phred-scaled error probabilities and are very valuable for processing long reads generated by third-generation sequencing technologies, as they are prone to high sequencing error rates \cite{patterson2015}. Several assembly approaches have already been proposed in the literature. Due to the NP-hardness of the MEC problem, some methods exploit heuristic strategies. Two noteworthy approaches are ReFHap \cite{duitama2010}, which is based on a heuristic algorithm for the Max-Cut problem on graphs, and ProbHap \cite{kuleshov2014ProbHap}, which generalizes the MEC formulation by means of a probabilistic framework. In \cite{wang2005}, Wang \textit{et al.} proposed a meta-heuristic approach based on Genetic Algorithms (GAs) to address an extended version of the MEC problem, called MEC with Genotype Information (MEC/GI), which also considers genotyping data during the SNP correction process. A similar work was presented in \cite{wang2012}, where GAs are used to solve the MEC problem by using a fitness function based on a majority rule that takes into account the allele frequencies. The results shown in \cite{wang2012} are limited to coverages up to $10\times$ and a haplotype length equal to $700$. More recently, an evolutionary approach called Probabilistic Evolutionary Algorithm with Toggling for Haplotyping (PEATH) was proposed in \cite{na2018}. PEATH is based on the Estimation of Distribution Algorithm (EDA), which uses the promising individuals to build probabilistic models that are sampled to explore the search space. This meta-heuristic deals with noisy sequencing reads, reconstructing the haplotypes under the all-heterozygous assumption. These algorithms present some limitations, as in the case of ReFHap \cite{duitama2010}, ProbHap \cite{kuleshov2014ProbHap} and PEATH \cite{na2018}, which assume that the columns in the input matrix correspond to heterozygous sites \cite{chen2013}. However, this all-heterozygous assumption might be incorrect for some columns, and these algorithms can only deal with limited read coverages. For example, ProbHap \cite{kuleshov2014ProbHap} can handle long-read coverage values up to $20\times$, which is not appropriate for higher-coverage short-read datasets; on the other hand, it works better with very long reads at a relatively shallow coverage ($\leq 12\times$). More recently, a tool based on a dynamic programming approach, called WhatsHap, was presented \cite{patterson2015}. WhatsHap is based on a fixed-parameter tractable algorithm \cite{he2010, bonizzoni2015}, and leverages the long-range information of long reads; however, it can deal only with datasets of limited coverage up to $\sim 20 \times$. A parallel version of WhatsHap has been recently proposed in \cite{bracciali2016}, showing the capability to deal with higher coverages up to $\sim 25 \times$. An alternative approach, called HapCol \cite{pirola2015}, uses the uniform distribution of sequencing errors characterizing long reads.
In particular, HapCol exploits a new formulation of the wMEC problem, where the maximum number of corrections is bounded in every column and is computed from the expected error rate. HapCol can only deal with instances of relatively small coverages up to $\sim 25-30 \times$. To sum up, even though high-throughput DNA sequencing technologies are paving the way for valuable advances in clinical practice, analyzing such an amount of data still represents a challenging task. This applies especially to clinical settings, where accuracy and time constraints are critical \cite{rimmer2014}. In order to tackle the computational complexity of the haplotyping problem, in this work we propose GenHap, a novel computational method for haplotype assembly based on Genetic Algorithms (GAs). GenHap can efficiently solve large instances of the wMEC problem, yielding optimal solutions by means of a global search process, without any \textit{a priori} hypothesis about the sequencing error distribution in reads. The computational complexity of the problem is overcome by relying on a \textit{divide-et-impera} approach, which provides faster and more accurate solutions compared with the state-of-the-art haplotyping tools. The paper is structured as follows. In the next section, we briefly introduce the haplotyping problem, and describe in detail the GenHap methodology along with its implementation. Then, we show the computational performance of GenHap, extensively comparing it against HapCol. We finally provide some conclusive remarks and future improvements of this work. \section*{Methods} \subsection*{Problem formulation} Given $n$ positions on two homologous sequences belonging to a diploid organism and $m$ reads obtained after a sequencing experiment, we can reduce each read to a fragment vector $\mathbf{f} \in \{0,1,-\}^n$, where $0$ denotes a position that is equal to the reference sequence, $1$ denotes a SNP with respect to the reference sequence and $-$ indicates a position that is not covered by the read. We define a haplotype as a vector $\mathbf{h} \in \{0,1\}^n$, that is, the combination of SNPs and wild-type positions belonging to one of the two chromosomes. Given the two haplotypes $\mathbf{h}_{1}$ and $\mathbf{h}_{2}$---which refer to the first and second copy of the chromosome, respectively---a position $j$ (with $j \in \{1, \dots, n\}$) is said to be heterozygous if and only if $h_{1_j} \neq h_{2_j}$, otherwise $j$ is homozygous. Let $\mathbf{M}$ be the ``fragment matrix'', that is, the $m \times n$ matrix containing all fragments. Two distinct fragments $\mathbf{f}$ and $\mathbf{g}$ are said to be in conflict if there is a position $j$ (with $j \in \{1, \dots, n\}$) such that $f_{j} \neq g_{j}$ and $f_{j}, g_{j} \neq -$, otherwise they are in agreement. $\mathbf{M}$ is conflict-free if there are two different haplotypes $\mathbf{h}_1$ and $\mathbf{h}_2$ such that each row $M_i$ (with $i \in \{1, \dots, m\}$) is in agreement with either $\mathbf{h}_1$ or $\mathbf{h}_2$. The overall haplotype assembly process is outlined in Figure \ref{fig:ReadsMatHap}. \begin{figure}[t!] [width=0.97\textwidth]{images/Reads-Matrix-Haplotypes_new.pdf} \caption{Simplified workflow of the haplotype assembly process. Raw sequencing data are initially aligned, defining $m$ reads. Every position of the two chromosome copies is compared against a reference chromosome. The black solid points denote $n$ heterozygous positions, along with the corresponding nucleobases. 
The fragment matrix $\mathbf{M}$ is defined by assigning $1$ to SNP positions and $0$ to wild-type positions. To reconstruct the two haplotypes $\mathbf{h}_1$ and $\mathbf{h}_2$ characterized by the least number of corrections to the SNP values among the $2^n$ candidate haplotypes, the wMEC problem is solved by partitioning the matrix $\mathbf{M}$ into two disjoint matrices $\mathbf{M}_1$ and $\mathbf{M}_2$. } \label{fig:ReadsMatHap} \end{figure} We can extend the definition of heterozygous and homozygous sites to the column level as follows: a column $c$ of $\mathbf{M}$ is homozygous if all its values are either in $\{0,-\}$ or in $\{1,-\}$; on the contrary, $c$ is heterozygous if its values are in $\{0,1,-\}$, meaning that both a SNP and a wild-type value exist in that position. Finally, we can detect the case where two distinct fragments are in conflict, and measure their diversity by defining a distance $D(\cdot,\cdot)$ that counts the number of positions at which two fragments differ. Namely, given $\mathbf{f}=(M_{i1}, \dots, M_{in})$ and $\mathbf{g}=(M_{l1}, \dots, M_{ln})$ of $\mathbf{M}$ (with $i,l \in \{1, \dots, m\}$), we consider: \begin{equation} \label{eq:distance} D(\mathbf{f}, \mathbf{g}) = \sum_{j=1}^{n} d(f_j, g_j), \end{equation} where $d(f_j, g_j)$ is defined as: \begin{equation} \label{eq:HamDist} d(x,y) = \begin{cases} 1, & \mbox{if } x \neq y, \mbox{ } x \neq -, \mbox{ and } y \neq -\\ 0, & \mbox{otherwise} \end{cases}. \end{equation} Equation~(\ref{eq:distance}) defines the \textit{extended Hamming distance} between two ternary strings $\mathbf{f}$ and $\mathbf{g}$ \cite{chen2013}, denoting the total number of positions wherein both characters of $\mathbf{f}$ and $\mathbf{g}$ belong to $\{0,1\}$ but differ, according to Equation~(\ref{eq:HamDist}). If $\mathbf{M}$ is conflict-free, then it can be partitioned into two disjoint matrices $\mathbf{M}_1$ and $\mathbf{M}_2$, each one containing a set of conflict-free fragments. We can infer the two haplotypes $\mathbf{h}_1$ and $\mathbf{h}_2$ from $\mathbf{M}_1$ and $\mathbf{M}_2$, respectively, as follows: \begin{equation} \label{eq:haploCal} h_{k_j} = \begin{cases} 1, & \mbox{if } N_{1_j}(\mathbf{M}_k) \geq N_{0_j}(\mathbf{M}_k)\\ 0, & \mbox{otherwise} \end{cases}, \end{equation} where $j \in \{1, \dots, n\}$, $k\in \{1,2 \}$, and $N_{0_j}(\mathbf{M}_k)$, $N_{1_j}(\mathbf{M}_k)$ denote the number of $0$s and $1$s in the $j$-th column, respectively. In such a way, $\mathbf{N}_0(\mathbf{M}_k)$ is the vector consisting of the number of $0$s of each column $j$ using the reads of the partition $\mathbf{M}_k$, while $\mathbf{N}_1(\mathbf{M}_k)$ is the vector consisting of the number of $1$s of each column $j$ represented by the partition $\mathbf{M}_k$. In order to solve the wMEC problem, $\mathbf{N}_0$ and $\mathbf{N}_1$ are calculated using the $m \times n$ weight matrix $\mathbf{W}$, representing the weight associated with each position in each fragment. As a matter of fact, $\mathbf{W}$ can be divided into the two disjoint partitions $\mathbf{W}_1$ and $\mathbf{W}_2$, whose row indices correspond to those in $\mathbf{M}_1$ and $\mathbf{M}_2$, respectively.
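To make these definitions concrete, the following minimal Python sketch (illustrative names; fragments stored as numpy arrays over the characters 0, 1 and -, which is our own encoding choice) implements the extended Hamming distance \eqref{eq:distance} and the majority rule \eqref{eq:haploCal}:
\begin{verbatim}
import numpy as np

def distance(f, g):
    """Extended Hamming distance D(f,g) between two fragments."""
    both = (f != '-') & (g != '-')     # positions covered by both reads
    return int(np.sum(both & (f != g)))

def infer_haplotype(Mk):
    """Majority rule (eq:haploCal) applied column-wise to a partition."""
    ones = np.sum(Mk == '1', axis=0)   # N_1j(M_k)
    zeros = np.sum(Mk == '0', axis=0)  # N_0j(M_k)
    return np.where(ones >= zeros, '1', '0')
\end{verbatim}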
We can extend Equation~(\ref{eq:haploCal}) taking into account the weights as follows: \begin{equation} \label{eq:haploCalWeigh} h_{k_j} = \begin{cases} 1, & \mbox{if } N_{1_j}(\mathbf{W}_k) \geq N_{0_j}(\mathbf{W}_k)\\ 0, & \mbox{otherwise} \end{cases}, \end{equation} where $j \in \{1, \dots, n\}$, $k\in \{1,2 \}$, and $N_{0_j}(\mathbf{W}_k)$, $N_{1_j}(\mathbf{W}_k)$ denote the sum of the weights associated with the $0$ and $1$ elements in the $j$-th column, respectively. The distance $D(\cdot, \cdot)$ given in Equation~(\ref{eq:distance}) can be used also to evaluate the distance between a fragment and a haplotype, by means of the following error function: \begin{equation} \label{eq:fitness} \mathcal{E}(\mathbf{M}_1,\mathbf{M}_2, \mathbf{h}_1, \mathbf{h}_2) = \sum_{k=1}^{2} \sum_{\mathbf{f} \in \mathbf{M}_k} D(\mathbf{f}, \mathbf{h}_k). \end{equation} The best partitioning of $\mathbf{M}$ can be obtained by minimizing Equation (\ref{eq:fitness}), inferring $\mathbf{h}_1$ and $\mathbf{h}_2$ with the least number of errors. Equation~(\ref{eq:fitness}) is used as fitness function in GenHap. \subsection*{GenHap: haplotype assembly using GAs} GAs are population-based optimization strategies mimicking Darwinian processes \cite{goldberg1989,baker1985,miller1995}. In GAs, a population $P$ of randomly generated individuals undergoes a selection mechanism and is iteratively modified by means of genetic operators (i.e., crossover and mutation). Among the existing meta-heuristics for global optimization, GAs are the most suitable technique in this context thanks to the discrete structure of the candidate solutions. This structure is well-suited to efficiently solve the intrinsic combinatorial nature of the haplotype assembly problem. In the most common formulation of GAs, each individual $C_p$ (with $p \in \{1, \ldots, |P|\}$) encodes a possible solution of the optimization problem as a fixed-length string of characters taken from a finite alphabet. Based on a quality measure (i.e., the fitness value), each individual is involved in a selection process in which individuals characterized by good fitness values have a higher probability to be selected for the next iteration. Finally, the selected individuals undergo crossover and mutation operators to possibly improve offspring and to introduce new genetic material in the population. GenHap exploits a very simple and efficient structure for individuals, which encodes as a binary string a partition of the fragment matrix $\mathbf{M}$. In particular, each individual $C_p=[C_{p_1}, C_{p_2}, \ldots, C_{p_m}]$ (with $p \in \{1, \ldots, |P|\}$) is encoded as a circular array of size $m$ (i.e., the number of reads). In order to obtain the two partitions $\mathbf{M}_1$ and $\mathbf{M}_2$, $C_p$ is evaluated as follows: if the $i$-th bit is equal to $0$, then the read $i$ belongs to $\mathbf{M}_1$; otherwise, the read $i$ belongs to $\mathbf{M}_2$. Once the two partitions are computed, GenHap infers the haplotypes $\mathbf{h}_1$ and $\mathbf{h}_2$ by applying Equation~(\ref{eq:haploCalWeigh}). Finally, Equation~(\ref{eq:fitness}) is exploited to calculate the number of errors made by partitioning $\mathbf{M}$ as encoded by each individual of $P$. This procedure is iterated until the maximum number of iterations $T$ is reached, the number of errors is equal to $0$ or the fitness value of the best individual does not improve for $\theta = \lceil 0.25 \cdot T \rceil$ iterations. 
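The evaluation of one individual can thus be sketched as follows (again in Python with illustrative names, not the actual C++ implementation of GenHap): the bit string selects the rows of $\mathbf{M}$ and $\mathbf{W}$, the weighted majority rule \eqref{eq:haploCalWeigh} infers the two haplotypes, and Equation~(\ref{eq:fitness}) scores the partition.
\begin{verbatim}
import numpy as np

def fitness(bits, M, W):
    """Error E(M1,M2,h1,h2) of the partition encoded by `bits`.
    bits: length-m array of 0/1; M: (m,n) char array over {0,1,-};
    W: (m,n) float array of weights."""
    err = 0
    for k in (0, 1):
        Mk, Wk = M[bits == k], W[bits == k]
        n1 = np.sum(np.where(Mk == '1', Wk, 0.0), axis=0)  # N_1j(W_k)
        n0 = np.sum(np.where(Mk == '0', Wk, 0.0), axis=0)  # N_0j(W_k)
        hk = np.where(n1 >= n0, '1', '0')                  # (eq:haploCalWeigh)
        covered = Mk != '-'
        err += int(np.sum(covered & (Mk != hk)))           # sum of D(f, h_k)
    return err
\end{verbatim}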
Among the different selection mechanisms employed by GAs (e.g., roulette wheel \cite{goldberg1989}, ranking \cite{baker1985}, tournament \cite{miller1995}), GenHap exploits the tournament selection to create an intermediate population $P'$, starting from $P$. In each tournament, $\kappa$ individuals are randomly selected from $P$ and the individual characterized by the best fitness value is added to $P'$. The size of the tournament $\kappa$ is related to the selection pressure: if $\kappa$ is large, then the individuals characterized by worse fitness values have a low probability to be selected, therefore the variability of $P'$ might decrease. Afterwards, the genetic operators (i.e., crossover and mutation) are applied to the individuals belonging to $P'$ to obtain the offspring for the next iteration. GenHap exploits a single-point crossover with mixing ratio equal to $0.5$. Crossover is applied with a given probability $c_r$ and allows for the recombination of two parent individuals $C_{y}, C_{z} \in P'$ (for some $y, z \in \{1, \ldots, |P|\}$), generating two offspring that possibly have better characteristics with respect to their parents. In order to increase the variability of the individuals, one or more elements of the offspring can be modified by applying the mutation operator. GenHap makes use of a classic mutation in which the elements $C_{p_e}$ (with $e \in \{1, \dots, m\}$) of the individual can be flipped (i.e., from $0$ to $1$ or vice-versa) with probability $m_r$. Besides this mutation operator, GenHap implements an additional bit-flipping mutation in which a random number of consecutive elements of the individual is mutated according to probability $m_r$. This operator is applied if the fitness value of the best individual does not improve for a given number of iterations ($2$ in our tests). \begin{figure}[!ht] [width=0.95\textwidth]{images/MatrixPartition.pdf} \caption{Scheme of the partition of the input matrix: the input matrix $\mathbf{M} \in \{0,1,-\}^{m \times n}$ is split into sub-matrices consisting of $\gamma$ reads, generating $\Pi = \lfloor m/\gamma \rfloor$ sub-problems that are solved independently by a GA instance. The last sub-matrix could have a number of reads lower than $\gamma$.} \label{fig:matrixDivision} \end{figure} Finally, to prevent the quality of the best solution from decreasing during the optimization, GenHap exploits an elitism strategy, so that the best individual from the current population is copied into the next population without undergoing the genetic operators. Unlike the work in \cite{wang2005}, GenHap solves the wMEC problem instead of the unweighted MEC formulation, by means of Equation~(\ref{eq:haploCalWeigh}). Moreover, differently from the other heuristic strategies, such as ReFHap \cite{duitama2010} and ProbHap \cite{kuleshov2014ProbHap}, we did not assume the all-heterozygosity of the phased positions \cite{chen2013}. Under this assumption, every column corresponds to heterozygous sites, implying that $\mathbf{h}_1$ must be the complement of $\mathbf{h}_2$. In addition, since the required execution time as well as the problem difficulty increase with the number of reads and SNPs, to efficiently solve the wMEC problem we split the fragment matrix $\mathbf{M}$ into $\Pi = \lfloor m/\gamma \rfloor$ sub-matrices consisting of $\gamma$ reads (see Figure~\ref{fig:matrixDivision}). 
Following a \textit{divide-et-impera} approach \cite{maisto2015}, the computational complexity can be tackled by partitioning the entire problem into smaller and manageable sub-problems, each one solved by a GA that converges to a solution characterized by two sub-haplotypes with the least number of corrections to the SNP values. The solutions to the sub-problems achieved by the $\Pi$ GA instances are finally combined. This approach is feasible thanks to the long reads with higher coverage produced by the second- and third-generation sequencing technologies. As a matter of fact, highly overlapping reads allow us to partition the problem into easier sub-problems, avoiding the possibility of obtaining incorrect reconstructions during the merging phase. The parameter $\gamma$, used for the calculation of $\Pi$, depends on the coverage value and on the nature of the sequencing technology; its value must be set so as not to introduce artificial haplotype blocks that do not exist in the input matrix $\mathbf{M}$. Generally, the intervals where several independent historical recombination events occurred separate discrete blocks, revealing greater haplotype diversity for the regions spanning the blocks \cite{daly2001}. GenHap firstly detects all the haplotype blocks inside the fragment matrix $\mathbf{M}$ and then, in each block, it automatically sets $\gamma$ equal to the mean coverage of that block to partition the reads. Notice that GenHap solves each block sequentially and independently, obtaining a number of haplotype pairs equal to the number of detected blocks. In so doing, for each block GenHap executes $\Pi$ different GA optimizations, one for each sub-problem, calculating $2\cdot\Pi$ sub-haplotypes. The length of the individuals is equal to $\gamma$, except for the last sub-problem, which could have fewer than $\gamma$ reads (accordingly, the length of the corresponding individuals could be smaller than $\gamma$). Since the problem is divided into $\Pi$ sub-problems, two sub-problems referring to contiguous parts of the two chromosome copies might contain some overlapped positions that can be either homozygous or heterozygous. However, the reads covering an overlapped position might not be entirely included in the same sub-problem. For this reason, during the GA-based optimizations, all the phased positions are assumed to be heterozygous. If a position $j$ is homozygous (i.e., all the reads covering it have the same value, belonging to $\{0, -\}$ or $\{1, -\}$, in both sub-partitions), then only one of the two sub-haplotypes will have the correct value. This specific value is correctly assigned to the sub-haplotype covered by the highest number of reads by following a majority rule. As soon as the two sub-haplotypes are obtained, all the possible uncorrected heterozygous sites are removed and the correct homozygous values are assigned by checking the columns of the two sub-partitions. Finally, once all sub-problems in $\Pi$ are solved, GenHap recombines the sub-haplotypes to obtain the two entire haplotypes $\mathbf{h}_1$ and $\mathbf{h}_2$ of the block under analysis. GenHap is also able to find and mask the ambiguous positions by replacing the $0$ or $1$ value with a $X$ symbol. We highlight that an ambiguous position is a position covered only by the reads belonging to one of the two haplotypes.
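For illustration, the genetic operators described above can be sketched as follows (a simplified Python rendition with our own names, seeds and default rates, not the actual C++ implementation; the fitness() helper from the previous sketch is assumed):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(42)   # illustrative seed

def tournament(pop, fits, k_size):
    """Pick the best of k_size randomly chosen individuals."""
    idx = rng.choice(len(pop), size=k_size, replace=False)
    return pop[idx[np.argmin(fits[idx])]].copy()   # lower error = better

def crossover(a, b, c_r=0.9):
    """Single-point crossover applied with probability c_r."""
    if rng.random() < c_r:
        cut = rng.integers(1, len(a))
        a, b = (np.concatenate([a[:cut], b[cut:]]),
                np.concatenate([b[:cut], a[cut:]]))
    return a, b

def mutate(ind, m_r=0.05):
    """Classic element-wise bit-flip mutation with probability m_r."""
    flip = rng.random(len(ind)) < m_r
    ind[flip] = 1 - ind[flip]
    return ind
\end{verbatim}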
\subsection*{Implementation} In order to efficiently solve the wMEC problem and tackle its computational complexity, GenHap detects the haplotype blocks inside the matrix $\mathbf{M}$ and then, for each block, it splits the portion of $\mathbf{M}$ into $\Pi$ sub-matrices consisting of $\gamma$ reads. In so doing, the convergence speed of the GA is increased, thanks to the lower number of reads to partition in each sub-problem with respect to the total number of reads of the whole problem. As shown in Figure~\ref{fig:MS_GenHap}, the $\Pi$ sub-matrices are processed in parallel by means of a \textit{divide-et-impera} approach that exploits a Master-Slave distributed programming paradigm \cite{tangherloni2018PDP} to speed up the overall execution of GenHap. This strategy allowed us to distribute the computation in the presence of multiple cores. As a matter of fact, GenHap works by partitioning the initial set of reads into sub-sets and solving them by executing different GA instances. This strategy can be exploited in GenHap, as it solves the wMEC problem working on the rows of the fragment matrix $\mathbf{M}$; on the contrary, HapCol works considering the columns of $\mathbf{M}$, which cannot be independently processed in parallel. \begin{figure}[ht] [width=0.95\textwidth]{images/GenHapMasterSlaveScheme.pdf} \caption{Scheme of the Master-Slave implementation of GenHap: the Master process orchestrates all the $\Sigma$ Slaves, sending one or more sub-partitions to each Slave, which then solves the assigned wMEC sub-task.} \label{fig:MS_GenHap} \end{figure} The functioning of our Master-Slave implementation can be summarized as follows: \begin{enumerate} \item the Master allocates the resources and detects the haplotype blocks inside the fragment matrix. For each detected block it partitions the portion of the matrix $\mathbf{M}$ into $\Pi$ sub-matrices and offloads the data onto the available $\Sigma$ Slaves (in real scenarios, $\Sigma \ll \Pi$). During this phase, each Slave generates the initial population of the GA; \item the $\sigma$-th Slave (with $\sigma \in \{1, \dots, \Sigma\}$) executes the assigned wMEC sub-task, running the GA for either $\theta$ non-improving iterations or $T$ maximum iterations, independently of the other Slaves; \item the process is iterated until all the wMEC sub-tasks are terminated; \item the Master recombines the sub-solutions received from the Slaves, and returns the complete wMEC solution for the block under analysis. \end{enumerate} GenHap was entirely developed using the C++ programming language, exploiting the Message Passing Interface (MPI) specifications to leverage multi-core Central Processing Units (CPUs). \section*{Results} In this section we first describe the synthetic and real datasets used during the tests and present the results obtained to identify the best GA setting. Then, we discuss the performance achieved by GenHap with respect to HapCol \cite{pirola2015}, which was previously shown to be more efficient than the other existing methods for the haplotype assembly problem, both in terms of memory consumption and execution time. \subsection*{The analyzed datasets} In order to test the performance of GenHap, we generated two synthetic (yet realistic) datasets, each one consisting of instances obtained from a specific sequencing technology.
In particular, we considered the Roche/454 genome sequencer (Roche AG, Basel, Switzerland), representing one of the next-generation sequencing (NGS) systems able to produce long and precise reads, and the PacBio RS II sequencer \cite{roberts2013, carneiro2012}, which is an emerging third-generation sequencing technology. Note that the reads produced by the Roche/454 sequencer are approximately $9$ times shorter than those generated by the PacBio RS II system. In order to generate the datasets, we exploited the General Error-Model based SIMulator (GemSIM) toolbox \cite{mcelroy2012}. GemSIM is a software tool able to generate \textit{in silico} realistic sequencing data. It relies on empirical error models and distributions learned from real NGS data, and simulates both single- and paired-end reads from a single genome, collection of genomes, or set of related haplotypes. GemSIM can in principle simulate data from any sequencing technology producing output data encoded in the FASTQ format \cite{cock2009}, for raw reads, and Sequence Alignment/Map (SAM), for aligned reads. In this work, we exploited the error model for the Roche/454 sequencer, already available in GemSIM, and defined an additional error model for the PacBio RS II technology. The synthetic reads were generated from the reference sequence of the human chromosome 22 (UCSC Genome Browser, GRCh37/hg19 Feb. 2009 assembly \cite{casper2017ucsc}), in which random SNPs were inserted. We exploited the GemHaps tool included in GemSIM \cite{mcelroy2012} to generate a haplotype file starting from a given genome sequence, and specifying the number as well as the frequency of SNPs in each haplotype, denoted by $\#\text{SNPs}$ and $f_\text{SNPs}$, respectively. Note that the SNP positions were randomly determined. Then, the resulting haplotype file was processed by GemReads, together with an error model file (generated by GemErr or supplied in GemSIM), a FASTA genome file (or directory), and the selected quality score offset. The resulting SAM file was converted into the compressed Binary Alignment/Map (BAM) format for a more efficient manipulation \cite{li2009}. In order to store the SNPs, we exploited the Variant Call Format (VCF) \cite{danecek2011}, which is the most widely used format combining DNA polymorphism data, insertions and deletions, as well as structural variants. Lastly, the BAM and VCF files were processed to produce a WhatsHap Input Format (WIF) file \cite{patterson2015}, which is the input of GenHap. The two synthetic datasets are characterized by the following features: \textit{i}) $\#\text{SNPs} \in \{500, 1000, 5000, 10000, 20000\}$ (equally distributed over the two haplotypes); \textit{ii}) coverage $\text{cov} \in \{\sim\! 30\times$, $\sim\!60\times\}$; \textit{iii}) average $f_\text{SNPs} \in \{100, 200\}$, which means one SNP every $100$bp or $200$bp \cite{nachman2001single,gabriel2002structure}, varying the portion of genome over which the reads were generated. Read lengths were set to $600$bp and $5000$bp for the Roche/454 and the PacBio RS II sequencers, respectively.
The number of reads was automatically calculated according to the value of $\text{cov}$ and the sequencing technologies, by means of the following relationship: \begin{equation} \label{eq:reads} \#\text{reads} = \text{cov} \cdot\frac{len(\text{genome})}{len(\text{read})}, \end{equation} where $len(\text{genome})$ represents the length of the considered genome, which starts at a given position $x$ and ends at position $y = x + f_\text{SNPs}\cdot\#\text{SNPs}$. In order to test the performance of GenHap on real sequencing data, we exploited a WIF input file present in \cite{Beretta170225}, which was generated starting from high-quality SNP calls and sequencing data made publicly available by the Genome in a Bottle (GIAB) Consortium \cite{zook2014}. In particular, we exploited data produced by the PacBio technology and limited to the chromosome $22$ of the individual NA12878. Moreover, we tested GenHap on an additional real dataset available at \cite{pacbio54}. As for the previous dataset, we limited our analysis to chromosome $22$. The available BAM file--containing high-coverage long reads produced with the PacBio RS II sequencing technology--and the VCF file were processed to obtain a WIF input file as described above. \subsection*{GA setting analysis} \begin{figure}[t!] [width=0.95\textwidth]{images/ABF.pdf} \caption{Comparison of the ABF achieved by GenHap with the best parameterizations found for each value of $|P|$ tested here. The ABF was computed over the results of the optimization of instances characterized by $\#\text{SNPs} \in \{500, 1000, 5000\}$ and $f_\text{SNPs}=100$.} \label{fig:settingscomparison} \end{figure} As a first step, the performance of GenHap was evaluated to determine the best settings for the haplotype assembly problem. We considered different instances for the two sequencing technologies employed (i.e., Roche/454 and PacBio RS II), and we varied the settings of GenHap used throughout the optimization process, as follows: \begin{itemize} \item size of the population $|P| \in \{ 50, 100, 150, 200 \}$; \item crossover rate $c_r \in \{ 0.8, 0.85, 0.9, 0.95 \}$; \item mutation rate $m_r \in \{ 0.01, 0.05, 0.1, 0.15 \}$. \end{itemize} In all tests, the size of the tournament is fixed to $\kappa = 0.1 \cdot |P|$ and the maximum number of iterations is $T = 100$. A total of $6$ different instances ($3$ resembling the Roche/454 sequencer and $3$ the PacBio RS II sequencer) were generated by considering $\#\text{SNPs} \in \{500, 1000, 5000\}$ and $f_\text{SNPs}=100$. We tested all combinations of these parameter values, leading to $64$ different settings and a total number of $64\times 6 =384$ GenHap executions. These tests highlighted that, for each value of $|P|$, the best settings are: \begin{enumerate} \item $|P|=50$, $c_r=0.9$, $m_r=0.05$; \item $|P|=100$, $c_r=0.9$, $m_r=0.05$; \item $|P|=150$, $c_r=0.95$, $m_r=0.05$; \item $|P|=200$, $c_r=0.95$, $m_r=0.05 $. \end{enumerate} Figure \ref{fig:settingscomparison} shows the comparison of the performance achieved by GenHap with the settings listed above, where the Average Best Fitness (ABF) was computed by taking into account, at each iteration, the fitness value of the best individuals over the $6$ optimization processes. Even though all settings allowed GenHap to achieve almost the same final ABF value, we observe that the convergence speed increases with the size of the population. On the other hand, the running time of GenHap also increases with the size of the population.
In particular, the executions lasted on average $1.41$ s, $2.33$ s, $3.52$ s, $4.95$ s with $|P| \in \{50, 100, 150, 200\}$, respectively, running on one node of the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University, Nashville, TN, USA. The node is equipped with $2$ Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} E5-2630 v3 ($8$ cores at $2.40$ GHz) CPUs, $240$ GB of RAM and CentOS 7.0 operating system. To perform the tests we exploited all $8$ physical cores of a single CPU. Considering these preliminary results, we selected the parameter settings $|P|=100$, $c_r=0.9$, $m_r=0.05$, as the best trade-off between convergence speed (in terms of ABF) and running time. \subsection*{Performance of GenHap} \begin{table}[t] \scriptsize \centering \caption{Comparison of GenHap and HapCol on the Roche/454 dataset with $\text{cov} \simeq 30\times$. The performances were evaluated both in terms of \textit{HE} and running time. The N/A symbol denotes that HapCol was not able to complete the execution on all the $15$ instances.} \label{tab:GenHapVSHapCol_r454} \begin{tabular}{p{.8cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}} \hline\hline & & & \multicolumn{3}{c|}{GenHap} & \multicolumn{3}{c}{HapCol} \\ \hline $f_{\text{SNPs}}$ & $\text{cov}$ & $\#\text{SNPs}$ & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} \\ \hline \multirow{4}{*}{100} & \multirow{4}{*}{$\sim 30\times$} & 500 & 0.04 & 0.08 & 0.21 & 0.00 & 0.00 & 0.62 \\ & & 1000 & 0.09 & 0.08 & 0.36 & 0.00 & 0.00 & 1.20 \\ & & 5000 & 0.18 & 0.06 & 3.17 & 0.01 & 0.03 & 5.35 \\ & & 10000 & 2.50 & 5.52 & 10.33 & 6.55 & 16.38 & 10.23 \\ \hline \multirow{4}{*}{200} & \multirow{4}{*}{$\sim 30\times$} & 500 & 0.09 & 0.14 & 0.34 & 0.00 & 0.00 & 0.50 \\ & & 1000 & 0.09 & 0.10 & 0.63 & 0.01 & 0.03 & 0.96 \\ & & 5000 & 3.61 & 3.43 & 6.07 & 0.38 & 0.78 & 4.90 \\ & & 10000 & 2.15 & 1.62 & 17.24 & N/A & N/A & N/A \\ \hline\hline \end{tabular} \end{table} \begin{figure}[t] [width=0.95\textwidth]{images/r454_time.pdf} \caption{Comparison of the average running time required by GenHap (blue bars) and HapCol (red bars) computed over $15$ instances for each value of $\#\text{SNPs} \in \{500, 1000, 5000\}$ obtained with the Roche/454 sequencing technology, $\text{cov} \simeq 30\times$ and $f_{\text{SNPs}}=100$ (top) and $f_{\text{SNPs}}=200$ (bottom). In the case of $f_{\text{SNPs}}=200$ and $\#\text{SNPs}=10000$, HapCol was not able to complete the execution on all the $15$ instances.} \label{fig:timer454} \end{figure} The performance achieved by GenHap was compared with those obtained by HapCol \cite{pirola2015}, which was shown to outperform the main available haplotyping approaches. In particular, we exploited here a more recent version of HapCol capable of dealing with haplotype blocks \cite{Beretta170225}. The same computational platform used for the setting analysis of GenHap was used to execute all the tests on the two synthetic datasets described above. \begin{table}[t] \scriptsize \centering \caption{Comparison of GenHap and HapCol on the PacBio RS II dataset with $\text{cov} \simeq 30\times$. 
The performances were evaluated both in terms of \textit{HE} and running time.} \label{tab:GenHapVSHapCol_PacBio} \begin{tabular}{p{.8cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}} \hline\hline & & & \multicolumn{3}{c|}{GenHap} & \multicolumn{3}{c}{HapCol} \\ \hline $f_{\text{SNPs}}$ & $\text{cov}$ & $\#\text{SNPs}$ & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} \\ \hline \multirow{5}{1.5cm}{100} & \multirow{5}{*}{$\sim 30\times$} & 500 & 2.04 & 0.59 & 0.11 & 2.42 & 0.78 & 2.24 \\ & & 1000 & 1.27 & 0.51 & 0.19 & 1.20 & 0.61 & 1.89 \\ & & 5000 & 1.06 & 0.19 & 0.94 & 0.60 & 0.17 & 9.04 \\ & & 10000 & 0.96 & 0.19 & 2.50 & 0.43 & 0.11 & 15.51 \\ & & 20000 & 1.02 & 0.14 & 8.49 & 0.41 & 0.11 & 31.13 \\ \hline \multirow{5}{1.5cm}{200} & \multirow{5}{*}{$\sim 30\times$} & 500 & 2.09 & 0.52 & 0.14 & 1.73 & 0.42 & 0.95 \\ & & 1000 & 1.70 & 0.24 & 0.22 & 1.09 & 0.41 & 1.84 \\ & & 5000 & 1.05 & 0.18 & 1.39 & 0.54 & 0.11 & 7.10 \\ & & 10000 & 1.13 & 0.18 & 4.09 & 0.51 & 0.17 & 14.13 \\ & & 20000 & 1.02 & 0.13 & 13.86 & 0.33 & 0.05 & 27.55 \\ \hline\hline \end{tabular} \end{table} \begin{figure}[t!] [width=0.95\textwidth]{images/PacBio_time.pdf} \caption{Comparison of the average running time required by GenHap (blue bars) and HapCol (red bars) computed over $15$ instances for each $\#\text{SNPs} \in \{500, 1000, 5000, 10000, 20000\}$ obtained with the PacBio RS II sequencing technology, $\text{cov} \simeq 30\times$, $f_{\text{SNPs}}=100$ (top) and $f_{\text{SNPs}}=200$ (bottom).} \label{fig:timePacBio} \end{figure} We stress the fact that GenHap was compared against HapCol only on the instances with $\text{cov} \simeq 30\times$, since HapCol is not capable of solving instances with higher coverage values (i.e., the algorithm execution halts when a column covered by more than $30$ reads is found). Considering the two sequencing technologies, we generated $15$ different instances for each value of $\#\text{SNPs}$ and $f_\text{SNPs}$. The performance was then evaluated by computing (\textit{i}) the average haplotype error rate (\textit{HE}), which represents the percentage of SNPs erroneously assigned with respect to the ground truth \cite{andres2007}, and (\textit{ii}) the average running time. As shown in Table \ref{tab:GenHapVSHapCol_r454}, in the instances generated using the Roche/454 sequencing technology with $f_{\text{SNPs}} = 100$, both GenHap and HapCol reconstructed the two haplotypes, achieving an average \textit{HE} lower than $0.2\%$ with a negligible standard deviation in the case of $\#\text{SNPs} \in \{500, 1000, 5000\}$. GenHap inferred the haplotypes characterized by $10000$ SNPs with an average \textit{HE} of $2.5\%$ and a standard deviation around $5.5\%$, while HapCol obtained an average \textit{HE} equal to $6.55\%$ with a standard deviation around $16\%$. As far as the running time is concerned, GenHap outperformed HapCol in all tests except in the case of $\#\text{SNPs}=10000$, as shown in Figure \ref{fig:timer454}, being around $4\times$ faster in reconstructing the haplotypes. In the case of $\#\text{SNPs}=10000$, the running times are comparable, but GenHap obtains a lower \textit{HE} than HapCol. In the instances generated using $f_{\text{SNPs}} = 200$ and $\#\text{SNPs} \in \{500, 1000\}$, both GenHap and HapCol reconstructed the two haplotypes, achieving an average \textit{HE} lower than $0.1\%$ with a negligible standard deviation.
When $\#\text{SNPs} \in \{5000, 10000\}$ are taken into account, GenHap inferred the haplotype pairs with an average \textit{HE} lower than $3.65\%$ and a standard deviation lower than $3.5\%$. Notice that HapCol was not able to complete the execution on all the $15$ instances characterized by $10000$ SNPs. As in the case of the instances with $f_{\text{SNPs}} = 100$, GenHap is faster than HapCol in all tests, except in the case of $\#\text{SNPs}=5000$. Regarding the PacBio RS II sequencing dataset, since this technology is characterized by a higher error rate with respect to the Roche/454 sequencer, both GenHap and HapCol reconstructed the two haplotypes with higher \textit{HE} values (see Table \ref{tab:GenHapVSHapCol_PacBio}). Nonetheless, the average \textit{HE} value is lower than $2.5\%$ with a standard deviation lower than $1\%$ in all cases. Figure \ref{fig:timePacBio} shows the running time required by GenHap and HapCol to reconstruct the haplotypes. As in the case of the Roche/454 dataset, the running time increases with $\#\text{SNPs}$, but GenHap always outperforms HapCol, achieving up to a $20\times$ speed-up. \begin{table}[t] \scriptsize \centering \caption{Results obtained by GenHap on the Roche/454 dataset with $\text{cov} \simeq 60\times$. The performance was evaluated in terms of both \textit{HE} (expressed as a percentage) and running time.} \label{tab:GenHap60_r454} \begin{tabular}{p{1.3cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}} \hline\hline & & & \multicolumn{3}{c}{GenHap} \\ \hline $f_{\text{SNPs}}$ & $\text{cov}$ & $\#\text{SNPs}$ & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} \\ \hline \multirow{4}{*}{100} & \multirow{4}{*}{$\sim 60\times$} & 500 & 0.00 & 0.00 & 0.26 \\ & & 1000 & 0.05 & 0.05 & 0.54 \\ & & 5000 & 0.10 & 0.03 & 6.57 \\ & & 10000 & 0.15 & 0.03 & 21.13 \\ \hline \multirow{4}{*}{200} & \multirow{4}{*}{$\sim 60\times$} & 500 & 0.00 & 0.00 & 0.37 \\ & & 1000 & 0.07 & 0.09 & 0.89 \\ & & 5000 & 1.13 & 1.72 & 11.17 \\ & & 10000 & 2.00 & 1.02 & 53.77 \\ \hline\hline \end{tabular} \end{table} Table \ref{tab:GenHap60_r454} lists the results obtained by GenHap on the instances of the Roche/454 dataset characterized by $\text{cov}\simeq 60\times$, $\#\text{SNPs} \in \{500, 1000, 5000, 10000\}$ and $f_{\text{SNPs}} \in \{100, 200\}$. In all tests with $f_{\text{SNPs}} = 100$, GenHap was always able to infer the two haplotypes with high accuracy; indeed, the average \textit{HE} values never exceed $0.15\%$. In the instances generated with $f_{\text{SNPs}} = 200$, GenHap reconstructed the haplotype pairs with an average \textit{HE} of at most $2\%$. Compared with the corresponding $\text{cov} \simeq 30\times$ instances, this interesting result shows that higher coverages can help during the reconstruction phase, allowing GenHap to infer more precise haplotypes. Regarding the PacBio RS II dataset, the achieved \textit{HE} is on average lower than $1.25\%$ with a standard deviation $\leq 0.4\%$ (see Table \ref{tab:GenHap60_PacBio}). In particular, the average \textit{HE} decreases when the value of $\#\text{SNPs}$ or the coverage increases, thus suggesting that higher coverage values can considerably help in achieving a correct reconstruction of the two haplotypes. In contrast, the running time increases at most linearly with respect to the coverage (see Table \ref{tab:GenHap60_PacBio}). As a first test on real sequencing data, we exploited a WIF input file codifying the SNPs of chromosome $22$, generated from high-quality sequencing data made publicly available by the GIAB Consortium. 
This instance contains $\#\text{SNPs}\simeq27000$ and $\#\text{reads}\simeq80000$ with average and maximum coverages equal to $22$ and $25$, respectively. In \cite{Beretta170225}, in order to down-sample the instances to the target maximum coverage of $30\times$ allowed by HapCol, the authors applied a greedy-based pruning strategy. This procedure selects the reads characterized by high base-calling quality. GenHap detected and inferred the $305$ different haplotype blocks in less than $10$ minutes, obtaining approximately an $87\%$ agreement with respect to the HapCol solution. This agreement was calculated considering every SNP of both haplotypes in each block. We also tested GenHap on chromosome $22$ sequenced using the PacBio RS II technology (publicly available at \cite{pacbio54}). This instance contains $\#\text{SNPs}\simeq28000$ and $\#\text{reads}\simeq140000$ with average and maximum coverages equal to $29$ and $565$, respectively. GenHap reconstructed the two haplotypes in about $10$ minutes. This result shows that GenHap is capable of dealing with instances characterized by high coverages, avoiding pruning pre-processing steps. \begin{table}[t] \scriptsize \centering \caption{Results obtained by GenHap on the PacBio RS II dataset with $\text{cov} \simeq 60\times$. The performance was evaluated in terms of both \textit{HE} (expressed as a percentage) and running time.} \label{tab:GenHap60_PacBio} \begin{tabular}{p{1.3cm}p{1cm}p{1cm}|p{1cm}p{1cm}p{1cm}} \hline\hline & & & \multicolumn{3}{c}{GenHap} \\ \hline $f_{\text{SNPs}}$ & $\text{cov}$ & $\#\text{SNPs}$ & Avg \textit{HE} & Std dev \textit{HE} & Avg Running Time {[}s{]} \\ \hline \multirow{5}{*}{100} & \multirow{5}{*}{$\sim 60\times$} & 500 & 1.22 & 0.36 & 0.17 \\ & & 1000 & 0.88 & 0.21 & 0.33 \\ & & 5000 & 0.56 & 0.10 & 1.81 \\ & & 10000 & 0.62 & 0.10 & 5.34 \\ & & 20000 & 0.60 & 0.07 & 17.14 \\ \hline \multirow{5}{*}{200} & \multirow{5}{*}{$\sim 60\times$} & 500 & 1.22 & 0.37 & 0.22 \\ & & 1000 & 0.79 & 0.27 & 0.36 \\ & & 5000 & 0.53 & 0.09 & 3.26 \\ & & 10000 & 0.45 & 0.08 & 8.01 \\ & & 20000 & 0.49 & 0.05 & 27.15 \\ \hline\hline \end{tabular} \end{table} \section*{Discussion and conclusions} In this paper we presented GenHap, a novel computational method based on GAs to solve the haplotyping problem, which is one of the hot topics in Computational Biology and Bioinformatics. The performance of GenHap was evaluated by considering synthetic (yet realistic) read datasets resembling the outputs produced by the Roche/454 and PacBio RS II sequencers. The solutions yielded by GenHap are accurate, independently of the number, frequency, and coverage of SNPs in the input instances, and without any \textit{a priori} hypothesis about the sequencing error distribution in the reads. In practice, our method was conceived to deal with data characterized by high coverage and long reads, produced by recent sequencing techniques. The read accuracy achieved by novel sequencing technologies, such as PacBio RS II and Oxford Nanopore MinION, may be useful for several practical applications. In the case of SNP detection and haplotype phasing in human samples, besides read accuracy, a high coverage is required to reduce possible errors due to few reads that convey conflicting information \cite{jain2016}. In \cite{sims2014}, the authors argued that an average coverage higher than $30\times$ is the \textit{de facto} standard. 
Indeed, the first human genome that was sequenced using Illumina short-read technology showed that, although almost all homozygous SNPs are detected at a $15\times$ average coverage, an average depth of $33\times$ is required to detect the same proportion of heterozygous SNPs. GenHap was implemented with a distributed strategy that exploits a Master-Slave computing paradigm in order to speed up the required computations. We showed that GenHap is remarkably faster than HapCol \cite{pirola2015}, achieving approximately a $4\times$ speed-up in the case of the Roche/454 instances, and up to a $20\times$ speed-up in the case of the PacBio RS II dataset. In order to keep the running time constant when the number of SNPs increases, the number of available cores should increase proportionally with $\#\text{SNPs}$. Unlike the other state-of-the-art algorithms, GenHap was designed to take into account datasets produced by third-generation sequencing technologies, characterized by longer reads and higher coverages with respect to the previous generations. As a matter of fact, the experimental findings show that GenHap works better with the datasets produced by third-generation sequencers. Although several approaches have been proposed in the literature to solve the haplotyping problem \cite{patterson2015,pirola2015}, GenHap can be easily adapted to exploit Hi-C data characterized by very high coverages (up to $90 \times$), in combination with other sequencing methods for long-range haplotype phasing \cite{ben2016}. Moreover, GenHap can also be extended to compute haplotypes in organisms with different ploidy \cite{aguiar2013,berger2014}. Notably, GenHap could be easily reformulated to consider a multi-objective fitness function (e.g., by exploiting an approach similar to NSGA-III \cite{deb2014}). In this context, a possible future extension of this work would consist in introducing other objectives in the fitness function, such as the methylation patterns of the different chromosomes \cite{guo2017} or the gene proximity in maps achieved through Chromosome Conformation Capture (3C) experiments \cite{merelli2013}. As a final note, we would like to point out that there is currently a paucity of up-to-date real benchmarks regarding the latest sequencing technologies. Therefore, collecting a reliable set of human genome sequencing data acquired with different technologies against the corresponding ground truth can be beneficial for the development of future methods. 
\section*{List of abbreviations} \textbf{3C:} Chromosome Conformation Capture \textbf{ABF:} Average Best Fitness \textbf{ACCRE:} Advanced Computing Center for Research and Education \textbf{BAM:} Binary Alignment/Map \textbf{CPU:} Central Processing Unit \textbf{EDA:} Estimation of Distribution Algorithm \textbf{GA:} Genetic Algorithm \textbf{GeneSIM:} General Error-Model based SIMulator \textbf{GIAB:} Genome in a Bottle \textbf{HE:} Haplotype Error rate \textbf{MEC:} Minimum Error Correction \textbf{MPI:} Message Passing Interface \textbf{NGS:} Next-Generation Sequencing \textbf{PEATH:} Probabilistic Evolutionary Algorithm with Toggling for Haplotyping \textbf{SAM:} Sequence Alignment/Map \textbf{SNP:} Single Nucleotide Polymorphism \textbf{VCF:} Variant Call Format \textbf{WIF:} WhatsHap Input Format \textbf{wMEC:} weighted Minimum Error Correction \begin{backmatter} \section*{Acknowledgments} This work was conducted in part using the resources of the Advanced Computing Center for Research and Education at Vanderbilt University, Nashville, TN, USA. \section*{Availability of data and materials} GenHap is cross-platform software, i.e., it can be compiled and executed on the main Unix-like operating systems: GNU/Linux and Apple Mac OS X. GenHap is written in C++ and exploits a Message Passing Interface (MPI) implementation. GenHap's source files and binary executable files, as well as the datasets used during testing, are available on GitHub: https://github.com/andrea-tango/GenHap. \section*{Authors' contributions} Conceived the idea: AT, MSN, IM. Designed the code: AT, SS, LR. Implemented the code: AT. Performed the experiments: AT, SS, LR, IM. Analyzed the data: AT, SS, LR. Wrote the manuscript: AT, SS, LR, MSN, PC, IM, DB. Critically read the manuscript and contributed to the discussion of the whole work: PL, GM. \section*{Competing interests} The authors declare that they have no competing interests. \bibliographystyle{bmc-mathphys}
\section{Background and Related Work} \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{figures/arch1_new.pdf} \vspace{-10pt} \caption{An example of a multi-tiered memory system.} \label{fig:multi-tiered-mem} \end{figure} \subsection{Multi-Tiered Large Memory Systems} \label{sec:bg_large_mem} In this section, we introduce the memory organization of typical multi-tiered large-memory systems. Note that recent works on network-attached memory~\cite{ATC20_leap,OSDI20_AIFM,DBLP:conf/osdi/ShanHCZ18, DBLP:conf/usenix/TsaiSZ20,osdi20_semeru}, i.e., \textit{cross-machine} multi-tiered systems, are beyond the scope of this work. Figure~\ref{fig:multi-tiered-mem} shows an example of an Intel Optane-based large memory system with two sockets and four memory components (i.e., DRAM 0-1 and PM 0-1). Each memory component is called a \textit{tier} and has a different memory latency. \subsection{Large Memory Systems} Given the emergence of large memory systems, there is a pressing need to study the effectiveness and scalability of the system software and hardware that support them. We review the related works as follows. \textbf{Software support for page management.} Recent work manages large memory systems based on an existing NUMA balancing solution in Linux, AutoNUMA~\cite{autonuma_balance}. AutoNUMA randomly profiles 256MB of memory pages to reduce memory profiling overhead. It migrates pages to reduce memory accesses across NUMA nodes, and does not consider page hotness. Intel Memory-tiering~\cite{memory_tiering} uses the same profiling method as AutoNUMA but adds an extension to support multi-tiered large memory. It balances memory accesses between sockets first, and then balances memory accesses across memory tiers within each socket. As a result, a hot page takes a long time to migrate to the fastest memory for high performance. Intel Memory-optimizer~\cite{intel_mem_optimizer} randomly samples pages for profiling, and only migrates pages between two memory tiers within the same socket, failing to exploit fast memory across sockets. AutoTiering~\cite{atc21_autotiering} is a state-of-the-art solution. It uses random sampling like AutoNUMA, and introduces flexible page migration between memory tiers. However, it does not have a systematic migration strategy guided by page hotness. HeMem~\cite{sosp21_hemem} is a state-of-the-art solution for two-tiered PM-based HM. HeMem leverages sampling-based hardware performance counters to identify hot pages, but cannot handle more than two tiers. 
Mitosis~\cite{asplos20:mitosis} and vMitosis~\cite{asplos21:vmitosis} explore how to efficiently place page tables across sockets in large memory systems. They complement HM-Keeper\xspace, because HM-Keeper\xspace focuses on application-level data (the most memory-consuming data), not kernel-level objects. $\emptyset$sim~\cite{asplos20:0sim} recognizes another pressing problem in large memory systems: how to enable rapid, early prototyping and exploration of system software for large memory. $\emptyset$sim harnesses the fact that many workloads follow the same control flow regardless of their input to make huge simulations feasible and fast via memory compression. HM-Keeper\xspace can use $\emptyset$sim for fast evaluation. \textbf{Hardware-managed memory caching.} Some large memory systems use fast memory as a hardware-managed cache for slow memory. For example, in Intel's Optane PM, DRAM can work as a hardware-managed cache to persistent memory in the \textit{Memory Mode}. However, this solution results in data duplication in fast and slow memories, wasting fast memory capacity. It also causes serious write amplification when there are memory cache misses. Recent work~\cite{nvmw21:dram-cache} reveals that Memory Mode causes more than 3x extra writes and a 50\% bandwidth drop, compared with software-based solutions. \subsection{Two-Tiered Heterogeneous Memory} Heterogeneous memory (HM) combines the best properties of memory technologies optimized for latency, bandwidth, capacity, and cost, but complicates memory management. There are OS-level, application-transparent solutions that measure data reuse and migrate data for performance~\cite{Agarwal:2017:TAP:3037697.3037706, kleio:hpdc19,ipdps21_cori, Hirofuchi:2016:RHV:2987550.2987570, Kannan:2017:HOD:3079856.3080245, asplos21:kloc, asplos19_softwarefarmem,tpds19_tieredmem,Yan:2019:NPM:3297858.3304024,sosp21_hemem}. However, they can cause large and uncontrolled profiling overhead or suffer low profiling quality, and are not designed for more than two memory tiers. There are also application-specific solutions that leverage application domain knowledge to reduce profiling overhead, prefetch pages from slow memory to fast memory, and avoid slow-memory accesses. 
Those solutions include big data analysis frameworks (e.g., Spark~\cite{pldi19:panthera}), machine learning applications~\cite{AutoTM_asplos20, hpca21_sentinel,neurips20:hm-ann}, scientific computing~\cite{pm-octree:sc17,peng2018siena,ics21:memoization}, and graph analysis~\cite{vldb20_sage,optane_utexas19,peng2018graphphi}. These solutions show better performance than the application-transparent, system-level solutions, but require extensive domain knowledge and application modifications. Instead, HM-Keeper\xspace is an application-transparent solution. \section{Conclusions} Emerging multi-tiered large memory systems bring new challenges to system software and applications. In this work, we study page management in multi-tiered large memory systems and pinpoint the fundamental limitations of existing solutions, i.e., non-scalable, low-quality memory profiling and unawareness of rich memory tiers. We present HM-Keeper\xspace, an application-transparent page management system customized for large memory systems. HM-Keeper\xspace is based on four design principles, i.e., scalable high-quality profiling, a global view of all memory tiers, holistic migration decisions, and pattern-aware migration mechanisms. Our extensive evaluation of HM-Keeper\xspace against seven state-of-the-art solutions shows that HM-Keeper\xspace outperforms existing solutions on four-tiered large memory systems by 15\%-78\%. \section{Evaluation} \label{sec:eval} \subsection{Experimental Setup} \textbf{Testbed.} We evaluate HM-Keeper\xspace on a two-socket machine based on the Intel Optane DC persistent memory module (PMM). This machine has four memory tiers (see Section~\ref{sec:bg_large_mem} for details). We also emulate another multi-tiered large memory system based on the Optane machine for evaluation. The emulated system has four tiers, and their latency and bandwidth are different from those of the original Optane machine. For the emulation, we launch two memhog instances in each memory tier to continuously inject memory access traffic. Memhog is an artificial memory-intensive workload in Linux used in prior studies to emulate heterogeneous memory systems~\cite{pact15,Yan:2019:NPM:3297858.3304024}. Table~\ref{tab:hardware} summarizes the two evaluation platforms. Unless indicated otherwise, Optane uses App Direct Mode, which allows software-based page management, and we report results on the Optane-based machine (not the emulation platform). We use \texttt{madvise} for Transparent Hugepage Support (THP)~\cite{thp}, which uses 2MB as the page size for contiguous memory allocation. This configuration is typical in large memory systems. We set the profiling overhead constraint to 5\% and the profiling interval to 10 seconds. This setting is similar to existing works and production environments~\cite{Agarwal:2017:TAP:3037697.3037706, Hirofuchi:2016:RHV:2987550.2987570,middleware19_profiling}. 
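To make the THP configuration concrete, the following user-space sketch shows how an allocation opts into 2MB transparent huge pages when THP runs in \texttt{madvise} mode, as in our setup. The sketch is illustrative and not part of HM-Keeper\xspace; the helper name is hypothetical, while \texttt{mmap()} and \texttt{madvise()} are the standard Linux interfaces.
\begin{verbatim}
#include <stddef.h>
#include <sys/mman.h>

/* Map an anonymous region and mark it as eligible for 2 MB
 * transparent huge pages. The MADV_HUGEPAGE hint takes effect
 * only when THP is configured in "madvise" mode. */
int alloc_thp_region(void **out, size_t len)
{
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return -1;
    /* If the hint is rejected, the region simply stays backed
     * by 4 KB pages; we keep the mapping either way. */
    madvise(buf, len, MADV_HUGEPAGE);
    *out = buf;
    return 0;
}
\end{verbatim}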
\textbf{Workloads.} Common large-memory workloads are used, including in-memory databases, graph analysis, and data sorting (see Table~\ref{tab:app_details}). The memory footprint of these workloads is larger than the capacity of the first two memory tiers (the fast memories), allowing us to effectively evaluate page management on all four tiers. Unless indicated otherwise, workloads were evaluated with eight application threads. \textbf{Baselines for performance comparison.} Nine state-of-the-art application-agnostic memory management solutions are used for evaluation. \begin{itemize}[noitemsep] \item \textit{Hardware-managed memory caching (HMC)} uses the fast memories as a hardware-managed cache for the slow memories. We use Memory Mode in Optane. \item \textit{First-touch NUMA} is a common NUMA allocation policy. It allocates pages in a memory tier close to the running task that first touches the pages. It does not migrate pages after memory allocation and does not track page hotness. \item \textit{AutoNUMA}~\cite{autonuma} is a well-established solution in Linux. \item \textit{Memory-optimizer~\cite{intel_mem_optimizer}} is a solution from Intel. \item \textit{Memory-tiering}~\cite{memory_tiering} is a solution from Intel based on an extension to AutoNUMA. \item \textit{AutoTiering}~\cite{atc21_autotiering} is a state-of-the-art solution for multi-tiered memory systems based on AutoNUMA. To enable its best performance, we enable its opportunistic promotion and migration (OPM) and background demotion. \item \textit{HeMem}~\cite{sosp21_hemem} is a state-of-the-art solution for two-tiered PM-based HM. HeMem leverages hardware performance counters to find hot pages and promotes hot pages to the fast memory tier. \item \textit{Thermostat}~\cite{Agarwal:2017:TAP:3037697.3037706} is a solution for two-tiered HM. It randomly samples pages for profiling. It allocates all pages in fast memory and selectively moves them to slow memory to save production cost, which cannot work for our use cases where the application footprint is larger than fast memory. We do not evaluate the page migration of Thermostat but evaluate its profiling method. \item \textit{Nimble}~\cite{Yan:2019:NPM:3297858.3304024} is a highly optimized page migration mechanism using bi-directional page copy and parallel page copy. We use it to evaluate our page migration mechanism. \end{itemize} \input tables/hardware_details \input tables/app_details \subsection{Overall Performance} \label{sec:eval_overall_perf} \begin{figure*}[tb] \centering \includegraphics[width=1.95\columnwidth]{figures/overall_perform.pdf} \caption{Performance comparison between existing solutions and HM-Keeper\xspace on the Optane-based multi-tiered memory system.} \label{fig:overall_optane} \end{figure*} \textbf{Optane-based multi-tiered memory system.} Figure~\ref{fig:overall_optane} shows that HM-Keeper\xspace outperforms all the memory management solutions on the five workloads. We make six interesting observations. (1) HM-Keeper\xspace outperforms HMC by up to 52\% (24\% on average). HMC incurs write amplification when cache misses occur frequently~\cite{9408179}, which causes unnecessary data movement and poor performance. (2) HM-Keeper\xspace outperforms first-touch NUMA in all cases by up to 45\% (20\% on average). Without page migration, first-touch NUMA outperforms HMC on VoltDB and BFS, and outperforms AutoNUMA on Cassandra and BFS, indicating that page migration does not always bring performance improvement. HMC performs worse because of the unnecessary page movement discussed above. 
AutoNUMA has worse performance because it cannot effectively identify hot pages. (3) HM-Keeper\xspace outperforms AutoNUMA by up to 72\% (25\% on average). AutoNUMA does not support page migration between memory tiers within the same socket (Memory-tiering, an extension to AutoNUMA, does), and hence fails to take full advantage of all memory tiers. (4) HM-Keeper\xspace outperforms Memory-optimizer by up to 65\% (32\% on average). Lacking page migration across sockets, Memory-optimizer fails to fully leverage the second-fastest memory on one socket for an application running on the other socket. (5) HM-Keeper\xspace outperforms Memory-tiering by up to 45\% (22\% on average). Unlike AutoNUMA and Memory-optimizer, Memory-tiering enables page migration across all memory tiers. However, it promotes hot pages tier by tier instead of using fast promotion as HM-Keeper\xspace does, delaying the opportunity to improve performance using the fastest memory. (6) HM-Keeper\xspace outperforms AutoTiering by up to 77\% (20\% on average). AutoTiering uses random sampling and opportunistic page demotion, which cannot effectively identify hot/cold pages for page migration. \textbf{Emulated multi-tiered memory system.} Figure~\ref{fig:perform_emulated} shows the results. We do not evaluate HMC, because when HMC is used, fast memory tiers are hidden from software and we cannot inject latency into fast memory. On the emulated platform, HM-Keeper\xspace maintains the same performance trend as on Optane: HM-Keeper\xspace outperforms first-touch NUMA, AutoNUMA, Memory-optimizer, Memory-tiering, and AutoTiering by 19\%, 38\%, 20\%, 26\%, and 17\% on average, respectively. \begin{figure*}[t!] \centering \includegraphics[width=1.95\columnwidth]{figures/perform_emulated.pdf} \caption{Performance comparison between existing solutions and HM-Keeper\xspace on the emulated multi-tiered memory system.} \label{fig:perform_emulated} \end{figure*} \textbf{Performance breakdown.} We break down performance into application execution time, migration time, and profiling time, shown in Figure~\ref{fig:perform_breakdown}. The migration time is the migration overhead exposed to the critical path, excluding asynchronous page copying time. Figure~\ref{fig:perform_breakdown} shows the results for Memory-tiering, AutoTiering, and HM-Keeper\xspace, because they are the only solutions that can leverage all four memory tiers for migration. We add first-touch NUMA as a performance baseline for comparison, because Memory-tiering, AutoTiering, and HM-Keeper\xspace use it for memory allocation. In all cases, the profiling overhead falls within the profiling overhead constraint. With Memory-tiering and AutoTiering, the reduction of application execution time is less than or equal to the overhead brought by profiling and migration (see VoltDB and Cassandra). Hence, Memory-tiering and AutoTiering perform worse than first-touch NUMA, which performs no page migration at all. Compared to Memory-tiering, HM-Keeper\xspace spends similar time in profiling but 3.5x less time in migration, reducing the total application execution time by 21\% on average. Compared to AutoTiering, HM-Keeper\xspace again spends similar time in profiling but 1.25x less time in migration, and reduces the application execution time by 19\% on average. Because of the effectiveness of its page sampling and migration strategy, HM-Keeper\xspace obtains better performance. \begin{figure*}[t!] 
\centering \includegraphics[width=2\columnwidth]{figures/perform_breakdown.pdf} \caption{Breakdown of application execution time.} \label{fig:perform_breakdown} \end{figure*} \textbf{Number of memory accesses.} We count the number of memory accesses at each memory tier when running VoltDB. We only report the results for Memory-tiering, AutoTiering, and HM-Keeper\xspace, because they are the only solutions that leverage all memory tiers for migration. Table~\ref{tab:traffic} shows the results. We use the Intel Processor Counter Monitor~\cite{intel_pcm} to count the number of memory accesses and then exclude memory accesses caused by page migration. This counting method allows us to evaluate how many memory accesses from the application (not from page migration) hit each memory tier. Table~\ref{tab:traffic} shows that with HM-Keeper\xspace, the number of memory accesses that happen in the fastest memory (top tier) is 20\% and 14\% higher than with Memory-tiering and AutoTiering, respectively. This indicates that HM-Keeper\xspace effectively migrates frequently accessed pages to the fast memory for high performance. \input tables/traffic \textbf{Scalability of HM-Keeper\xspace}. We evaluate the scalability of HM-Keeper\xspace with VoltDB by increasing the number of application threads. Specifically, we increase the number of clients used in VoltDB. As the number of clients increases, the memory consumption increases from 300GB to 1TB. We compare the performance of HM-Keeper\xspace, HMC, first-touch NUMA, and AutoTiering. We evaluate AutoTiering since it has the second-best performance among all evaluated page management solutions. Figure~\ref{fig:scalibility} shows that HM-Keeper\xspace consistently outperforms HMC, first-touch NUMA, and AutoTiering by 19\%, 10\%, and 8\% on average as the number of application threads increases. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/scalibility.pdf} \vspace{-20pt} \caption{Execution time of VoltDB with different numbers of client threads.} \label{fig:scalibility} \end{figure} \textbf{Evaluation with different page sizes.} Figure~\ref{fig:thp} shows the results using 2MB and 1GB as the page size with THP enabled. We use SSSP for evaluation, because it has the largest memory consumption among all evaluated applications. The results confirm that HM-Keeper\xspace consistently outperforms the other solutions even at different huge page sizes. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{figures/THP.pdf} \vspace{-15pt} \caption{Evaluating HM-Keeper\xspace with different large page sizes. We use the SSSP application for evaluation.} \label{fig:thp} \end{figure} \textbf{Evaluation on two-tiered HM and comparison with HeMem}. We compare HM-Keeper\xspace with HeMem~\cite{sosp21_hemem}, a state-of-the-art solution for two-tiered HM. The evaluation is performed on a single socket with two memory tiers, using the benchmark GUPS~\cite{gups} as in HeMem~\cite{sosp21_hemem}. Figure~\ref{fig:hemem} reports the results of running GUPS with 16 and 24 application threads, respectively. The results show that when the working set size fits in the fast memory tier (i.e., $x$-axis values smaller than 1.0), HM-Keeper\xspace performs similarly to HeMem at 16 application threads and better at 24 application threads. 
Once the working set size exceeds the fast memory (i.e., $x$-axis values larger than 1.0), HeMem fails to sustain application performance at 24 threads, while HM-Keeper\xspace can still sustain higher performance at 24 threads than at 16 threads. HM-Keeper\xspace performs better because its profiling method is able to quickly adapt to changes in memory accesses and identify more hot pages. \begin{figure}[t!] \centering \includegraphics[width=0.95\columnwidth]{figures/hemem_res.pdf} \caption{Evaluation of HM-Keeper\xspace on two-tiered HM and comparison with HeMem.} \label{fig:hemem} \end{figure} \subsection{Effectiveness of Adaptive Profiling} \label{sec:eval_profiling} We study profiling quality and overhead, and compare HM-Keeper\xspace with two sampling-based profiling methods (one used in Memory-tiering, AutoNUMA, and AutoTiering, and the other used in Thermostat). We use Memory-tiering and Thermostat for evaluation, and replace their migration strategy and mechanism with HM-Keeper\xspace's. This replacement ensures that we exclude the impact of migration on performance, and hence our comparison is fair. The profiling method in Memory-tiering randomly chooses a 256MB virtual address space in each profiling interval, and then manipulates the present bit in each 4KB-page PTE in the chosen address space. This method tracks page accesses by counting page faults. The profiling method in Thermostat randomly chooses a 4KB page out of each 2MB memory region for profiling. This method manipulates page protection bits in the PTE and leverages protection faults to count accesses. Figure~\ref{fig:profiling} shows the result. HM-Keeper\xspace outperforms Memory-tiering and Thermostat by 17\% and 7\%, respectively. Thermostat has 6x higher profiling overhead than Memory-tiering because the number of sampled pages for profiling in Thermostat is much larger than that in Memory-tiering. Thermostat has 2.5x higher profiling overhead than HM-Keeper\xspace, since manipulating protection bits in the PTE and counting protection faults in Thermostat is more expensive than scanning PTEs in Memory-tiering and HM-Keeper\xspace. Using Memory-tiering, the application execution time is 22\% longer than with HM-Keeper\xspace. This indicates that random sampling-based profiling is not as effective as our adaptive profiling method. The adaptive profiling, when choosing samples, considers both temporal and spatial locality, and aims to maximize profiling quality within a profiling overhead constraint, which is missing in random sampling. We further evaluate three major profiling techniques in HM-Keeper\xspace (i.e., adaptive memory regions, adaptive page sampling, and profiling overhead control). We disable them one by one and then examine the performance difference. The last three groups of bars in Figure~\ref{fig:profiling} show the evaluation results. \textbf{Evaluation of adaptive memory regions.} The technique of adaptively merging and splitting memory regions aims to improve profiling quality. We disable it but respect the profiling overhead constraint. Figure~\ref{fig:profiling} shows that the application execution time is 22\% longer, although the profiling overhead constraint is met. Such a performance loss indicates that, without adaptive regions, hot memory regions are not effectively identified and hence remain in slow memory. 
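To make the adaptive-region bookkeeping concrete, the sketch below shows a per-region record carrying the hotness observed in the current interval and its EMA across intervals, together with one plausible reading of the merge test. The record fields follow the description in Section~\ref{sec:impl}; the helper names, the smoothing factor, and the exact condition on the threshold \texttt{tau1} (corresponding to $\tau_1$ below) are our own assumptions rather than HM-Keeper\xspace's actual code.
\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-region profiling record: a region ID, the
 * hotness observed in the current interval, and an EMA of the
 * hotness over prior intervals. */
struct region {
    uint64_t id;       /* derived from the region start address    */
    double hot_now;    /* sampled accesses in the current interval */
    double hot_ema;    /* exponential moving average of hotness    */
};

/* EMA update at the end of each profiling interval; ALPHA is an
 * assumed smoothing factor, not a value taken from HM-Keeper. */
#define ALPHA 0.5
static void update_ema(struct region *r)
{
    r->hot_ema = ALPHA * r->hot_now + (1.0 - ALPHA) * r->hot_ema;
}

/* One plausible merge test: adjacent regions whose EMA hotness
 * differs by less than tau1 behave alike, so they can be merged
 * and share page samples (the split test would be symmetric). */
static bool should_merge(const struct region *a,
                         const struct region *b, double tau1)
{
    double d = a->hot_ema - b->hot_ema;
    if (d < 0)
        d = -d;
    return d < tau1;
}
\end{verbatim}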
\textbf{Evaluation of adaptive page sampling.} This technique distributes samples across memory regions by using time-consecutive profiling results, which include information on temporal locality. We disable this technique and randomly distribute samples between memory regions, and observe a 21\% performance loss in Figure~\ref{fig:profiling}. This indicates the importance of using temporal locality as the metric to guide the selection of samples. \textbf{Evaluation of profiling overhead control.} We disable profiling overhead control by setting $\tau_1=\tau_2=0$ (i.e., no merging/splitting of memory regions) and removing the control of the number of page samples ($num\_{ps}$ in Equation~\ref{eq:profiling_overhead_control}). We observe that the profiling time increases by 3x (Figure~\ref{fig:profiling}). \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/profiling.pdf} \caption{Evaluation of the effectiveness of adaptive memory regions (``AMR''), adaptive page sampling (``APS''), and profiling overhead control (``OC'') on the VoltDB execution time.} \label{fig:profiling} \end{figure} \subsection{Effectiveness of Migration Strategy} \label{sec:eval_migration} HM-Keeper\xspace uses a flat view of multi-tiered memory systems, and adopts the ``fast promotion but slow demotion'' strategy. To evaluate the effectiveness of this strategy, we change HM-Keeper\xspace to use a hierarchical view, where hot pages need multiple profiling intervals to reach the fastest memory. Figure~\ref{fig:migration_mode} shows that the flat view performs 20\% better than the hierarchical view because the fastest memory is used more effectively. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{figures/diff_view.pdf} \vspace{-5pt} \caption{Comparison between flat-view- and hierarchical-view-based migration. The performance speedup is calculated based on the performance of first-touch NUMA.} \label{fig:migration_mode} \end{figure} \subsection{Effectiveness of Migration Mechanism} \label{sec:eval_migration_mechanism} We use three microbenchmarks to evaluate the migration mechanisms of HM-Keeper\xspace, Nimble~\cite{Yan:2019:NPM:3297858.3304024}, and \textit{move\_pages()} in Linux. The microbenchmarks perform sequential read-only, 50\% read (i.e., a sequential read followed by an update on an array element), and 100\% sequential write accesses on a 1GB array, respectively. 
The array is allocated and touched in a memory tier, and then migrated to another tier during the execution. Figure~\ref{fig:migration} shows the results. Migrating pages between tiers 1 and 2, HM-Keeper\xspace's mechanism performs 40\%, 23\%, and -0.5\% better than \textit{move\_pages()}, and performs 36\%, 4\%, and -6\% better than Nimble, for the read-only, 50\% read, and write-only scenarios, respectively. We see the same trend in the other tiers. In general, for read-intensive pages, HM-Keeper\xspace's mechanism brings a large performance benefit because of asynchronous page copy; for write-intensive pages, HM-Keeper\xspace's mechanism performs similarly to \textit{move\_pages()} and Nimble, because of the overhead of tracking page dirtiness during the migration. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/migration.pdf} \caption{Performance comparison between Nimble, $move\_pages()$ and HM-Keeper\xspace in terms of the page migration mechanism. The performance speedup is calculated based on the performance of using $move\_pages()$.} \label{fig:migration} \end{figure} \section{Implementation} \label{sec:impl} We implement the adaptive memory profiling as a kernel module that periodically scans the page table based on adaptive page sampling. This kernel module takes a process ID as input, performs memory profiling, and saves the profiling result in a shared memory space. Profiling results are stored in a table, where each record contains a memory region ID, the hotness indication in the current profiling interval, and the EMA of the hotness indication over prior profiling intervals. The region ID is generated based on the start address of the memory region. We implement the holistic page management as a user-space daemon service. The daemon service executes with the application and calls the kernel module for profiling at the beginning of each profiling interval. At the end of each profiling interval, the service reads the shared memory space to collect the profiling results. Specifically, with overhead control, the profiling module ensures that profiling finishes within a profiling interval. The service then makes the migration decision and performs migration using \textit{move\_memory\_regions()}. \textit{move\_memory\_regions()} takes the same input as Linux \textit{move\_pages()}, but implements the adaptive migration mechanism. It detects page dirtiness during the migration by setting a reserved bit in the PTE, such that when any page in the memory region is written, a write protection fault is triggered. Leveraging a user-space page fault handler, \textit{move\_memory\_regions()} tracks writes, and decides whether to stop the asynchronous page copy and switch to the synchronous mechanism. HM-Keeper\xspace runs on Linux 5.4.0. HM-Keeper\xspace has 709 LOC, 1421 LOC, and 330 LOC, respectively, to implement the profiling kernel module, the memory management daemon service, and the new migration call. \section{Introduction} \label{sec:intro} The memory hierarchy has continued to deepen to cope with ever-increasing demands from applications. Multi-tier memory systems, which started from multi-socket non-uniform memory access (NUMA) architectures, are now a de facto solution for building scalable and cost-effective memory systems. For instance, the Amazon EC2 High Memory Instance has three DRAM-based memory tiers built upon eight NUMA nodes, providing up to 12 TB of memory~\cite{amazon_high_mem_inst}. 
Recently, the commercial availability of new memory devices, such as high-bandwidth memory (HBM) and high-density persistent memory (PM), has started to expand a new dimension of multi-tier memory systems. A multi-tier memory system implemented with heterogeneous memories and NUMA architecture can easily exceed two memory tiers. Top tiers typically feature lower memory latency or higher bandwidth but smaller capacity, while bottom tiers feature high capacity but lower bandwidth and longer latency. When high-density PM is in use, e.g., Intel's Optane DC persistent memory~\cite{Optane:blogreview}, a multi-tier large memory system can enable terabyte-scale graph analysis~\cite{vldb20_sage,optane_utexas19,peng2018graphphi}, in-memory database services~\cite{Andrei:2017:SHA:3137765.3137780,vldb21:ai_pm,ucsd_otpane:fast2020}, and scientific simulations~\cite{pm-octree:sc17,ics21:memoization} on a single machine. Multi-tier memory systems can be managed transparently by hardware, e.g., Intel's Optane can configure DRAM as a direct-mapped cache for PM. Such hardware support requires no application modifications or OS changes and, naturally, is often the first option to be explored on a new system. However, cache-like mechanisms rely on good data locality to gain performance. Indeed, applications with low data locality have shown unsatisfactory performance on such systems due to the extra overhead of managing the cache~\cite{peng2019} and write amplification~\cite{nvmw21:dram-cache}. Another shortcoming of cache mechanisms is the loss of sizable memory capacity. Unlike processor caches, which are often sized in MBs, PM-based memory systems, e.g., those built on Intel Cascade Lake processors, can contain hundreds of GBs of capacity in their DRAM tiers. Software-based page management can leverage multi-tier large memory systems more efficiently than hardware mechanisms because it can gain deeper insights into memory access patterns. Most software solutions~\cite{Agarwal:2017:TAP:3037697.3037706, intel_mem_optimizer, memory_tiering, atc21_autotiering,autonuma_balance} consist of three components: a profiling mechanism, a migration policy, and a migration mechanism. A profiling mechanism is critical for identifying performance-critical data in applications and is often realized through tracking page accesses. A migration policy chooses candidate pages to be moved to top tiers. Finally, the effectiveness of a page management solution directly depends on whether its migration mechanism can move pages across tiers at low overhead. \textbf{Problems in profiling.} Existing memory profiling mechanisms~\cite{Hirofuchi:2016:RHV:2987550.2987570,Kannan:2017:HOD:3079856.3080245,asplos19_softwarefarmem,asplos06_tracing_pagefault} manipulate specific bits in page table entries (PTEs) to track memory accesses at a per-page granularity. The profiling overhead scales linearly with the number of tracked pages. Our evaluation shows that tracking millions of pages can take several seconds, which is too slow to respond to time-changing access patterns, and causes a 20\% slowdown in TPC-C against VoltDB~\cite{voltdb}. Some solutions~\cite{Agarwal:2017:TAP:3037697.3037706, intel_mem_optimizer,memory_tiering, autonuma_balance, damon, middleware19_profiling} only profile a small set of randomly-chosen pages based on PTE manipulation or performance counters, or heavily rely on the user to configure the profiling method to reduce profiling overhead. 
However, such a strategy compromises profiling quality and may miss frequently-accessed pages and time-changing access patterns. Figure~\ref{fig:motivation} compares multiple profiling methods in Thermostat~\cite{Agarwal:2017:TAP:3037697.3037706}, AutoTiering~\cite{atc21_autotiering}, DAMON~\cite{damon, middleware19_profiling}, and our method. These profiling methods represent the state of the art. We use the GUPS~\cite{gups} benchmark with a 512GB working set and have a priori knowledge of which pages are accessed at least twice in a profiling interval (i.e., hot pages). In this workload, hot pages remain stable throughout the execution. We report profiling recall (i.e., the ratio of the number of correctly detected hot pages to the number of hot pages identified by a priori knowledge) and profiling precision (i.e., the ratio of the number of correctly detected hot pages to the total number of detected hot pages, including mis-identified ones). With the same profiling overhead (5\%), Thermostat and AutoTiering take a long time to identify hot pages (see Figure~\ref{fig:motivation}). DAMON takes a shorter time, but about 50\% of the hot pages it detects are actually not hot. Because of low profiling quality, DAMON, Thermostat, and AutoTiering perform at least 15\% worse than our method. In general, there is a lack of a scalable, high-quality profiling mechanism for large memory systems. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/access_pattern.pdf} \vspace{-15pt} \caption{Comparison of different memory profiling methods in terms of their effectiveness in identifying frequently accessed pages (hot pages). The profiling overhead is set to 5\% of the total execution time.} \label{fig:motivation} \end{figure} \textbf{Problems in migration.} Existing solutions to multi-tiered memory are built upon an abstraction extended from the traditional memory hierarchy, where page migration occurs between two neighboring tiers. However, such an abstraction limits multi-tiered memory systems. First, migrating pages from the lowest to the top tier, in tier-by-tier steps, takes time, e.g., tens of microseconds for a 4K page. Second, frequent data movement across tiers is needed to accommodate time-changing access patterns, different from occasional swapping between memory and storage. To leverage a multi-tier large memory system efficiently, we argue that the following four principles must be upheld. 
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item \textit{Scalable quality-aware profiling.} Formal metrics that quantify the performance impact of pages need to be established to guide page selection for profiling. By only tracking the most performance-critical pages at a time, a solution can guarantee adaptiveness to time-changing patterns and controlled overhead. \item \textit{Global view of memory regions in all tiers.} Only tracking data in the fastest tier is insufficient because large memory regions are forced to reside in lower tiers even though they may contain small but performance-critical subregions. This is particularly true on terabyte-scale systems, where the top tiers can accommodate only a small portion of the total working set and are thus insufficient to provide a comprehensive view of system-wide data. Thus, a global view of all memory regions in all tiers is crucial. \item \textit{Tier-bypassing migration.} Emerging large-memory systems feature multiple tiers. Migration through neighboring tiers is too slow. Instead, performance-critical pages should be promoted to the top tier, bypassing intermediate tiers, i.e., pages in all slower tiers have equal chances to be promoted to the fastest tier. Victim pages evicted from the top tier should be progressively demoted into the next available lower tier, instead of into the bottom tier as in the existing swapping-based solution in Linux~\cite{autonuma_balance} (i.e., AutoNUMA), because these pages are still likely to be accessed. \item \textit{Pattern-aware migration mechanism.} The page migration mechanism includes multiple stages, and the parallelism between stages is often ignored. By considering the read/write patterns of page accesses, such parallelism can be exposed, which is critical for improving page migration performance on multi-tiered memory with frequent page migration. \end{itemize} In this work, we contribute a software-based page management solution called HM-Keeper\xspace that realizes the four principles on terabyte-scale multi-tier memory. HM-Keeper\xspace maintains a flat global view of all memory regions in all memory tiers. Profiling quality and overhead are distributed proportionally according to a memory region's importance. Memory regions with a higher impact on performance are tracked with higher fidelity, i.e., more pages are selected for profiling. In particular, HM-Keeper\xspace uses spatial and temporal variation as the key metrics to quantify the performance impact of a memory region. Over time, HM-Keeper\xspace adaptively merges and splits memory regions based on their similarity in spatial and temporal locality, so that pages in a memory region have similar access patterns. HM-Keeper\xspace uses the ``fast promotion but slow demotion'' policy for page migration. Hot pages identified in all lower tiers can be directly promoted to the top tier, minimizing data movement through tiers. When a page is migrated out of the top tier to accommodate more important pages, the page is moved to the next lower tier with available space. HM-Keeper\xspace dynamically chooses between an asynchronous page copy-based scheme and a synchronous page migration scheme, based on the read/write pattern of the migrated page, to minimize migration time. 
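The sketch below illustrates the ``fast promotion but slow demotion'' policy described above; the tier indexing and both helper functions are hypothetical placeholders, not HM-Keeper\xspace's actual interfaces.
\begin{verbatim}
#define NTIERS 4   /* e.g., the four tiers of our Optane testbed */

/* Assumed helpers: tier_has_space() checks free capacity in a
 * tier; migrate_to() moves one page and returns 0 on success. */
int tier_has_space(int tier);
int migrate_to(void *page, int tier);

/* Fast promotion: a hot page in ANY slower tier moves straight
 * to the fastest tier (tier 0), bypassing intermediate tiers. */
int promote(void *hot_page)
{
    return migrate_to(hot_page, 0);
}

/* Slow demotion: a victim evicted from tier cur_tier goes to the
 * next lower tier with room, not directly to the bottom tier. */
int demote(void *victim, int cur_tier)
{
    for (int t = cur_tier + 1; t < NTIERS; t++)
        if (tier_has_space(t))
            return migrate_to(victim, t);
    return -1;  /* no space below; keep the page where it is */
}
\end{verbatim}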
\textbf{Evaluation.} We rigorously evaluated HM-Keeper\xspace against seven state-of-the-art solutions, including two industry-quality software solutions (Intel Memory-optimizer~\cite{intel_mem_optimizer} and Intel Memory-tiering~\cite{memory_tiering}), two state-of-the-art research solutions (AutoTiering~\cite{atc21_autotiering} and HeMem~\cite{sosp21_hemem}), an existing solution in Linux (AutoNUMA~\cite{autonuma}), a hardware-based solution (Optane Memory Mode), and first-touch NUMA. HM-Keeper\xspace is also compared against two kernel-based page migration solutions (the ones in Linux and Nimble~\cite{Yan:2019:NPM:3297858.3304024}). HM-Keeper\xspace outperforms Intel Memory-optimizer, Intel Memory-tiering, AutoTiering, AutoNUMA, Memory Mode, first-touch NUMA, and HeMem by 32\%, 22\%, 20\%, 25\%, 24\%, 20\%, and 24\% on average. HM-Keeper\xspace outperforms the Linux and Nimble migration approaches by 40\% and 36\% for read-intensive workloads, and has similar performance for write-intensive workloads. \section{Introduction} \label{sec:intro2} Large memory systems with multiple TBs or more are emerging. These systems are driven by the emergence of memory-consuming applications and recent advances in memory technologies. From the application perspective, we see terabyte-scale graph analysis~\cite{vldb20_sage,optane_utexas19,peng2018graphphi}, large in-memory database services~\cite{Andrei:2017:SHA:3137765.3137780,vldb21:ai_pm,ucsd_otpane:fast2020}, and extreme-scale scientific simulations~\cite{pm-octree:sc17,peng2018siena,ics21:memoization}. From the memory technology perspective, we see that Intel's recent Optane DC persistent memory (PM) supports up to 12TB on a two-socket machine~\cite{Optane:blogreview}. The trend of large memory systems will continue, as applications work on increasingly large working sets, and technological advances decrease memory cost and increase memory density. The existing large memory systems are often characterized by more than two memory tiers, and different tiers show different memory latencies and bandwidths. Such rich memory heterogeneity, built upon various memory technologies and multi-socket non-uniform memory access (NUMA) architectures, provides a scalable and cost-effective solution to enable large memory systems. For example, the Intel Optane-based machine has four memory tiers, where Optane and DRAM create two tiers and the NUMA effect creates another two, providing up to 12 TB of memory. Also, since DRAM is more costly but provides higher bandwidth and lower latency than Optane, DRAM is used as a small cache in front of Optane to lower the production cost and improve performance. As another example, the Amazon EC2 High Memory Instance has three DRAM-based memory tiers built upon eight sockets and NUMA, providing up to 12 TB of memory~\cite{amazon_high_mem_inst}. The rich set of NUMA nodes in this machine is needed to scale up memory. In contrast, traditional memory systems with smaller memory capacity are often equipped with at most two memory tiers. Making effective, transparent use of the multi-tiered large memory system requires a page management system. Because of the limited capacity of fast memories, an application with a large memory footprint has to spread its pages across memory tiers. The application can have varying memory access patterns, which demands a page management system that assesses pages for hotness and distributes them between memory tiers to maximize performance. 
There are software-based page management solutions~\cite{Agarwal:2017:TAP:3037697.3037706, intel_mem_optimizer, memory_tiering, atc21_autotiering,autonuma_balance} where the tiered memories share the physical address space and system software is in charge of page management. Software-based page management consists of at least three components: a memory profiling mechanism, a page migration policy, and a page migration mechanism. The memory profiling mechanism is needed to track page accesses and assess pages for hotness; the page migration policy and mechanism shuffle pages between tiers to maximize fast-memory accesses. \textbf{Problems.} The existing solutions~\cite{Agarwal:2017:TAP:3037697.3037706,kleio:hpdc19,Dulloor:2016:DTH:2901318.2901344,intel_mem_optimizer,memory_tiering, Kannan:2017:HOD:3079856.3080245,asplos21:kloc,asplos19_softwarefarmem,autonuma_balance,pldi19:panthera,sc18:wu,Yu:ics17} have fundamental limitations when applied to multi-tiered large memory systems because of (1) non-scalable, low-quality memory profiling mechanisms and (2) unawareness of rich memory tiers in page migration policies. The existing memory profiling mechanisms for heterogeneous memory~\cite{Hirofuchi:2016:RHV:2987550.2987570,Kannan:2017:HOD:3079856.3080245,asplos19_softwarefarmem,asplos06_tracing_pagefault} are commonly based on the manipulation of specific bits in page table entries (PTEs) to track memory accesses. This mechanism accurately captures memory accesses at a per-page granularity, matching the established abstraction in operating systems (OS) for data migration and memory management. However, the profiling overhead grows arbitrarily as the number of pages to be tracked increases. For example, tracking memory accesses at the scale of millions of pages takes multiple seconds, which is too slow to respond to time-changing access patterns. Such a large overhead causes an over 20\% performance slowdown in TPC-C against VoltDB~\cite{voltdb} (see Table~\ref{tab:app_details}) in our evaluation. To avoid the large overhead, some solutions~\cite{Agarwal:2017:TAP:3037697.3037706, intel_mem_optimizer,memory_tiering, autonuma_balance} selectively profile memory pages in randomly-chosen small memory regions. These solutions trade profiling quality for low overhead, but lose track of frequently accessed pages and time-changing access patterns. These solutions also fail to capture the similarity of access patterns across regions, which could be used to merge regions and reduce overhead. Furthermore, none of these profiling mechanisms captures temporal data locality well, because they do not correlate time-consecutive profiling results. \input tables/motivation To illustrate the random-profiling problem, Table~\ref{tab:prof_limitation} quantifies the effectiveness of the profiling mechanisms in Thermostat~\cite{Agarwal:2017:TAP:3037697.3037706}, AutoTiering~\cite{atc21_autotiering}, and HM-Keeper\xspace (our solution) using TPC-C against VoltDB with a 300GB memory footprint (see Table~\ref{tab:app_details}). To make the performance comparison fair, the three systems \textit{use the same page migration method}. Table~\ref{tab:prof_limitation} reports the average number of identified hot pages (i.e., the most frequently accessed pages) among all profiling intervals. With the same profiling overhead (5\%), Thermostat and AutoTiering identify 40\% and 60\% fewer hot pages than the highly optimized profiling mechanism in HM-Keeper\xspace. 
Because of low profiling quality alone, Thermostat and AutoTiering perform 8\% and 17\% worse, respectively. In general, large memory systems lack a scalable, high-quality profiling mechanism, and an efficient methodology to maximize profiling quality without paying a large profiling overhead.

From the perspective of page migration strategy, the existing solutions are built upon an abstraction of two-tiered memory: page migration happens from one tier to the other. An extension of the above policies to multiple tiers is to build a hierarchical view of the memory tiers according to their performance differences, as in the traditional memory hierarchy. Based on this hierarchical view, page migration happens between two neighboring tiers in the hierarchy. Although this policy works well in the traditional memory hierarchy, it cannot work well in multi-tiered memory systems, for two reasons. (1) Data movement from the slowest memory tier to the fastest takes a long time (on the order of tens of microseconds for a 4KB page). Such long data movement is difficult to hide and hence causes a large performance loss. (2) Data movement across memory tiers happens frequently to accommodate time-changing memory-access patterns, much more frequently than between memory and storage. Hence, reducing data movement time across memory tiers is more pressing than reducing it between memory and storage.

\textbf{Goal and Insights.} The goal of this project is to design a page management system that automatically migrates pages between memory tiers to maximize fast-memory accesses at low overhead. Our major drive is a shift from the non-scalable, empirical, and narrow-scoped designs for traditional simple-tiered memory systems to a scalable, quantified, and general design for multi-tiered large memory systems.

To achieve the above goal, we have two insights. First, the memory profiling mechanism must be \textit{adaptive}, which means adaptively forming memory regions and choosing pages for profiling based on the spatial and temporal variation of memory access patterns. Such adaptiveness is the key to establishing a comprehensive view of the memory access patterns of all memory pages (instead of some random pages) without large overhead; the adaptiveness should also be based on a \textit{fine quantification} of profiling overhead, such that the dynamic nature of this new profiling principle does not cause large performance loss. Second, page migration in multi-tiered memory must employ a \textit{holistic} design principle. This means that page movement is not only between neighboring tiers but also between the fastest memory and any other memory tier, and any slow memory tier has an equal opportunity to directly use the fastest memory. In other words, we use a flattened view, instead of the hierarchical view, to holistically decide which pages should be in the fastest memory for high performance. Being holistic is the key to enabling effective page migration and making the best use of fast memories for high performance.

\textbf{HM-Keeper\xspace.} Following the above insights, we develop HM-Keeper\xspace, an application-transparent page management system that supports efficient use of multi-tiered large memory with high performance.
HM-Keeper\xspace is based on four key ideas to address the memory profiling challenge: adaptive memory regions, adaptive page sampling, correlation of time-consecutive profiling results, and fine-grained profiling overhead control. In particular, \textit{adaptive memory regions} can be merged or split based on the similarity of memory access patterns between regions. \textit{Adaptive page sampling} dynamically changes the number of page samples for profiling based on the variation of memory access patterns within individual memory regions. These two techniques maximize profiling quality without increasing profiling overhead; they also capture spatial locality. \textit{Correlation of time-consecutive profiling results} aims to capture temporal locality by building a histogram of profiling results. \textit{Fine-grained profiling overhead control} is integrated with the above adaptive techniques, and dynamically enforces limits on profiling overhead.

HM-Keeper\xspace uses a strategy of ``fast promotion but slow demotion'' for page migration, following the holistic design principle. In particular, slow memory tiers promote their hot pages to the fastest memory and compete for it based on the profiling results. This fast-promotion approach allows frequently accessed pages to reach the fastest memory without time-consuming movement through the other tiers. Furthermore, when a page is migrated out of the fastest memory to accommodate more frequently accessed pages, we use a hierarchical view and demote the page step by step to the next lower tier in the memory hierarchy with available space. This slow-demotion approach mitigates the performance loss caused by migrating frequently accessed pages to a low memory tier. In addition, HM-Keeper\xspace features a new migration mechanism: it dynamically decides whether page copying should happen asynchronously or synchronously, based on the memory access pattern, to minimize migration overhead.

\textbf{Results.} We compare HM-Keeper\xspace with six solutions, including two industry-quality software solutions (Intel Memory-optimizer~\cite{intel_mem_optimizer} and Intel Memory-tiering~\cite{memory_tiering}), a state-of-the-art solution (AutoTiering~\cite{atc21_autotiering}), an existing solution in Linux (AutoNUMA~\cite{autonuma}), a hardware-based solution (Optane Memory Mode), and first-touch NUMA. We also compare with two solutions (the existing page migration mechanism in Linux and Nimble~\cite{Yan:2019:NPM:3297858.3304024}) in terms of the page migration mechanism. HM-Keeper\xspace outperforms Intel memory-optimizer, Intel memory-tiering, AutoTiering, AutoNUMA, Memory Mode and first-touch NUMA by 32\%, 22\%, 20\%, 25\%, 24\%, and 20\% on average. HM-Keeper\xspace outperforms the existing page migration mechanism in Linux and Nimble by 40\% and 36\% (in terms of execution time) for read-intensive workloads, and reaches similar performance for write-intensive workloads.

\section{Adaptive Migration Mechanism}
\label{sec:mechanism}

HM-Keeper\xspace uses a high-performance page migration mechanism and migrates pages at the granularity of memory regions.
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth]{figures/movemr_breakdown.pdf}
\caption{Performance breakdown for migration mechanisms.}
\label{fig:movemr_breakdown}
\end{figure}

\subsection{Performance Analysis of Page Migration Mechanism}

Linux provides an API, \textit{move\_pages()}, for a privileged process to move a group of 4KB pages from a source memory node (or tier) to a target memory node (or tier). \textit{move\_pages()} consists of four main steps:
\begin{enumerate}[leftmargin=*,nolistsep]
\item Allocate new memory pages in the target memory node;
\item Unmap the pages to migrate (including invalidating their PTEs);
\item Copy the pages from the source to the target memory node;
\item Map the new pages (including updating PTEs).
\end{enumerate}

Figure~\ref{fig:movemr_breakdown} shows the performance of migrating a 2MB memory region from the fastest memory tier to the slowest with \textit{move\_pages()} on the Optane-based platform. Copying pages is the most time-consuming step, taking 40\% of the total time. \textit{move\_pages()} moves 4KB pages sequentially, causing a large page migration overhead. Although recent work~\cite{Yan:2019:NPM:3297858.3304024} enables multi-threaded page copy to fully utilize memory bandwidth, copying pages is still the performance bottleneck, especially when moving a large memory region.

\subsection{Adaptive Page Migration Schemes}

\textbf{Asynchronous page copy.} We introduce an asynchronous page copy mechanism to reduce page copy overhead. In \textit{move\_pages()}, all four steps are performed synchronously, one after another. In the asynchronous page copy, the thread that triggers migration (named the \textit{main thread} in the rest of the discussion) launches one or more helper threads to run steps (1) and (3); the main thread runs steps (2) and (4), and then waits for the helper thread(s) to join.

With the asynchronous page copy, it is possible that a page is copied before its PTE is invalidated and is then modified in the source memory node after the copy. In such a case, the page must be copied again to update it in the target memory node. Hence, the asynchronous page copy has a limitation: some pages have to be copied twice, which can be costly. We introduce an adaptive page-migration mechanism to address this limitation.

\textbf{Adaptive page migration.} For read-intensive pages, the asynchronous page copy is likely to bring a performance benefit. For write-intensive pages, however, due to repeated data copies, the asynchronous page copy is likely to perform worse than the synchronous one. Hence, HM-Keeper\xspace chooses the suitable migration mechanism based on the write intensity of pages. In particular, HM-Keeper\xspace uses the asynchronous page copy by default, but whenever any page in the memory region under migration is written after the asynchronous page copy starts, HM-Keeper\xspace switches to the synchronous page copy. To track page dirtiness, HM-Keeper\xspace utilizes PTE access bits and page faults, discussed in Section~\ref{sec:impl}.
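For concreteness, the following is a minimal user-space sketch of the baseline \textit{move\_pages()} call analyzed above, migrating one 2MB region (512 4KB pages); the target node number is an illustrative assumption, not part of HM-Keeper\xspace.

\begin{verbatim}
/* Sketch: migrate one 2MB region with the baseline move_pages().
 * Build: gcc demo.c -lnuma. Target node 1 is an assumption. */
#define _GNU_SOURCE
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    long psz = sysconf(_SC_PAGESIZE);
    int n = (2 * 1024 * 1024) / psz;     /* pages per 2MB region */
    char *buf = aligned_alloc(psz, n * psz);
    memset(buf, 0, n * psz);             /* fault the pages in */

    void **pages = malloc(n * sizeof(void *));
    int *nodes = malloc(n * sizeof(int));
    int *status = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) {
        pages[i] = buf + i * psz;
        nodes[i] = 1;                    /* assumed slower tier */
    }
    /* pid 0 = the calling process. All four steps (allocate,
     * unmap, copy, map) run inside this one synchronous call. */
    if (move_pages(0, n, pages, nodes, status, MPOL_MF_MOVE) < 0)
        perror("move_pages");
    else
        printf("page 0 now on node %d\n", status[0]);
    return 0;
}
\end{verbatim}

In HM-Keeper\xspace's \textit{move\_memory\_regions()}, the copy in step (3) of this sequence is what moves off the critical path.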
\textbf{Other optimizations.} \textit{(1) Concurrent and parallel page copy.} HM-Keeper\xspace uses multiple threads to copy pages, and performs bidirectional page copies between two memory tiers (i.e., from A to B and from B to A) in parallel. A similar optimization can be found in~\cite{Yan:2019:NPM:3297858.3304024}. \textit{(2) Migration of PTEs.} Recent works~\cite{asplos20:mitosis,asplos21:vmitosis} reveal that the page table is distributed across all memory tiers, and remote page-table walks can happen frequently in a large memory system, degrading performance. In HM-Keeper\xspace, a memory region's corresponding PTEs take at least one page, and HM-Keeper\xspace uses the synchronous page copy to migrate those PTEs to the memory tier to which the migrating memory region moves.

We implement the above optimizations and introduce a new API called \textit{move\_memory\_regions()}. In this implementation, tracking page dirtiness, performing page map/unmap, and migrating PTEs remain on the critical path, but the most time-consuming step, page copying, can be performed off the critical path. Figure~\ref{fig:movemr_breakdown} presents the performance of \textit{move\_memory\_regions()} migrating a 2MB memory region under the same setting as for \textit{move\_pages()}; the critical path excludes the overhead of page copying (and of page allocation in step (1), thanks to asynchronous page allocation). \textit{move\_memory\_regions()} is 4.37x faster than \textit{move\_pages()} in this case. Section~\ref{sec:eval_migration_mechanism} shows more results.

\section{Holistic Page Management}
\label{sec:migration}

In this section, we discuss the page migration strategy. Essentially, this strategy answers two questions: (1) which memory regions to migrate, and (2) given a memory region to migrate, which memory tier to migrate it to. HM-Keeper\xspace answers the first question by selecting memory regions based on the analysis of time-consecutive profiling results, and the second by using the strategy of ``fast promotion but slow demotion''.

\subsection{Which Memory Region to Migrate?}
\label{sec:which_mem}

At the end of each profiling interval, HM-Keeper\xspace migrates (or promotes) some memory regions to the fastest memory, and the total size of those memory regions is a constant $N$ ($N$=200MB in our evaluation). This is similar to existing works~\cite{intel_mem_optimizer, memory_tiering, atc21_autotiering,ATC20_leap} that periodically migrate a fixed number of pages. If the free space in the fastest memory is not large enough for the migration, some pages in the fastest memory are demoted to slower memory tiers (see Section~\ref{sec:where_to_migrate}).

\textbf{Select memory regions for promotion.} The goal of region promotion is to place the most frequently accessed pages into the fastest memory. HM-Keeper\xspace decides which regions to promote based on the hotness indication collected from \textit{all} memory regions, regardless of which tiers those memory regions currently reside in. Hence, the migration decision is holistic. The memory regions with the largest hotness indication are promoted.

HM-Keeper\xspace uses time-consecutive profiling results to select regions for promotion. Particularly, HM-Keeper\xspace uses the hotness indication collected from the most recent profiling interval \textit{and} the prior profiling intervals. As a result, HM-Keeper\xspace captures temporal locality, and avoids page migrations triggered by a bursty memory access pattern in a single profiling interval.
HM-Keeper\xspace obtains time-consecutive profiling results via the exponential moving average (EMA) of the hotness indication collected from all profiling intervals. Given a sequence of data points, the EMA places greater weight and significance on the most recent data points. We define the EMA of the hotness indication as follows. Assume that $HI_i$ is the hotness indication collected at profiling interval $i$ for a memory region. The EMA of the hotness indication for that memory region at profiling interval $i$, denoted $WHI_i$, is defined in Equation~\ref{eq:whi}. This is a recursive formulation: $WHI_i$ depends on $WHI_{i-1}$ from the prior interval $i-1$.

\begin{equation}
\label{eq:whi}
\small
WHI_i = \alpha \times HI_i + (1-\alpha) \times WHI_{i-1}
\end{equation}

$\alpha$ in the above formulation balances the most recent hotness indication against the history information ($1-\alpha$ is the weight of the history). In practice, we set $\alpha$ to 0.5. There are two benefits to using the EMA. First, the memory consumption is small: there is no need to store all prior profiling results. Second, the computation of the EMA is lightweight, and hence the runtime overhead is low.

Based on the EMA of the hotness indication, HM-Keeper\xspace uses the following method to select memory regions to migrate. HM-Keeper\xspace builds a histogram of the EMA values of all memory regions. The histogram buckets the range of EMA values, and records how many, and which, memory regions fall into each bucket. Given the size of pages to migrate to the fastest memory, HM-Keeper\xspace chooses the memory regions falling into the highest buckets of the histogram to migrate. Building and maintaining the histogram is not costly: whenever the EMA of the hotness indication of a memory region is updated, the histogram only needs a small corresponding update.

\subsection{Where to Migrate Memory Regions?}
\label{sec:where_to_migrate}

\textbf{Where to promote memory regions?} As discussed in Section~\ref{sec:which_mem}, memory regions are promoted to the fastest memory tier based on the histogram. It is possible that, after a profiling interval, there is no memory region to promote to the fastest memory, because the memory regions falling into the highest buckets of the histogram are already there. In that case, memory regions in the lower buckets of the histogram are selected for promotion to the second-fastest memory tier, and the accumulated size of the migrated memory regions is still $N$ (the total size of memory regions to migrate). In general, HM-Keeper\xspace makes its best effort to promote frequently accessed memory regions to high-performance memory tiers.

\textbf{Where to demote memory regions?} When a memory tier is the destination of a memory promotion but does not have enough space to accommodate it, memory regions in that tier are migrated (or demoted) to the next lower memory tier with enough capacity. Memory regions for demotion are selected based on the histogram: the memory regions in the lowest buckets of the histogram are demoted to the next lower tier. We use this slow-demotion strategy to avoid the performance loss caused by demoting pages that are still likely to be accessed in the near future to a low memory tier.
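As a minimal sketch of the selection logic above (the identifiers, the bucket count, and the layout of the histogram are illustrative assumptions, not HM-Keeper\xspace's actual implementation):

\begin{verbatim}
#include <stddef.h>

#define ALPHA     0.5  /* history weight, as in Eq. (1) */
#define N_BUCKETS 64   /* histogram resolution (assumed) */

struct region {
    double whi;        /* WHI_i: EMA of hotness indication */
    size_t bytes;      /* region size */
};

/* End of each profiling interval: fold the new hotness
 * indication HI_i into the region's EMA. */
static void update_whi(struct region *r, double hi) {
    r->whi = ALPHA * hi + (1.0 - ALPHA) * r->whi;
}

/* Promote regions from the highest buckets downward until the
 * per-interval budget (e.g., N = 200MB) is consumed. hist[b]
 * lists the regions whose WHI falls into bucket b. */
static void select_for_promotion(struct region **hist[],
                                 const int counts[],
                                 size_t budget) {
    for (int b = N_BUCKETS - 1; b >= 0 && budget > 0; b--)
        for (int i = 0; i < counts[b] && budget > 0; i++)
            if (hist[b][i]->bytes <= budget) {
                budget -= hist[b][i]->bytes;
                /* enqueue hist[b][i] for promotion (not shown) */
            }
}
\end{verbatim}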
\section{Overview}
\label{sec:overview}

HM-Keeper\xspace consists of three components: (1) an adaptive memory profiling mechanism that achieves high quality at low overhead; (2) a page-migration strategy that leverages a global view of all memory regions in all tiers to make informed decisions; and (3) a page-migration mechanism that adapts its data copy scheme to page access patterns.

HM-Keeper\xspace partitions the virtual address space into memory regions. It periodically profiles memory accesses and migrates pages. In each profiling interval, HM-Keeper\xspace samples one or more pages per memory region. This region-based profiling strategy captures spatial locality within each region. Memory regions can be dynamically merged or split under the guidance of a quantitative analysis of the profiling overhead. Such adjustments provide opportunities to re-distribute sampling quotas between memory regions under a fixed profiling overhead, improving profiling quality.

HM-Keeper\xspace uses a holistic approach to decide page migration between memory tiers. By calculating the exponential moving average of the page hotness collected from all profiling intervals, HM-Keeper\xspace learns the distribution of hot memory regions in \textit{all} memory tiers. Guided by this information, HM-Keeper\xspace promotes hot pages from any memory tier to the fastest tier without going through any intermediate layer. It demotes pages tier by tier when there is not enough space in the fast memory tiers.

When migrating pages, we introduce an asynchronous page-copy mechanism that overlaps page copying with application execution, reducing the overhead of page copy. However, the asynchronous page copy can come with the time cost of \textit{extra} page copies, because when a page is updated during copying, the page has to be copied again. The traditional, synchronous page-copy mechanism needs no extra page copies, but completely exposes the page-copy overhead on the critical path. Hence, HM-Keeper\xspace uses a hybrid approach that takes advantage of both the asynchronous and the synchronous mechanisms: it selects the migration mechanism based on whether page modification happens during the migration.

\section{Adaptive Memory Profiling}
\label{sec:profiling}

Figure~\ref{fig:profile_workflow} depicts the profiling workflow in HM-Keeper\xspace. The fundamental mechanism tracks page accesses by utilizing PTE access bits and PTE scans. In particular, each PTE maintains an access bit, which indicates the access status of the corresponding page. The access bit is initially set to 0, and changed to 1 by the memory management unit (MMU) when the corresponding page is accessed. By repeatedly scanning PTEs to check the access bit, and resetting it to 0 whenever it is found to be 1, page accesses can be monitored. This mechanism is commonly used in existing works~\cite{Hirofuchi:2016:RHV:2987550.2987570,middleware19_profiling}. However, using it naively imposes a high overhead on a large memory system: scanning all PTEs to track the memory accesses of every page is expensive. For example, scanning a five-level page table for 1.5 TB of memory with a page size of 2MB on an Optane-based platform (see Table~\ref{tab:hardware} for hardware details) with one helper thread takes more than one second, which is too long to capture varying workload behaviors. To avoid such a long profiling time, it is natural to sample pages in the address space for profiling.
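As a schematic of one scan pass over the sampled pages described above (\texttt{pte\_test\_and\_clear\_access()} is a hypothetical helper standing in for the architecture-specific read-and-reset of a PTE access bit):

\begin{verbatim}
#include <stdbool.h>

struct sample_page {
    unsigned long vaddr;  /* sampled 4KB page */
    unsigned accesses;    /* access count in this interval */
};

/* Hypothetical helper: return the access bit of the PTE mapping
 * vaddr (set by the MMU on access) and reset it to 0. */
bool pte_test_and_clear_access(unsigned long vaddr);

/* One scan pass within a profiling interval; each pass can
 * observe at most one access per sampled page. */
static void scan_once(struct sample_page *pages, int n) {
    for (int i = 0; i < n; i++)
        if (pte_test_and_clear_access(pages[i].vaddr))
            pages[i].accesses++;
}
\end{verbatim}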
In a large memory system, sampling pages is challenging, because there is a large number of pages to choose from, and the profiling quality of unguided, random sampling~\cite{Agarwal:2017:TAP:3037697.3037706,intel_mem_optimizer,memory_tiering,atc21_autotiering,middleware19_profiling} can be low, leading to poor performance as discussed in Section~\ref{sec:intro}. We introduce an adaptive memory profiling method that improves profiling quality while dynamically enforcing limits on profiling overhead.

\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/overview.pdf}
\caption{The overview of memory profiling in HM-Keeper\xspace.
\label{fig:profile_workflow}}
\end{figure}

\subsection{Adaptive Memory Regions}

HM-Keeper\xspace partitions the virtual address space of a process into memory regions for profiling. By default, a memory region is a contiguous address space mapped by a last-level page directory entry (PDE). This means that with a typical five-level page table, the default memory region size is 2MB. During program execution, whenever a last-level PDE is set valid by the OS, the corresponding memory region becomes subject to profiling. Memory regions can be dynamically merged to reduce profiling overhead, or split to improve profiling quality. Therefore, different memory regions can have different sizes.

We use the last-level PDE to decide the initial size of each memory region, based on experimental analysis. If a higher-level PDE were used instead, the default memory region would be at least 1GB. With such a large memory region, data objects with different memory access patterns are likely to reside in the same memory region~\cite{hpca21_sentinel, AutoSawp}. Since the memory region is the basic unit for memory profiling and migration, those data objects would be migrated together, even though they may favor different memory tiers.

In each memory region, HM-Keeper\xspace samples one or more 4KB pages for profiling (discussed in detail in Section~\ref{sec:page_sampling}). HM-Keeper\xspace scans the access bits in the corresponding PTEs multiple times in a profiling interval.

\textbf{Multiple scans of PTEs.} In a profiling interval, the access bit in a PTE corresponding to a page sample is scanned multiple times. The total number of scans per PTE per profiling interval is subject to a constraint, $num\_scans$. We use this multi-scan method, instead of the single-scan method in existing work~\cite{Hirofuchi:2016:RHV:2987550.2987570,autonuma_balance,middleware19_profiling}, to reduce the skewness of profiling results. A page can be accessed multiple times within a profiling interval. A single-scan method can only detect whether a page was accessed at all, but cannot accurately capture the number of memory accesses. Although aggregating memory accesses across multiple profiling intervals could alleviate this problem, the skewness of the profiling results would accumulate over time (see Section~\ref{sec:migration}), leading to sub-optimal migration decisions. The multi-scan method avoids this problem.

At the end of a profiling interval, the average number of accesses to all sampled pages in a memory region is used as the \textit{hotness indication} of that memory region. Based on the hotness indication, HM-Keeper\xspace may either merge or split memory regions.

\textbf{Merge memory regions.} HM-Keeper\xspace actively looks for opportunities to merge contiguous memory regions at the end of a profiling interval.
Two contiguous regions are merged if the difference in their hotness indications collected in the most recent profiling interval is smaller than a threshold $\tau_1$.

\textbf{Split a memory region.} HM-Keeper\xspace also checks whether a memory region should be split, to ensure that pages in the same region have similar hotness in each interval. When the maximum difference in the number of memory accesses among the sampled pages in a region is larger than a threshold $\tau_2$, the memory region is split into two equally-sized ones.

\textbf{Selection of $\tau_1$ and $\tau_2$.} $\tau_1$ and $\tau_2$ define the minimum and maximum differences in the number of memory accesses among the page samples in a memory region. Both fall into $[0, num\_scans]$. To avoid frequent merging/splitting and to balance between them, $\tau_1$ and $\tau_2$ evenly split the range $[0, num\_scans]$, i.e., $\tau_1 = 1/3 \times num\_scans$ and $\tau_2 = 2/3 \times num\_scans$ by default. $\tau_1$ can be dynamically fine-tuned to enforce the limit on profiling overhead, as discussed in Section~\ref{sec:overhead_control}.

\vspace{-2pt}
\subsection{Adaptive Page Sampling}
\vspace{-2pt}
\label{sec:page_sampling}

\textbf{Initial page sampling.} Initially, each memory region has only one page sample for profiling. Our method for choosing the initial page sample in a memory region is based on a method in Thermostat~\cite{Agarwal:2017:TAP:3037697.3037706}. In particular, HM-Keeper\xspace monitors all pages in the memory region in a profiling interval, identifies the pages with non-zero accesses, and randomly chooses one among them. After the initial page sampling, the number of page samples in a memory region can be dynamically changed to improve profiling quality.

\textbf{After merging two memory regions}, the total number of page samples in the two regions is halved, under the constraint that the new memory region has at least one page sample. This reduction of page samples saves profiling overhead for the two regions, and allows other memory regions to have more samples without exceeding the overhead constraint.

The page-sample quota saved by merging memory regions is re-distributed to other memory regions. Specifically, HM-Keeper\xspace distributes the sample quota to the memory regions whose hotness indication shows the largest variance across the last two profiling intervals among all memory regions. A large variance of the hotness indication across two profiling intervals indicates that the memory access pattern is changing; adding more page samples in this case is useful to improve profiling quality. To efficiently find the memory regions with the largest variance of hotness indication, HM-Keeper\xspace keeps track of the top-five largest variances and the corresponding memory regions when analyzing profiling results. Whenever a new profiling result for a memory region is available, HM-Keeper\xspace checks the top-five records and updates them if needed. After merging, the saved page-sample quota is re-distributed to those top-five memory regions.

After splitting a memory region into two new regions, the page-sample quota of the original region is evenly split between the two new regions. Therefore, splitting does not change the total number of samples. Nevertheless, splitting a memory region brings two benefits.
First, the hotness indication, which is the \textit{average} number of accesses to all sampled pages in a memory region, provides a better indication of the memory accesses to the new, smaller memory regions, hence providing better guidance on page migration. Second, migration is more effective, because using the smaller memory regions avoids the unnecessary data movement that comes with the larger region.

\subsection{Profiling Overhead Control}
\label{sec:overhead_control}

This section discusses how the profiling overhead control is integrated with adaptive memory regions and adaptive page sampling. HM-Keeper\xspace allows the user to define a profiling overhead constraint. HM-Keeper\xspace respects this constraint while maximizing profiling quality, by dynamically changing the number of memory regions and distributing page-sample quotas between the regions. The overhead constraint is a percentage of the program execution time without profiling and migration. For example, in our evaluation section, this overhead constraint is 5\%.

Given the length of a profiling interval ($t_{mi}$), the profiling overhead constraint, the overhead of scanning one PTE ($one\_scan\_overhead$), and the number of scans per PTE ($num\_scans$), the total number of page samples in all memory regions that can be profiled in a profiling interval, denoted $num\_ps$, is calculated in Equation~\ref{eq:profiling_overhead_control}.

\begin{equation}
\label{eq:profiling_overhead_control}
num\_ps = \frac{t_{mi} \times profiling\_overhead\_constraint}{one\_scan\_overhead \times num\_scans}
\end{equation}

$t_{mi}$ can be set by the user, as in existing works~\cite{Agarwal:2017:TAP:3037697.3037706, Hirofuchi:2016:RHV:2987550.2987570,middleware19_profiling}. $one\_scan\_overhead$ is measured offline by repeatedly scanning PTEs and measuring the average scanning time. As HM-Keeper\xspace merges or splits memory regions, the total number of page samples in all memory regions remains equal to $num\_ps$, respecting the profiling overhead constraint. The total number of memory regions needs to be smaller than $num\_ps$ so that each memory region has at least one page sample. When the total number of memory regions is too large, HM-Keeper\xspace dynamically fine-tunes $\tau_1$ (the threshold to merge regions) to merge memory regions more aggressively: $\tau_1$ is gradually increased across profiling intervals, until the number of memory regions is no larger than $num\_ps$.

We do not change $num\_scans$ (i.e., the number of scans per page sample) to enforce the profiling overhead constraint, because of its significant impact on profiling quality: changing $num\_scans$ changes the profiling results in all memory regions. For example, in our evaluation, changing $num\_scans$ from 2 to 3 changes HM-Keeper\xspace's migration decision for at least 20\% of the memory regions. We therefore set $num\_scans$ to a constant. Our empirical study shows that using a value larger than three leads to no obvious change (less than 5\%) in the migration decisions.

\textbf{Memory consumption overhead in HM-Keeper\xspace.} For each memory region, HM-Keeper\xspace stores the hotness indication as an integer. Given a terabyte-scale memory, this memory consumption overhead is only hundreds of MBs. For example, on our Optane-based platform with 1.5 TB of memory, the memory overhead to store profiling results is no larger than 600MB, which is small for a large memory system.

\textbf{Guidance on choosing the profiling overhead constraint.} The overhead constraint cannot be too small, because that leads to a smaller number of page samples distributed over a smaller number of larger memory regions. Since the memory region is the granularity of page migration, migrating a larger memory region has a higher risk of migrating \textit{both} hot and cold data, and suffers from larger overhead. As a result, application performance can be worse. The overhead constraint cannot be too large, either. In practice, we find that using 5\%--10\% as the profiling overhead constraint is reasonable for high performance.
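To make Equation~\ref{eq:profiling_overhead_control} concrete, the following back-of-the-envelope sketch computes the page-sample budget; all constants are illustrative assumptions, not measured values.

\begin{verbatim}
#include <stdio.h>

int main(void) {
    double t_mi       = 1.0;   /* profiling interval, s (assumed) */
    double constraint = 0.05;  /* 5% overhead budget */
    double one_scan   = 2e-6;  /* s per PTE scan (assumed) */
    int    num_scans  = 3;     /* scans per sample per interval */

    long num_ps = (long)(t_mi * constraint
                         / (one_scan * num_scans));
    printf("page-sample budget: %ld\n", num_ps);  /* ~8333 */
    /* Merging regions frees samples for other regions; tau_1 is
     * raised until the region count is no larger than num_ps. */
    return 0;
}
\end{verbatim}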
\begin{document}

\title{\large \bf THE MASS OF $\bfm \eta_b$\thanks{Talk given at the 44th Rencontres de Moriond }}

\author{\normalsize A. PENIN$^{a,b}$ \\[2mm]
\small \it $^{a}$Department of Physics, University Of Alberta Edmonton, AB T6G 2J1, Canada\\
\small \it $^{b}$Institute for Nuclear Research, Russian Academy of Sciences 119899 Moscow, Russia}

\date{}
\maketitle\abstract{In this paper we briefly review the advances and problems in the QCD theory of the $\eta_b$ mass.}

\section{Introduction}

The properties of the $\Upsilon$ mesons, the bottom quark-antiquark spin-one bound states, are measured experimentally with great precision, and recent theoretical analyses of the $\Upsilon$ family based on high-order perturbative calculations have resulted in determinations of the bottom-quark mass $m_b$ with unprecedented accuracy.\cite{KPP,PenSte} At the same time the spin-zero $\eta_b$ meson remained elusive despite dedicated experimental searches. Only recently has a signal of the $\eta_b$ been observed by the BaBar collaboration in the radiative decays of the excited $\Upsilon$ states.\cite{BABAR1,BABAR2} The $\eta_b$ meson shows up as a peak in the photon energy spectrum of the $\Upsilon\to \gamma\eta_b$ transitions. Despite considerable background from $\Upsilon\to \gamma\chi_b$ and $e^+e^-\to \gamma \Upsilon$ processes, the peak energy can be measured with rather high precision.
Together with the very high accuracy of the $\Upsilon$ spectroscopy, this allows for the determination of the $\eta_b$ mass $M(\eta_b)$ with an error of only a few MeV. The analysis of the $\Upsilon(3S)$ decays gives $M(\eta_b)={9388.9}^{+ 3.1}_{-2.3}\, ({\rm stat})\pm 2.7\,({\rm syst}) ~{\rm MeV}$,\cite{BABAR1} while the $\Upsilon(2S)$ data give $M(\eta_b)={9392.9}^{+ 4.6}_{-4.8}\, ({\rm stat})\pm 1.9\,({\rm syst}) ~{\rm MeV}$.\cite{BABAR2} Thus an accurate prediction of $M(\eta_b)$ is a big challenge and a test for the QCD theory of heavy quarkonium. Due to the very small experimental uncertainty of the $\Upsilon(1S)$ mass, the problem can be reduced to the calculation of the hyperfine splitting (HFS) $E_{\rm hfs}=M(\Upsilon(1S))-M(\eta_b)$. This quantity is very sensitive to $\alpha_s$ and could become a competitive source for the determination of the strong coupling constant. In this paper we briefly review the advances and problems in the QCD theory of the bottomonium HFS. We consider only the approaches entirely based on the first principles of QCD, leaving aside numerous semi-phenomenological models.

\section{Bottomonium Hyperfine Splitting in QCD}

The systematic perturbative analysis of heavy quarkonium bound states is based on the effective field theory of (potential) nonrelativistic QCD, or (p)NRQCD.\cite{CasLep,PinSot1} A recent major breakthrough in the high-order calculations of heavy quarkonium properties is related to the use of dimensional regularization \cite{PinSot2} and the threshold expansion \cite{BenSmi} within the effective field theory framework.\cite{PenSte,KniPen1} The bottomonium spectrum has been computed to ${\cal O}(m_b\alpha_s^5)$, which includes the ${\cal O}(\alpha_s)$ next-to-leading order (NLO) correction to the HFS. The corresponding result for an arbitrary principal quantum number is given in Refs.\cite{PenSte,PSS} in a closed analytical form. For the ground state it reads
\begin{eqnarray}
{E_{\rm hfs}^{NLO}}& = &{ C_F^4 \alpha_s^4m_b\over 3} \left[ 1 +
{{\alpha_s \over \pi}}\,\left( \frac{7\,{C_A}\,}{4} {\ln\left({C_F\alpha_s}\right)}
-{{C_F}\over 2}+ \frac{2\pi^2-26}{9} {n_l}\,{T_F}
+\frac{3-3\ln 2}{2}\,T_F \right.\right.
\nonumber\\
&+&\left.\left.\frac{122-11\pi^2}{18}\,{C_A}\right)\right]
\approx {E_{\rm hfs}^{LO}}\left[ 1 + {{\alpha_s}}\,\left( 1.67\,
{\ln\left({\alpha_s}\right)} +0.61 \right)\right]\,,
\label{eq:pqcd}
\end{eqnarray}
where $C_F=(N_c^2-1)/(2N_c)$, $C_A=N_c=3$, $T_F=1/2$, $n_l=4$ is the number of light flavors, and $\alpha_s$ is renormalized in the $\overline{\rm MS}$ scheme at the scale $\mu=C_F\alpha_sm_b$. The logarithmically enhanced term in Eq.~(\ref{eq:pqcd}) is characteristic of the multiscale dynamics of nonrelativistic bound states.\cite{KniPen2} Such terms can be resummed to all orders through the renormalization group analysis of pNRQCD, or the {\it nonrelativistic renormalization group} (NRG) \cite{Pin,PPSS} (see also Ref.\cite{LMR}). The renormalization-group-improved expression for the bottomonium HFS is available in the next-to-leading logarithmic (NLL) approximation, which sums up all the corrections of the form $\alpha_s^n\ln^{n-1}\alpha_s$.\cite{KPPSS} The corresponding analytical expression is too lengthy to be presented here. The result of the numerical analysis is given in Fig.~\ref{fig1}. The logarithmic expansion shows nice convergence and weak scale dependence at the physical scale of the inverse Bohr radius $\mu\sim \alpha_sm_b$. This suggests a small uncertainty due to uncalculated higher-order terms.
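As a quick consistency check of the numerical coefficients in Eq.~(\ref{eq:pqcd}), inserting $C_F=4/3$, $C_A=3$, $T_F=1/2$ and $n_l=4$ gives for the logarithm
\begin{displaymath}
\frac{1}{\pi}\,\frac{7C_A}{4} = \frac{21}{4\pi} \approx 1.67\,,
\end{displaymath}
while the constant term, including the piece $\frac{7C_A}{4}\ln C_F$ from $\ln(C_F\alpha_s)=\ln C_F+\ln\alpha_s$, evaluates to
\begin{displaymath}
\frac{1}{\pi}\left[\frac{21}{4}\ln\frac{4}{3} - \frac{2}{3}
+ 2\,\frac{2\pi^2-26}{9} + \frac{1}{2}\,\frac{3-3\ln 2}{2}
+ 3\,\frac{122-11\pi^2}{18}\right] \approx 0.61\,.
\end{displaymath}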
At the same time the nonperturbative contribution to the HFS is difficult to estimate. In principle it can be investigated by the method of the vacuum condensate expansion.\cite{VolLeu} The resulting series, however, does not converge well and suffers from large numerical uncertainties.\cite{TitYnd2} On the other hand, the nonperturbative contribution is suppressed by at least the second power of the heavy quark velocity $v\sim \alpha_s$. Hence it is beyond the accuracy of the NLL approximation and should be added to the errors. In the charmonium system, where the nonperturbative effects are supposed to be much more important, the NLL approximation gives the central value $M(J/\psi)-M(\eta_c)=104$~MeV,\cite{KPPSS} which is in very good agreement with the experimental value $117.7\pm 1.3$~MeV. This suggests that the nonperturbative contribution to the bottomonium HFS is likely to be small as well. A detailed discussion of the uncertainties of the NLL result can be found in Ref.\cite{KPPSS}. The final numerical prediction for the bottomonium HFS based on perturbative QCD reads
\begin{equation}
E^{\rm QCD}_{\rm hfs}=39 \pm 11\,({\rm th}) \,{}^{+9}_{-8}\, (\delta\alpha_s)~{\rm MeV}\,,
\label{eq:res}
\end{equation}
where ``th'' stands for the errors due to the high-order perturbative corrections and nonperturbative effects, whereas ``$\delta\alpha_s$'' stands for the uncertainty in $\alpha_s(M_Z)=0.118\pm0.003$.
\begin{figure}[t]
\begin{center}
\epsfig{figure=etab1S.eps,height=6cm}
\end{center}
\caption{HFS of 1S bottomonium as a function of the renormalization scale $\mu$ in the LO (dotted line), NLO (dashed line), LL (dot-dashed line), and NLL (solid line) approximations. For the NLL result, the band reflects the errors due to $\alpha_s(M_Z)=0.118\pm 0.003$. \label{fig1}}
\end{figure}
The problem of a proper description of the nonperturbative dynamics of the strong interactions at long distance is naturally solved by lattice simulations of QCD. A systematic analysis of the bottomonium HFS within unquenched lattice NRQCD predicts \cite{Gra}
\begin{equation}
E^{\rm lat}_{\rm hfs}={61}\pm 14~{\rm MeV},
\label{eq:lat}
\end{equation}
which has a somewhat larger central value than Eq.~(\ref{eq:res}), but agrees with the perturbative result within the error bars.

\section{Discussion}

The estimate~(\ref{eq:res}) based on perturbative QCD undershoots the experimentally measured values
\begin{equation}
E^{\rm exp}_{\rm hfs}= 71.4^{+2.3}_{-3.1}\,({\rm stat}) \pm 2.7 \,({\rm syst})~{\rm MeV},\mbox{\cite{BABAR1}} \qquad
E^{\rm exp}_{\rm hfs}= 67.4^{+4.8}_{-4.6}\,({\rm stat}) \pm 2.0\, ({\rm syst})~{\rm MeV},\mbox{\cite{BABAR2} }
\end{equation}
by about two standard deviations. This discrepancy is rather unexpected and difficult to explain if one takes into account the very successful perturbative description of the HFS in charmonium. At the same time the prediction of lattice QCD apparently agrees with the experimental data. This fact, however, should be taken with great care. Indeed, the lattice simulation \cite{Gra} uses a finite lattice spacing $a\sim (\alpha_sm_b)^{-1}$. It is determined by fitting the bottomonium spectrum, which is mostly sensitive to the soft momentum scale $\alpha_sm_b$. At the same time the HFS gets a significant contribution from the hard momentum scale of the heavy quark mass through the radiative corrections.
In the lattice NRQCD framework this contribution should be included in the Wilson coefficient of the spin-flip operator in the effective Hamiltonian, which is neglected in Eq.~(\ref{eq:lat}). The one-loop Wilson coefficient contains a large logarithm of the form $\ln\left(am_b\right)$. It is in one-to-one correspondence with the logarithmic term of Eq.~(\ref{eq:pqcd}) and results in an additional contribution to the HFS
\begin{equation}
\delta^{\rm hard}E_{\rm hfs}= - {{\alpha_s \over \pi}} \frac{7\,{C_A}\,}{4} {\ln\left(am_b\right)}E_{\rm hfs}\approx -20~{\rm MeV},
\end{equation}
which brings the lattice estimate~(\ref{eq:lat}) into perfect agreement with the perturbative result~(\ref{eq:res}). Thus, no definite conclusion on the accuracy of the lattice QCD predictions for the bottomonium HFS can be made at the moment, and further theoretical study is necessary. In particular, one has to compute the Wilson coefficient of the spin-flip operator perturbatively in the lattice regularization beyond the logarithmic approximation.

To summarize, with the precise experimental data now at hand, the bottomonium HFS becomes one of the most interesting hadronic systems in which to apply and to test the QCD theory of strong interactions. A significant discrepancy between the prediction based on perturbative QCD and the experimentally measured HFS is intriguing and requires further analysis.

\section*{Acknowledgments}
This work is supported by the Alberta Ingenuity Foundation and NSERC.

\section*{References}
\section{Introduction}
\label{intro}

The magnetic moments of nuclear ground states provided important empirical evidence for the development of the nuclear shell model \cite{MariaMayer-PhysRev.78.16}. Today, magnetic moments of ground and excited nuclear states remain important observables for gaining insight into nuclear structure -- they are sensitive to the single-particle structure of the quantum state, show how the nucleus carries its angular momentum, and can distinguish single-particle versus collective contributions to the wavefunction. This paper reviews some current developments in excited-state $g$-factor measurements. The transient-field technique (sect.~\ref{sec-1}), the recoil in vacuum method (sect.~\ref{sec-2}), and moment measurements with LaBr$_3$ detectors (sect.~\ref{sec-3}) are discussed. As the gyromagnetic precession of the nucleus is the experimentally measured quantity, the following discussion generally refers to $g$~factors rather than magnetic moments: $g=\mu/I$, where $I$ is the nuclear spin and the moment $\mu$ is given in nuclear magnetons.

\section{Transient field measurements}
\label{sec-1}

\subsection{50 years of transient fields}

An intense hyperfine magnetic field called the transient field (TF) acts on the nuclei of ions moving swiftly within a magnetized ferromagnetic medium. The discovery of the TF \cite{bor68} will reach its 50th anniversary in 2018. While some $g$-factor measurements performed between 1968 and 1975 made use of the TF (e.g. \cite{EberhMg26}), it was not until after the 1975 discovery that the TF increases with the velocity of the moving ion \cite{EBERHARDT-ETF1975} that the method became widely used \cite{ben80}. It has continued in regular use ever since \cite{benczerkoller07}. The TF method gives the sign of the $g$~factor and is best suited for relative $g$-factor measurements on excited states with lifetimes in the picosecond range. The following subsection describes a contemporary measurement of $g(^{26}{\rm Mg};2^+)/g(^{24}{\rm Mg};2^+)$ by the high-velocity TF method \cite{Mg26.MCormick2018}.

\subsection{High-velocity transient-field method: $N=14$ subshell closure in $^{26}$Mg}

The $E(2^+_1)$ and $B(E2)$ systematics for $Z=12$ indicate a subshell closure at $N=14$, i.e. $^{26}$Mg: as $N$ increases from $^{22}_{12}$Mg$_{10}$, the $E(2^+)$ value spikes at $N=14$ and the $B(E2)$ value dips. The expectation, then, is that the $2^+_1$ state of $^{26}$Mg should be dominated by proton excitations, giving $g(2^+_1)\sim+1$. Indeed, shell model calculations, using NuShellX \cite{NuShellX} and the USDB interactions \cite{Brown-New-USD,USDA-B_Obs}, predict $g(2^+_1)=+0.959$. Surprisingly, the currently adopted value is $g(2^+_1)=+0.50(13)$ \cite{A26,SpeidMg26}, half the expected value.

The first $g(2^+_1;\,^{26}{\rm Mg})$ measurement, by Eberhardt \textit{et al.} in 1974 using the thick-foil transient-field method, in which the excited $^{26}$Mg ions slowed and stopped in a magnetized iron host, found $g=+0.97(18)$ \cite{EberhMg26,ZalmTF}. Later, in 1981, Speidel \textit{et al.} \cite{SpeidMg26} argued that Eberhardt \textit{et al.} had incorrectly accounted for the static-field contribution, which came into effect after the ions came to rest in the iron host. Speidel \textit{et al.} made a new measurement using the thin-foil transient-field method, thereby excluding the static-field contribution, and obtained $g=+0.50(13)$, in agreement with Hartree-Fock calculations available at the time.
This result, which implies near equal contributions from protons and neutrons, is currently listed as the adopted value in Nuclear Data Sheets \cite{A26}. As noted above, modern shell model calculations and single-particle arguments contend that the $N=14$ subshell closure should result in $g(2^+_1)$ being much more heavily influenced by the proton contribution than the currently adopted measurement indicates. Both Eberhardt \textit{et al.} and Speidel \textit{et al.} used $(\alpha,\alpha')$ reactions to excite and recoil $^{26}$Mg ions into an iron host. The recoil velocity was relatively low, $v/c$~$\sim$~1\%, and precession angles due to the transient field were very small, $\sim$1~mrad. These were challenging experiments. A high-velocity transient-field measurement \cite{StuchMgTF,Zn72-Fiori} on beams of $^{24,26}$Mg ions was therefore performed. Beams of 120-MeV $^{24}$Mg$^{8+}$ and $^{26}$Mg$^{8+}$ were provided by the ANU Heavy Ion Accelerator Facility. The beams were Coulomb excited on a 9.9~mg/cm$^2$ natural gadolinium target, which also served as the ferromagnetic layer for the transient-field precession effect. Precession angles an order of magnitude larger than the earlier works \cite{EberhMg26,SpeidMg26} were observed. Moreover, the same target was used with beam excitation to measure the ratio of $2^+_1$-state $g$ factors in $^{24}$Mg and $^{26}$Mg. Taking $g(2^+_1; ^{24}\rm{Mg}) = +0.538(13)$ \cite{Mg24RIV} gave $g(2^+_1; ^{26}\rm{Mg}) = +0.86(10)$. The present $g$-factor measurement agrees with that of Eberhardt \textit{et al.} \cite{EberhMg26}, but with a reduced uncertainty. More details of the experiment and results are given in Ref.~\cite{Mg26.MCormick2018}. The $E(2^+_1)$, $B(E2)$ and $g(2^+_1)$ systematics of the even-$A$ magnesium isotopes from $^{22}$Mg to $^{32}$Mg are shown in Fig.~\ref{fig:MGTF}. The peak in $E(2^+_1)$ and the dip in the $B(E2)$ value at $^{26}$Mg together are indicative of the $\nu d_{5/2}$ subshell being filled. Shell-model calculations performed with NuShellX \cite{NuShellX} and the USDB interaction \cite{Brown-New-USD,USDA-B_Obs} are in agreement with most of the experimental data in Fig.~\ref{fig:MGTF}. Of course, as the Island of Inversion is approached toward $^{32}$Mg, the $sd$-shell model breaks down. More realistic predictions of $g$~factors in the $sdpf$ model space \cite{MCShellModel} are indicated in the lowest panel of Fig.~\ref{fig:MGTF}. To measure the magnitudes of these $g$~factors requires measurements on radioactive beams, for which the recoil in vacuum method has advantages over the TF method.
\begin{figure}[h] \centering \includegraphics[width=\columnwidth]{MGTFfig4.pdf} \caption{ Comparison of USDB shell model calculations and experiment for the magnesium isotopes from $A=22$ to 32: a) $E(2^+_1)$ energies, b) $B(E2)$ rates, and c) $g$~factors. The theoretical $g$ factors for $^{30}$Mg and $^{32}$Mg in a more realistic $sdpf$ model space are shown as stars \cite{MCShellModel}.} \label{fig:MGTF} \end{figure}
\section{Recoil in Vacuum} \label{sec-2} In general, although the RIV method gives only the magnitude of the $g$~factor, it has proven to give it more precisely than the transient-field method, particularly in the case of radioactive beam measurements where statistical precision is limited; compare Refs.~\cite{Allmond-Sn126RIV,Sn126TF}.
The primary reason is that the transient-field method requires $\gamma$-ray detection at a few specific angles in the plane perpendicular to the direction of the applied magnetic field whereas the RIV method can take advantage of $\gamma$-ray detection over a much broader angular range. A second reason, applicable for the time-dependent recoil in vacuum (TDRIV) method as applied recently to hydrogen-like $^{24}$Mg $2^+_1$ ions \cite{Mg24RIV}, is that the hyperfine interaction of the free ion in vacuum can be calculated from first principles with very high accuracy. The case of simple atomic systems such as H-like and Li-like ions will now be discussed, followed by some remarks on the RIV measurements on complex ions, which still have to be calibrated empirically. \subsection{RIV with H-like, Li-like and Na-like ions}
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{MgTDRIVFig1.pdf} \caption{Sketch of TDRIV experiment. The `stopper' of the traditional plunger technique is replaced by a thin foil that resets the electron configuration of H-like ions. The particle detector, with segmentation around the beam axis, is located downstream of the $\gamma$-ray detectors.} \label{fig:MgTDRIVfig1} \end{figure}
The experimental method \cite{stu05ecr} used to measure $g(2^+_1)$ in $^{24}$Mg \cite{Mg24RIV} is illustrated in Fig.~\ref{fig:MgTDRIVfig1}. Excited nuclei emerge from a target foil as ions carrying one electron. The nuclear spin ${\bm I}$ is aligned by the Coulomb-excitation reaction whereas the atomic spin ${\bm J}$ is oriented randomly. The hyperfine interaction couples the atomic spin to the nuclear spin, and together they precess about the total angular momentum ${\bm F} = {\bm I} + {\bm J}$ with a frequency proportional to the nuclear $g$~factor. Thus the orientation of the nuclear spin is periodically reduced and restored during the flight through vacuum. As a consequence, the angular intensity pattern of the $\gamma$ rays emitted by the nuclei varies periodically, in step with the orientation of the nuclear spin. The nuclear precession frequency is determined by observing changes in the radiation pattern as the flight time is varied by changing the distance between the target and reset foils. Experimental data obtained at the ALTO facility at IPN Orsay, showing the time dependence of the radiation pattern, are presented in Fig.~\ref{fig:Mg24Rt} \cite{Mg24RIV}. Fits to the $R(T)$ data with strong, intermediate and weak amplitude oscillations give $g= 0.538(13)$, 0.539(24) and 0.54(3), respectively, where the uncertainties are statistical only. The weighted average is $g= 0.538(11)$ (statistical error). Even the weakest amplitude oscillations yield a $g$~factor with a statistical uncertainty of better than 6\%. These data demonstrate that if the precession frequency can be observed in a TDRIV measurement, then the statistical precision is likely to exceed that of a transient-field $g$-factor measurement on the same state.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{RTCombo2.pdf} \caption{ $R(T)$ ratio data showing the time-dependence of the radiation pattern and fits based on detailed modeling of the TDRIV experiment on $^{24}$Mg \cite{Mg24RIV}. The distance is the separation of target and reset foils ($22.4~\mu {\rm m} = 1$~ps flight time). $R(T)$ is a ratio of $\gamma$-ray intensities observed at different angles versus the flight time $T$ (or plunger separation). See \cite{Mg24RIV} for details.
The frequency of the oscillation determines the $g$~factor whereas the amplitude affects the precision achieved. The rate of damping is determined by the nuclear level lifetime.} \label{fig:Mg24Rt} \end{figure}
Unfortunately, the TDRIV method can be applied to H-like ions only up to $Z \sim 20$ because the hyperfine frequency scales with $Z^3$ and the period becomes too short to measure; for $g=0.5$, the period of the oscillations is 3.1~ps at $Z=10$ and 0.38~ps at $Z=20$. To apply this type of TDRIV method to higher-$Z$ nuclei requires the use of the weaker fields of shielded electrons, such as Li-like ions (2s electron) or Na-like ions (3s electron). The use of the method with Li-like or Na-like ions is under investigation. There are indications from measurements of charge-state distributions and integral recoil in vacuum measurements on relatively low-$Z$ ions that atomic ground states and low-excitation atomic states dominate the free-ion interactions \cite{Stuchbery2013HFI12}. However, a Na-like ion, for example, has many more excited states than an H-like ion, with the potential to wash out the unique frequency of the atomic ground-state configuration. Along with the $^2S_{1/2}$ ground state, the low-excitation $^2P_{1/2}$ and $^2P_{3/2}$ atomic states are likely to contribute. Even so, Lin \textit{et al.} \cite{Lin1978} have reported evidence of population of Na-like $^{41}$Ca ions in the $^2S_{1/2}$ state via the observation of a spin precession at the expected frequency in transverse decoupling experiments on the $15/2^+$ level with $\tau = 4.7$~ns.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Na-like-Fe.pdf} \caption{ Evolution of the population of atomic states for Na-like Fe assuming an initial Gaussian distribution centred around 200 eV excitation energy with $\sigma = 100$ eV. Black represents least (no) population and white most. } \label{fig:Na-like} \end{figure}
We have also begun to perform calculations with the GRASP2K atomic structure codes \cite{GRASP2K} to explore how the population of atomic states might evolve as ions recoil in vacuum. The time evolution of excited atomic states of Na-like Fe assuming an initial Gaussian distribution of excited states is shown in Fig.~\ref{fig:Na-like}. Note that there is a prominent decay to the lower-excited states and the ground state on the timescale of several picoseconds. This calculation is schematic; at present it is unclear what initial population distribution is appropriate. However, it does tend to confirm the empirical observations that atomic ground states and low-excitation atomic states dominate the free-ion interactions on the timescale of several picoseconds. Attempts are underway to apply the TDRIV method to Na-like ions of $fp$ shell nuclei. Whether oscillations associated with the Na-like $^2S_{1/2}$ atomic ground state are observed or not, important information will be obtained to understand the free-ion hyperfine fields and develop their application to future $g$-factor measurements. If an identifiable oscillation frequency is clearly observed, it may provide an independent and reliable measure of the absolute $g$ factors in the $fp$ shell. Such measurements are very important because transient-field strengths in the $fp$ shell are somewhat uncertain due to a dearth of independently determined $g$~factors that can serve to calibrate the transient-field strength -- see Ref.~\cite{PhysRevC.79.024303}.
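To put the $Z^3$ scaling quoted above in concrete terms, the hyperfine oscillation period for H-like ions can be scaled from the anchor value given in the text (3.1~ps at $Z=10$ for $g=0.5$). The short sketch below is an illustration only, not part of the original analyses; the function name and the chosen $Z$ values are arbitrary.
\begin{verbatim}
# Schematic scaling of the hyperfine oscillation period for H-like ions.
# Anchor value from the text: T = 3.1 ps at Z = 10 for g = 0.5.
# The hyperfine frequency scales as g*Z^3, so the period goes as 1/(g*Z^3).
def period_ps(Z, g=0.5, T_ref=3.1, Z_ref=10, g_ref=0.5):
    return T_ref * (g_ref / g) * (Z_ref / Z) ** 3

for Z in (10, 12, 16, 20):
    print(Z, round(period_ps(Z), 2))
# Z = 20 gives ~0.39 ps, consistent with the 0.38 ps quoted above.
\end{verbatim}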
\subsection{RIV with complex ions} The time integral RIV technique on complex ions with $\sim$20--30 electrons has proved to be a powerful method to measure the $g$~factors of excited states of neutron-rich nuclei produced as radioactive beams, particularly in the Sn and Te isotopes near the neutron-rich doubly magic nuclide $^{132}$Sn \cite{Stone.Te132g.PhysRevLett.94.192501,AES.TeRIV.PhysRevC.76.034307,AES.Te134.PhysRevC.88.051304,JMA.Sn124-8g.PhysRevC.87.054325,JMA.StableSn.PhysRevC.92.041303,Te136.PhysRevLett.118.092503,Te136g.PhysRevC.96.014321}. One of the method's advantages is that the $g$~factor of the 2$^+_1$ state can be measured simultaneously with $B(E2; 0^+ \rightarrow 2^+)$ and $Q(2^+)$ \cite{AES.Te134.PhysRevC.88.051304,JMA.Sn124-8g.PhysRevC.87.054325,JMA.Sn124-8BE2.PhysRevC.84.061303,JMA.StableSn.PhysRevC.92.041303}. Although the hyperfine interaction must be calibrated empirically, the RIV method has proven to give the magnitude of the $g$~factor more precisely \cite{Stone.Te132g.PhysRevLett.94.192501,AES.TeRIV.PhysRevC.76.034307,JMA.Sn124-8g.PhysRevC.87.054325} than the transient-field method \cite{Te132TFg.BenczerKoller2008241,Sn126TF}. A detailed description of calibration and analysis procedures has been included in the report on the case of $^{136}$Te \cite{Te136g.PhysRevC.96.014321}. We are pushing the limits by developing this methodology for simultaneous measurements of $B(E2)$, $Q(2^+)$ and $g(2^+)$ in other regions of the nuclear chart. In particular, applications to few-nucleon $2^+_1$ states around $^{208}$Pb are of considerable interest. The shell structure in the neutron-rich $^{132}$Sn region can be compared with that in the vicinity of stable $^{208}$Pb \cite{Coraggio.PhysRevC.80.021305,Gargano09}. While the high-spin structure has been quite thoroughly studied experimentally around $^{208}$Pb, the electromagnetic properties of low-excitation, low-spin states associated with a few pairs of valence nucleons outside $^{208}$Pb have not. Thus direct comparisons of the related few-particle states around $^{132}$Sn and $^{208}$Pb are currently limited by the lack of experimental data on electromagnetic properties near $^{208}$Pb rather than near $^{132}$Sn. Measurements of the type described here on $^{210}$Pb, $^{210}$Po and $^{212}$Po would enable comparison with their equivalents, $^{134}$Sn, $^{134}$Te and $^{136}$Te. At present, little is known about the strengths of the relevant free-ion hyperfine interactions near $Z=82$; it is important to determine the strength of the RIV interaction for ions with $v/c \sim 0.08$ and $\sim$30--40 electrons recoiling in vacuum. The effective fields are expected to be much stronger than in the $^{132}$Sn region, so the $g$-factor measurements in the vicinity of $^{208}$Pb will need to control the interaction time of the nuclear moment with the electronic configuration by use of a plunger.
\section{Applications of LaBr$_3$ detectors} \label{sec-3} The development of lanthanum bromide (LaBr$_3$) detectors, which have excellent time resolution (typically {${\sim}300$~ps}) and energy resolution much superior to that of scintillators such as NaI and BaF$_2$, provides an opportunity to perform perturbed angular distribution $g$-factor measurements under new experimental conditions. We have investigated the application of LaBr$_3$ detectors in beam to measure \mbox{${g}$ factors} of nuclear states with nanosecond lifetimes using static hyperfine magnetic fields in magnetic hosts.
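As an aside, the nanosecond oscillation periods relevant to such in-beam measurements follow directly from the Larmor frequency. The following minimal sketch -- a back-of-the-envelope check, not taken from the experimental analysis -- evaluates the expected oscillation period for the $^{54}$Fe case described next, assuming that the perturbed angular distribution (dominated by its $P_2$ component) repeats at twice the Larmor frequency.
\begin{verbatim}
from math import pi

# Expected oscillation period for the 54Fe 10+ isomer in nickel, using
# the g factor and static hyperfine field quoted in the text.
hbar = 1.054571817e-34   # J s
mu_N = 5.0507837e-27     # nuclear magneton, J/T
g, B = 0.728, 27.00      # |g| and |B_hf| in tesla
omega_L = g * mu_N * B / hbar   # Larmor angular frequency, rad/s
T_osc = pi / omega_L            # P2 pattern repeats at 2*omega_L
print(T_osc * 1e9)              # ~3.3 ns
\end{verbatim}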
A preliminary experiment on $^{54}$Fe implanted into nickel was performed. The target consisted of 0.76 mg/cm$^2$ of $^{45}$Sc evaporated onto a nickel foil of thickness 2.44 mg/cm$^2$ that had previously been annealed at $\sim 790^{\circ}$C. The 10$^+$ isomer with $\tau = 525(10)$ ns and $g=+0.728(1)$ was populated by the $^{45}$Sc($^{12}$C, p2n)$^{54}$Fe reaction with 40-MeV $^{12}$C beams from the ANU 14UD Pelletron. For the known $g$~factor and hyperfine field $B_{hf} = -27.00(3)$ T, the expected precession period is $\sim 3$ ns. This period approached the limit of the experimental setup because the time width of the beam-pulse was about 2 ns. The expected frequency was observed in the measured autocorrelation function for the 3432-keV transition as shown in Fig.~\ref{fig:Fe54auto}. These results demonstrate that cases where the precession period is of the order of a few nanoseconds are accessible for in-beam measurement.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{54fe_auto-3432.pdf} \caption{Autocorrelation function for the 3432-keV transition below the 10$^+$ isomer of $^{54}$Fe after implantation into nickel and showing the expected 3.3 ns oscillation period. The autocorrelation procedure was applied to the time-dependent perturbed angular distribution data, as described in Ref.~\cite{GG}, to make the oscillations visually more apparent.} \label{fig:Fe54auto} \end{figure}
\begin{figure}[b] \centering \includegraphics[width=\columnwidth]{Cd107Fig4.pdf} \caption{Time spectra for the 640-keV transition depopulating the $I^\pi=\frac{11}{2}^-$ isomer in $^{107}$Cd. Time as displayed starts with the beam pulse and stops when a $\gamma$-ray is detected. Fits to guide the eye show the oscillations in the two spectra out of phase. The prompt peak is due to a prompt 632-keV transition in $^{106}$Cd that cannot be resolved from the 640-keV line by the LaBr$_3$ detectors.} \label{fig:Cd107} \end{figure}
The hyperfine field of Cd implanted into gadolinium was then investigated. The motivation was to reassess the $g$-factor measurement on the \mbox{$I^\pi=10^{+}$} state in $^{110}$Cd, which reported \mbox{$g(10^+)=-0.09(3)$}, at least a factor of two smaller in magnitude than the \mbox{$g\approx-0.2$ to $-0.3$} expected for a rather pure $(h_{\frac{11}{2}})^2$ neutron configuration~\cite{Regan1995}. The hyperfine fields at $^{107}$Cd ions implanted into gadolinium following the $^{98}$Mo($^{12}$C, $3n$)$^{107}$Cd reaction were measured under similar conditions to the $^{110}$Cd $g$-factor measurement. Examples of time spectra for the 640-keV transition depopulating the 11/2$^-$ isomer in $^{107}$Cd are shown in Fig.~\ref{fig:Cd107}. The oscillations appear at close to the expected frequency, but their amplitude indicates that the fraction of nuclei on field-free sites is significant. The consequence is that the effective hyperfine field for the integral perturbed angular distribution measurement of $g(10^+)$ in $^{110}$Cd was much smaller than adopted in Ref.~\cite{Regan1995}. With a corrected effective hyperfine field, a re-analysis of the data from Ref.~\cite{Regan1995} gives $g(10^+)=-0.29(16)$, consistent with expectations for the seniority-two $\nu h_{11/2}$ configuration. A full account of this work has been published \cite{TGray2017}.
\section{Conclusions} Some current developments in excited-state $g$-factor measurements have been described. The transient-field method continues to give important results a half century after the transient field itself was discovered.
The recoil in vacuum method has advanced greatly over the past decade, yet it has much potential for further development, particularly through time-dependent measurements. Moment measurements with LaBr$_3$ detectors and fast timing have opened up new opportunities for in-beam $g$-factor measurements on states that live a few nanoseconds. \section*{Acknowledgements} We thank our many colleagues who contributed to the work discussed here. This research was supported in part by the Australian Research Council grant numbers DP140102986, DP140103317 and DP70101673. B.P.M., T.J.G. and B.J.C. acknowledge the support of the Australian Government Research Training Program. Support for the Heavy Ion Accelerator Facility operations through the Australian National Collaborative Research Infrastructure Strategy (NCRIS) program is acknowledged.
\section{Introduction} \label{sec:intro} \smallskip Filamentary structures are present across many astrophysical scales, from $\,{\rm Mpc}$ to sub-$\,{\rm pc}$. On the largest scales, structure formation occurs in the ``cosmic web'', a network of sheets and filaments that connect dark matter haloes \citep{Zeldovich70,Bond96,Springel05}, and is also evident in the distributions of galaxies \citep[e.g.][]{Colless03,Tegmark04,Huchra05}. Intergalactic gas cools and condenses towards the centres of the dark matter filaments, forming a network of baryon-dominated intergalactic gas streams \citep{db06,Birnboim16}. There have been several recent attempts to model such streams as self-gravitating Mpc-scale gaseous cylinders, an approach which seems consistent with cosmological simulations \citep{Harford08,Harford11,Freundlich14,M18a}. \smallskip At the nodes of the cosmic web, the most massive haloes reside at the intersection of several filaments and are penetrated by the gas streams residing at their centres. These streams constitute the main mode of gas accretion onto the central galaxies \citep{Keres05,Dekel09,Danovich12,Zinger16}. At redshifts $z\lower.5ex\hbox{\gtsima} 2$, simulations suggest that streams feeding galactic haloes remain dense and cold, with temperatures of $\sim 10^4\,{\rm K}$, as they travel through the hot circumgalactic medium (CGM) towards the central galaxy (\citealp{Keres05,db06,Ocvirk08,Dekel09,CDB,FG11,vdv11}, though see also \citealp{Nelson13,Nelson16}). The filamentary structure in such systems can thus be maintained down to scales of tens of $\,{\rm kpc}$ around galaxies (though see below), where it has been suggested that the streams may fragment due to gravitational instability \citep[hereafter GI;][]{DSC,Genel12,M18a}. While these cold circumgalactic streams are difficult to directly detect, recent observations have revealed massive extended cold components in the CGM of high-redshift galaxies, whose spatial and kinematic properties are consistent with predictions for cold streams \citep{Bouche13,Bouche16,Prochaska14,Cantalupo14,Martin14a,Martin14b,Borisova16,Fumagalli17,Leclercq17,Arrigoni18}. \smallskip Within galactic discs, spiral arms have been modelled as one-dimensional filaments whose gravitational fragmentation leads to the formation of giant molecular clouds (GMCs) or star-forming clumps \citep{Inoue18}. Within individual GMCs, \textit{Herschel} observations of star forming regions reveal a multi-scale network of filamentary structures and dense cores aligned with them like beads on a string \citep{Andre10,Jackson10,Arzoumanian11,Kirk13,Palmeirim13}. This has led to the suggestion that turbulence-driven formation of filaments in the interstellar medium (ISM) is the first step towards core and star-formation \citep{Molinari10,Andre10,Andre14}, a connection which had been conjectured for some time \citep[e.g.][]{Schneider79,Larson85}. In this scenario, the densest filaments with widths of order $\sim 0.1\,{\rm pc}$ \citep{Arzoumanian11,Hennebelle13} collapse due to GI and lead to the formation of dense cores where star-formation occurs. Simulations of molecular clouds in the ISM reveal similar multi-scale filamentary structures, arising from a variety of mechanisms such as turbulence, gravitational collapse of larger structures, thermal instabilities, or colliding flows \citep[e.g.][]{Padoan01,Banerjee09,Gomez14,Moeckel15,Smith16}.
\smallskip Studies of the structure and stability of self-gravitating filaments have a long history, mostly in the context of star-formation in ISM filaments. Early analytic work investigated the stability of an infinite incompressible cylinder with and without an axial magnetic field \citep{Chandrasekhar53}, a compressible yet still homogeneous infinite cylinder \citep{Ostriker64b}, a homogeneous stream of finite radius \citep{Mikhailovskii72,Fridman84}, and a uniformly rotating isothermal cylinder \citep{hansen76}. Hydrostatic equilibrium of a self-gravitating isothermal cylinder is only possible if its mass per unit length (hereafter line-mass) is less than a critical value which depends only on its temperature (\citealp{Ostriker64}; see \equ{isothermal_line_mass} below). For non-isothermal filaments, the critical line-mass is similar (\se{prof}). Filaments with line-mass larger than the critical value must collapse radially. For line-masses smaller than the critical value, a hydrostatic solution exists, but is unstable to long wavelength axisymmetric perturbations. The fastest growing wavelength is roughly eight times the radius of the stream, $\lambda\sim 8R_{\rm s}$ \citep[][hereafter N87]{Nagasawa}, resulting in stream fragmentation as described in more detail below. A collapsing filament with a line-mass slightly exceeding the critical value, as may eventually be the case for a filament growing via radial accretion, is also unstable to axisymmetric perturbations and will fragment at a similar wavelength to the hydrostatic case \citep{Inutsuka92}. Both cases eventually lead to the formation of bound clumps with masses of order the local Jeans mass \citep{Clarke16,Clarke17}. However, if the line-mass greatly exceeds the critical value the filament collapses towards its axis without fragmenting \citep{Inutsuka92}. On scales smaller than the filament radius, the local stability criterion reduces to the classical Jeans criterion, even in the presence of rotation \citep{Freundlich14}. This implies that such local collapse is only possible if the filament is larger than its Jeans length. \smallskip N87 studied the stability of a self-gravitating isothermal cylinder with line-mass below the critical value, pressure confined by a low density external medium. He found that the system is always unstable to long-wavelength axisymmetric perturbations even at low values of the line-mass. Similarly, \citet[][hereafter H98]{Hunter98} found that a self-gravitating cylinder which is pressure confined by an external medium, with a density discontinuity at the boundary, is always unstable to long-wavelength axisymmetric perturbations. These results are contrary to the spherical case, where a hydrostatic sphere with mass below the critical Bonnor-Ebert mass \citep{Ebert55,Bonnor56} is stable against gravitational collapse. We elaborate further on these two studies in \se{sgi}. \smallskip In addition to GI, cylindrical streams or jets are susceptible to Kelvin-Helmholtz Instability (KHI) whenever there is a shearing motion between the stream and its surroundings. Numerous authors have studied KHI in cylinders, typically focusing on light or equidense jets meant to represent protostellar or AGN jets \citep[e.g.][]{Birkinshaw84,Payne_Cohn85,Hardee95,Bassett95,Bodo98,Bogey11}. Several authors have also addressed the effects of magnetic fields and/or radiative cooling on KHI in cylindrical jets \citep{Ferrari81,Massaglia92,Micono00,Xu00}.
However, none of the aforementioned studies accounted for the self-gravity of the gas, as this is expected to be negligible for the systems being considered, namely jets from young stars or AGN. It has also been noted that the streams resulting from stars tidally destroyed by black holes may experience KHI \citep{Bonnerot2016}. Recently, in a series of papers, \citet{M16}, \citet{P18} and \citet{M18b} (hereafter M16, P18 and M19, respectively) presented a detailed study of KHI, without self-gravity or radiative cooling, in a dense supersonic cylinder representing the cold circumgalactic streams feeding high-redshift galaxies. These can be up to 100 times denser than their surroundings. They found that KHI can be important in the evolution of such streams, leading to significant deceleration and energy dissipation, and in certain cases to total stream disruption in the CGM. We elaborate further on these studies in \se{khi}. \smallskip Clearly, extensive work has been done studying separately the effects of GI and of KHI in filaments and streams. While the evolution of KHI in a self-gravitating fluid has been studied in planar \citep[][hereafter H97]{Hunter97} and spherical geometry \citep[][hereafter M93]{Murray93}, we are unaware of any such work in cylindrical geometry. Since the evolution of KHI in cylindrical geometry is qualitatively different from that in planar geometry (M19, and references therein), while GI in cylinders is qualitatively different from that in spheres (e.g. N87; H98), it is worth explicitly studying the combined effects of KHI and self-gravity in cylindrical systems, which is the focus of this paper. \smallskip This has important astrophysical implications as well, as there are several filamentary systems where both effects are likely to be important. For instance, it has been shown that the cold circumgalactic streams are likely gravitationally unstable in the inner haloes of massive galaxies at high redshift, potentially resulting in star formation and even globular cluster formation along the streams in the CGM \citep{M18a}. This may explain recent \textit{ALMA} observations of dense star-forming gas at distances of tens of $\,{\rm kpc}$ away from a massive galaxy at $z\sim 3.5$, which does not appear to be associated with the galaxy or any of its satellites \citep{Ginolfi17}. Additionally, filaments in GMCs in the ISM occasionally exhibit shearing flows with respect to their background \citep{HilyBlant09,Federrath16,Kruijssen19}, suggesting that KHI may be important in their evolution. \smallskip The rest of this paper is organised as follows. In \se{theory}, we review the current theoretical understanding of GI and KHI in pressure-confined cylinders, and present predictions for how the two may behave in unison. In \se{sim}, we describe a suite of numerical simulations used to study GI and KHI in cylinders. In \se{res} we present the results of our numerical analysis and compare these to our analytical predictions. In \se{disc} we discuss our results and their astrophysical applications, present caveats to our analysis and outline future work. Finally, we summarise our main conclusions in \se{conc}. \section{Theory of instabilities} \label{sec:theory} In this section we briefly review the existing theory of GI (\se{sgi}) and KHI (\se{khi}) in pressure-confined cylinders.
We then make new predictions for how the two may behave in unison (\se{combined}, to be tested using numerical simulations in \se{res}), and compare these to previous results of a combined analysis in spherical systems (\se{sphere}). \subsection{Gravitational instability} \label{sec:sgi} We focus here on the results of N87 and H98, as these are the most relevant for our current analysis. These studies both focus on the stability of a self-gravitating cylinder with finite radius and line-mass below the critical value for hydrostatic equilibrium, pressure confined by a uniform external medium. \smallskip N87 consider an isothermal cylinder initially in hydrostatic equilibrium, with the density profile
\begin{equation} \label{eq:isothermal_density} \rho(r)=\rho_{\rm c}\left[1+\frac{1}{8}\left(\frac{r}{H}\right)^2\right]^{-2}, \quad H=\frac{c_s}{\sqrt{4\pi G \rho_{\rm c}}}, \end{equation}
{\noindent}\citep{Ostriker64}. $\rho_{\rm c}$ is the central density of the cylinder, $H$ is its scale height, $c_{\rm s}$ is the isothermal sound speed, and $G$ is the gravitational constant. The line-mass of such a cylinder out to radius $R_{\rm s}$ is
\begin{equation} \label{eq:line_mass_def} \Lambda=\int_0^{R_{\rm s}} 2\pi r\rho (r)~{\rm dr}. \end{equation}
{\noindent}For $R_{\rm s}=\infty$, this yields the critical line-mass for hydrostatic equilibrium \citep{Ostriker64},
\begin{equation} \label{eq:isothermal_line_mass} \Lambda_{\rm cr,\,iso}=2c_{\rm s}^2/G. \end{equation}
{\noindent}An equilibrium initial condition is only possible for $\Lambda\le \Lambda_{\rm cr,\,iso}$. For a cylinder truncated at a finite radius $R_{\rm s}$, the density and line-mass profiles at $r<R_{\rm s}$ are still given by \equs{isothermal_density} and \equm{line_mass_def}. Thus, the ratio of the cylinder's line-mass to the critical line-mass is related to the ratio of the cylinder's radius to its scale height,
\begin{equation} \label{eq:iso_line_mass_radius} \frac{\Lambda}{\Lambda_{\rm cr,\,iso}}=\left[1+8\left(\frac{H}{R_{\rm s}}\right)^2\right]^{-1}. \end{equation}
{\noindent}Increasing the central density, $\rho_{\rm c}$, or decreasing the temperature and thus the sound speed, $c_{\rm s}$, reduces the scale height, $H$. For a fixed stream radius, $R_{\rm s}$, this results in an increase of the ratio $\Lambda/\Lambda_{\rm cr,\,iso}$. \smallskip In terms of the external pressure confining the truncated cylinder, pressure equilibrium at the boundary dictates that
\begin{equation} P_{\rm ext}=P(R_{\rm s})=c_{\rm s}^2\rho(R_{\rm s})=c_{\rm s}^2\rho_{\rm c} \left[1+\frac{R_{\rm s}^2}{8H^2}\right]^{-2}. \end{equation}
{\noindent}Inserting this into \equ{iso_line_mass_radius} yields
\begin{equation} \label{eq:line_mass_pressure} \frac{\Lambda}{\Lambda_{\rm cr,\,iso}}=1-\left(\frac{P_{\rm ext}}{\rho_{\rm c} c_{\rm s}^2}\right)^{1/2}. \end{equation}
{\noindent}This shows that for a given temperature and external pressure, a cylinder can have any line-mass from $0$ to $\Lambda_{\rm cr,\,iso}$, obtained by increasing the central density from $\rho_{\rm c}=P_{\rm ext}/c_{\rm s}^2$ towards infinity. The critical line-mass therefore does not depend on the external pressure. This is fundamentally different from the spherical case where the maximal mass for which a hydrostatic equilibrium solution exists depends on the external pressure. This is the Bonnor-Ebert mass,
\begin{equation} \label{eq:BE_mass} M_{\rm BE}=1.18\frac{c_{\rm s}^4}{P_{\rm ext}^{1/2}G^{3/2}} \end{equation}
\citep{Ebert55,Bonnor56}.
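\smallskip As a sanity check on the above relations, \equ{iso_line_mass_radius} and \equ{line_mass_pressure} can be verified by direct numerical integration of \equ{isothermal_density}. The following minimal sketch is an illustration in convenient units ($G=c_{\rm s}=\rho_{\rm c}=1$, so $H=(4\pi)^{-1/2}$ and $\Lambda_{\rm cr,\,iso}=2$), and is not part of the original analyses:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H = 1.0 / np.sqrt(4.0 * np.pi)   # scale height for G = c_s = rho_c = 1

def mu_numeric(Rs):
    # Lambda(<Rs)/Lambda_cr,iso by integrating the isothermal profile.
    lam, _ = quad(lambda r: 2*np.pi*r*(1.0 + (r/H)**2/8.0)**-2, 0.0, Rs)
    return lam / 2.0

for Rs_over_H in (1.0, 2.828, 8.485):
    Rs = Rs_over_H * H
    mu_closed = 1.0 / (1.0 + 8.0 / Rs_over_H**2)   # closed-form ratio
    P_ext = (1.0 + Rs_over_H**2 / 8.0)**-2         # boundary pressure
    print(Rs_over_H, mu_numeric(Rs), mu_closed, 1.0 - np.sqrt(P_ext))
# All three estimates agree; Rs/H = 2.828 and 8.485 give mu = 0.5 and 0.9.
\end{verbatim}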
{\noindent}For further comparison of the structure and properties of self-gravitating cylinders and spheres confined by external pressure, see \citet{Fischera12}. For the remainder of our analysis we will use the scale height, $H$, and the stream radius, $R_{\rm s}$, rather than the external pressure. \smallskip N87 analysed perturbations about hydrostatic equilibrium in a cylinder with radius $R_{\rm s}$, pressure confined by an external medium with constant pressure and effectively zero density, $\rho_{\rm ext}\ll\rho(R_{\rm s})$. The dispersion relation was numerically evaluated for several values of $\Lambda/\Lambda_{\rm cr,\,iso}$. All cases were found to be stable to non-axisymmetric modes. For axisymmetric modes, the system was found to be unstable at long wavelengths, with longitudinal wavenumber $k<k_{\rm cr}$. The system attains a maximal growth rate, $\omega_{\rm max}$, at a finite wavenumber, $k_{\rm max}$, hereafter the fastest growing mode, and then stabilises again at infinite wavelengths, $\omega\rightarrow 0$ as $k\rightarrow 0$. This is unlike the spherical Jeans instability where the growth rate diverges as $k\rightarrow 0$. There is no closed analytic expression for $k_{\rm cr}$, $k_{\rm max}$ or $\omega_{\rm max}$ for the general case, but it is useful to consider two limiting cases. \smallskip In the limit $\Lambda\rightarrow\Lambda_{\rm cr,\,iso}$, equivalent to $R_{\rm s}\gg H$ (\equnp{iso_line_mass_radius}), the solution converges to that of an infinite cylinder. In this case, one obtains $k_{\rm cr} \simeq 0.56 H^{-1}$, $k_{\rm max}\simeq 0.28 H^{-1}$, and $\omega_{\rm max}\simeq 0.60\left(4 G\rho_{\rm c}\right)^{1/2}$. For comparison, the free-fall time of a cylinder with average density $\langle\rho\rangle=\Lambda/(\pi R_{\rm s}^2)$ is
\begin{equation} \label{eq:tff} t_{\rm ff}=\left(4G\langle\rho\rangle\right)^{-1/2}. \end{equation}
{\noindent}For an isothermal cylinder with radius $R_{\rm s}$,
\begin{equation} \label{eq:rho_bar} \langle\rho\rangle=\rho_{\rm c}\left(1-\frac{\Lambda}{\Lambda_{\rm cr,\,iso}}\right)=\rho_{\rm c}\left(1+\frac{R_{\rm s}^2}{8H^2}\right)^{-1}. \end{equation}
{\noindent}For $\Lambda = 0.90\Lambda_{\rm cr,\,iso}$, we thus have $R_{\rm s}\simeq 8.5H$ and $\omega_{\rm max}/t_{\rm ff}^{-1}\simeq 1.9$. For larger values of $\Lambda$ the ratio $\omega_{\rm max}/t_{\rm ff}^{-1}$ increases. \smallskip In the opposite limit, when $\Lambda\ll\Lambda_{\rm cr,\,iso}$ or $R_{\rm s}\ll H$, the density is roughly constant within $R_{\rm s}$ and the solution converges to that of an incompressible cylinder, first studied by \citet{Chandrasekhar53}. The dispersion relation for an incompressible cylinder is given by\footnote{Note that there is a minus sign missing from the corresponding equation (4.10) in N87.}
\begin{equation} \label{eq:nagasawa} \frac{\omega^2}{4\pi G \rho} = -\frac{xI_1}{I_0}\left[K_0I_0-\frac{1}{2}\right] , \end{equation}
{\noindent}where $I_{\nu}(x)$ and $K_{\nu}(x)$ are modified Bessel functions of the first and second kind of order $\nu$, evaluated at the argument $x=kR_{\rm s}$. This yields $k_{\rm cr} \simeq 1.1 R_{\rm s}^{-1}$, $k_{\rm max} \simeq 0.6 R_{\rm s}^{-1}$, and $\omega_{\rm max}\simeq 0.4t_{\rm ff}^{-1}$. \smallskip To summarise, the shortest unstable wavelength is $\lambda_{\rm cr}=2\pi/k_{\rm cr}\sim 4\pi H$ and $2\pi R_{\rm s}$ in the limits $\Lambda \rightarrow \Lambda_{\rm cr,\,iso}$ and $\Lambda\ll\Lambda_{\rm cr,\,iso}$ respectively.
In all cases, the most unstable mode occurs at $\lambda_{\rm max}\sim 2\lambda_{\rm cr}$, while $\omega_{\rm max}/t_{\rm ff}^{-1}$ is within a factor $\sim 2$ of unity. Note that since in the latter limit $R_{\rm s}\ll H$, we arrive at the somewhat counterintuitive result that for smaller values of the line-mass the shortest and most unstable wavelengths are much shorter. As noted by N87, the instability manifests itself in different ways in these two limits. For large values of the line-mass the system is unstable to body-modes which are maximal near the stream axis and are similar to the classic Jeans instability. On the other hand, for small values of the line-mass the instability is dominated by surface modes, which are maximal near the stream interface and lead to its deformation. In the non-linear regime, these two modes of instability lead to different shapes and orientations of collapsed clumps within the stream \citep{Heigl18}. \smallskip H98 generalised this analysis by allowing for a finite background density, $\rho_{\rm b}$, confining the stream. However, they assumed a constant stream density, $\rho_{\rm s}$, rather than an isothermal profile. Their scenario is thus analogous to the limit $\Lambda\ll\Lambda_{\rm cr,\,iso}$ from N87. H98 derive the following dispersion relation\footnote{This is equivalent to equation (68) from H98 using the identity $I_0(x)K_1(x) + I_1(x)K_0(x) = 1/x$.}
\begin{equation} \label{eq:hunter} \begin{array}{c} \dfrac{\omega^2}{4\pi G \bar{\rho}}= -\left[\dfrac{x(\rho_{\rm s}-\rho_{\rm b})^2 I_0K_0}{\bar{\rho}^2}-\dfrac{x\rho_{\rm s}(\rho_{\rm s}-\rho_{\rm b})}{2\bar{\rho}^2}\right]\\ \times\left[\dfrac{\rho_{\rm s} I_0}{\bar{\rho}I_1}+\dfrac{\rho_{\rm b} K_0}{\bar{\rho}K_1}\right]^{-1}, \end{array} \end{equation}
{\noindent}where $\bar{\rho}=0.5(\rho_{\rm s}+\rho_{\rm b})$, and $I_{\nu}(x)$ and $K_{\nu}(x)$ are again modified Bessel functions with $x=kR_{\rm s}$. This converges to \equ{nagasawa} in the limit $\rho_{\rm b}\rightarrow 0$. \smallskip From \equ{hunter}, the condition for instability is
\begin{equation} \label{eq:hunter_unstable} I_0K_0>\frac{1}{2\left(1-\delta^{-1}\right)}, \end{equation}
{\noindent}where $\delta=\rho_{\rm s}/\rho_{\rm b}$ is the density contrast between the stream and the background. If $\delta<1$, such that the background is denser than the stream, the system is unstable at all wavelengths due to Rayleigh-Taylor instability (RTI). If $\delta>1$, such that the stream is denser than the background, the system is unstable at long wavelengths, i.e. small values of the argument of the Bessel functions on the left-hand side of \equ{hunter_unstable}, $x=kR_{\rm s}$. Furthermore, H98 find that the instability always manifests itself as a surface mode, leading to the deformation of the stream-background interface, similar to the conclusion of N87 for the low line-mass case. For $\delta\rightarrow\infty$, corresponding to $\rho_{\rm b}\rightarrow 0$, the system is unstable for $k<k_{\rm cr}\simeq 1.07R_{\rm s}^{-1}$, as for \equ{nagasawa}. For $\delta=1$ such that there is no density discontinuity at the interface, $k_{\rm cr}=0$ and the system is stable for all finite wavelengths. This highlights the fact that this is an interface instability, caused by a density discontinuity between the stream and the background. For $\delta=4,~10,~100$, we have $k_{\rm cr}R_{\rm s}\simeq 0.79,~0.96,~1.06$. The maximal growth rate for these cases is $\omega_{\rm max}/t_{\rm ff}^{-1}\simeq 0.26,~0.36,~0.43$.
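\smallskip The values of $k_{\rm cr}$ and $\omega_{\rm max}$ quoted above are straightforward to reproduce by evaluating \equ{hunter} numerically. The short sketch below is an illustration (not the original computation); it scans $x=kR_{\rm s}$ for several density contrasts and expresses the growth rate in units of $t_{\rm ff}^{-1}=(4G\rho_{\rm s})^{1/2}$, appropriate for the constant-density stream of H98:
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1, k0, k1

def omega2(x, delta):
    # H98 dispersion relation in units of 4*pi*G*rho_bar, with rho_b = 1,
    # rho_s = delta and x = k*R_s; negative values indicate instability.
    rs, rb = float(delta), 1.0
    rbar = 0.5 * (rs + rb)
    num = (x*(rs - rb)**2 * i0(x)*k0(x) - 0.5*x*rs*(rs - rb)) / rbar**2
    den = rs*i0(x)/(rbar*i1(x)) + rb*k0(x)/(rbar*k1(x))
    return -num / den

x = np.linspace(1e-4, 2.0, 20000)
for delta in (4.0, 10.0, 100.0):
    w2 = omega2(x, delta)
    k_cr = x[w2 < 0.0].max()                   # largest unstable wavenumber
    growth = np.sqrt(np.clip(-w2, 0.0, None))  # units of (4*pi*G*rho_bar)**0.5
    growth *= np.sqrt(np.pi*(delta + 1.0)/(2.0*delta))  # -> units of 1/t_ff
    print(delta, k_cr, growth.max())
# Reproduces k_cr*R_s ~ 0.79, 0.96, 1.06 and
# omega_max*t_ff ~ 0.26, 0.36, 0.43 for delta = 4, 10, 100.
\end{verbatim}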
\smallskip H98 also note that in the case of a dense sphere pressure confined by a lower density background, the analogous surface mode is always stable, in agreement with the known fact that spheres less massive than the Bonnor-Ebert mass are stable. However, in planar geometry, such as a dense slab pressure confined by a lower density background, a similar surface instability exists above a critical wavelength (H97). \smallskip The GI surface modes can be thought of as RTI analogues, induced by the self-gravity of the fluid rather than by an external gravitational field. An intuitive explanation was offered by H97 for the planar case, and can be adapted to cylindrical geometry as follows. Consider a dense cylinder with constant density $\rho_{\rm s}$ pressure confined by a background medium with constant density $\rho_{\rm b}<\rho_{\rm s}$. Such a system is stable to classical RTI. Now consider an axisymmetric perturbation to the interface of the cylinder with longitudinal wavelength $\lambda$. In some region, say $0<z<\lambda/2$, there is an outward distortion of the interface, $\xi(z)$, which results in a mass excess just outside the original interface, proportional to $(\rho_{\rm s}-\rho_{\rm b})\xi(z)$. Through Poisson's equation, this leads to a more negative gravitational potential in this region, resulting in a perturbation $\Phi_1<0$ to the initial potential. As the fluid is incompressible and at rest, Bernoulli's equation tells us that $P+\rho\Phi={\rm const}$ along any streamline in either fluid, where $P$ is the pressure. In the incompressible limit, where $\rho={\rm const}$ in each fluid, this implies that $P_{\rm 1,s}=-\rho_{\rm s}\Phi_{\rm 1}$ and $P_{\rm 1,b}=-\rho_{\rm b}\Phi_{\rm 1}$, where $P_{\rm 1,s}$ and $P_{\rm 1,b}$ are the perturbations to the pressure in the stream and the background respectively, on either side of the interface. Since $\rho_{\rm s}>\rho_{\rm b}$ and $\Phi_1<0$, we have that $P_{\rm 1,s}>P_{\rm 1,b}$, so the pressure in the stream just inside the interface is larger than the pressure in the background just outside the interface, causing the perturbation to continue growing. \smallskip This instability only manifests at long wavelengths, when the mass excess leading to the perturbation of the potential is large enough to overcome the stabilizing effect of RT modes induced by the unperturbed potential. As noted above, the shortest unstable wavelength for cylinders is $\sim 2\pi R_{\rm s}\sim 6.3R_{\rm s}$ (\equnp{hunter_unstable}). By contrast, the longest available wavelength on the surface of a sphere corresponds to the $l=2$ spherical harmonic, since the $l=0$ mode represents global expansion or contraction of the sphere while the $l=1$ mode represents a rigid displacement. The wavenumber associated with the $l=2$ mode is $k=[l(l+1)]^{1/2}/R_{\rm s}\sim 2.5/R_{\rm s}$, corresponding to a wavelength of $\lambda\sim 2.6R_{\rm s}$. This is too short for GI surface modes to overcome RT stabilization, which is why there are no GI surface modes for spherical systems (H98). \subsection{KH Instability} \label{sec:khi} KHI arises from shearing motion at the interface between two fluids, leading to efficient mixing and smoothing out the initial contact discontinuity. We focus here on the recent results of M19, who analysed the non-linear evolution of KHI in a dense 3d cylinder streaming through a static background, expanding on earlier work by M16 and P18.
The system is characterised by two dimensionless parameters, the Mach number of the stream velocity with respect to the background sound speed, $M_{\rm b}=V_{\rm s}/c_{\rm b}$, and the density contrast of the stream and the background, $\delta=\rho_{\rm s}/\rho_{\rm b}$. M19 analytically derived timescales for the non-linear mixing of the two fluids and eventual disruption of the stream, as well as for stream deceleration and the loss of bulk kinetic energy, as a function of these two parameters. \smallskip We begin by noting that, similar to the dichotomy between surface modes and body modes in GI (N87), there are two modes of KHI. The nature of the instability depends primarily on the ratio of the stream velocity to the sum of the two sound speeds,
\begin{equation} \label{eq:Mtot} M_{\rm tot}=\frac{V_{\rm s}}{c_{\rm s}+c_{\rm b}}. \end{equation}
\smallskip If $M_{\rm tot}<1$, the instability is dominated by surface modes. These are concentrated at the interface between the fluids, and lead to the growth of a shear layer which expands into both fluids. Within the expanding shear layer a highly turbulent medium develops, efficiently mixing the two fluids. Surface modes can have any longitudinal wavenumber\footnote{So long as the wavelength, $\lambda=2\pi/k$, is larger than the width of the transition region between the two fluids.}, $k$, and any azimuthal wavenumber, $m$, representing the number of azimuthal nodes along the stream-background interface. $m=0$ corresponds to axisymmetric perturbations, $m=1$ to helical perturbations, and $m\ge 2$ to more complicated fluting modes. Low-order $m$ modes with wavelengths of order $R_{\rm s}$ dominate the early non-linear evolution of the instability, as their eddies reach the largest amplitudes before they break, but the shear layer between the fluids quickly develops into a highly turbulent mixing zone with no discernible symmetry. \smallskip The shear layer separating the fluids expands self-similarly through vortex mergers. Independent of the initial perturbation spectrum, the width of the shear layer, $h$, evolves as
\begin{equation} \label{eq:shear_growth} h=\alpha V_{\rm s} t, \end{equation}
{\noindent}where $\alpha$ is a dimensionless growth rate that depends primarily on $M_{\rm tot}$, and is typically in the range $\alpha\sim 0.05-0.25$ (P18; M19). \smallskip The shear layer penetrates asymmetrically into the stream and background due to their different densities. The penetration depths into either medium can be derived from conservation of mass and momentum in the shear layer, and are given by (P18; M19):
\begin{equation} \label{eq:hs_growth} h_{\rm s} = \frac{\alpha V_{\rm s} t}{1+\sqrt{\delta}},\quad h_{\rm b} = \frac{\sqrt{\delta}\alpha V_{\rm s} t}{1+\sqrt{\delta}}. \end{equation}
{\noindent}Stream disruption occurs when the shear layer encompasses the entire stream, namely when $h_{\rm s}=R_{\rm s}$. This occurs at time
\begin{equation} \label{eq:tau_diss} t_{\rm dis} = \frac{\left(1+\sqrt{\delta}\right)R_{\rm s}}{\alpha V_{\rm s}}. \end{equation}
{\noindent}The contact discontinuity effectively disappears before the stream is completely disrupted, once the full width of the shear layer is of order the stream radius, namely $h=R_{\rm s}$. This occurs at time
\begin{equation} \label{eq:tau_shear} t_{\rm shear} = \frac{R_{\rm s}}{\alpha V_{\rm s}}. \end{equation}
\smallskip As the shear layer expands into the background, it entrains background mass.
This causes the stream to decelerate as its initial momentum is distributed over more mass. As shown by M19, the stream velocity as a function of time is well fit by \begin{equation} \label{eq:stream_deceleration} V_{\rm s}(t) = \frac{V_{\rm s,0}}{1+t/t_{\rm dec}}, \end{equation} {\noindent}where $V_{\rm s,0}$ is the initial velocity of the stream, and \begin{equation} \label{eq:tau_dec} t_{\rm dec} = \dfrac{\left(1+\sqrt{\delta}\right)\left(\sqrt{1+\delta}-1\right)}{\alpha\sqrt{\delta}}\frac{R_{\rm s}}{V_{\rm s,0}}, \end{equation} {\noindent}is the time when the background mass entrained in the shear layer equals the initial stream mass, such that momentum conservation implies the velocity is half its initial value. \smallskip An empirical expression for the dimensionless shear layer growth rate, $\alpha$, was proposed by \citet{Dimotakis}, \begin{equation} \label{eq:alpha_fit} \alpha \simeq 0.21\times \left[0.8{\rm exp}\left(-3 M_{\rm tot}^2\right)+0.2\right]. \end{equation} {\noindent}M19 found \equ{alpha_fit} to be a good fit to shear layer growth in simulations of 2d slabs, regardless of whether one measures $h$, $h_{\rm s}$, or $h_{\rm b}$. However, they found that $h_{\rm s}$ expanded more rapidly in 3d cylinders due to an enhanced eddy interaction rate near the stream axis. This yielded $\alpha$ values $\sim 50\%$ larger than \equ{alpha_fit} when measuring $h_{\rm s}$ and using \equ{hs_growth}. On the other hand, $h_{\rm b}$ was found to expand at a similar rate in 2d and 3d so long as $h_{\rm b}\lower.5ex\hbox{\ltsima} 2R_{\rm s}$. Since the shear layer width is dominated by $h_{\rm b}$ for $\delta>1$, we use \equ{alpha_fit} together with \equ{tau_shear} to evaluate the time when the contact discontinuity is destroyed. \smallskip Once $h_{\rm b}\lower.5ex\hbox{\gtsima} 2R_{\rm s}$, its growth rate is reduced by roughly half, due to a turbulent cascade to small scales which removes energy from the largest eddies driving the expansion. For $\delta>8$, this occurs before the stream reaches half its initial velocity (\equsnp{stream_deceleration}-\equmnp{alpha_fit}). M19 found that in these cases, a good fit to the velocity evolution of streams can be obtained simply by using $0.5\alpha$ in \equ{tau_dec} with $\alpha$ taken from \equ{alpha_fit}. \smallskip When $M_{\rm tot}>1$, surface modes of low azimuthal order (low values of $m$) stabilise\footnote{The formal condition for stabilization of $m=0,\,1$ surface modes is $M_{\rm b}>(1+\delta^{-1/3})^{3/2}$, similar to $M_{\rm tot}>1$.}. The nature of the instability then depends on the width of the initial transition region between the fluids (which is likely set by transport processes such as viscosity and thermal conduction). If this is relatively narrow, the instability becomes dominated by high-$m$ surface modes, and the above description, summarised in \equs{shear_growth}-\equm{tau_dec}, remains valid, with $\alpha\sim 0.05$ according to \equ{alpha_fit}. However, if the initial transition region is wide, of order $\lower.5ex\hbox{\gtsima} 0.25R_{\rm s}$ or larger, high-$m$ surface modes are also stable and the instability becomes dominated by body modes. These do not result in shear layer growth but rather in the global deformation of the stream into a helical, $m=1$, shape with a characteristic wavelength of $\sim 10R_{\rm s}$ and an amplitude of $\lower.5ex\hbox{\gtsima} R_{\rm s}$. 
The timescale for this to occur depends on the initial perturbation amplitude and spectrum, though it is almost always longer than the timescale for stream disruption by surface modes when these are unstable. Following the formation of the sinusoid, small-scale turbulence develops near its peaks and leads to stream disruption within roughly one stream sound crossing time. Interestingly, M19 find that \equs{stream_deceleration}-\equm{tau_dec} are a good description of stream deceleration due to body modes as well, despite the different processes involved. \smallskip We will hereafter ignore KHI body modes, and assume that KHI is dominated by surface modes of some order $m$ for all Mach numbers. If KHI surface modes are suppressed by a large initial transition region, then GI surface modes will also likely be suppressed, based on the analysis of H97 and H98. \subsection{Combined treatment} \label{sec:combined} We now wish to combine the above two processes, and discuss the evolution of a pressure-confined self-gravitating cylinder undergoing KHI. In addition to $M_{\rm b}$ and $\delta$, a third parameter is required to describe such a system, namely the line-mass of the cylinder in units of the critical line-mass for hydrostatic equilibrium, $\mu\equiv\Lambda/\Lambda_{\rm cr}$. We begin by making the assumption, to be justified below, that any coupling between GI and KHI in the linear regime is relatively small, such that the region of parameter space where each process results in instability is unchanged, and the linear growth rates are only mildly altered. Under this assumption, it is clear from \se{sgi} and \se{khi} that for all values of $(M_{\rm b},\delta,\mu)$, the system is unstable over some wavelength range. We assume that the initial perturbation spectrum spans this range. \smallskip GI enhances density contrasts and leads to the formation of long-lived collapsed clumps, while KHI smooths the interface between the fluids and dilutes the mean density of the stream. The question is which process will win. The timescale for GI is the inverse growth rate of the fastest growing mode discussed in \se{sgi}, $t_{\rm max}\equiv \omega_{\rm max}^{-1}$. At low values of $\mu$, GI is dominated by surface modes (N87), which require the presence of a contact discontinuity (H98). Thus, the timescale for KHI to prevent gravitational collapse is $t_{\rm shear}$ (\equnp{tau_shear}), the timescale for nonlinear KHI to destroy the contact discontinuity. On the other hand, for high values of $\mu$, GI is dominated by body modes which are unrelated to the contact discontinuity (N87). In this case, the relevant timescale for KHI to prevent collapse is $t_{\rm dis}$ (\equnp{tau_diss}), the timescale for nonlinear KHI to disrupt the stream itself. \smallskip Since $t_{\rm shear}<t_{\rm dis}$ for all $\delta>1$, we distinguish between three regimes. If $t_{\rm max}<t_{\rm shear}<t_{\rm dis}$, we expect GI to win and the stream to fragment into long-lived clumps. If $t_{\rm shear}<t_{\rm dis}<t_{\rm max}$, we expect KHI to win and disrupt the stream by mixing it into the background. We hereafter refer to this process as ``shredding the stream''. In the intermediate case where $t_{\rm shear}<t_{\rm max}<t_{\rm dis}$, the outcome may depend on the value of $\mu$. If $\mu$ is small, such that GI is dominated by surface modes, then we expect KHI to win and shred the stream since $t_{\rm shear}<t_{\rm max}$.
On the other hand, if $\mu$ is large such that GI is dominated by body modes, GI may still win and lead to stream fragmentation and the formation of bound clumps, since $t_{\rm max}<t_{\rm dis}$. However, this is uncertain, since the shear layer will penetrate somewhat into the stream within $t_{\rm max}$, reducing the effective line-mass of the unperturbed (non-turbulent) region. If this is reduced below the threshold for GI body modes to be effective, KHI may still win and suppress clump formation. \smallskip Since $t_{\rm max}\propto \rho_{\rm c}^{-1/2}\propto \mu^{-1/2}$, as $\mu$ is increased at fixed $(M_{\rm b},\delta)$, $t_{\rm max}$ decreases while $t_{\rm shear}$ (and $t_{\rm dis}$) remain constant. Thus, for each $(M_{\rm b},\delta)$ there exists a critical value of $\mu$, $\mu_{\rm cr}\equiv \mu_{\rm cr}(M_{\rm b},\delta)$, such that $t_{\rm max}<t_{\rm shear}$ for $\mu>\mu_{\rm cr}$ (see \fig{criticalmu} in \se{khivg} below). Therefore, GI will win and lead to stream fragmentation and clump formation whenever $\mu>\mu_{\rm cr}$. If $\mu_{\rm cr}$ is small enough to be in the regime where GI is dominated by surface modes, then KHI will win and shred the stream for $\mu<\mu_{\rm cr}$. On the other hand, if $\mu_{\rm cr}$ is in the regime where GI is dominated by body modes, the fate of the stream at $\mu<\mu_{\rm cr}$ depends on the ratio of $t_{\rm max}$ to $t_{\rm dis}$. \smallskip At first glance, it may seem inconsistent to compare a linear timescale for GI, $t_{\rm max}=\omega_{\rm max}^{-1}$, to a nonlinear timescale for KHI, $t_{\rm shear}$ or $t_{\rm dis}$. While $t_{\rm max}$ is formally the timescale for the growth of linear perturbations, once density perturbations grow the free-fall times become ever shorter and the collapse accelerates. Full collapse is thus dominated by the linear growth time. On the other hand, KHI tends to saturate following the linear phase, because it is driven by the presence of a contact discontinuity which is destroyed by the instability. Continued growth in the nonlinear regime is dominated by the merger of eddies within the shear layer on timescales of $t_{\rm shear}$ and $t_{\rm dis}$, as described in \se{khi}. \smallskip The above discussion notwithstanding, one may ask whether density fluctuations within the stream induced by KHI can trigger local gravitational collapse when $\mu<\mu_{\rm cr}$. Note that this is different from the global fragmentation of the stream induced by GI. Such local collapse can occur in filaments on scales larger than the spherical Jeans length, $\lambda_{\rm J}=[\pi c_{\rm s}^2/(G\rho)]^{1/2}$, but smaller than the stream radius, $R_{\rm s}$ \citep{Freundlich14}. This is only possible if $\lambda_{\rm J}<R_{\rm s}$. A lower limit to the Jeans length is obtained by inserting $\rho=\rho_{\rm c}$, the density along the stream axis. This yields $\lambda_{\rm J}=2\pi H$, with $H$ given by \equ{isothermal_density}. The condition that $\lambda_{\rm J}<R_{\rm s}$ thus implies that $R_{\rm s}\gg H$, so GI is dominated by body modes (N87). We conclude that KHI-induced density fluctuations can only trigger local gravitational collapse if $\mu<\mu_{\rm cr}$ but GI is still dominated by body modes. \smallskip We must now justify our initial ansatz that the linear coupling between GI and KHI does not fundamentally alter the instability region of parameter space. We rely here on the analysis of H97, who derived the dispersion relation of a self-gravitating system undergoing KHI in the vortex sheet limit, i.e.
two semi-infinite fluids separated by a single, planar interface. In their derivation they made the simplifying assumption that the gravitational field in the unperturbed system was weak compared to the perturbed forces induced by both pressure and potential perturbations. This is equivalent to assuming that the wavelengths are much shorter than the gravitational scale-height of the unperturbed system, which itself is equivalent to assuming constant density and pressure in both fluids. The resulting dispersion relation contains terms associated with KHI, RTI, and surface mode GI. We refer the reader to H97 for the expression and its derivation. Relevant to our discussion is the fact that the coupling between self-gravity and shearing motions does not modify the stability region of the system, only mildly affects the linear growth rates of KH modes at short to intermediate wavelengths, and does not suppress GI surface modes at long wavelengths. Deriving an analogous dispersion relation for cylinders is beyond the scope of this paper. Rather, we assume that the same conclusions hold for cylindrical systems, in particular because KHI in cylinders is even more unstable than for planar vortex sheets (M16; M19). The validity of this assumption and our subsequent analysis will be tested with numerical simulations in \se{res}. \subsection{Comparison to the Spherical Case} \label{sec:sphere} It is worth comparing our analysis to that of M93, who addressed the question of when self-gravity would prevent KHI from disrupting a cold, dense spherical cloud moving through a hot, dilute background. They assumed that the cloud was pressure confined by the background fluid, and that its mass was less than the Bonnor-Ebert mass, making it gravitationally stable and in hydrostatic equilibrium. In this case, unlike for self-gravitating cylinders, there is no GI, and the only effect of the self-gravity is to induce RT modes at the cloud surface. Since the cloud is denser than the background, these RT modes can counteract the KHI and stabilise the system, due to the restoring buoyancy force. They showed this by considering the combined dispersion relation of KHI and RTI in the incompressible limit, \begin{equation} \label{eq:Murray} \omega^2=-\frac{\rho_{\rm s} \rho_{\rm b}}{(\rho_{\rm s}+\rho_{\rm b})^2}V^2 k^2 + \frac{\rho_{\rm s}-\rho_{\rm b}}{\rho_{\rm s}+\rho_{\rm b}} kg, \end{equation} {\noindent}where $V$ is the velocity of the cloud in the static background and $g$ is the gravitational acceleration at its surface. This implies that KHI is stable for all wavelengths greater than \begin{equation} \label{eq:Murray2} \lambda_{\rm max}=\frac{2\pi \rho_{\rm s}\rho_{\rm b} V^2}{\left(\rho_{\rm s}^2-\rho_{\rm b}^2\right)g}. \end{equation} \smallskip M93 then assumed that KHI would only disrupt the cloud if $\lambda_{\rm max}>R_{\rm cl}$, the cloud radius. This was based on the assumption that KHI surface modes saturate at an amplitude comparable to their wavelength, thus neglecting the subsequent shear layer growth. This assumption together with $g=GM_{\rm cl}/R_{\rm cl}^2$ and $M_{\rm cl}=(4\pi/3)\rho_{\rm cl} R_{\rm cl}^3$ results in a minimum mass for self-gravity to stabilise the sphere against KHI. For velocities of order the background sound speed, the critical mass is of order the Bonnor-Ebert mass, $M_{\rm BE}$. Such a system is thus always unstable, either to KHI at $M_{\rm cl}<M_{\rm BE}$ or to global gravitational collapse at $M_{\rm cl}>M_{\rm BE}$. 
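\smallskip M93's criterion is easily evaluated. The sketch below is a rough illustration of the above argument (under simplifying assumptions: isothermal pressures, pressure equilibrium at the cloud boundary, and $V=c_{\rm b}$); it sets $\lambda_{\rm max}=R_{\rm cl}$ in \equ{Murray2} with $g=GM_{\rm cl}/R_{\rm cl}^2$ to obtain the minimum stabilised mass, and compares it to $M_{\rm BE}$ from \equ{BE_mass}:
\begin{verbatim}
import numpy as np

# Units: G = rho_b = c_b = 1; isothermal pressure P = rho*c^2, cloud in
# pressure equilibrium with the background, and cloud velocity V = c_b.
G = rho_b = c_b = V = 1.0

for delta in (10.0, 100.0):
    rho_s = delta
    c_s2 = rho_b * c_b**2 / rho_s       # cloud sound speed squared
    P_ext = rho_b * c_b**2
    # lambda_max = R_cl with g = G*M/R^2 and M = (4*pi/3)*rho_s*R^3
    # gives the minimum radius (and mass) stabilised by self-gravity:
    R_min = np.sqrt(1.5 * rho_b * V**2 / (G * (rho_s**2 - rho_b**2)))
    M_min = (4.0 * np.pi / 3.0) * rho_s * R_min**3
    M_BE = 1.18 * c_s2**2 / (np.sqrt(P_ext) * G**1.5)
    print(delta, M_min / M_BE)   # ~6.5: of order the Bonnor-Ebert mass
\end{verbatim}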
\smallskip Our main prediction for the cylindrical case is qualitatively similar. We predict that a self-gravitating stream will always be unstable either to KHI at $\mu<\mu_{\rm cr}$ or to GI at $\mu>\mu_{\rm cr}$, depending on whether the timescale for GI, $t_{\rm max}$, is longer or shorter than the timescale for KHI to destroy the contact discontinuity, $t_{\rm shear}$, and/or the stream itself, $t_{\rm dis}$. However, unlike M93, we do not rely on a similar criterion of gravity stabilizing wavelengths longer than $R_{\rm s}$. First of all, unlike in spherical systems, self-gravity actually destabilises cylinders at long wavelengths (N87; H98; \se{sgi}). Furthermore, even if KHI is stable for wavelengths longer than $R_{\rm s}$ in the linear regime, it can still lead to stream disruption in the nonlinear regime by shear layer growth caused by initially shorter wavelength perturbations. \section{Numerical Methods} \label{sec:sim} \smallskip In this section we describe the details of our simulation code and setup, as well as our analysis method. We use the Eulerian AMR code \texttt{RAMSES} \citep{Teyssier02}, with a piecewise-linear reconstruction using the MonCen slope limiter \citep{vanLeer77}, an HLLC approximate Riemann solver \citep{Toro94}, and a multi-grid Poisson solver. \subsection{Hydrostatic Cylinders} \label{sec:prof} \smallskip Unlike the isothermal cylinder described in \se{sgi}, there is no closed analytic expression for the density profile of an isentropic cylinder in hydrostatic equilibrium, so this must be evaluated numerically. We briefly review here how this is done, beginning with the equilibrium solution of an isolated cylinder following \citet{Ostriker64}. The equation of hydrostatic equilibrium, \begin{equation} \label{eq:hydrostatic_eq} \vec{\nabla} P = -\rho\vec{\nabla} \Phi, \end{equation} {\noindent}is solved together with Poisson's equation \begin{equation} \label{eq:poisson} \nabla^2 \Phi = 4\pi G \rho, \end{equation} {\noindent}and an isentropic equation of state (EoS), \begin{equation} \label{eq:polytrope} P = K \rho^\gamma, \end{equation} {\noindent}where we assumed $K$ to be constant and the adiabatic index of ideal monoatomic gas, $\gamma=5/3$, throughout. These equations can be combined to yield \begin{equation} \label{eq:gen_stream} \frac{1}{r}\frac{\partial}{\partial r}\left[\frac{r}{\rho} \frac{\partial \left(K\rho^\gamma\right)}{\partial r}\right] = -4\pi G \rho, \end{equation} {\noindent}with the boundary conditions \begin{equation} \label{eq:bc} \rho(r=0) = \rho_{\rm c}, \quad \frac{\partial \rho}{\partial r}\bigg|_{r=0}=0. \end{equation} \smallskip \Equs{gen_stream}-\equm{bc} can be cast into unitless form by defining $y=\rho/\rho_{\rm c}$ and $x=r/H$, with \begin{equation} \label{eq:H_general} H^2 = \frac{c_{\rm s,0}^2}{(\gamma-1)4\pi G\rho_{\rm c}}, \end{equation} {\noindent}the scale radius of the cylinder, where $c_{\rm s,0}^2=\gamma P_{\rm c}/\rho_{\rm c}=\gamma K \rho_{\rm c}^{\gamma -1}$ is the sound speed along the filament axis, with $P_{\rm c}=P(r=0)$ the pressure along the filament axis. The resulting equation is \begin{equation} \label{eq:gen_stream_unitless} \frac{1}{x}\frac{\partial}{\partial x}\left(x \frac{\partial y^{\gamma-1}}{\partial x}\right) = - y, \quad y(0) = 1, \quad \frac{\partial y}{\partial x}\bigg|_0=0. \end{equation} {\noindent}Analytic solutions exist only for $\gamma=1$ (isothermal cylinder), $\gamma=2$, and $\gamma=\infty$ (incompressible cylinder) \citep{Ostriker64}. 
For other values of $\gamma$ \equ{gen_stream_unitless} must be solved numerically. \smallskip While the isothermal cylinder discussed in \se{sgi} extends to $r=\infty$, all cases with $\gamma>1$ have a finite radius, $R_{\rm equ}$, defined as the radius where the density profile first reaches $\rho=0$ \citep{Ostriker64}. We can thus generalise the notion introduced in \se{sgi} of a critical line-mass above which hydrostatic equilibrium is not possible \begin{equation} \label{eq:lambda_crit_gen} \Lambda_{\rm cr} = \frac{c_{\rm s,0}^2}{2(\gamma-1)G} \int_0^{R_{\rm equ}/H} y(x)\times x~{\rm d}x = a \frac{c_{\rm s,0}^2}{G}, \end{equation} {\noindent}where $y(x)$ is the solution to \equ{gen_stream_unitless}. The factor $a$ on the right-hand-side of \equ{lambda_crit_gen} depends on the EoS. For $\gamma=5/3$, $R_{\rm equ}\simeq 2.648H$, the half-mass radius is $R_{\rm 1/2}\simeq 1.168H$, and $a\simeq 0.796$. For comparison, an isothermal cylinder has $R_{\rm 1/2}\simeq 2.828H$, with $H$ defined in \equ{isothermal_density}, and $a=2$ (\equnp{isothermal_line_mass}). In \fig{adiabatic_profile} we show the normalised equilibrium density and line-mass profiles of an isolated, isentropic, $\gamma=5/3$ cylinder. \begin{figure} \includegraphics[width=0.49\textwidth]{adiabatic_profile.png} \caption{Normalised density and line-mass profiles for a self-gravitating, isentropic cylinder with $\gamma=5/3$. The radial coordinate has been normalised by $H$ given in \equ{H_general}, the density (solid black line) has been normalised by its central value, and the line-mass (dashed red line) has been normalised by $c_{\rm s,0}^2/G$ following \equ{lambda_crit_gen}. The cylinder has a finite radius $R_{\rm equ}\simeq2.65H$, and a finite line-mass equal to $\Lambda_{\rm cr}\simeq 0.80c_{\rm s,0}^2/G$. The half-mass radius of the cylinder is $R_{\rm 1/2}\simeq 1.17H$.} \label{fig:adiabatic_profile} \end{figure} \smallskip Equilibrium profiles with $\Lambda<\Lambda_{\rm cr}$ can be constructed for cylinders pressure confined by an external medium and truncated at some radius $R_{\rm s}<R_{\rm equ}$. In \fig{muh} we show the stream radius, $R_{\rm s}/H$, as a function of $\mu=\Lambda/\Lambda_{\rm cr}$. For $\mu=0,\,1$ we have $R_{\rm s}=0,\,R_{\rm equ}$ respectively. For $\mu=0.5$ we have $R_{\rm s}=R_{1/2}\simeq 1.17H$. We adopt model units where $G=\rho_{\rm c}=1$ and $R_{\rm s}=1/32$. For a given value of $\mu$, we can obtain $H$ in model units from \fig{muh} and then \equ{H_general} can be used to obtain $c_{\rm s,0}=(8\pi/3)^{1/2}H$ and $P_{\rm c}=K_{\rm s}=3c_{\rm s,0}^2/5$. Note that the stream and the background fluid have different entropy, and hence different values of $K$. \smallskip In addition to $\mu$, the system is defined by \begin{equation} \label{eq:deltac} \delta_{\rm c} = \frac{\rho_{\rm c}}{\rho(R_{\rm s}^+)}, \end{equation} {\noindent}the ratio of the density along the stream axis to the background density just outside the stream. For a given $\mu$ and $\delta_{\rm c}$ we may evaluate the density contrast between the stream and background on either side of the interface, \begin{equation} \label{eq:delta} \delta = \frac{\rho(R_{\rm s}^-)}{\rho(R_{\rm s}^+)} = \delta_{\rm c} \frac{\rho(R_{\rm s}^-)}{\rho_{\rm c}}. \end{equation} {\noindent}We show the ratio $\rho(R_{\rm s}^-)/\rho_{\rm c}=\delta/\delta_{\rm c}$ as a function of $\mu$ in \fig{muh}. For $\mu=0.1,\,0.5,\,0.9$ we have $\delta/\delta_{\rm c}\simeq 0.92,\,0.58,\,0.18$ respectively. 
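\smallskip For reproducibility, the following minimal Python sketch (ours, not the actual initial-conditions generator) integrates \equ{gen_stream_unitless} for $\gamma=5/3$. Substituting $u=y^{\gamma-1}$ recasts \equ{gen_stream_unitless} as $u''+u'/x=-u^{1/(\gamma-1)}$, which is integrated outwards from the series expansion $u\simeq 1-x^2/4$ near the axis:
\begin{verbatim}
# Minimal sketch (ours): integrate eq. (gen_stream_unitless) for
# gamma = 5/3, recovering R_equ ~ 2.65H and a ~ 0.80.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

gamma = 5.0/3.0
n = 1.0/(gamma - 1.0)                  # = 3/2 for gamma = 5/3

def rhs(x, s):                         # s = (u, u'), u = y^(gamma-1)
    u, up = s
    return [up, -max(u, 0.0)**n - up/x]

def edge(x, s):                        # density first reaches zero
    return s[0]
edge.terminal = True

x0 = 1e-6                              # start slightly off-axis
s0 = [1.0 - x0**2/4.0, -x0/2.0]        # series: u ~ 1 - x^2/4
sol = solve_ivp(rhs, [x0, 10.0], s0, events=edge,
                rtol=1e-10, atol=1e-12, dense_output=True)

R_equ = sol.t_events[0][0]             # dimensionless radius, ~2.65
x = np.linspace(x0, R_equ, 2000)
y = sol.sol(x)[0].clip(min=0.0)**n     # density profile y(x)
a = trapezoid(y*x, x)/(2.0*(gamma - 1.0))   # eq. (lambda_crit_gen)
print(R_equ, a)                        # ~2.65 and ~0.80
\end{verbatim}
{\noindent}Truncating the same solution at the radius enclosing a line-mass $\mu\Lambda_{\rm cr}$ yields the $R_{\rm s}/H$ and $\rho(R_{\rm s}^-)/\rho_{\rm c}$ curves shown in \fig{muh}.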
\smallskip To construct equilibrium profiles for pressure confined cylinders with given values of $\mu$ and $\delta_{\rm c}$, we first evaluate $R_{\rm s}/H$ and $\delta$ from \fig{muh}. We then solve \equ{gen_stream_unitless} separately for $r<R_{\rm s}$ and $r>R_{\rm s}$. For $r<R_{\rm s}$, the boundary conditions are $y(0) = 1$, and ${\rm d}y/{\rm d}x|_0=0$, and the profile is unchanged from the isolated cylinder. For $r>R_{\rm s}$, the boundary conditions are given in terms of the pressure, rather than the density. Specifically, the pressure is continuous at the interface, $P(R_{\rm s}^-)=P(R_{\rm s}^+)$, while the pressure gradient is discontinuous, with \begin{equation} \label{eq:pressure_discontinuity} \frac{{\rm d}P/{\rm d}R|_{R_{\rm s}^-}}{{\rm d}P/{\rm d}R|_{R_{\rm s}^+}} = \frac{\rho(R_{\rm s}^-)}{\rho(R_{\rm s}^+)} = \left(\frac{K(R_{\rm s}^+)}{K(R_{\rm s}^-)}\right)^{1/\gamma} = \delta, \end{equation} {\noindent}which follows from \equ{hydrostatic_eq}. \fig{profiles} shows the resulting density and pressure profiles for $\mu=0.1$ and $0.9$. For $\mu=0.1$ the density and pressure are nearly constant in either medium, while for $\mu=0.9$ there are strong gradients within the stream. \begin{figure} \includegraphics[width=0.49\textwidth]{Rs_over_H.png} \caption{Properties of a truncated $\gamma=5/3$ cylinder in hydrostatic equilibrium. The x-axis shows the line-mass divided by the critical line-mass, $\mu=\Lambda/\Lambda_{\rm cr}$. On the y-axis we show the stream radius, $R_{\rm s}$, divided by the scale radius, $H$ (\equnp{H_general}, black solid line), and the density at the stream radius divided by the central density, $\rho(R_{\rm s}^-)/\rho_{\rm c}$ (red dashed line).} \label{fig:muh} \end{figure} \begin{figure} \includegraphics[trim={0cm 0.5cm 0cm 0cm},clip,width=0.49\textwidth]{profile.pdf} \caption{Equilibrium density and pressure profiles of pressure confined cylinders with two different values of the stream line-mass, $\mu=0.9$ (in blue) and $\mu=0.1$ (in red). The solid (dashed) lines show the density (pressure) profiles. All cases correspond to $\delta_{\rm c}=100$ and $\gamma=5/3$. For $\mu=0.1$, $\rho(R_{\rm s}^-)\simeq 0.92\rho_{\rm c}$ (\fig{muh}) and the density and pressure are nearly constant in both the stream and background. For $\mu=0.9$, $\rho(R_{\rm s}^-)\simeq 0.18\rho_{\rm c}$ (\fig{muh}), and there are strong density and pressure gradients within the stream. } \label{fig:profiles} \end{figure} \subsection{Initial Conditions} \label{sec:methods} \subsubsection{Simulation Domain \& Boundary Conditions} \label{sec:bc} The simulation domain is a cube of side $L=1$, extending from $-0.5$ to $0.5$ in all directions. We hereafter adopt the standard cylindrical coordinates, $(r,\varphi,z)$. The axis of our cylindrical stream is placed along the $z$ axis, at $r=0$, and we adopt a stream radius of $R_{\rm s}=1/32$. The stream fluid occupies the region $r<R_{\rm s}$ while the background fluid occupies the rest of the domain. The equation of state (EoS) of both fluids is that of an ideal monoatomic gas with adiabatic index $\gamma=5/3$. We use periodic boundary conditions at $z=\pm 0.5$, and zero force boundary conditions, often called outflow boundary conditions, at $x=\pm 0.5$ and $y=\pm 0.5$, such that gas crossing the boundary is lost from the simulation domain. At these boundaries, the gradients of density and velocity are set to 0, while the pressure gradient is taken from the hydrostatic profile computed following \se{prof}. 
The potential at the boundary is set to be that at the outer edge of an isolated and infinitely long cylinder with total mass $M$, equal to the total mass in the simulation domain, $\Phi(r) = 2G(M/L)\,{\rm ln}(r)$ with $r=(x^2+y^2)^{1/2}$ on the boundary. We note that this does not produce perfect equilibrium due to fitting a cylindrical profile in a cubic box. However, we find that our configuration is extremely stable in simulations with no initial perturbations and no shear flow, exhibiting $\lower.5ex\hbox{\ltsima} 3\%$ change in the density and pressure profiles after 4 stream free-fall times. \subsubsection{Smoothing the Discontinuity} \smallskip As noted by many previous studies of KHI, the presence of a sharp discontinuity at the interface of two fluids leads to numerical perturbations on the grid scale. These grow faster than the intended perturbations in the linear regime, and may dominate the instability at late times depending on their amplitude. Furthermore, since smaller scales grow more rapidly in the linear regime, these numerical perturbations become more severe as the resolution is increased, preventing convergence of the solution. To alleviate this issue, we smooth the velocity and density around the interfaces using the ramp function proposed by \citet{Robertson10}, also used by M16, P18, and M19. Specifically, we normalise each quantity in the stream and the background by its value at $R_{\rm s}$, denoted $f_{\rm s}$ and $f_{\rm b}$ respectively. We then smooth between these values using \begin{equation} \label{eq:ramp1} f(r)=f_{\rm b} + \left(f_{\rm s}-f_{\rm b}\right)\times \mathcal{R}(r), \end{equation} \begin{equation} \label{eq:ramp2} \mathcal{R}(r)=\frac{1}{2}\left[1-{\rm tanh}\left(\frac{r-R_{\rm s}}{\sigma}\right)\right], \end{equation} {\noindent}and multiply the normalised profiles in either medium by $f(r)$. The parameter $\sigma$ determines the width of the transition zone. The function $\mathcal{R}(r)$ transitions between $0.95$ and $0.05$ over a full width of $\sim 3\sigma$ in $(r-R_{\rm s})$. We adopt $\sigma=R_{\rm s}/32$ for all of our simulations, which is sufficient to suppress artificial perturbations with small longitudinal wavelength, while still allowing azimuthal modes with $m\lower.5ex\hbox{\ltsima} 12$ to grow (M19). \subsubsection{Perturbations} \label{sec:methods-pert} \smallskip The stream is initialised with velocity $\vec{v}_{\rm s} = M_{\rm b} c_{\rm b} \hat{z}$, where $c_{\rm b}=[\gamma P(R_{\rm s}^+)/\rho(R_{\rm s}^+)]^{1/2}$ is the sound speed at the outer boundary of the stream. The background gas is initialised at rest, with velocity $\vec{v}_{\rm b}=0$. \smallskip We then perturb our simulations with a random realization of periodic perturbations in the radial component of the velocity, $v_{\rm r}=v_{\rm x}{\rm cos}(\varphi)+v_{\rm y}{\rm sin}(\varphi)$, as in M19. In practice, we perturb the Cartesian components of the velocity, \begin{equation} \label{eq:pertx} \begin{array}{c} v_{\rm x}^{\rm pert}(r,\varphi ,z) = \sum_{j=1}^{N_{\rm pert}} v_{0,j}~{\rm cos}\left(k_{j}z+m_{j}\varphi + \phi_{j}\right)\\ \\ \times {\rm exp}\left[-\dfrac{(r-R_{\rm s})^2}{2\sigma_{\rm pert}^2}\right]{\rm cos}\left(\varphi\right) \end{array}, \end{equation} \begin{equation} \label{eq:perty} \begin{array}{c} v_{\rm y}^{\rm pert}(r,\varphi ,z) = \sum_{j=1}^{N_{\rm pert}} v_{0,j}~{\rm cos}\left(k_{j}z+m_{j}\varphi + \phi_{j}\right)\\ \\ \times {\rm exp}\left[-\dfrac{(r-R_{\rm s})^2}{2\sigma_{\rm pert}^2}\right]{\rm sin}\left(\varphi\right) \end{array}.
\end{equation} \smallskip The velocity perturbations are localised on the stream-background interface, with a penetration depth set by the parameter $\sigma_{\rm pert}$. We set $\sigma_{\rm pert}=R_{\rm s}/16$ in all of our simulations, as in M19. To comply with periodic boundary conditions, all wavelengths are harmonics of the box length, $k_{j}=2\pi n_{j}$, where $n_{j}$ is an integer, corresponding to a wavelength $\lambda_{j}=1/n_{j}$. In each simulation, we include all wavenumbers in the range $n_{j}=2-64$, corresponding to all available wavelengths in the range $R_{\rm s}/2 - 16R_{\rm s}$. Each perturbation mode is also assigned a symmetry mode, represented by the index $m_{j}$ in \equs{pertx} and \equm{perty}, and discussed in \se{khi}. As in M19, we only consider $m=0,1$. For each wavenumber $k_{j}$ we include both an $m=0$ mode and an $m=1$ mode. This results in a total of $N_{\rm pert}=2\times 63=126$ modes per simulation. Each mode is then given a random phase $\phi_{j} \in [0,2\pi)$. The stochastic variability from changing the random phases is extremely small, as shown in P18 and M19. The amplitude of each mode, $v_{0,j}$, is identical, with the rms amplitude normalised to $0.01c_{\rm s}$. \subsubsection{Resolution and Refinement Scheme} \label{sec:grid_res} We use a statically refined grid with resolution decreasing away from the stream axis. The highest resolution region is ${\rm max}(|x|,|y|)<3R_{\rm s}$, with cell size $\Delta=2^{-10}$. For $R_{\rm s}=1/32$ this corresponds to 64 cells per stream diameter. The cell size increases by a factor of 2 every $3R_{\rm s}$ in the $x$ and $y$-directions, up to a maximal cell size of $2^{-6}$. The resolution is uniform along the $z$ direction, parallel to the stream axis. For uniform density cylinders, KHI surface modes are converged at this resolution (M19). We also ran two cases with a factor 2 higher resolution (128 cells per stream diameter) in order to test convergence of our results for self-gravitating streams. As described in \se{disruption} and \se{fragmentation}, we find that the majority of our results are indeed converged. \subsubsection{Simulations Without Self-Gravity} \label{sec:no_grav} \smallskip In addition to the simulations of self-gravitating cylinders described above, we performed several simulations without self-gravity for comparison, hereafter our ``no-gravity'' simulations. In the no-gravity simulations, the boundary conditions at $x=\pm 0.5$ and $y=\pm 0.5$ are simply zero gradients in all fluid variables, including the pressure. These were then initialised with the same density profiles as the corresponding self-gravitating streams, but with constant pressure throughout the simulation domain, since there is no gravitational field. We set the pressure to be the same as the pressure at the stream boundary in the corresponding self-gravity simulations, $P_{\rm no-gravity}(r)=P_{\rm self-gravity}(R_{\rm s})$. This allows us to separate the effects of the density profile from those of self-gravity on the evolution of KHI. \subsection{Tracing the Two Fluids} \label{sec:tracer} Following P18 and M19, we use a passive scalar field, $\psi(r,\varphi,z,t)$, to track the growth of the shear layer and the mixing of the two fluids. Initially, $\psi=1$ at $r<R_{\rm s}$ and $\psi=0$ at $r>R_{\rm s}$; it is then smoothed using \equs{ramp1}-\equm{ramp2}.
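\smallskip For concreteness, the interface smoothing and tracer initialisation can be summarised by the following minimal sketch (ours; variable names are illustrative), which applies \equs{ramp1}-\equm{ramp2} to the passive scalar with $f_{\rm s}=1$ and $f_{\rm b}=0$; the same ramp is applied to the density and velocity profiles:
\begin{verbatim}
# Minimal sketch (ours) of eqs. (ramp1)-(ramp2), applied to the
# initial passive scalar (f_s = 1 inside the stream, f_b = 0 outside).
import numpy as np

R_s = 1.0/32.0                    # stream radius in box units
sigma = R_s/32.0                  # transition width adopted here

def ramp(r):
    # R(r) of eq. (ramp2): ~1 inside the stream, ~0 outside
    return 0.5*(1.0 - np.tanh((r - R_s)/sigma))

def smooth(f_s, f_b, r):
    # f(r) of eq. (ramp1)
    return f_b + (f_s - f_b)*ramp(r)

r = np.linspace(0.0, 2.0*R_s, 257)
psi0 = smooth(1.0, 0.0, r)        # initial psi(r)
\end{verbatim}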
During the simulation, $\psi$ is advected with the flow such that the density of stream-fluid in each cell is $\rho_{\rm s}=\psi\rho$, where $\rho$ is the total density in the cell. \smallskip The volume-weighted average radial profile of the passive scalar is given by \begin{equation} \label{eq:volume-averaged-colour} \overline{\psi}(r,t) = \frac{\int_{-L/2}^{L/2}\int_{0}^{2\pi} \psi(r,\varphi,z,t)\, r\,{\rm d}\varphi\,{\rm d}z}{2\pi r L}. \end{equation} {\noindent}The resulting profile is monotonic (neglecting small fluctuations on the grid scale) and can be used to define the edges of the shear layer around the stream interface, $r(\overline{\psi}=\epsilon)$ on the background side and $r(\overline{\psi}=1-\epsilon)$ on the stream side, where $\epsilon$ is an arbitrary threshold. We set $\epsilon=0.04$ following M19, though our results are not strongly dependent on this choice. The background-side thickness of the shear layer is then defined as \begin{equation} \label{eq:hb_def} h_{\rm b} \equiv {\rm max}_r\, r(\overline{\psi}=\epsilon)-R_{\rm s}, \end{equation} {\noindent}while the stream-side thickness is defined as \begin{equation} \label{eq:hs_def} h_{\rm s} \equiv R_{\rm s}-{\rm min}_r\, r(\overline{\psi}=1-\epsilon). \end{equation} {\noindent}While $h_{\rm b}$ as defined in \equ{hb_def} is always well defined, at late times the perturbed region encompasses the entire stream and $\overline{\psi}(r=0)<1-\epsilon$. In this case, we define $h_{\rm s}=R_{\rm s}$. The total width of the perturbed region is given by $h\equiv h_{\rm b}+h_{\rm s}$. \section{Results} \label{sec:res} \smallskip In this section we present the results of our numerical simulations. In \se{khivg}, we examine when the combined evolution of GI and KHI leads to the formation of long-lived clumps or to stream shredding, and compare to our theoretical predictions. In \se{disruption} and \se{fragmentation}, we discuss the late-time evolution of the system in the cases when KHI and GI dominate, respectively. \subsection{KHI vs GI} \label{sec:khivg} \smallskip As detailed in \se{combined}, we predict that a dense, self-gravitating filament shearing against a dilute background will either fragment into long-lived, bound clumps due to GI, or disrupt and mix into the background due to KHI, depending on the ratio of their respective timescales. The timescale for GI, $t_{\rm max}(\mu,\delta_{\rm c})$, is well approximated by \equ{hunter} (see the Appendix~\se{surf_bod}). The timescales for KHI are $t_{\rm shear}(M_{\rm b},\delta_{\rm c})$ (\equnp{tau_shear}) or $t_{\rm dis}=(1+\delta^{1/2})t_{\rm shear}$ (\equnp{tau_diss}). For given values of $(M_{\rm b},\delta_{\rm c})$ there is a critical line-mass ratio, $\mu_{\rm cr}$, such that for $\mu>\mu_{\rm cr}$, $t_{\rm max}<t_{\rm shear}$ and GI will dominate. If $\mu_{\rm cr}$ is small enough to be in the regime where GI is dominated by surface modes, then KHI will dominate for $\mu<\mu_{\rm cr}$. However, if $\mu_{\rm cr}$ is in the regime where GI is dominated by body modes, then the fate of the stream when $\mu<\mu_{\rm cr}$ depends also on the ratio of $t_{\rm max}$ to $t_{\rm dis}$. \smallskip Solid curves in \fig{tcompare} show the ratio $t_{\rm max}/t_{\rm shear}$ as a function of $\mu$ for $(M_{\rm b},\delta_{\rm c})=(1.0,100)$, $(1.0,6.7)$, $(2.5,100)$, and $(6.0,100)$. The corresponding values of $\mu_{\rm cr}$ are $\sim 0.36$, $0.28$, $0.62$, and $0.96$, respectively.
Note the very weak dependence of $t_{\rm max}/t_{\rm shear}$ on $\delta_{\rm c}$ for $M_{\rm b}=1$, since $t_{\rm max}$ depends weakly on $\delta_{\rm c}$ for $\delta_{\rm c} \lower.5ex\hbox{\gtsima} 4$, while $t_{\rm shear}$ depends weakly on $\delta_{\rm c}$ only through $\alpha(M_{\rm tot})$ (\equnp{alpha_fit}). The dependence of $t_{\rm max}/t_{\rm shear}$ on $M_{\rm b}$ is much stronger, since $t_{\rm shear}$ decreases roughly linearly with $M_{\rm b}$. \smallskip In \fig{criticalmu} we show $\mu_{\rm cr}$ as a function of $M_{\rm b}$ and $\delta_{\rm c}$. The general trend is the same as inferred from \fig{tcompare}, namely $\mu_{\rm cr}$ increases strongly with $M_{\rm b}$ and has only a slight tendency to increase with $\delta_{\rm c}$. The exception is a narrow strip near $M_{\rm b}\sim (1-2)$ where $\mu_{\rm cr}$ decreases with $M_{\rm b}$. In this region, the increase of $t_{\rm shear}$ due to decreasing $\alpha$ is stronger than the decrease in $t_{\rm shear}$ due to increasing $V$, leading to a net increase in $t_{\rm shear}$ with $M_{\rm b}$ and thus a net decrease in $\mu_{\rm cr}$. For density contrasts $\delta_{\rm c}\lower.5ex\hbox{\ltsima} 100$, $\mu_{\rm cr}>0.5$ only for supersonic flows with $M_{\rm b}\lower.5ex\hbox{\gtsima} 2.5$. This implies that for massive streams, KHI can only overcome GI for highly supersonic flows (recall that the Mach number of the flow with respect to the sound speed in the stream is $\sim \delta^{1/2}M_{\rm b}$). In this regime, KHI is dominated by high-order azimuthal surface modes (see \se{khi}), which have a short eddy turnover time leading to rapid shear layer growth. \smallskip Consider, for example, $\delta_{\rm c}\sim 30$. $\mu_{\rm cr}$ increases from $\mu_{\rm cr}\ll 1$ at $M_{\rm b}\ll 1$ towards $\mu_{\rm cr}\lower.5ex\hbox{\gtsima} 0.3$ at $M_{\rm b}\sim 0.6$, then decreases to $\mu_{\rm cr}\lower.5ex\hbox{\ltsima} 0.2$ at $M_{\rm b}\sim 1.2$, before strongly increasing at $M_{\rm b}\gg 1$. Thus, as $M_{\rm b}$ is increased from $\lower.5ex\hbox{\ltsima} 0.2$ to $\lower.5ex\hbox{\gtsima} 2$ for $\delta_{\rm c}\sim 30$ and $\mu\sim 0.25$, the stream alternates between being dominated by GI, then KHI, then GI, then KHI again. The high-$M_{\rm b}$ KHI regime is dominated by surface modes with high azimuthal wavenumber. While these modes are always unstable, at lower Mach numbers they tend to be sub-dominant compared to axisymmetric or helical modes, with $m=0,1$ (M19).
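\smallskip Operationally, \fig{criticalmu} amounts to a one-dimensional root search at each $(M_{\rm b},\delta_{\rm c})$, as in the following minimal sketch (ours). The callables \texttt{t\_max} and \texttt{t\_shear} stand in for \equ{hunter} and the expression for $t_{\rm shear}$ (\equnp{tau_shear}), and must be supplied separately; they are not reproduced here:
\begin{verbatim}
# Minimal sketch (ours): find mu_cr where t_max = t_shear, given
# user-supplied callables implementing the two timescales.
from scipy.optimize import brentq

def mu_crit(t_max, t_shear, Mb, delta_c, lo=1e-3, hi=0.999):
    # t_max/t_shear decreases with mu at fixed (Mb, delta_c)
    # (cf. Fig. tcompare), so a bracketed root is unique; brentq
    # raises ValueError if no root exists in [lo, hi].
    f = lambda mu: t_max(mu, delta_c)/t_shear(Mb, delta_c, mu) - 1.0
    return brentq(f, lo, hi)
\end{verbatim}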
\begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} $\mu$ & $M_b$ & $\delta_c$ & $\delta$ & $t_{\rm max}$ & $t_{\rm shear}$ & $t_{\rm dis}$ & $t_{\rm sc}$ & $\lambda_{cr}$\\ \hline 0.1 & 1 & 100 & 92 & 2.39 & 1.34 & 14.15 & 0.64 & 5.95\\ 0.2 & 1 & 100 & 84 & 2.46 & 2.01 & 20.43 & 0.90 & 5.95\\ 0.3 & 1 & 100 & 76 & 2.54 & 2.65 & 25.65 & 1.11 & 5.95\\ 0.4 & 1 & 100 & 67 & 2.63 & 3.32 & 30.51 & 1.28 & 5.95\\ 0.5 & 1 & 100 & 58 & 2.76 & 4.09 & 35.38 & 1.44 & 5.98\\ 0.6 & 1 & 100 & 49 & 2.93 & 5.06 & 40.62 & 1.58 & 5.98\\ 0.7 & 1 & 100 & 40 & 3.18 & 6.39 & 46.77 & 1.73 & 6.02\\ 0.8 & 1 & 100 & 30 & 3.60 & 8.52 & 55.01 & 1.88 & 6.09\\ 0.9 & 1 & 100 & 18 & 4.56 & 13.20 & 69.68 & 2.06 & 6.23\\ \hline 0.1 & 1 & 6.7 & 6.2 & 3.21 & 2.00 & 6.97 & 0.64 & 7.02\\ 0.2 & 1 & 6.7 & 5.6 & 3.41 & 3.04 & 10.26 & 0.90 & 7.16\\ 0.3 & 1 & 6.7 & 5.1 & 3.66 & 4.07 & 13.22 & 1.11 & 7.36\\ \hline 0.5 & 2.5 & 100 & 58 & 2.76 & 2.27 & 19.57 & 1.44 & 5.98\\ 0.6 & 2.5 & 100 & 49 & 2.93 & 2.84 & 22.77 & 1.58 & 5.98\\ 0.7 & 2.5 & 100 & 40 & 3.18 & 3.65 & 26.71 & 1.73 & 6.02\\ 0.9 & 2.5 & 100 & 18 & 4.56 & 8.22 & 43.40 & 2.06 & 6.23\\ \hline 0.7 & 6 & 100 & 40 & 3.18 & 1.52 & 11.13 & 1.73 & 6.02\\ 0.8 & 6 & 100 & 30 & 3.60 & 2.09 & 13.47 & 1.88 & 6.09\\ 0.9 & 6 & 100 & 18 & 4.56 & 3.43 & 18.09 & 2.06 & 6.23 \end{tabular} \caption{ Parameters of simulations used for studying the evolution of streams undergoing both GI and KHI. The first three columns list the control parameters, namely the line-mass ratio $\mu$, Mach number $M_{\rm b}$, and the ratio of central density to background density at the stream boundary $\delta_{\rm c}$. The remaining six columns list derived parameters: the ratio of stream to background density on either side of the interface, $\delta$, the GI time scale, $t_{\rm max}$, the timescale for KHI to destroy the contact discontinuity, $t_{\rm shear}$, the timescale for KHI to destroy the entire stream, $t_{\rm dis}$, the stream sound crossing time, $t_{\rm sc}$, and the shortest unstable wavelength for GI, $\lambda_{\rm cr}$. All timescales are in units of the stream free-fall time, $t_{\rm ff}$, while $\lambda_{\rm cr}$ is in units of the stream radius, $R_{\rm s}$. For all cases, the fastest growing wavelength for GI is $\lambda_{\rm max}\lower.5ex\hbox{\ltsima} 2\lambda_{\rm cr}$. } \label{tab:sim_clump} \end{table} \begin{figure} \includegraphics[trim={0cm 0.5cm 0cm 0cm},clip,width=0.49\textwidth]{tcompare.pdf} \caption{Clump formation versus stream disruption according to our model, and in simulations. The solid lines show the ratio of the timescales for GI to form clumps, $t_{\rm max}$, and for KHI to destroy the contact discontinuity, $t_{\rm shear}$, as a function of the line-mass ratio, $\mu$. Different colours show different values of the Mach number and central density contrast, $M_{\rm b}$ and $\delta_{\rm c}$. Our model predicts that when this ratio is less than 1, marked by the horizontal dashed line, the stream should fragment and form clumps, while a ratio larger than one implies stream disruption by KHI. The transition occurs at a critical line-mass ratio, $\mu_{\rm cr}\sim 0.28$, $0.36$, $0.62$, and $0.96$ for $(M_{\rm b},\delta_{\rm c})=(1.0,6.7)$, $(1.0,100)$, $(2.5,100)$, and $(6.0,100)$ respectively. The markers show simulation results, where circles indicate cases where the stream was disrupted by KHI and diamonds indicate cases where the stream fragmented to form clumps. 
Nearly all our simulations agree with our model, with circles lying above the dashed line and diamonds below it. The one exception is $(M_{\rm b},\delta_{\rm c},\mu)=(6.0,100,0.9)$, which is dominated by GI body modes rather than surface modes, and forms clumps despite $\mu_{\rm cr}\sim 0.96$. } \label{fig:tcompare} \end{figure} \begin{figure} \includegraphics[trim={0cm 0.5cm 0cm 0cm},clip,width=0.49\textwidth]{criticalmu.pdf} \caption{Critical line-mass ratio, $\mu_{\rm cr}$, for which $t_{\rm max}/t_{\rm shear}=1$, as a function of $M_{\rm b}$ and $\delta_{\rm c}$. For $\mu>\mu_{\rm cr}$, the stream will eventually fragment into clumps, while for $\mu<\mu_{\rm cr}$ KHI will disrupt the stream before fragmentation occurs. $\mu_{\rm cr}$ tends to increase with $M_{\rm b}$, except for a narrow strip near $M_{\rm b}\sim 1.5$, and with $\delta_{\rm c}$, though the dependence on $\delta_{\rm c}$ is much weaker. For $\delta_{\rm c}\lower.5ex\hbox{\ltsima} 100$, $\mu_{\rm cr}>0.5$ only for $M_{\rm b}\lower.5ex\hbox{\gtsima} 2.5$, suggesting that for large line-masses KHI can only overcome GI for very supersonic flows which are dominated by high-order azimuthal modes. } \label{fig:criticalmu} \end{figure} \begin{figure} \includegraphics[trim={0cm 0.4cm 0cm 0cm},clip,width=0.48\textwidth]{dpdf.pdf} \caption{Clump identification in the simulations. We show the PDFs of stream density, $\rho_{\rm s}=\psi\rho$, at $t=8t_{\rm sc}$ for the no-gravity (blue) and gravity (red) simulations with $(M_{\rm b},\delta_{\rm c},\mu)=(1,100,0.9)$. While the no-gravity simulation exhibits a unimodal, roughly lognormal, PDF, the gravity simulation is bi-modal. Cells with densities higher than the break density, $\rho_{\rm s,th}$, marked by the vertical dashed line, are associated with collapsed clumps.} \label{fig:dpdf} \end{figure} \smallskip To test our predictions, we performed a series of simulations with the same combinations of $(M_{\rm b},\delta_{\rm c})$ as shown in \fig{tcompare}, and different values of $\mu$. For $(M_{\rm b},\delta_{\rm c})=(1.0,100)$, we performed nine simulations spanning the line-mass range $\mu=0.1,\,0.2,\,\ldots,\,0.9$. For the other combinations of $(M_{\rm b},\delta_{\rm c})$, we performed three to four simulations each, with $\mu$ spanning a small region around the predicted $\mu_{\rm cr}$. The full list of simulations is presented in \tab{sim_clump}, along with several relevant parameters. The stream sound crossing time\footnote{The sound crossing times listed in \tab{sim_clump} refer to the self-gravity simulations only. In the no-gravity runs at $r<R_{\rm s}$, $\rho_{\rm no-gravity}(r)=\rho_{\rm gravity}(r)$ while $P_{\rm no-gravity}(r)=P_{\rm gravity}(R_{\rm s})<P_{\rm gravity}(r)$. This results in a lower sound speed at each $r<R_{\rm s}$, and hence a longer sound crossing time.}, $t_{\rm sc}$, is defined as \begin{equation} \label{eq:sound_speed} t_{\rm sc} = 2\int_0^{R_{\rm s}} \frac{{\rm d}r}{c_{\rm s}(r)}, \end{equation} {\noindent}where $c_{\rm s}(r)=(\gamma P(r)/\rho(r))^{1/2}$ is the sound speed at radius $r$. \smallskip The markers in \fig{tcompare} indicate for each of our simulations whether or not the stream has fragmented into long-lived collapsed clumps. To identify such clumps, we examine the PDF of stream fluid density, $\rho_{\rm s}=\psi\rho$. If the density distribution is a result of pure turbulence, we expect it to have a roughly lognormal shape.
If, however, the highest density regions have collapsed due to gravity, we expect a break in the PDF at high densities \citep[e.g.][]{VS08,Elmegreen11,Hopkins12,Federrath15}. An example is shown in \fig{dpdf}, where we show the density PDFs for the gravity and no-gravity simulations with $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.9)$ at $t=8t_{\rm sc}$. While the no-gravity simulation has a unimodal PDF which is roughly lognormal except at the lowest densities, the gravity simulation produces a bi-modal PDF, and we identify all cells with densities larger than the break density, $\rho_{\rm s,th}$, as belonging to clumps. As discussed in \se{fragmentation} below, these clumps are indeed long-lived. If a simulation never exhibits such a break in the density PDF, we conclude that it has not formed any clumps. In particular, isolated high density regions produced in no-gravity simulations at late times (see \figs{mpl1_maps} and \figss{mpl9_maps} below) are not clumps, but rather transient features associated with the high-density part of a turbulent PDF. \begin{figure*} \includegraphics[trim={3.4cm 2.5cm 4.5cm 1.5cm},clip,width=0.99\textwidth]{del100_m1_disrupt.png} \caption{ Evolution of streams with $\mu<\mu_{\rm cr}$ undergoing KHI. Shown are snapshots of density normalised by the initial density along the stream axis, $\rho_{\rm c}$, in a slice through the $yz$ plane showing an ``edge-on'' view of the cylinder. The two columns show simulations with $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.1)$ run without self-gravity (left) and with self-gravity (right). The snapshot times in units of the stream sound crossing time, $t_{\rm sc}$, are listed in each panel. The evolution with and without gravity is very similar up until $t\sim 5t_{\rm sc}$ and shows the formation of a turbulent shear layer penetrating into the stream and background and mixing the two fluids. At later times, the penetration of the shear layer into the background continues similarly, though self-gravity reduces the penetration into the stream, leaving more high density material near the stream axis.} \label{fig:mpl1_maps} \end{figure*} \smallskip All of our simulations with $\mu>\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$ form gravitating clumps, as predicted by our model. Furthermore, for $M_{\rm b}=1$ and $2.5$, where $\mu_{\rm cr}\lower.5ex\hbox{\ltsima} 0.63$, streams in simulations with $\mu<\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$ are disrupted by KHI and mixed into the background before forming bound clumps, as predicted by our model. In these cases, GI is dominated by surface modes, so the comparison of $t_{\rm max}$ and $t_{\rm shear}$ is justified. On the other hand, for $M_{\rm b}=6.0$, $\mu_{\rm cr}=0.96$ is in the body mode regime for GI, and our simulation with $\mu=0.9$ fragments into bound clumps, as discussed in \se{combined}. However, in this same regime we find that streams with $\mu=0.8$ and $0.7$ are disrupted by KHI and do not form bound clumps. The effect of GI body modes is thus to lower $\mu_{\rm cr}$ from $\sim 0.96$ to $\sim 0.85$. \smallskip Overall, we conclude that our model adequately predicts the fate of streams under the combined effects of KHI and GI when GI surface modes dominate. When GI body modes dominate, the actual value of $\mu_{\rm cr}$ is $\sim 10\%$ lower than our prediction, since the relevant timescale for KHI to prevent clump formation is no longer $t_{\rm shear}$.
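\smallskip Finally, the clump selection used above reduces to a connected-component search over cells above the break density. A minimal sketch (ours; the toy field is purely illustrative, and the 30-cell minimum follows \se{fragmentation} below):
\begin{verbatim}
# Minimal sketch (ours): identify clumps as connected regions of
# stream density rho_s = psi*rho above the PDF-break density
# rho_s_th, discarding groups of fewer than 30 cells.
import numpy as np
from scipy import ndimage

def find_clumps(rho_s, rho_s_th, min_cells=30):
    labels, n = ndimage.label(rho_s > rho_s_th)
    return [labels == i for i in range(1, n + 1)
            if (labels == i).sum() >= min_cells]

# Toy example: two dense blobs in a lognormal background.
rng = np.random.default_rng(0)
field = rng.lognormal(0.0, 0.5, size=(64, 64, 64))
field[10:14, 10:14, 10:14] += 50.0
field[40:44, 40:44, 40:44] += 50.0
print(len(find_clumps(field, rho_s_th=20.0)))   # -> 2
\end{verbatim}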
In the next two sections, we turn to studying the evolution of streams and clumps in the regime where each instability dominates. \subsection{Stream Disruption due to KHI} \label{sec:disruption} \smallskip We here examine the evolution of streams with $\mu<\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$, where KHI dominates over GI and prevents the formation of long-lived collapsed clumps. Specifically, we examine whether the self-gravity of the gas, while unable to completely overcome the KHI, affects its evolution in any way. \smallskip \Fig{mpl1_maps} shows the evolution of streams with $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.1)$, with and without self-gravity. At early times, $t\lower.5ex\hbox{\ltsima} 4t_{\rm sc}$, the evolution in the two cases is extremely similar, and the shear layer seems to expand at roughly the same rate, mixing the two fluids and diluting the stream density. At later times, $t\lower.5ex\hbox{\gtsima} 6t_{\rm sc}$, while the expansion of the shear layer into the background continues similarly in both simulations, the penetration into the stream has stalled in the gravity run. The self-gravity of the stream thus seems to partly shield its inner core from mixing and disruption. As we will show below, this is due to restoring buoyancy forces caused by the stream's gravitational field. \begin{figure} \includegraphics[trim={0cm 0.5cm 0cm 0cm},clip,width=0.48\textwidth]{hb.pdf} \caption{ Shear layer growth in simulations dominated by KHI, with $\mu<\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$. We show the penetration depth of the shear layer into the background, $h_{\rm b}$ (top), and into the stream, $h_{\rm s}$ (bottom). These have been normalised by the stream radius, $R_{\rm s}$, while time on the x-axis has been normalised by the stream sound crossing time, $t_{\rm sc}$. In each panel, solid lines show our fiducial simulations with self-gravity, while dashed lines show our no-gravity simulations. Different colours mark different combinations of $(M_{\rm b},\delta_{\rm c},\mu)$. The dot-dashed red line in each panel shows results from a simulation with $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.1)$ and twice higher resolution. The penetration of the shear layer into the background proceeds similarly in simulations with and without gravity, while the penetration into the stream is qualitatively different. Without gravity, the shear layer consumes the entire stream at $t\sim t_{\rm dis}$ (\equnp{tau_diss}). However, with self-gravity $h_{\rm s}\sim (0.3-0.4)R_{\rm s}$ at this time, regardless of $\mu$, likely caused by buoyancy stabilizing the inner stream.} \label{fig:mixing} \end{figure} \begin{figure} \includegraphics[trim={0cm 0.4cm 0cm 0cm},clip,width=0.48\textwidth]{richardson.pdf} \caption{Evolution of the mass-weighted Richardson number, ${\rm Ri}$, within the shear layer, $[R_{\rm s}-h_{\rm s}(t)]<r<R_{\rm s}$. Values of ${\rm Ri}<1/4$ indicate that buoyancy cannot prevent mixing, resulting in rapid growth of the shear layer thickness, $h_{\rm s}$, at early times. At late times, when ${\rm Ri}>1/4$, mixing is suppressed and the growth of the shear layer slows down.} \label{fig:Ri} \end{figure} \smallskip We examine this more quantitatively in \fig{mixing}, where we compare the evolution of $h_{\rm b}$ and $h_{\rm s}$, the penetration of the shear layer into the background and stream respectively (\equnp{hs_growth}), in gravity and no-gravity simulations with $\mu<\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$.
Focusing on the top panel, we see that $h_{\rm b}$ evolves similarly with and without self-gravity, and is consistent with the results of M19. During the first sound crossing time, $h_{\rm b}$ remains roughly constant as the initial velocity perturbations trigger perturbations in the stream-background interface associated with growing eigenmodes of the system. Following this phase, $h_{\rm b}$ grows approximately linearly following \equ{hs_growth} until it reaches $h_{\rm b}\sim 2R_{\rm s}$. Up until this point, the gravity and no-gravity runs are nearly indistinguishable. Following this, the growth rate of $h_{\rm b}$ is reduced by roughly half in both cases, as the developing turbulent cascade transfers power from the largest scales driving the expansion to smaller scales (M19). During this phase, the growth rate of $h_{\rm b}$ is further reduced in the gravity simulations relative to the no-gravity ones, by $\sim 25\%$ for $\mu=(0.5-0.6)$ and $\sim 12\%$ for $\mu=(0.1-0.2)$. Overall, we find that the self-gravity of the stream has a relatively minor effect on the growth of $h_{\rm b}$. \smallskip On the other hand, as inferred from \fig{mpl1_maps}, there is a qualitative difference in the evolution of $h_{\rm s}$, shown in the bottom panel of \fig{mixing}. For the first $\sim 2t_{\rm sc}$, until $h_{\rm s}\sim 0.3R_{\rm s}$, the gravity and no-gravity runs evolve similarly. After this, the growth rate in the gravity runs is a factor $\sim 3$ smaller than in the no-gravity runs. In the latter, the shear layer reaches $h_{\rm s}/R_{\rm s}=1$ and consumes the entire stream at $t\sim t_{\rm dis}$ (\equnp{tau_diss}), and the evolution is similar to that seen in M19 (see their figure B1). However, in the runs with self-gravity, $h_{\rm s}\sim (0.3-0.4)R_{\rm s}$ at this time, and does not exceed $\sim 0.5R_{\rm s}$ at $t=10t_{\rm sc}$. This is consistent with the visual impression in \fig{mpl1_maps}, where the density remains high in the interior of the self-gravitating stream even after the non-gravitating stream has been completely diluted. Although the growth rate of $h_{\rm s}$ does not depend on $\mu$, there appears to be a tendency for deeper penetration for larger $\delta_{\rm c}$. \smallskip We propose that the stalling of $h_{\rm s}$ is due to restoring buoyancy forces in the stream interior. This can be seen by considering the Richardson number, ${\rm Ri}=[N_{\rm BV}/({\rm d} u/{\rm d} r)]^2$, where ${\rm d} u/{\rm d} r$ is the gradient of longitudinal velocity inside the shear layer, and $N_{\rm BV}$ is the Brunt-V\"{a}is\"{a}l\"{a} frequency, \begin{equation} \label{eq:NBV} N_{\rm BV} = \left[\frac{g}{\gamma}\frac{\partial {\rm ln}K}{\partial r}\right]^{1/2}, \end{equation} {\noindent}with $g(r)$ the magnitude of the gravitational field, $\gamma$ the adiabatic index, and $K(r)=P(r)\rho^{-\gamma}(r)$ the entropy profile of the gas. Note that $K$ is piecewise constant in our initial conditions, with a non-zero gradient only at the stream-background interface. However, as the shear layer expands, mixing between the fluids creates a non-zero entropy gradient throughout the shear layer. Had our initial conditions been such that the initial stream was not isentropic, this might have increased $N_{\rm BV}$ and ${\rm Ri}$ in the stream interior. \smallskip For a two-dimensional plane-parallel system in a constant external gravitational field, it can be shown that a sufficient (but not necessary) criterion for buoyancy to stabilize the system against shear-induced mixing is ${\rm Ri}>0.25$ \citep{Miles61,Howard61}.
While our situation is more complex in that the geometry is cylindrical and the gravitational field is due to self-gravity rather than an external field\footnote{To our knowledge, no analogous criterion exists for the stability of self-gravitating flows or for cylindrical flows. Deriving such a criterion is beyond the scope of this paper.}, we may use this as a benchmark to assess the role of buoyancy in stabilizing the inner stream. In \fig{Ri} we show the mass-weighted average of ${\rm Ri}$ within the shear layer, $[R_{\rm s}-h_{\rm s}(t)]<r<R_{\rm s}$, as a function of time. In all simulations, ${\rm Ri}\ll 1$ at early times, and crosses ${\rm Ri}=0.25$ at $t\sim (2-3)t_{\rm sc}$, corresponding to the sharp decline in the growth rate of $h_{\rm s}$. Further growth of ${\rm Ri}$ is rather slow, and it does not exceed ${\rm Ri}\sim (0.3-0.4)$. We find very similar behaviour when evaluating ${\rm Ri}$ locally at the inner boundary of the shear layer, $r=[R_{\rm s}-h_{\rm s}(t)]$. This supports our assertion that buoyancy stabilizes the inner stream and slows the growth of $h_{\rm s}$, significantly delaying stream disruption. \begin{figure} \includegraphics[trim={0cm 0.5cm 0cm 0cm},clip,width=0.49\textwidth]{momentum.pdf} \caption{Stream deceleration due to KHI. We show the centre of mass velocity of the stream fluid normalised by its initial value, as a function of time normalised by the predicted deceleration timescale, $t_{\rm dec}$ (\equnp{tau_dec}). The time axis has been set to zero at $t_0$, the time when the stream velocity is $98\%$ of its initial value. Solid (dashed) lines show simulations with (without) gravity, as in \fig{mixing}. The thick green dotted line shows the prediction for the deceleration rate due to KHI by M19 (\equnp{stream_deceleration}). The simulations with and without gravity behave similarly and closely follow the predicted deceleration rate. This is consistent with the similar behaviour of $h_{\rm b}$, since the deceleration is primarily driven by entrainment of background material by the shear layer. } \label{fig:momentum} \end{figure} \smallskip In \fig{momentum} we show the deceleration of streams in simulations with and without gravity. We show the centre of mass velocity of the stream fluid, i.e. weighted by the passive scalar $\psi$, normalised by its initial value, $V_{\rm i}$, as a function of time normalised by the predicted deceleration timescale, $t_{\rm dec}$ (\equnp{tau_dec}). The time axis has been shifted to begin at $t_0$, the time when the stream velocity reaches $98\%$ of its initial value. In all cases, $t_0\sim t_{\rm sc}$. The gravity and no-gravity simulations behave similarly, and both are well fit by the theoretical prediction (\equnp{stream_deceleration}). This was expected given the similarity between the evolution of $h_{\rm b}$ in the gravity and no-gravity simulations (\fig{mixing}), since KHI-induced deceleration is primarily driven by entrainment of background material in the shear layer (P18, M19), which is not affected by buoyancy in the stream. \smallskip We ran the $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.1)$ simulation with a factor two higher spatial resolution throughout the simulation domain. The results of this simulation are shown in \figs{mixing} and \figss{momentum}. The evolution of $h_{\rm b}$ and of the stream velocity, $V$, is nearly indistinguishable from that at our fiducial resolution.
The penetration of the shear layer into the stream is slightly enhanced, with $h_{\rm s}\sim 10\%$ larger in the high-resolution run at $t\sim 8t_{\rm sc}$. This is still significantly less than the no-gravity simulation, supporting our conclusion that self-gravity significantly suppresses shear layer growth inside the stream. \subsection{Stream Fragmentation due to GI} \label{sec:fragmentation} \begin{figure*} \includegraphics[trim={4cm 2.5cm 5cm 2cm},clip,width=0.99\textwidth]{del100_m1_clump.png} \caption{ Same as \fig{mpl1_maps}, but for simulations with $\mu>\mu_{\rm cr}$. The three columns represent three different simulations, each with $(M_{\rm b},\delta_{\rm c})=(1.0,100)$. The left-hand column shows the no-gravity simulation with $\mu=0.9$, while the centre and right-hand columns show the gravity simulations with $\mu=0.9$ and $0.4$ respectively. The snapshot times in units of the stream sound crossing time, $t_{\rm sc}$, and free-fall time, $t_{\rm ff}$, are listed in each panel. At $t\sim 2t_{\rm sc}$, a turbulent shear layer has developed in the non-gravitating simulation and the gravitating simulation with $\mu=0.4$, while the gravitating simulation with $\mu=0.9$ appears unperturbed. At later times, the shear layer consumes the non-gravitating stream as expected for KHI, while GI takes over in both simulations with gravity, resulting in dense clumps along the stream axis by $t\sim 10t_{\rm sc}$. These clumps are separated by $\sim 6.5R_{\rm s}$, consistent with the shortest unstable mode predicted by H98 (see text). } \label{fig:mpl9_maps} \end{figure*} \smallskip We here examine the evolution of streams undergoing GI in our simulations, and in particular the properties of clumps formed within them. Regardless of whether GI is dominated by surface or body modes in the linear regime, the end result is always expected to be the collapse of dense, long-lived clumps along the stream axis \citep[N87, H98,][]{Heigl16,Heigl18}. \smallskip \Fig{mpl9_maps} shows the evolution of three simulations, each with $(M_{\rm b},\delta_{\rm c})=(1.0,100)$. The left-hand column shows the no-gravity simulation with $\mu=0.9$, while the centre and right-hand columns show the gravity simulations with $\mu=0.9$ and $0.4$, respectively. By $t=2t_{\rm sc}$, the non-gravitating stream has developed a well-defined shear layer which has penetrated into both the background and the stream, inducing a turbulent mixing zone and diluting the stream density. Meanwhile, the interior of the stream shows numerous density fluctuations caused by turbulence and shocks, with overdensities of up to $\sim 1.5$ times the unperturbed density. At later times the shear layer continues to grow, reaching $h_{\rm s} \sim 0.4R_{\rm s}$ at $t\sim 4t_{\rm sc}$, when the fraction of unmixed fluid in the stream, with $\psi>0.96$, is $\sim 50\%$. This is very similar to the no-gravity simulation shown in the left-hand column of \fig{mpl1_maps}, and is consistent with the evolution of KHI in a constant density stream with $\delta=100$ (M19, figure B1), showing that the steep density profile associated with $\mu=0.9$ does not qualitatively alter the evolution. \smallskip Comparing to the corresponding self-gravitating stream, we see that the initial KHI has been suppressed by the introduction of gravity. At $t=4t_{\rm sc}$, the stream appears relatively unperturbed, with no shear layer and only minor density perturbations. The fraction of unmixed fluid in the stream is $77\%$.
By $t\sim 6t_{\rm sc}$, small density perturbations can be seen along the stream axis, with a wavelength of $\sim 6.5R_{\rm s}$, slightly larger than the shortest unstable wavelength for GI predicted by H98\footnote{While the fastest growing mode in this case is $\lambda_{\rm max} \sim 11R_{\rm s}$, corresponding to 3 clumps, the growth rate at $\sim 6.5R_{\rm s}$ is only $0.85$ times the growth rate at $\lambda_{\rm max}$, and the resulting power spectrum is roughly flat in the range $\lambda\sim (6.5-12)R_{\rm s}$.} (\tab{sim_clump}). These density peaks are associated with an axisymmetric distortion of the stream-background interface, despite the fact that the initial perturbations had an equal mix of axisymmetric ($m=0$) and helical ($m=1$) modes. As described in \se{sgi} and \se{khi}, GI is unstable only to $m=0$ modes, while the dominant KHI mode at late times has either a long wavelength and $m=1$ or a short wavelength and $m>1$. This supports the conclusion that these density perturbations were amplified not by nonlinear KHI, but rather by GI. By $t\sim 8t_{\rm sc}$, these density perturbations have evolved into five dense clumps along the box length of $32R_{\rm s}$, two of which merge by $t\sim 10t_{\rm sc}$. \smallskip The evolution of the lower line-mass stream, with $\mu=0.4$, is different. Despite being in the regime where GI dominates over KHI (\fig{tcompare}), at early times the evolution appears dominated by KHI. By $t\sim 4t_{\rm sc}$, a shear layer has developed around the stream, turbulent density fluctuations are visible, and the fraction of unmixed fluid in the stream is $65\%$. This is because the ratio $t_{\rm max}/t_{\rm shear}$ is larger and closer to 1, allowing KHI to develop further before GI takes over. However, by $t\sim 6t_{\rm sc}$, GI has begun to dominate, developing an axisymmetric pattern in the stream-background interface associated with density perturbations along the stream axis, characteristic of GI but not of nonlinear KHI. By $t\sim 8t_{\rm sc}$, five proto-clumps are visible along the stream axis, consistent with the predicted $\lambda_{\rm cr}$. Two of these clumps merge by $t\sim 10t_{\rm sc}$, leaving four large clumps. Asymptotically, for both $\mu=0.4$ and $0.9$, the spacing between clumps is predicted to be $\lambda_{\rm max}\sim 11R_{\rm s}$, the fastest growing GI mode, corresponding to $3$ clumps across $32R_{\rm s}$. \smallskip To study the properties of clumps in the simulations, we first select all cells with stream density greater than the break in the PDF of the corresponding snapshot, $\rho_{\rm s,th}$ (\fig{dpdf}). We then group together neighbouring cells above this threshold, removing groups containing fewer than 30 cells to avoid spurious density fluctuations. Varying $\rho_{\rm s,th}$ by 0.1 dex, or using $\rho$ rather than $\rho_{\rm s}$, does not change the number of identified clumps, changes the clump masses by $\lower.5ex\hbox{\ltsima} 20\%$, and changes the other clump properties discussed below by $\lower.5ex\hbox{\ltsima} 10\%$. \begin{figure} \includegraphics[trim={0.38cm 0.35cm 0.34cm 0cm},clip,width=0.49\textwidth]{evolution.pdf} \caption{Evolution of clump properties, each shown as a function of time since clumps are first detected. Different coloured solid lines show different simulations as indicated in the legend. For clarity, we show results from only a few simulations bracketing the range of stream parameters examined, and thus the range of resulting clump properties.
The dashed line in each panel shows the results of a simulation with $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.9)$ and twice higher resolution than the fiducial value. \textit{Top-left panel:} clump mass normalised by the average initial stream mass per clump, $M_{\rm i}=M_{\rm stream}/N_{\rm clump}$. \textit{Top-right panel:} turbulent Mach number. \textit{Bottom-left panel:} clump virial parameter, with solid (dot-dashed) lines representing the virial parameter without (with) accounting for the external pressure (\equsnp{virial} and \equmnp{avir} respectively). Clumps forming in higher line-mass streams are more massive, have lower turbulent Mach numbers and lower virial parameters, though the dependence on $M_{\rm b}$ or $\delta_{\rm c}$ is extremely weak. For $\mu=0.9$, roughly $90\%$ of the initial stream mass winds up in clumps, which following collapse are in approximate virial equilibrium. For $\mu=0.3$, only $\lower.5ex\hbox{\ltsima} 50\%$ of the initial stream mass is in clumps, which are primarily confined by external pressure. } \label{fig:evolution} \end{figure} \smallskip \Fig{evolution} shows several properties of clumps identified in our simulations as a function of time, where $t=0$ is set to the first timestep where clumps have been identified. We show the clump mass, $M_{\rm c}$, the turbulent Mach number within the clumps, $\mathcal{M}_{\rm turb}=\sigma_{\rm turb}/c_{\rm s}$, and the clump virial parameter, defined as \begin{equation} \label{eq:virial} \alpha_{\rm vir} = \frac{5(\sigma_{\rm turb}^2+c_{\rm s}^2)R}{3GM}, \end{equation} {\noindent}where the factor $5/3$ comes from assuming a constant density profile inside the clump. If $\alpha_{\rm vir}\sim 1$, the clump is in virial equilibrium, while $\alpha_{\rm vir}<1$ implies the clump is collapsing and $\alpha_{\rm vir}>1$ implies it is unbound. For each property we display the average over all clumps identified in a given snapshot, typically four to five clumps. \smallskip Following the initial collapse when the clump mass grows significantly, it tends to saturate at a well-defined value despite some oscillations. These oscillations, on the order of $\sim 10-20\%$, are due in part to our density threshold for clump cells, which is recalibrated at each snapshot. We have normalised the mass in \fig{evolution} by $M_{\rm i}=M_{\rm stream}/N_{\rm clump}$, where $N_{\rm clump}$ is the number of clumps in the stream and $M_{\rm stream}=\pi R_{\rm s}^2 L {\overline{\rho_{\rm s}}}$ is the initial stream mass with ${\overline{\rho_{\rm s}}}$ the mean density in the stream. $M_{\rm i}$ is thus the typical clump mass one would expect if the entire initial stream fragments into clumps. We find that $M_{\rm c}/M_{\rm i}$ increases with $\mu$, rising from $\sim 0.4$ to $\sim 0.9$ as $\mu$ increases from $0.3$ to $0.9$, independent of $M_{\rm b}$ or $\delta_{\rm c}$. \smallskip The spherical Jeans mass obtained using the average properties in the initial stream is $M_{\rm J}=(\pi^{5/2}/6){\overline{c_{\rm s}}}^3 G^{-3/2} {\overline{\rho_{\rm s}}}^{-1/2}$. For $N_{\rm clump}=4$ and $L=32R_{\rm s}$ we obtain $M_{\rm i}/M_{\rm J}\sim 0.14(t_{\rm sc}/t_{\rm ff})^3$, with $t_{\rm sc}\simeq 2R_{\rm s}/{\overline{c_{\rm s}}}$ and $t_{\rm ff}=(4G{\overline{\rho_{\rm s}}})^{-1/2}$. This corresponds to $M_{\rm i}/M_{\rm J}\sim (0.2-1.2)$ for $\mu=(0.3-0.9)$ (\tab{sim_clump}), yielding clump masses $M_{\rm c}\sim (0.1-1)M_{\rm J}$.
For small $\mu$, when the density profile in the initial stream is roughly constant, the Bonnor-Ebert mass (\equnp{BE_mass}) is $M_{\rm BE}\sim 0.5M_{\rm J}$. In general, $M_{\rm BE}>M_{\rm c}$ for $\mu<1$. \smallskip The turbulent Mach number increases by a factor of $\sim 3$ as $\mu$ is decreased from 0.9 to 0.3. However, in all cases $\mathcal{M}_{\rm turb}$ is $\lower.5ex\hbox{\ltsima} 0.3$ asymptotically, and does not exceed $\sim 0.6$ during the initial collapse of the clump. Turbulent support is thus negligible compared to thermal pressure. The clump virial parameter increases from $\alpha_{\rm vir}\sim 1$ for $\mu=0.9$, consistent with $M_{\rm c}/M_{\rm J}\sim 1$ in this case, to $\alpha_{\rm vir}\sim 2.3$ for $\mu=0.3$. \smallskip The additional support for clumps in simulations with lower values of $\mu$ comes from the external pressure, which also played a larger role in confining the initial stream. This can be seen by considering the full virial parameter including the surface pressure term \citep[e.g.][Chapter 6]{Krumholz15}. We approximate this as \begin{equation} \label{eq:avir} {\tilde{\alpha}}_{\rm vir} = \frac{5(\sigma_{\rm turb}^2+c_{\rm s}^2-\gamma P_{\rm ext}/\rho)R}{3GM}. \end{equation} {\noindent}This is shown by the dot-dashed lines in the bottom-left panel of \fig{evolution}. For $\mu=0.9$, the external pressure is negligible and the two virial parameters are nearly identical. However, for $\mu=0.3$, ${\tilde{\alpha}}_{\rm vir}\sim 1.4$, indicating that the clumps in this case are primarily confined by external pressure. While this is still larger than 1, \equ{avir} is only an approximation, assuming a spherical clump with constant density and uniform external pressure. Properly accounting for the density profile within the clump tends to reduce the virial parameter compared to \equs{virial}-\equm{avir} \citep[e.g.][]{M17}. Given this, a value of ${\tilde{\alpha}}_{\rm vir}\sim 1.4$ is indicative of the clumps being in approximate virial equilibrium due to a combination of gravitational and pressure confinement. \smallskip Contrary to the strong dependence of clump properties on $\mu$, their dependence on $(M_{\rm b},\delta_{\rm c})$ at fixed $\mu$ is extremely weak. $M_{\rm c}$ and $\alpha_{\rm vir}$ vary by only a few percent as $\delta_{\rm c}$ varies from $6.7-100$ or $M_{\rm b}$ from $1-2.5$. Furthermore, clumps formed in simulations of pure GI, with $(M_{\rm b},\delta_{\rm c})=(0,100)$ (see the Appendix~\se{surf_bod}), have masses only $\sim 10\%$ larger than those in simulations with $M_{\rm b}=1$ for both $\mu=0.9$ and $0.4$. We conclude that once GI dominates over KHI and leads to clump formation, KHI has little effect on the resulting clump properties even if $\mu$ only slightly exceeds $\mu_{\rm cr}$. \smallskip To check convergence, we repeated the $(M_{\rm b},\delta_{\rm c},\mu)=(1.0,100,0.9)$ simulation with a factor two higher spatial resolution, and show the results in \fig{evolution}. No significant change was found in the number of clumps, their formation time, or their properties. The clump mass increases by $\lower.5ex\hbox{\ltsima} 4\%$, while $\mathcal{M}_{\rm turb}$ and $\alpha_{\rm vir}$ are unchanged. We conclude that our fiducial resolution is sufficient to resolve the stream fragmentation and resulting clumps. \smallskip In summary, clumps forming in higher line-mass streams are more massive, have lower turbulent Mach numbers and lower virial parameters.
This is primarily due to the larger degree of external pressure support for low line-mass streams present in the initial conditions, with a small contribution from enhanced mixing and dilution in lower line-mass streams caused by more efficient KHI. At fixed $\mu$, the variation of clump properties with $M_{\rm b}$ and $\delta_{\rm c}$ is very small. For $\mu=0.9$, roughly $90\%$ of the initial stream mass winds up in clumps, which following collapse are in approximate virial equilibrium at the thermal Jeans scale. For $\mu=0.3$, only $\lower.5ex\hbox{\ltsima} 50\%$ of the initial stream mass is in clumps, the rest having mixed into the background due to KHI. The collapsed clumps have $M_{\rm c}\sim (0.1-0.2)M_{\rm J}$, and are confined by external pressure. In all cases, the turbulent pressure in the collapsed clumps is negligible, with turbulent Mach numbers $\sim (0.1-0.3)$ for $\mu=(0.9-0.3)$. \section{Discussion} \label{sec:disc} \subsection{Astrophysical Applications} \label{sec:astr} \smallskip Our results on the combined evolution of KHI and GI in self-gravitating filaments have several astrophysical implications. In this section, we highlight potential applications for studies of star-forming filaments in the ISM and for cold streams feeding massive galaxies at high redshift. \subsubsection{High-z Intergalactic Streams} \smallskip Massive galaxies with baryonic masses $\lower.5ex\hbox{\gtsima} 10^{11}{\rm M}_\odot$ at $z\sim (1-4)$ reside in halos with virial masses $M_{\rm vir}\lower.5ex\hbox{\gtsima} 10^{12}{\rm M}_\odot$. The CGM of these galaxies is thought to contain hot gas with $T\lower.5ex\hbox{\gtsima} 10^6\,{\rm K}$ in approximate hydrostatic equilibrium. However, the star-formation rates measured in these galaxies, $\lower.5ex\hbox{\gtsima} 100\,{\rm M}_\odot\,{\rm yr}^{-1}$, are significantly larger than expected from the cooling of the hot CGM, and their prevalence exceeds that expected from mergers \citep{Dekel09}. As outlined in \se{intro}, such galaxies are fed by cold, $T\sim 10^4\,{\rm K}$ gas streams from the cosmic web, which efficiently penetrate the hot halo all the way to the central galaxy \citep{Keres05,db06,Ocvirk08,Dekel09,CDB,FG11,vdv11}. The shearing against the hot CGM makes these streams susceptible to KHI. This has motivated several detailed studies of KHI in such systems, with $\delta\sim (30-100)$ and $M_{\rm b}\sim (0.5-2)$ (M16; P18; M19). As cosmological simulations lack the spatial resolution to properly resolve KHI in the streams, these studies have been idealized, accounting thus far only for non-radiative hydrodynamics without gravity. \smallskip These studies find that sufficiently narrow streams, with $R_{\rm s}/R_{\rm v}\lower.5ex\hbox{\ltsima} (0.005-0.05)$, where $R_{\rm v}$ is the halo virial radius, will disrupt in the CGM before reaching the central galaxy. The threshold value of $R_{\rm s}$ depends on $(M_{\rm b},\delta)$. However, our results suggest that in a certain regime of parameter space, self-gravity may stabilize streams and halt their disruption. Even if the line mass is very low compared to the critical value, $\mu\sim 0.1$, we find that buoyancy can prevent the shear layer from penetrating the inner stream (\figs{mpl1_maps} and \figss{mixing}). For $\delta=100$, we find that the penetration rate of the shear layer into the stream is reduced by a factor of $\lower.5ex\hbox{\gtsima} 3$ when self-gravity is included (\fig{mixing}).
This implies that the previous estimates of the upper limit on the radius of streams that can disrupt in the CGM should be reduced by a similar factor, namely $R_{\rm s}/R_{\rm v}\lower.5ex\hbox{\ltsima} (0.0015-0.015)$. Very narrow streams may thus survive the journey to the central galaxy, though they are likely to reach it somewhat wider and more diluted than they began. \smallskip M19 also found that typical streams can significantly decelerate in the CGM, dissipating $\sim (10-50)\%$ of their bulk kinetic energy before the central galaxy. If this energy is subsequently radiated away, it can significantly contribute to the Ly$\alpha$ emission observed in the CGM of massive high-$z$ galaxies. Our results show that the self-gravity of the gas is unlikely to alter this conclusion, because the deceleration rates and the entrainment of background mass are unaffected (\figs{mixing} and \figss{momentum}). \smallskip Other studies have suggested that at higher redshift, $z\lower.5ex\hbox{\gtsima} 5$, the streams feeding massive galaxies may be gravitationally unstable, with $\mu\sim 1$ \citep{M18a}. These authors speculated that such streams could gravitationally fragment while still in the halo, and that this could lead to the formation of metal-poor globular clusters and stars directly in the halos of high-$z$ galaxies. While this study did not account for KHI in the streams, our results suggest that this is unlikely to affect their conclusions, since for $\mu\sim 1$ and $M_{\rm b}\sim 1$, GI is unaffected by KHI (\figs{mpl9_maps} and \figss{evolution}). We note that for cosmic web filaments far from haloes, only GI operates, as no shear is expected. \subsubsection{ISM Filaments} \label{sec:disc_1_2} \smallskip As outlined in \se{intro}, numerous filamentary structures are observed in the ISM, in particular in star-forming regions such as giant molecular clouds. While much attention has been paid to the gravitational stability and fragmentation of such filaments, these studies do not consider KHI induced by shearing motions between the filament and its surroundings. This is despite the fact that strong shearing motions and even signatures of KHI have been detected in molecular clouds and around filaments \citep[e.g.][]{Rodriguez92,Berne10,Berne12}. Numerical simulations of molecular clouds in the central molecular zone have also revealed strong shearing motions, which generate turbulence and reduce the SFR by a factor of $\sim 7$ compared to nearby clouds \citep{Federrath16}. It is thus important to consider how KHI might affect the fragmentation of ISM filaments. \smallskip We note that in this case, the shearing motion is thought to be due to a background ``wind'' flowing across a roughly static filament, rather than a stream flowing through a static background. However, due to Galilean invariance, these two scenarios should behave identically. \smallskip The regions surrounding ISM filaments are often extremely turbulent, with turbulent Mach numbers of order 10 or higher, and the filaments themselves are often supervirial, with $\mu\lower.5ex\hbox{\gtsima} 1$. This is obviously very different from our initial conditions of a smooth filament in hydrostatic equilibrium (see \se{phys}). However, subvirial filaments with $\mu<1$ have been observed (\citealp{Henshaw16}; \citealp{Hacar18}, and references therein; \citealp{Orkisz19}).
In some cases, these low line-mass filaments host pre-stellar cores which are at least partly supported by external pressure \citep{kirk17,Seo18}, consistent with GI in filaments with $\mu_{\rm cr}<\mu<1$ (\fig{evolution}). If $\mu$ is known, this constrains $\mu_{\rm cr}$ from above, which in turn can be used to place constraints on the properties of the confining medium, and in particular on $M_{\rm b}$, the velocity of the shear flow between the filament and the background (\fig{criticalmu}). \subsubsection{Tidal Disruption Events} \label{sec:disc_1_3} Stars that wander too close to a supermassive black hole, such as those found in the centres of most massive galaxies, can be disrupted by the strong tidal forces exerted by the black hole \citep{Rees88}. Following the disruption, the stellar debris often evolves into a gas stream, part of which accretes onto the black hole, producing a luminous flare. Following their formation, these streams are rendered gravitationally stable by the tidal shear of the black hole, so bound clumps are unlikely to form along the stream. Furthermore, the streams can be treated as approximately in hydrostatic equilibrium in the cylindrically-radial direction \citep{Coughlin2015,Coughlin2016,Coughlin2016b}. Recently, it has been argued that interactions between the debris stream and the ambient tenuous gas near the galactic centre can render such streams unstable to KHI, with nominal disruption times shorter than the infall timescale of the stream onto the black hole \citep{Bonnerot2016}. If true, this would significantly reduce the expected luminosity of the accretion flare. However, as we have shown, even in weakly self-gravitating streams, total stream disruption is significantly delayed due to buoyancy within the stream. This would mean that KHI in streams below $\mu_{\rm cr}$ will be stopped by buoyancy, and the decrease in the flare luminosity predicted by \citet{Bonnerot2016} may be overestimated. Such a scenario can be tested with dedicated simulations. \subsection{Caveats and Additional Physical Effects} \label{sec:phys} \smallskip While our analysis has focused on elucidating the interplay between KHI and GI in filaments, applications of our results to astrophysical scenarios require careful consideration of additional physical processes that have not yet been taken into account. These include the assumed isentropic initial conditions and lack of radiative cooling, the assumption of line-mass ratios $\mu<1$ and hydrostatic equilibrium in the initial conditions, the lack of magnetic fields, and (in the case of cold streams feeding massive galaxies at high redshift) the lack of a dark matter component to the gravitational potential. In this section, we speculate as to the possible effects of these processes, all of which will be explored in future work. \smallskip Radiative cooling is clearly very important for both ISM filaments and intergalactic gas streams. Both of these are expected to have cooling times much shorter than their sound crossing times, which is why they are often modeled as isothermal. Radiative cooling can either enhance or suppress KHI in the linear regime, depending on the slope of the cooling function and on the ratio of the cooling time in each fluid to the sound crossing time \citep{Massaglia92,Bodo93,Vietri97,Hardee97,Xu00}.
However, when these ratios are either much larger or much smaller than unity, the linear growth rates are similar to the adiabatic case at longitudinal wavelengths $\lambda\lower.5ex\hbox{\gtsima} R_{\rm s}$ (Mandelker et al., in prep.). Even in this case, cooling can substantially alter the nonlinear evolution of KHI \citep{Vietri97,Stone97,Xu00,Micono00}, though the net effect again depends on details of the cooling function and the stream parameters. Some authors have found that cooling leads to more violent disruption of the stream \citep{Stone97,Xu00}, while others have found that it prevents stream disruption by limiting the penetration of the shear layer into the stream \citep{Vietri97,Micono00}. If shear layer growth is suppressed and the contact discontinuity maintained, then $t_{\rm shear}$ will increase and $\mu_{\rm cr}$ will decrease (\fig{tcompare}). Thus, the regime where GI dominates over KHI will expand. Furthermore, KHI in a cooling medium has been found to lead to much larger density fluctuations and to the formation of dense knots and filaments inside the stream. These are likely to further enhance GI and filament fragmentation. Cooling is also likely to allow the clumps to collapse to higher densities and reach lower temperatures, thus decreasing their Jeans mass and leading to further fragmentation and collapse. \smallskip Magnetic fields are likely to be dynamically important in ISM filaments. These can have a stabilizing effect on GI, especially when $\mu<1$ (e.g. N87, H98), and also on KHI, where magnetic fields parallel to the flow have been found to stabilise high-$m$ modes and suppress shear layer growth \citep{Ferrari81,Birkinshaw90}. It is therefore unclear what the net effect will be in terms of the competition between these two processes, and this will likely depend sensitively on the properties of the field. For intergalactic gas streams at high redshift, magnetic fields are likely dynamically unimportant \citep[e.g.][]{Bagchi02}. Nevertheless, they may significantly weaken thermal conductivity and viscosity, which will influence the width of the shear layer (M19) and thus affect the instability. All these effects should be accounted for simultaneously in future work. \smallskip When considering intergalactic gas streams, we must also account for the contribution of the host dark matter filament to the gravitational potential. To our knowledge, the gravitational stability of a gas stream embedded in a dark matter filament has not been studied. The dark matter may stabilise the stream by making it more buoyant, or it may destabilise the stream by increasing the inward radial gravitational force, thus requiring non-thermal turbulent motions to support the stream against radial collapse. This may also suppress KHI by further limiting shear layer growth and stream disruption (see \figs{mpl1_maps}-\figss{mixing}). The central dark matter halo into which the streams are flowing will also affect their evolution. The central potential focuses the stream into a conical shape with its radius decreasing towards the halo centre, $R_{\rm s}\propto r$ \citep{Dekel09,Voort12}. This decreases the KHI timescales, which are proportional to $R_{\rm s}$ (\equsnp{tau_diss}-\equmnp{tau_shear}). However, this focusing also increases the stream density, with $\rho\propto R_{\rm s}^{-2}\propto r^{-2}$, resulting in a decrease of the free-fall time, $t_{\rm ff}\propto \rho^{-1/2}\propto r$.
Since $t_{\rm max}\propto t_{\rm ff}$, the ratio $t_{\rm max}/t_{\rm shear}$ is unlikely to vary significantly throughout the halo, and likewise the critical line-mass ratio, $\mu_{\rm cr}$. However, this must be studied in more detail, as must the effect of gravitational acceleration towards the halo centre on the evolution of KHI and GI in intergalactic cold streams. \smallskip Throughout our analysis, we assumed that filaments began in hydrostatic equilibrium and without any internal non-thermal support such as turbulence or vorticity. This is unlikely to be the case for either ISM filaments or intergalactic streams. Theoretical studies of GI in ISM filaments growing self-consistently via radial accretion have shown that turbulence builds up inside the stream with Mach numbers of order unity and contributes to its support \citep{Heitsch13,Clarke16,Clarke17,Heigl18b}. Despite this, the filament was found to fragment when its line mass reached the critical value for hydrostatic equilibrium, namely at $\mu\lower.5ex\hbox{\gtsima} 1$, in a similar manner to the $\mu<1$ filaments considered here, leading to the formation of Jeans-scale clumps \citep{Clarke16,Clarke17}. It is unclear how these results will change in the presence of KHI. Likewise, it has been suggested that accretion onto cosmic gas streams from the intergalactic medium establishes characteristic density profiles \citep{FG84,Birnboim16}, induces roughly sonic turbulence \citep{M18a} and vorticity \citep{Codis12,Codis15,Laigle15}, and grows streams to $\mu>1$ \citep{M18a}. Such non-equilibrium effects must be considered in order to describe stream evolution. \section{Summary and Conclusions} \label{sec:conc} \smallskip Self-gravitating gaseous filaments are ubiquitous in astrophysics, from sub-pc filaments within the interstellar medium to Mpc-scale streams feeding galaxies along the cosmic web. As such, they may be subject to gravitational instability (GI), which leads to stream fragmentation and to the formation of long-lived, collapsed clumps along the stream axis. In many cases, such filaments are also susceptible to Kelvin-Helmholtz Instability (KHI) due to a shear flow against a confining background medium, which acts to mix the filament with the background fluid via a turbulent shear layer. Motivated by this, we have performed the first study of the evolution of a self-gravitating filament or stream undergoing KHI, using simple analytic models and hydrodynamic simulations. Such a system is characterised by three dimensionless parameters: the Mach number of the stream with respect to the sound speed in the (static) background, $M_{\rm b}$, the ratio of the central density in the stream to the background density outside the stream, $\delta_{\rm c}$, and the ratio of the mass-per-unit-length (line-mass) of the stream to the maximal line-mass for which initial hydrostatic equilibrium is possible, $\mu$. The current analysis is restricted to filaments with $\mu<1$ that are initially in hydrostatic equilibrium. Our main results can be summarised as follows: \begin{enumerate} \smallskip \item The competition between GI and KHI is governed by the ratio of the timescale for linear growth of the fastest-growing GI mode, $t_{\rm max}$, and the relevant nonlinear KHI timescale. When GI is dominated by surface modes, this is the time for the KHI-induced shear layer to expand to a size comparable to the stream radius and destroy the initial contact discontinuity, $t_{\rm shear}$.
If $t_{\rm max}/t_{\rm shear}<1$, GI causes the stream to fragment into long-lived clumps and suppresses mixing with the background medium. Conversely, if $t_{\rm max}/t_{\rm shear}>1$, KHI mixes the stream with the background medium, dilutes its density and suppresses clump formation (\fig{tcompare}). Regardless, the stream is always unstable. When GI is dominated by body modes, clumps may form even when $t_{\rm max}$ is slightly longer than $t_{\rm shear}$, since the contact discontinuity no longer plays a role in GI. \smallskip \item The timescale criterion can be rephrased as a criterion on the line-mass ratio $\mu$. If this is smaller than a critical value that depends on the Mach number and density contrast, $\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$, then KHI will win and mix the stream and background. However, if $\mu>\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$, the stream will fragment into long-lived, bound clumps. $\mu_{\rm cr}$ increases strongly with $M_{\rm b}$, and has a weak tendency to increase with $\delta_{\rm c}$ (\fig{criticalmu}). For $M_{\rm b}\lower.5ex\hbox{\ltsima} 2.5$ we have $\mu_{\rm cr}\lower.5ex\hbox{\ltsima} 0.5$. At larger Mach numbers, when KHI is dominated by high-order azimuthal surface modes, $\mu_{\rm cr}\lower.5ex\hbox{\ltsima} 0.9$. In practice, values of $\mu_{\rm cr}>0.9$ are not relevant, as GI body modes will cause clump formation even if $\mu\lower.5ex\hbox{\ltsima} \mu_{\rm cr}$. \smallskip \item When $\mu<\mu_{\rm cr}$, the evolution of KHI outside the stream boundary is similar to the case of a non-gravitating, uniform-density stream, studied in detail by \citet{M18b}. Self-gravity slows the expansion of the shear layer into the background by less than $20\%$ for large $\mu$, and significantly less than that for smaller $\mu$ (\fig{mixing}). Consequently, the stream deceleration due to entrainment of background mass in the shear layer is also unaffected, and follows the analytical prediction (\fig{momentum}). \smallskip \item However, gravity does qualitatively affect the penetration of the shear layer into the stream. At $t\lower.5ex\hbox{\gtsima} (2-3)t_{\rm sc}$, the penetration rate of the shear layer into the stream is slowed by a factor of $\lower.5ex\hbox{\gtsima} 3$ compared to the no-gravity case (\fig{mixing}). This is due to restoring buoyancy forces in the stream interior, corresponding to values of the Richardson number, ${\rm Ri}>0.25$ (\fig{Ri}). This significantly slows the final disruption of the stream by KHI, as a dense central core remains partly shielded against mixing (\fig{mpl1_maps}). \smallskip \item The clumps that form by GI when $\mu>\mu_{\rm cr}$ are largely unaffected by KHI. They are typically less massive than the Jeans mass, and supported partially by external pressure. However, as $\mu\rightarrow 1$ the clumps approach the Jeans mass and the external pressure support becomes negligible. In all cases, the internal turbulent motions are subsonic and turbulent pressure support is negligible, though the turbulent Mach number increases towards lower $\mu$ (\fig{evolution}). KHI seems to have a minor effect on the clump properties, which are largely insensitive to the Mach number of the flow, even in the static limit, $M_{\rm b}=0$. \smallskip \item Our finding that self-gravity may shield the inner core of filaments from disruption by KHI implies that recent studies of KHI in gas streams feeding massive galaxies at high-$z$ may have overestimated the disruption of these streams in the CGM.
However, the dissipation and deceleration rates should not be affected. Additionally, our finding that GI-induced fragmentation only occurs when $\mu>\mu_{\rm cr}(M_{\rm b},\delta_{\rm c})$ can be used to place constraints on the properties and kinematics of the confining medium surrounding low-mass filaments in the ISM. However, in order to properly address these phenomena, additional physics such as radiative cooling, magnetic fields, external gravitational potential, and non-thermal turbulent motions will have to be added to our models. \end{enumerate} \section*{Acknowledgments} We thank Romain Teyssier for many helpful suggestions while running the simulations. We thank Frank van den Bosch, Frederic Bournaud, Andreas Burkert, Drummond Fielding, Shuo Kong, Diederik Kruijssen, and Xun Shi for insightful discussions. NM acknowledges support from the Klaus Tschira Foundation through the HITS Yale Program in Astrophysics (HYPA). The simulations were performed on the Omega and Grace HPC clusters at Yale. This work is supported in part by the facilities and staff of the Yale Center for Research Computing. AD was partly supported by the grants BSF 2014-273, GIF I-1341-303.7/2016 and NSF AST-1405962.
\section{Using Integrated Galaxy Spectra as Chemical Diagnostics} Optical emission lines from H{\sc {II}}\ regions have long been the primary means of gas-phase chemical diagnosis in galaxies (Aller 1942; Searle 1971; reviews by Peimbert 1975; Pagel 1986; Shields 1990; Aller 1990). With the advent of large telescopes and sensitive spectrographs, nebular emission lines from distant star-forming regions can probe the chemical evolution of objects at earlier epochs (Steidel {\it et~al.\/}\ 1996; Kobulnicky \& Zaritsky 1998) in a complementary manner to absorption line measurements (reviewed by Lauroesch {\it et~al.\/}\ 1996). Optical emission line spectroscopy preferentially samples the warm ionized phase of the interstellar medium in the immediate vicinity of recent star formation events (i.e., H{\sc {II}}\ regions). Madau {\it et~al.\/}\ (1996) and Lilly {\it et~al.\/}\ (1996) observe that the star-formation rate was higher in the past, so potential emission line targets are plentiful at redshifts of $0.3<z<2$. Compared to absorption-line studies, which are limited to lines of sight toward bright background quasars, nebular emission line observations are possible in any galaxy with H{\sc {II}}\ regions. Complications due to line width ambiguities, saturation effects, multiple velocity components, and ionization corrections are less severe or absent for nebular spectroscopy. Limited signal-to-noise and lack of spatial resolution are the two most formidable obstacles for ground-based spectroscopic studies of cosmologically distant H{\sc {II}}\ regions. A typical ground-based resolution element of 1.0$^{\prime \prime}$, corresponding to a linear size of 5.2 kpc at $z=0.5$, will encompass entire galaxies.\footnote{For a cosmology with $H_0$=75 km~s$^{-1}$\ Mpc$^{-1}$, $\Lambda=0$, and $q_0$=0.1.} Our motivation in this paper is to explore the utility of {\it spatially-integrated} emission line spectroscopy for studying the chemical properties of star-forming galaxies at earlier epochs. Osterbrock (1989) thoroughly discusses the standard techniques for measuring the chemical properties of ionized gas. Typically, chemical analyses of H{\sc {II}}\ regions require measurement of H and He recombination lines along with collisionally-excited lines from one or more ionization states of heavy element species. Oxygen is the most commonly used metallicity indicator in the ISM by virtue of its high relative abundance and strong emission lines in the optical part of the spectrum (e.g., [O~II] $\lambda$3727 and [O~III] $\lambda\lambda$4959,5007). In best-case scenarios, the electron temperature of the ionized medium can be derived from the ratio of a higher-excitation auroral line, such as [O~III] $\lambda$4363, to [O~III] $\lambda$5007. In practice, [O~III] $\lambda$4363 is difficult to measure since it is typically only a few percent the strength of H$\beta$ even in the most metal-poor H{\sc {II}}\ regions (I~Zw~18; (O/H)=0.02(O/H)$_\odot$), and becomes unmeasurably weak in more metal-rich environments. The electron density of the medium may also be constrained using the density-sensitive ratio of the [S~II] $\lambda$6716/[S~II] $\lambda$6731 or [O~II] $\lambda$3726/[O~II] $\lambda$3729 doublets. When bright H{\sc {II}}\ regions are available, abundances of He, C, Ne, Si, S, and Ar are well-measured in many local galaxies.
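To make the temperature diagnostic concrete, the standard low-density relation between $T_e$ and the [O~III] line ratio (Osterbrock 1989) can be inverted in a few lines of code. The following Python sketch is illustrative only; the function name is ours, and real analyses use full atomic-data software:
\begin{verbatim}
import math

def T_e_oiii(ratio, n_e=100.0, T0=1.0e4, n_iter=5):
    """Electron temperature (K) from R = I(4959+5007)/I(4363),
    using R = 7.90 exp(3.29e4/T) / (1 + 4.5e-4 n_e/T^0.5)
    (Osterbrock 1989), solved by fixed-point iteration."""
    T = T0
    for _ in range(n_iter):
        T = 3.29e4 / math.log(
            ratio * (1.0 + 4.5e-4 * n_e / math.sqrt(T)) / 7.90)
    return T

print(T_e_oiii(150.0))   # ~11,200 K for an observed ratio of 150
\end{verbatim}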
However, in cosmologically distant H{\sc {II}}\ regions, only the brightest lines may be detectable, even using the collecting area and sensitivity of the largest telescope/instrument combinations envisioned today. A typical ground-based resolution element will encompass large fractions of galaxies exhibiting a wide variety of physical characteristics. Are global galaxy spectra in any way indicative of the physical properties of a galaxy's ISM? Three effects may bias spatially-integrated (global) emission line spectra of galaxies (kpc-sized apertures) compared to the values that would be derived from individual H{\sc {II}}\ regions observed with higher spatial resolution (10--100 pc-sized apertures). 1) The aperture may include a mixture of gas or multiple H{\sc {II}}\ regions with similar metallicity but differing ionization conditions. 2) The aperture may include a mixture of gas or multiple H{\sc {II}}\ regions with significantly different metallicities. 3) The aperture may include a mixture of overlapping stellar absorption and nebular emission features which adversely affect the measured equivalent widths and emission line strengths. One or several of these effects may reduce the precision of chemical abundance determinations using global galaxy spectra. In this paper, we explore the magnitude of such effects by comparing chemical analyses based upon local and global spectroscopy of nearby, well-studied galaxies. In Section~2 we consider the case of dwarf galaxies with uniform gas-phase abundance distributions, but a range of ionization conditions. In Section~3, we consider larger spiral galaxies which exhibit both chemical abundance and ionization variations. In Section~4 we consider the case of low-S/N spectra in which emission line strengths may be affected by underlying stellar absorption. We also consider the possibility of measuring oxygen abundances when only the H$\beta$ and [O~III] emission lines are detected. The summary in Section~5 recaps the prospects for using global galaxy spectroscopy to study the chemical properties of star-forming objects in the early universe. {\it Note of Caution: } The prescriptions for deriving gas-phase metal abundances provided here assume that the nebular emission line spectrum is generated by photoionization from massive stars. Non-stellar ionizing sources, such as AGN, can generate emission spectra that superficially resemble photoionized H{\sc {II}}\ regions. Generally, non-stellar energy sources produce distinctive emission line ratios compared to ordinary H{\sc {II}}\ regions. Veilleux \& Osterbrock (1987) and Heckman (1980) provide diagnostic diagrams which should be consulted to ascertain that ionizing sources are stellar in nature before proceeding with chemical analysis. \section{The Case of Metal-Poor Irregular and Dwarf Galaxies} Low-mass galaxies with sizes and luminosities smaller than those of the LMC appear to be chemically homogeneous on scales from $\sim$10 pc to 1 kpc (review by Kobulnicky 1998). Yet, they often contain a multiplicity of H{\sc {II}}\ regions, and they exhibit a variety of ionization conditions. Localized high surface-brightness, high-ionization H{\sc {II}}\ regions lie amidst a network of extended low-ionization filaments which sometimes stretch for a kpc or more (e.g., Hunter 1994; Hunter \& Gallagher 1997). What is the effect of mixing these regions of disparate ionization parameter into a single aperture?
We address this question empirically by considering both spatially integrated and localized measurements of ionized gas in a sample of nearby irregular galaxies. \subsection{Data Collection} We obtained new spectroscopic observations of the irregular and blue compact galaxies NGC~3125 ($\equiv$Tololo 1004-296), NGC~5253, Henize~2-10, and Tololo 35 ($\equiv$Tololo 1324-276$\equiv$IC4249) on 4-6 February 1998 using the 4 m telescope + RC spectrograph at the Cerro Tololo Inter-American Observatory. The KPGL-3 grating produced a dispersion of 1.2 \AA\ pixel$^{-1}$ across the wavelength range 3700 \AA\ -- 7000 \AA\ imaged onto the 1k $\times$ 3k Loral CCD. A slit width of 2$^{\prime \prime}$\ produced a spectral resolution of 5.1 \AA. Seeing averaged 1-2$^{\prime \prime}$\ through occasional light cirrus clouds, and airmasses ranged from 1.0 to 1.6. The spatial scale of the CCD was 0.50$^{\prime \prime}$\ pixel$^{-1}$. The data were reduced in the standard fashion, by subtracting overscan bias and dividing by a flat-field image produced from a combination of dome-illuminated exposures and exposures of a quartz continuum lamp inside the spectrograph. Based on exposures of the twilight sky, we applied a correction to the data for variations in the illumination pattern along the 5$^{\prime}$\ slit length. This combination of dome and internal lamp flats was necessary to achieve good S/N over the full wavelength range, and to correct for scattered light within the instrument during quartz lamp exposures. Periodic exposures of HeAr arc lamps allowed a wavelength solution accurate to $\sim$0.5 \AA\ rms. Multiple observations of the standard stars GD108, GD50, Hz4, and SA95-42 (Oke 1990) with a slit width of 6$^{\prime \prime}$\ over a range of airmass provided a site-dependent atmospheric extinction curve, which we found to be consistent with the mean CTIO extinction curve. Standard star exposures also allowed us to derive the instrumental sensitivity function, which had a pixel-to-pixel RMS of 2\%. Between five and twenty 6-pixel apertures were defined across the nebular portion of each galaxy. We extracted multiple one-dimensional spectra of each object with aperture sizes of 6 pixels, which correspond to linear sizes of 20 pc to 100 pc at the distances of the galaxies. We used emission-free regions 20 pixels or larger on either side of the aperture to define the mean background sky level, which was subtracted from the source spectrum. In order to measure a global emission line spectrum, we performed 5- and 10-minute drift scans of each galaxy by moving the telescope at a rate of 1$^{\prime \prime}$~s$^{-1}$ back and forth across the galaxy perpendicular to the slit. Summing over the spatial extent of the nebular emission (100--300 pixels), we produced a global spectrum of the galaxy at the same spectral resolution as the fixed-pointing exposures. Figure~1 displays the global spectrum for each galaxy in arbitrary units. For display purposes, we plot each spectrum a second time, multiplied by a factor of 40. To augment the sample, we selected four additional irregular galaxies with both global and local nebular spectroscopy in the literature. Kennicutt (1992---K92) presents spatially-integrated spectra for NGC~1569, NGC~4449, and NGC~4861$\equiv$Mrk 59.
We used nebular spectroscopy of individual H{\sc {II}}\ regions for these galaxies from Kobulnicky \& Skillman (1997---NGC~1569), Talent (1980---NGC~4449), and our own 3.5 m Calar Alto spectra (Kobulnicky, in prep---NGC~4861; observations as described in Skillman, Bomans \& Kobulnicky 1997). Using the longslit data described in Kobulnicky \& Skillman (1996), we constructed a global spectrum for NGC~4214. We analyzed the one-dimensional localized spectra and global spectra for each galaxy using the nebular emission-line software and procedures described in Kobulnicky \& Skillman (1996). The flux in each emission line was measured with single Gaussian fits. In the case of blended lines or lines with low S/N, we fixed the width and position of the fit using the width and position of other strong lines nearby. Logarithmic extinction parameters, c(H$\beta$), underlying stellar hydrogen absorption, EW(abs), and the electron temperatures were derived iteratively and self-consistently for each object using observed Balmer line ratios compared to theoretical case B recombination ratios (Hummer \& Storey 1987). Table~1 lists the emission line strengths dereddened relative to H$\beta$, c(H$\beta$), and an adopted value for the underlying stellar hydrogen absorption, EW(abs), for all 8 global spectra. For the new observations obtained here, tabulated uncertainties on the emission line strengths and derived physical properties take into account errors due to photon noise, detector noise, sky subtraction, flux calibration, and dereddening. For global spectra described in Kennicutt (1992), we determined uncertainties empirically from the RMS in the continuum portion of the spectrum adjacent to each line. Analyses of the physical conditions in each galaxy follow Kobulnicky \& Skillman (1996). For each spectrum (except He~2-10 and NGC~4449, where [O~III] $\lambda$4363 is not detected), we made a direct determination of the electron temperatures, $T_e(O~III)$, from the [O~III] $\lambda$4363 and [O~III] $\lambda$5007 lines. We determined the electron temperature of the lower-ionization zones using the empirical fit to photoionization models from Pagel {\it et~al.\/}\ (1992) and Skillman \& Kennicutt (1993), \begin{equation} T_e(O^+)=2(T_e^{-1}(O^{++}) + 8\times10^{-5})^{-1}. \end{equation} \noindent The final O/H ratio involves the assumption that \begin{equation} { O\over H} = {{O^{+}}\over{H^{+}}} + {{O^{++}}\over{H^{+}}}. \end{equation} \noindent Analysis of the [O~I] $\lambda$6300 lines in the global and localized spectra showed that neutral oxygen accounts for less than 4\% of the total in all objects. Inclusion of the O$^{0}$ contribution would raise the oxygen abundance by $<0.02$ dex, and is probably not a significant factor since $O^{0}$ co-exists with $H^{0}$ in photoionization models. Nebular He~II $\lambda$4686 was not detected in any of the global spectra except NGC~1569, so we assume that the contribution from highly ionized species like $O^{3+}$ is negligible. The nitrogen-to-oxygen ratio, N/O, is computed assuming \begin{equation} { N\over O} = {{N^{+}}\over{O^{+}}}. \end{equation} Equation~3 appears justified in low-metallicity environments, where photoionization models for a range of temperatures and ionization parameters indicate uncertainties of less than 20\% through this approximation (Garnett 1990). Table~2 summarizes the derived electron temperatures, oxygen abundances, and nitrogen abundances for each galaxy.
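The two-zone scheme of Equations 1--3 is simple to encode. The sketch below is illustrative Python (the emissivity step that converts line strengths into ionic abundances is elided):
\begin{verbatim}
def T_e_Oplus(T_e_Opp):
    """Equation 1: T_e in the O+ zone from the measured
    T_e(O++), both in Kelvin."""
    return 2.0 / (1.0 / T_e_Opp + 8.0e-5)

def total_O_over_H(O_plus, O_plusplus):
    """Equation 2: O/H as the sum of the two dominant ionic
    stages, O+/H+ and O++/H+."""
    return O_plus + O_plusplus

print(T_e_Oplus(13000.0))   # ~12,750 K
\end{verbatim}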
All objects except Henize 2-10 lie in the range 7.9$<$12+log(O/H)$<$8.4, typical of nearby irregular and blue compact galaxies with metallicities between 1/10 and 1/3 the solar value. We use these results in the next section to investigate the potential for measuring galaxy metallicities from global spectra. The lack of an [O~III] $\lambda$4363 detection in He~2-10, even in the localized smaller apertures ($T_e<$8400 K), places this galaxy in the metal-rich regime. The empirical strong-line relation of Zaritsky, Kennicutt, \& Huchra (1994) indicates 12+log(O/H)$\simeq$8.93 with probable uncertainties of 0.15 dex. Due to the lack of measured [O~III] $\lambda$4363, we exclude He~2-10 from further analyses. \subsection{Comparison of Local and Global Spectroscopy} Figure 2 shows the derived electron temperature for each spectrum, along with the signal-to-noise ratio of the [O~III] $\lambda$4363 line. Different symbols distinguish each galaxy. Small symbols denote measurements of individual H{\sc {II}}\ regions through small apertures, while large symbols with error bars represent the global spectrum. Figure~3 shows the resulting oxygen abundance, 12+log(O/H), for each object, versus signal-to-noise ratio of [O~III] $\lambda$4363. This combination of figures reveals that apertures (within a given galaxy) having the highest electron temperatures exhibit the lowest oxygen abundances. At constant oxygen abundance such a correlation is expected, since the emissivity of a collisionally excited line, $\epsilon$, rises steeply with $T_e$ (cf., Osterbrock 1989), \begin{equation} \epsilon \propto\ T_e^{-1/2} e^{-h\nu/kT_e}, \end{equation} so that the abundance inferred from a given line strength decreases as the adopted electron temperature increases. Figures~2 and 3 show that the global spectra consistently indicate higher (lower) electron temperatures (oxygen abundances) than the individual H{\sc {II}}\ regions observed using smaller apertures within the same galaxy. In most cases, the global spectrum produces electron temperatures (oxygen abundances) consistent with the highest (lowest) value derived from the smaller apertures. One possible cause for such a trend is that the [O~III] $\lambda$4363 line measured in the global spectra has a lower signal-to-noise ratio, and is systematically overestimated in the presence of significant noise, since Gaussian fits to emission lines with very low S/N ratios are systematically biased toward larger values. However, since the S/N of [O~III] $\lambda$4363 in all the global spectra exceeds 10 (except NGC~4449), this effect is not likely to be the major cause of a systematic temperature deviation. A more likely possibility is that the regions of highest nebular surface brightness, which also tend to have the highest electron temperatures, dominate the global spectrum. Since the global spectrum is effectively an intensity-weighted average, it preferentially selects the regions of highest surface brightness, which are likely to be ionized by the youngest, hottest stars, and thus have the highest electron temperature. The exponential dependence of the [O~III] $\lambda$4363 emissivity on $T_e$ biases the measured electron temperature toward higher values. Thus, temperature fluctuations within the observed aperture will give rise to artificially elevated electron temperatures derived from collisionally-excited lines, as observed in individual H{\sc {II}}\ regions and discussed extensively in the literature (Peimbert 1967; Kingdon \& Ferland 1995; Peimbert 1996).
Figure~4 illustrates the range of oxygen abundances derived for each galaxy, along with the global value plotted using a large symbol and error bars showing the uncertainties due to statistical observational errors. Figure~4 demonstrates more clearly that the oxygen abundances derived from global spectra systematically lie 0.05--0.2 dex below the median values computed from smaller apertures. For the localized measurements within a given galaxy, there is a strong correlation between O/H and electron temperature, as shown in Figure~5. The slope of this correlation within each galaxy is consistent with the correlation expected in the presence of random electron temperature uncertainties (solid line). Random or systematic errors in the adopted electron temperature can produce spurious variations in O/H that mimic real oxygen abundance fluctuations. The solid line in Figure~5 illustrates the direction along which the derived oxygen abundances will deviate in the presence of significant errors on the electron temperature. The data are consistent with constant metal abundance throughout each galaxy and a dispersion of $\Delta(O/H)\sim0.1$--$0.2$ dex caused by variations of $\Delta{T_e}\sim$1000--2000 K in the derived electron temperature. Such low-mass galaxies typically have O/H dispersions of 0.1 dex but no measurable chemical gradient (cf., review by Kobulnicky 1998), even in the diffuse ionized gas at large radii (Martin 1997). The data in Figure~4 are consistent with the known correlation between blue magnitude and oxygen abundance obeyed by nearly all known galaxy types (e.g., Lequeux {\it et~al.\/}\ 1979; French 1980; Faber 1973; Brodie \& Huchra 1991; Zaritsky, Kennicutt, \& Huchra 1994---ZKH; Richer \& McCall 1995). The luminosity-metallicity relation derived by Skillman, Kennicutt, \& Hodge (SKH 1989) for irregular galaxies appears as a dashed line. NGC~4861 and Tololo 35 deviate the most from this trend, suggesting that either their luminosities are not well measured, or that they are slightly over-luminous for their metallicities compared to the majority of galaxies. However, their deviation is consistent with the observed dispersion in the luminosity-metallicity relation, 0.3 dex at a fixed luminosity. \subsection{The Effects of Variable Temperature and Ionization} Can global spectra reliably probe the chemical content of low-mass galaxies? Several effects may cause the global emission line ratios to produce oxygen abundance estimates $0.05$--$0.2$ dex lower than those derived from small-aperture observations of individual H{\sc {II}}\ regions. Since the abundances derived from collisionally-excited lines are sensitive to the assumed electron temperature, the most likely cause involves temperature uncertainties or temperature fluctuations within the observed region. Peimbert (1967) first discussed the effects of temperature fluctuations within H{\sc {II}}\ regions. Since then, a variety of authors have investigated their effects on the measured chemical composition of the Orion nebula (Walter, Dufour, \& Hester 1992), on planetary nebulae (Liu \& Danziger 1993; Kingdon \& Ferland 1995), on primordial helium abundance measurements (Steigman, Viegas, \& Gruenwald 1997), and on evolving starbursts (P\'erez 1997). Typically, the magnitude of the temperature fluctuations is described by Peimbert's (1967) parameter, $t^2$, the root-mean-square temperature variation.
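For reference, this parameter is defined as the emission-weighted variance of the temperature (Peimbert 1967),
\begin{displaymath}
t^2 = {{\int (T_e-T_0)^2\, N_e N_{\rm ion}\, dV}\over{T_0^2 \int N_e N_{\rm ion}\, dV}}, \qquad T_0 = {{\int T_e\, N_e N_{\rm ion}\, dV}\over{\int N_e N_{\rm ion}\, dV}},
\end{displaymath}
\noindent where $N_{\rm ion}$ is the density of the ion under consideration.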
Observational constraints suggest that $t^2$ ranges from $t^2=0.03$ for most planetary nebulae to $t^2=0.1$ for a few planetary nebulae (Liu \& Danziger 1993) and giant extragalactic H{\sc {II}}\ regions like NGC~2363 (Gonzalez-Delgado {\it et~al.\/}\ 1994). In addition to temperature fluctuations within individual star-forming regions, many galaxies exhibit temperature gradients of 1000 K -- 3000 K (NGC~5253---Walsh \& Roy 1989; II~Zw~40---Walsh \& Roy 1993; I~Zw~18---Martin 1996; NGC~4214---Kobulnicky \& Skillman 1996; NGC~1569---Kobulnicky \& Skillman 1997). Thus, real galaxies containing one or more giant H{\sc {II}}\ regions will contain an arbitrary mixture of gas with differing electron temperatures. They will produce global spectra that cannot easily be characterized by the simple single-temperature, two-zone approximation ($T_e(O^{+})$ and $T_e(O^{++})$) commonly used. Peimbert (1967) and subsequent researchers have shown that in the presence of temperature fluctuations, collisionally-excited lines indicate electron temperatures which are 1000 K -- 4000 K higher than $T_e$ measurements from recombination lines. A second factor that may influence chemical determinations from global galaxy spectra, even in the absence of temperature fluctuations, is variation of the ionization parameter, $U=Q_{Ly}/(4\pi R^2~n_H~c)$, the dimensionless ratio of the ionizing photon density to the hydrogen density. Diffuse, inter-H{\sc {II}}\ region gas (i.e., Diffuse Ionized Gas; DIG) with low ionization parameter ($\log{U}\simeq-3.5$) accounts for 20\% to 50\% of the Balmer line emission in irregular and spiral galaxies (Hunter \& Gallagher 1990; Martin 1997; Ferguson {\it et~al.\/}\ 1996). Diffuse ionized gas appears to be mostly photo-ionized, perhaps with a shock-excited component (Hunter \& Gallagher 1990; Martin 1997). Diffuse ionized gas in irregular galaxies exhibits higher [O~II]/[O~III] ratios ($1<[O~II]/[O~III]<10$) than H{\sc {II}}\ regions ($0.1<[O~II]/[O~III]<2$). This is a signature of a decreasing ionization parameter, as the distance from the ionizing O and B stars increases (Hunter \& Gallagher 1990; Hunter 1994; Martin 1997). The electron temperature of this diffuse gas is not well constrained, however. For the six galaxies with local and global spectroscopy, there is weak evidence for a correlation between the ionization parameter, as measured by $O^+/O^{++}$, and the magnitude of the offset between the globally-derived oxygen abundance and the mean O/H derived from localized spectra. Figure~6 shows the global $O^+/O^{++}$ for each galaxy versus the difference between the mean (O/H) for individual H{\sc {II}}\ region measurements and the (O/H) ratio derived from the global spectra for each object. The data have a linear correlation coefficient of 0.52, consistent with the existence of a correlation at the 70\% confidence level. The largest deviations are nearly $-0.15$ dex for NGC~1569 and NGC~4861, which have the lowest $O^+/O^{++}$ values of 0.2, consistent with the smallest contribution from diffuse ionized gas with a low ionization parameter. None of the objects in this sample appear to be dominated by low-ionization gas. In all cases, $O^+/O^{++}<1.0$, so we cannot address empirically the impact of large contributions from diffuse ionized gas with these data. Global and spatially-resolved observations of large irregular galaxies dominated by diffuse gas (e.g., NGC~1800, NGC~3077, NGC~4449) can address this issue.
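For orientation, the ionization parameter defined above is straightforward to evaluate; the sketch below is illustrative Python with representative input values (not measurements from our sample), reproducing the high- and low-$U$ regimes quoted in the text:
\begin{verbatim}
import math

C = 2.998e10                      # speed of light [cm/s]
PC = 3.086e18                     # parsec [cm]

def log_U(Q_ly, R_pc, n_H):
    """log10 of U = Q_Ly / (4 pi R^2 n_H c); Q_ly in ionizing
    photons/s, R in pc, n_H in cm^-3."""
    R = R_pc * PC
    return math.log10(Q_ly / (4.0 * math.pi * R**2 * n_H * C))

print(log_U(1e51, 50.0, 100.0))   # ~ -3.0: H II region-like
print(log_U(1e51, 200.0, 30.0))   # ~ -3.6: DIG-like
\end{verbatim}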
To simulate the possible effects of temperature fluctuations and varying ionization parameter on chemical analysis, we construct a set of six emission line spectra which characterize interstellar gas with a realistic range of temperatures and ionization parameters, but with a common oxygen abundance, 12+log(O/H)=8.0. We refer to these six spectra as ``basis spectra''; we mix them in different proportions to simulate the effects of inhomogeneous temperature and ionization conditions on chemical analysis. Table~3 summarizes the physical parameters of the basis spectra. The ``standard'' spectrum, $S$, with an electron temperature, $T_e(O^{++})$, of 13,000 K and $O^+/O^{++}=0.2$, represents a relatively high ionization parameter of $\log{U}\simeq-3$ in the models of Stasi\'nska (1990). Spectra T, U, V, W, and X have $T_e(O^{++})$ of either 13,000 K, 11,000 K, or 9000 K. The temperature differences between spectra, $\Delta{T_e}(O^{++})$=2000 K and $\Delta{T_e}(O^{++})$=4000 K, correspond roughly to $t^2=0.08$ and $t^2=0.14$ in the notation of Peimbert (1967). The former is within the $t^2$ values observed in actual nebulae, while the latter represents an upper bound to observed values. Approximate ionization parameters range from $\log{U}\simeq-3$ to $\log{U}\simeq-4$, parameterized here as $O^+/O^{++}=0.2$ or $O^+/O^{++}=2.0$. This range reflects the variations observed in irregular galaxy H{\sc {II}}\ regions (Martin 1997) and will serve for the modeling required here. However, the $O^+/O^{++}$ ratio in diffuse ionized gas can reach as high as $\sim$7 for line ratios [O~II]/[O~III]$\simeq$10 (Martin 1997). We begin by mixing the emission line spectrum of the standard sample, $S$, with the spectra of samples T, U, V, W, and X in ratios of 80:20, 50:50, 20:80, and 10:90. Our standard nebular software is then used to analyze the resulting composite spectrum. Figure~7 displays the resulting electron temperatures, oxygen abundances, and $O^+/O^{++}$ ratios derived for the composite samples. Different symbols denote samples composed of 80\% (filled squares), 50\% (filled circles), 20\% (open circles), and 10\% (open squares) of the standard sample, $S$. Crosses designate the physical conditions of the basis spectra, T, U, V, W, X, and S, used to construct the composite spectra. Line styles denote combinations of spectra T+S (dash-dot-dot line), U+S (solid line), V+S (dashed line), W+S (dotted line), and X+S (dash-dot line). Figure~7 (top panel) shows that, as the spectrum S ($T_e(O^{++})$=13,000 K, $O^+/O^{++}=0.2$) is increasingly diluted with gas from spectrum U ($T_e$=11,000 K, $O^+/O^{++}=0.2$), the measured electron temperature decreases smoothly, while the measured oxygen abundance becomes slightly underestimated by up to 0.02 dex. For example, when the mixture of the composite spectrum is 80\% S and 20\% U (filled square on the solid line), the derived electron temperature is 12,750 K. The derived oxygen abundance is 12+log(O/H)=7.99, an underestimate by 0.01 dex. As the composite spectrum becomes 50\% S and 50\% U (solid line, filled circle), the derived oxygen abundance is underestimated by 0.02 dex. As the composite mixture becomes dominated by spectrum U (open circle and then open square), the derived physical conditions converge once again toward the temperature of basis spectrum U (11,000 K) and toward the initial oxygen abundance of both spectra, 12+log(O/H)=8.0.
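The qualitative behaviour of such two-component mixtures can be reproduced with a simple calculation. The sketch below is illustrative Python; we assume, for simplicity, that the mixing fractions give each component's contribution to the $\lambda\lambda$4959,5007 flux, and we use the low-density Osterbrock (1989) diagnostic rather than our full nebular software:
\begin{verbatim}
import math

def R_of_T(T):
    """[O III] (4959+5007)/4363 ratio in the low-density limit
    (Osterbrock 1989)."""
    return 7.90 * math.exp(3.29e4 / T)

def apparent_T(fractions, temps):
    """T_e inferred from the composite line ratio of a mixture."""
    strong = sum(fractions)                        # (4959+5007) flux
    weak = sum(f / R_of_T(T) for f, T in zip(fractions, temps))
    return 3.29e4 / math.log(strong / weak / 7.90)

# 50:50 mix of 13,000 K and 9,000 K gas: ~11,200 K, above the
# 11,000 K mean; weighting the hotter gas by its higher [O III]
# emissivity per unit mass would strengthen the bias further.
print(apparent_T([0.5, 0.5], [13000.0, 9000.0]))
\end{verbatim}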
When spectrum S is combined with spectrum V ($T_e$=11,000 K, $O^+/O^{++}=2.0$), the measured deviation from 12+log(O/H)=8.0 becomes more pronounced, up to 0.05 dex. The lower panel of Figure~7 shows a smooth progression in the measured $O^+/O^{++}$ ratio from 0.2 to 2.0. The mixture of spectrum S with spectrum W ($T_e$=9000 K, $O^+/O^{++}=0.2$) shows a substantial systematic deviation from constant metallicity. When the fraction of the lower-temperature, lower-ionization gas represented by spectrum W reaches 50\% to 90\% of the total emission line flux, the oxygen abundance is underestimated by 0.1 to 0.2 dex! A similar underestimate of the oxygen abundance results from spectrum X ($T_e$=9000 K, $O^+/O^{++}=2.0$). Figure~7 demonstrates that temperature fluctuations, modeled in the simplest possible way as a two-temperature medium, are the primary cause of the overestimation of the electron temperature and the underestimation of the oxygen abundance. Ionization parameter variations further exacerbate the systematic underestimate of oxygen abundances. Figure~7 shows that the addition of even a small quantity of high-temperature gas (10\%) creates a significant overestimate of the mean electron temperature. Figure~7 demonstrates that the measured electron temperature, $T_e(O^{++})$, is sensitive to the physical conditions in the hottest $O^{++}$ zone of an H{\sc {II}}\ region due to the strongly non-linear dependence of [O~III] $\lambda$4363 on $T_e$. Furthermore, the electron temperature, $T_e(O^{++})$, in the $O^{++}$ zone is usually different from the electron temperature in lower-ionization $O^+$, $N^+$, $S^+$ zones at larger radii. Since $T_e(O^{+})$ or $T_e([S~II])$ is seldom measured directly, most nebular analysis procedures, including ours, adopt an empirical estimate based on the measured $T_e(O^{++})$ and photo-ionization modeling (Pagel {\it et~al.\/}\ 1992; Skillman \& Kennicutt 1993). However, in the presence of temperature fluctuations, or in the limiting case of a two-temperature medium, the derived [O~III] electron temperatures are weighted toward the high-temperature medium. Application of a naively computed $T_e(O^{++})$ artificially decreases the total abundance by underestimating the oxygen abundance of both the $O^{++}$ zone and the $O^{+}$ zones. An additional underestimate of the total metal abundance results if a large fraction of the nebular medium has a low ionization parameter. This is because estimates of $T_e(O^{+})$ derived from $T_e(O^{++})$ via an empirical relation based on photoionization models (Equation~1) will be inappropriately high. They systematically overestimate $T_e(O^{+})$, and underestimate the oxygen contribution from the dominant $O^+$ zones. While photoionization models can simulate the expected ionization structure of an ideal Str\"omgren sphere under a variety of ionization conditions, they do not take into account emission from the extended ionized (predominantly low-U) filaments and shells that are seen in actual galaxies. Since important factors such as the porosity of H{\sc {II}}\ regions, the ionizing source of the extended shells and filaments, and the origin of the variation in DIG content from galaxy to galaxy are still uncertain, it is unlikely that these structures will soon be incorporated into photoionization models. \section{The Case of Spiral Galaxies} Like their low-mass counterparts, star-forming spiral galaxies also exhibit a range of ionization and temperature conditions throughout their ISM.
However, except for those with strong optical bars, they also show considerable radial chemical gradients, often exceeding an order of magnitude (Searle 1971; Villa-Costas \& Edmunds 1992; Zaritsky, Kennicutt \& Huchra 1994; Martin \& Roy 1994). Global spectra of spiral galaxies will necessarily encompass a wide range of metallicities. In this section we consider whether it is possible to use global galaxy spectra, even in the presence of true chemical variations, to measure a ``mean'' or ``indicative'' systemic metallicity. In high-metallicity H{\sc {II}}\ regions ($12+{\log}(O/H)\geq8.5$), the temperature-sensitive [O~III] $\lambda$4363 line is very weak, and it is seldom detected. Nevertheless, the oxygen abundance can be estimated using only the [O~II] $\lambda$3727, [O~III] $\lambda\lambda$4959,5007, and H$\beta$ lines using the method proposed by Pagel {\it et~al.\/}\ (1979) and subsequently developed by many authors. For high-metallicity ($12+{\log}(O/H)\geq8.5$) H{\sc {II}}\ regions, there exists a monotonic relationship between the ratio of observed collisionally-excited emission line intensities, \begin{equation} R_{23}\equiv(I_{3727}+I_{4959}+I_{5007})/I_{{\rm H}\beta}, \end{equation} \noindent and the oxygen abundance of the nebula. In practice, only one of the [O~III] lines is required, since $I_{5007}\simeq2.87I_{4959}$ for all temperatures and densities encountered in H{\sc {II}}\ regions. In the most metal-rich H{\sc {II}}\ regions, $R_{23}$ is a minimum because the high metal abundance produces efficient cooling, reducing the electron temperature and the level of collisional excitation. $R_{23}$ increases in progressively more metal-poor nebulae, since lower metal abundance means reduced cooling, elevated electron temperatures, and a higher degree of collisional excitation. However, the relation between $R_{23}$ and O/H becomes double-valued below about 12+log(O/H)=8.4 ($Z=0.3 Z_\odot$). Figure~8 illustrates this double-valued behavior. In Figure~8, we plot a variety of published calibrations between $R_{23}$ and O/H, including McGaugh (1991: solid line), Zaritsky, Kennicutt, \& Huchra (1994: dashed line), McCall, Rybski, \& Shields (1985: dotted line), Edmunds \& Pagel (1984: dash-dot line), and Dopita \& Evans (1986: dash-dot-dot line). On the upper, metal-rich branch of the relationship, the various calibrations show a dispersion of 0.2 dex at a fixed value of $R_{23}$. This dispersion represents the inherent uncertainties in the calibrations, which are based on photoionization modeling and observed H{\sc {II}}\ regions (see original works for details). For metal abundances progressively lower than 12+log(O/H)$\simeq$8.2, $R_{23}$ decreases once again. On this lower branch, although the reduced metal abundance further inhibits cooling and raises the electron temperatures, the intensities of the [O~II] and [O~III] lines drop because of the greatly reduced abundance of oxygen in the ISM. On the lower, metal-poor branch of the relationship, a second parameter, the ionization parameter $U$, becomes important, in addition to $R_{23}$. This can be seen in the offset between the three solid lines from the calibration of McGaugh (1991). A varying ionization parameter may lead to a similar value of $R_{23}$ for different oxygen abundances. In Figure~8 we represent the approximate ionization parameter in terms of the easily observable line ratio [O~III]/[O~II].
The solid lines show the oxygen abundance as a function of $R_{23}$ for [O~III]/[O~II] = 10, 1.0, and 0.1, which correspond (very roughly) to ionization parameters, $U$, of $10^{-1}$, $10^{-2}$, and $10^{-4}$. Figure~8 serves as a useful diagnostic diagram for finding the oxygen abundances of nebulae when the electron temperature is not measured directly. The typical uncertainties using this empirical oxygen abundance calibration are $\pm0.15$ dex, but are larger ($\pm$0.25 dex) in the turn-around region near 12+log(O/H)$\sim$8.4 when ${\log}(R_{23})>0.7$. The most significant uncertainty involves deciding whether an observed object lies on the upper, metal-rich branch of the curve or on the lower, metal-poor branch. For instance, a measurement of $\log(R_{23})=0.0$ could indicate either an oxygen abundance of 12+log(O/H)$\simeq$7.2 or 12+log(O/H)$\simeq$9.1. Knowledge of either the luminosity of the galaxy or the [N~II] $\lambda$6584 intensity can help break the degeneracy. Because star-forming galaxies of all types {\it in the local universe} follow a luminosity-metallicity correlation (e.g., Lequeux {\it et~al.\/}\ 1979; Talent 1980; Skillman, Kennicutt, \& Hodge 1989; ZKH), objects more luminous than $M_B\simeq-18$ have metallicities larger than 12+log(O/H)$\simeq$8.3, placing them on the upper branch of the curve. However, it has not yet been established whether galaxies at earlier epochs conform to the same relationship as local galaxies. An even better discriminator is the ratio [O~III] $\lambda$5007/[N~II] $\lambda$6584, which is usually less than $\sim$100 for galaxies with 12+log(O/H)$>$8.3 on the metal-rich branch (Edmunds \& Pagel 1984). This is because objects which are considerably enriched in oxygen are generally more nitrogen-rich as well, while the most metal-poor galaxies on the lower branch of the $R_{23}$ relation have very weak [N~II] lines. Figure~A1(b) of Edmunds \& Pagel (1984) shows the monotonic sequence of the ratio [O~III] $\lambda$5007/[N~II] $\lambda6584$ as a function of metallicity. If the [N~II] $\lambda6584$ line can be measured, we believe it will provide the most useful way to break the $R_{23}$ degeneracy in the absence of a measured electron temperature. \subsection{Individual H{\sc {II}}\ Regions and Global Spiral Spectra} The compilation of H{\sc {II}}\ region spectra in spiral galaxies presented by Zaritsky, Kennicutt, \& Huchra (1994; ZKH) provides an excellent dataset to explore the utility of global metallicity measurements in galaxies with chemical gradients. We compiled a subset of spectra for 194 H{\sc {II}}\ regions in 22 galaxies from ZKH and Kennicutt \& Garnett (1996). We use our own remeasurements of the emission line ratios in the subsequent analysis. We estimate the integrated [O~II]/H$\beta$\ and [O~III]/H$\beta$\ emission line ratios for each galaxy in the following way. Spectra of the individual H{\sc {II}}\ regions from ZKH are used to define the range of [O~II]/H$\beta$\ and [O~III]/H$\beta$\ as a function of galactocentric radius. For NGC~5457 (M101) we use data from Kennicutt \& Garnett (1996). In order to provide a meaningful estimate, we restrict the analysis to galaxies with at least 8 measured H{\sc {II}}\ regions, spanning most of the radial range over which significant star formation takes place, and for which data on the radial distribution of H$\alpha$\ emission are available. The actual number of H{\sc {II}}\ regions measured ranges from 8 in NGC~4725 and NGC~5033 to 42 in M~101. Typical values lie in the range 10--20.
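Before describing the averaging procedure, we note that the $R_{23}$ index of Equation~5 and an upper-branch calibration of the kind applied below are simple to encode. The following Python sketch is illustrative only; the polynomial coefficients are as commonly quoted from ZKH and should be verified against the original before quantitative use:
\begin{verbatim}
import math

def R23(I_3727, I_4959, I_5007, I_Hbeta):
    """Equation 5."""
    return (I_3727 + I_4959 + I_5007) / I_Hbeta

def logOH_upper_branch(r23):
    """Metal-rich branch only (12+log(O/H) >~ 8.4); coefficients
    as commonly quoted from ZKH (verify against the original)."""
    x = math.log10(r23)
    return 9.265 - 0.33*x - 0.202*x**2 - 0.207*x**3 - 0.333*x**4

print(logOH_upper_branch(3.0))   # ~9.0 on the metal-rich branch
\end{verbatim}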
We subdivide each disk into 5--13 equal-sized radial zones, with the number of zones depending on the number and distribution of H{\sc {II}}\ regions; the mean line ratios in each zone are derived from the averages of the individual nebular line ratios. In a few instances a zone did not contain a measured H{\sc {II}}\ region, and in such cases the local line ratios were interpolated from the two adjacent zones. In order to derive disk-integrated line ratios, we compute a weighted radial average, with the spectrum at each radius weighted by the relative H$\alpha$\ surface brightness at each radius and the area contained in each zone. The H$\alpha$\ radial profiles are derived from H$\alpha$\ CCD images from Martin \& Kennicutt (1998) and unpublished imaging from the original ZKH program, using an ellipse-fitting surface photometry routine. We sum the [O~II]/H$\beta$\ and [O~III]/H$\beta$\ ratios for each galaxy to derive an integrated $R_{23}$ index for each galaxy, as given in Equation~5. In the calculations presented here we use the reddening-corrected H{\sc {II}}\ region line strengths, convolved with the observed (uncorrected) H$\alpha$\ emission distributions, because this most faithfully duplicates the actual weighting when the integrated spectrum of a galaxy is observed. Table~4 lists the integrated line strengths and $R_{23}$ values for each galaxy, along with the number of H{\sc {II}}\ regions used to derive these average values.
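The weighting step can be summarized in a short numerical sketch (ours; the variable names are illustrative rather than taken from the original reduction):

\begin{verbatim}
# A schematic numpy sketch of the disk-integrated line ratio: each
# radial zone's mean ratio (e.g. [O II]/Hbeta) is weighted by its
# relative Halpha surface brightness times the area of the zone.
import numpy as np

def integrated_ratio(zone_ratio, zone_halpha_sb, zone_area):
    w = np.asarray(zone_halpha_sb) * np.asarray(zone_area)
    return np.sum(w * np.asarray(zone_ratio)) / np.sum(w)
\end{verbatim}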
In Figure~9 we plot the integrated [O~II] (log [O~II]/H$\beta$) versus [O~III] (log [O~III]/H$\beta$) line strengths for each galaxy determined in this way (large triangles), as well as the [O~II] and [O~III] strengths for the individual H{\sc {II}}\ regions in the same galaxy sample (dots). This comparison shows that most of the integrated spectra lie along the same excitation/abundance sequence that is defined by the individual H{\sc {II}}\ regions. Such a correspondence would not necessarily be predicted {\it a priori}, because the H{\sc {II}}\ region abundance sequence is not entirely linear, and one might expect the average spectra to be systematically displaced from the sequence. The tendency for the integrated spectra to lie on the excitation sequence partly reflects the limited range of abundance and excitation in many disks (especially barred systems), and the radial concentration of star formation in others.

It is clear from Figure~9 that the integrated [O~II] and [O~III] line strengths mimic those of individual H{\sc {II}}\ regions, but how do the corresponding ``mean'' abundances compare to the actual abundances in the disks? In order to address this question we apply the $R_{23}$ calibration of ZKH to the integrated $R_{23}$ values in Table~4, to estimate a mean abundance for each galaxy. The ZKH calibration is only valid for H{\sc {II}}\ regions which lie on the metal-rich branch of the $R_{23}$ relation (see the discussion above in Section~3), but this condition is satisfied for the sample considered here, with the exception of the outermost H{\sc {II}}\ regions in M101, where $T_e$-based abundances were used. Figure~10 illustrates how well the weighted-average mean nebular spectrum characterizes the overall galaxy abundance. In Figure~10 we compare the empirical abundances derived from the integrated spectra with the actual disk abundances from ZKH, measured at 0.4 times the corrected isophotal radius (25.0 mag arcsec$^{-2}$ in $B$). The latter abundances are listed in Table~4, along with two other characteristic abundances from ZKH: that measured at a fixed linear radius of 3~kpc (using distances given in ZKH), and the abundance at 0.8 exponential scale lengths (also given in ZKH).

Figure~10 shows that the integrated [O~II] and [O~III] emission-line strengths provide an excellent estimate of the mean abundance of the disk, corresponding roughly to the value at 40\%\ of the optical radius of the disk (indicated for reference by the solid line in Figure~10). The dispersion about the mean relation is very small, $\pm$0.05 dex. On the basis of these results we conclude that the beam-smearing effects from sampling large numbers of H{\sc {II}}\ regions in the disk have a small effect on characterizing the mean abundance of galaxies, even those with substantial abundance gradients, relative to the intrinsic uncertainties in the $R_{23}$ method itself.

Although these results offer assurances that the effects of averaging the composite spectra of large numbers of discrete H{\sc {II}}\ regions do not seriously hamper the measurement of a mean disk abundance, there are other systematic effects, not incorporated into our simulations, that may affect the derived abundances more significantly. One such effect is dust reddening, which will tend to depress the flux of [O~II] $\lambda$3727 relative to [O~III] and H$\beta$\ and tend to cause the mean abundance to be overestimated (for galaxies on the upper branch of the $R_{23}$--O/H relation). In our simulations we used the measured (i.e., reddened) line strengths when simulating the integrated spectra, so this effect is already incorporated into the comparisons shown in these Figures. To test the effects of reddening on the integrated spectra we carried out an identical comparison using reddening-corrected spectra, and the resulting O/H abundances change only slightly, increasing by 0.06 dex on average.

A potentially more serious effect may be the contribution to the integrated spectrum of diffuse ionized gas, as discussed in Section~2.3. This gas tends to be characterized by stronger [O~II] emission and weaker [O~III] emission than discrete H{\sc {II}}\ regions of the same abundance (e.g., Hunter 1994; Martin 1997). However, the same data show that the sum of the [O~II] and [O~III] line strengths ($R_{23}$) does not change substantially, so including the diffuse gas may preserve the information on mean abundances, at least in the metallicity range probed here. However, it would be useful to test this conclusion directly using actual integrated spectra of spiral galaxies. Finally, the effects of stellar absorption of H$\beta$\ in integrated spectra of spirals can be very significant (see Section~4.2) and must be corrected for in the measurements of the integrated spectrum (Kennicutt 1992). Overall, the practicalities of measuring integrated emission-line strengths in spiral galaxies probably pose a much more challenging problem than the actual interpretation of the composite nebular spectrum.

\section{The Effect of Underlying Stellar Absorption and Low Signal-to-Noise Spectra}

In the previous two sections we show that global galaxy spectra can provide reliable information on the chemical properties of distant galaxies which exhibit high equivalent width emission lines. However, except for the most vigorously star-forming objects, the integrated spectra of most large galaxies are dominated by stellar continuum, rather than nebular emission.
In some galaxies with strong post-starburst stellar populations, the nebular Balmer line emission is erased by stellar atmospheric absorption. Only 7 of the 24 global spectra of normal spiral and elliptical galaxies obtained by Kennicutt (1992) show the measurable [O~II], [O~III], and H$\beta$ emission lines needed for empirical oxygen abundance determinations. However, a larger fraction of irregular and peculiar spiral galaxies show the requisite emission lines. In this section, we discuss the effects of low signal-to-noise spectra and stellar absorption which limit the precision of nebular abundance measurements from global spectra.

\subsection{Uncertainties due to Low Signal-to-Noise}

Because the empirical calibration between $R_{23}$ and oxygen abundance has an intrinsic uncertainty of 0.2 dex (40\%), the total error budget will be dominated by this uncertainty even when the observed emission line spectra have low signal-to-noise. For example, a spectrum with a signal-to-noise of 8:1 (12\%) on each of the [O~II], [O~III], and H$\beta$ emission lines will yield an uncertainty of 25\% on $R_{23}$, or $\delta({\log}R_{23})\simeq$0.1. This observational uncertainty propagates into an O/H uncertainty of 0.1--0.2 dex, depending on the local slope of the calibration curve in Figure~8. This quantity is smaller than, or comparable to, the uncertainty of the calibration curve, $\sim$0.2 dex, based on photoionization modeling. Thus, to within the accuracy of the strong-line calibration, even modest signal-to-noise spectra can yield useful indications of a galaxy's gas-phase oxygen abundance.
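This error estimate is easy to check with a one-line propagation exercise (a sketch of ours, assuming comparable fluxes in the three lines; the quoted 25\% corresponds to the conservative linear sum of the errors):

\begin{verbatim}
# A back-of-the-envelope sketch of how 8:1 (~12%) flux errors on
# [O II], [O III], and Hbeta propagate into R_23, assuming comparable
# line fluxes.
import numpy as np

f = 0.125                            # 1/8 fractional error per line
quad = np.hypot(f / np.sqrt(2), f)   # ~0.15: independent errors
lin = 2 * f                          # ~0.25: worst-case linear sum
for frac in (quad, lin):
    print(f"{frac:.0%} on R23 -> dlogR23 = {frac/np.log(10):.2f} dex")
\end{verbatim}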
\subsection{Uncertainties due to Stellar Balmer Line Absorption}

In high signal-to-noise nebular spectra, the observed ratios of H$\alpha$/H$\beta$, H$\gamma$/H$\beta$, and H$\delta$/H$\beta$, compared to theoretical values, simultaneously constrain the amount of reddening from intervening dust, and the degree to which the stellar atmospheric absorption lines reduce the measured nebular Balmer emission.\footnote{This assumes that the underlying EW of the Balmer absorption is the same for the strongest Balmer lines. This assumption appears to be approximately valid for most star-forming populations (Olofsson 1995).} In practice, global galaxy spectra, especially at high redshift, will seldom have the signal-to-noise and wavelength coverage necessary to decouple these effects. In the absence of high-quality Balmer line measurements, we recommend that a statistical correction be applied to the measured strength of the Balmer emission lines, principally H$\beta$. For the integrated galaxy spectra presented here, and for the global spectra presented in Kennicutt (1992), the amount of underlying stellar absorption, Abs(H$\beta$), runs from 1~\AA\ to 6~\AA, with a mean of 3~\AA\ (see also McCall, Rybski, \& Shields 1985; Izotov, Thuan, \& Lipovetsky 1994 for a sample of observations). We recommend that a statistical correction of $+3\pm2$ \AA\ be applied to H$\beta$ measurements from global galaxy spectra. When the signal-to-noise ratio of H$\beta$ is low, the uncertainty of $\pm$2 \AA\ will act as an additional error term. For example, for spectra where H$\beta$ has an equivalent width of 8~\AA, the statistical $+3\pm2$ \AA\ correction on EW(H$\beta$) introduces an additional 25\% uncertainty on the strength of H$\beta$. This uncertainty must be accounted for in the total error budget.

\subsection{Uncertainty due to Un-measured Ionization Species}

An empirical measurement of the oxygen abundance using the method outlined in Section~3 requires measurements of the emission line strengths for both of the dominant species of oxygen, [O~III] $\lambda\lambda$4959,5007 and [O~II] $\lambda$3727, along with H$\beta$. However, complications due to limited wavelength coverage, or contamination from night sky lines and atmospheric absorption bands, may preclude measurement of the necessary emission lines for objects at unfavorable redshifts. Fortunately, measurement of either [O~III] $\lambda$4959 or [O~III] $\lambda$5007, along with [O~II] $\lambda$3727 and H$\beta$, is sufficient to measure oxygen abundances. The fixed theoretical ratio of $\lambda$5007/$\lambda$4959 is $\sim$2.9 for all electron temperatures and densities encountered in photoionized H{\sc {II}}\ regions (although ratios higher than 3 are sometimes measured in H{\sc {II}}\ regions, and are more commonly seen in supernova remnants). Thus, the strength of one line can be computed from the other, and the accuracy of the O/H determination is not diminished.

Oxygen abundances become highly uncertain in the case where [O~II] $\lambda$3727 is not measured. [O~II] $\lambda$3727 is a crucial diagnostic for indicating the ionization parameter, the fraction of diffuse ionized gas, and possible contamination from AGN-like excitation mechanisms. $O^+$ may be the dominant ionization state in low-ionization H{\sc {II}}\ regions. The global spectra in Kennicutt (1992) indicate that $O^+$ is the dominant ionization species in more than half of the 24 galaxies with suitable emission lines. Ratios of $O^+/O^{++}$ range from 0.10 to 4.0. Neglecting the contribution from $O^+$ may result in errors as small as 10\% in the case of high-ionization nebulae, up to a factor of 4 in galaxies dominated by low-ionization gas. However, the metallicity-excitation sequence observed in Figure~9 between [O~III]/H$\beta$ and [O~II]/H$\beta$ suggests that, {\it for the majority of normal galaxies on the metal-rich branch of the $R_{23}$ relation}, a measurement of [O~III]/H$\beta$ can constrain the value of [O~II]/H$\beta$ to within a factor of 3. The converse is not true. An observed [O~II]/H$\beta$ ratio does not correspond to any well-constrained [O~III]/H$\beta$ ratio, since the scatter in [O~III]/H$\beta$ at a given [O~II]/H$\beta$ exceeds a factor of 10. Attempts to estimate oxygen abundances without an [O~II]/H$\beta$ ratio must carry very large uncertainties of 0.5 dex, while attempts to measure oxygen abundances without an [O~III]/H$\beta$ ratio have uncertainties exceeding 1 dex.

\section{Prospects for Measuring the Metallicities of High-Redshift Emission Line Galaxies}

We have shown that global galaxy spectra which are dominated by normal H~II regions can provide reliable information on the gas-phase chemical abundances, even in objects with variable gas temperature and chemical properties. In summary, chemical analysis using nebular optical emission lines falls into four regimes.

1) {\it [O~III] $\lambda$4363 is detected along with [O~II] $\lambda$3727, [O~III] $\lambda\lambda$4959,5007 and H$\beta$}: In the best case scenario, the [O~III] $\lambda$4363 line can be used to derive an electron temperature for the emitting gas, and chemical abundance ratios can be estimated using standard nebular analysis techniques (e.g., Osterbrock 1989).
In local galaxies, [O~III] $\lambda$4363 is generally detected only in galaxies with 12+log(O/H)$\leq$8.4 ($Z\leq0.3~Z_\odot$). Based on spatially-resolved and global spectra of local irregular galaxies which are chemically homogeneous but contain varying temperature and ionization conditions, we find that the [O~III] $\lambda$4363 line strength provides a firm upper limit on the mean electron temperature of the ionized gas. The oxygen abundance derived using this $T_e$ is therefore a firm lower limit. Our empirical results in local galaxies, combined with modeling realistic mixtures of H{\sc {II}}\ regions with varying physical conditions, suggest that a statistical correction factor of $\Delta(O/H)=+0.1$ or $\Delta{T_e}=-1000$ K be applied to physical parameters derived from global galaxy spectra.

2) {\it [O~III] $\lambda$4363 is not detected but [O~II] $\lambda$3727, [O~III] $\lambda\lambda$4959,5007 and H$\beta$ are measured}: Most of the time, the temperature-sensitive [O~III] $\lambda$4363 line will not be detected, due to either limited signal-to-noise or intrinsically weak lines in metal-rich nebulae. In this case, the empirical strong-line calibration in Figure~8 can still be used to derive an oxygen abundance to within $\pm$0.2 dex (the uncertainty in the model calibrations) if the [O~III] $\lambda\lambda$4959,5007, [O~II] $\lambda$3727, and H$\beta$ lines are measured with a signal-to-noise of at least 8:1. A major difficulty with this method is that the relation between oxygen abundance and $R_{23}$ is double-valued, requiring some assumption or rough {\it a priori} knowledge of a galaxy's metallicity in order to locate it on the appropriate branch of the curve. We suggest that the [N~II]/[O~III] line ratios may be useful in breaking this degeneracy, if they can be measured.

Analytic fits to the curves in Figure~8 may assist in computing the oxygen abundances from measured line ratios. Zaritsky, Kennicutt, \& Huchra (1994) provide a polynomial fit to their average of three previous calibrations shown in Figure~8. This mean relation is good for the upper, metal-rich regime only: \begin{equation} 12+log(O/H)=9.265-0.33x - 0.202x^2 - 0.207x^3 - 0.333x^4 \ (ZKH~1994) \end{equation} \noindent where \begin{equation} x\equiv\ \log{R_{23}}\equiv\ \log\Biggr({{[O~II] \lambda3727 + [O~III] \lambda\lambda4959,5007}\over{H\beta}}\Biggr) \end{equation} McGaugh (1991) computed a more extensive calibration based on a set of photoionization models which take into account the effects of varying ionization parameter in both the metal-rich and metal-poor regimes, as shown in Figure~8. McGaugh (1998) provides analytic expressions for the metal-poor (lower) branch, \begin{equation} 12+log(O/H)_{l} = 12 -4.944+0.767x+0.602x^2-y(0.29+0.332x-0.331x^2), \end{equation} \noindent and for the metal-rich (upper) branch, \begin{eqnarray} 12+log(O/H)_{u} = 12 -2.939-0.2x-0.237x^2-0.305x^3-0.0283x^4- \cr y(0.0047-0.0221x-0.102x^2-0.0817x^3-0.00717x^4), \end{eqnarray} \noindent where \begin{equation} y \equiv\ \log(O_{32}) \equiv\ \log\Biggr( {{[O~III]\lambda\lambda4959,5007}\over{[O~II]\lambda3727}}\Biggr) \end{equation} These analytic fits to the semi-empirical calibration of McGaugh (1991, 1998) reproduce the models to within an RMS of $\leq$0.05 dex.
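For convenience, the fits above are easy to transcribe into code; the sketch below (ours; verify against the original papers before use) simply evaluates the quoted expressions with $x=\log R_{23}$ and $y=\log O_{32}$:

\begin{verbatim}
# A direct transcription of the analytic calibrations above;
# x = log10(R_23), y = log10(O_32).
import numpy as np

def oh_zkh(x):
    # ZKH (1994), upper (metal-rich) branch only
    return 9.265 - 0.33*x - 0.202*x**2 - 0.207*x**3 - 0.333*x**4

def oh_mcgaugh_lower(x, y):
    return (12 - 4.944 + 0.767*x + 0.602*x**2
            - y*(0.29 + 0.332*x - 0.331*x**2))

def oh_mcgaugh_upper(x, y):
    return (12 - 2.939 - 0.2*x - 0.237*x**2 - 0.305*x**3 - 0.0283*x**4
            - y*(0.0047 - 0.0221*x - 0.102*x**2 - 0.0817*x**3
                 - 0.00717*x**4))

x, y = np.log10(5.0), 0.0          # e.g. R_23 = 5, O_32 = 1
print(oh_zkh(x), oh_mcgaugh_upper(x, y), oh_mcgaugh_lower(x, y))
\end{verbatim}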
3) {\it Only [O~III] $\lambda\lambda$4959,5007 and H$\beta$ are measured}: Very crude oxygen abundances may be derived from the ratio [O~III] $\lambda\lambda$4959,5007/H$\beta$ even if [O~III] $\lambda$4363 and [O~II] $\lambda$3727 are not measured. This requires both an assumption about the object's location on the double-valued $R_{23}$ relation, {\it and} an assumption about the ionization parameter, which leads to an estimate of the [O~II] $\lambda3727$ line strength. Uncertainties in this case must exceed 0.5 dex.

4) {\it The spectrum has a signal-to-noise less than 8:1, or one of the necessary emission lines is not measured}: In this worst-case scenario, the uncertainties on the O/H ratio will exceed a factor of 3 (0.5 dex). Conclusions about the chemical nature of the object under consideration will be speculative at best.

Given that star-forming regions appear to be plentiful at higher redshifts, the prospects appear good for measuring chemical abundances in distant galaxies, even with the coarse spatial resolution of ground-based telescopes. The coming generation of near-infrared spectrographs on large telescopes will make it possible to trace the chemical evolution of the universe using emission line regions in a manner complementary to absorption-line techniques.

\acknowledgments We are grateful to Mauricio Navarrete for his expertise with observations and calibration at CTIO and to Sabine M\"ohler for assistance at Calar Alto. We appreciate a copy of the electronic galaxy spectra from Dennis Zaritsky, and helpful conversations with Max Pettini, Crystal Martin, and Stacy McGaugh. H.~A.~K. and R.~C.~K. thank the Aspen Center for Physics for the opportunity to collaborate on this research during a three-week workshop on star formation in June 1998. H.~A.~K. appreciates the hospitality of the 1998 Guillermo Haro International Program for Advanced Studies in Astrophysics at the Instituto Nacional de Astrof\'isica, \'Optica y Electr\'onica (INAOE) in Puebla, Mexico, where this work was completed. R.~C.~K. and J.~P. were supported by NSF grant AST-9419150. Support for H.~A.~K. was also provided by NASA through grant \#HF-01094.01-97A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. for NASA under contract NAS 5-26555.
1,116,691,500,579
arxiv
\section{Introduction} Extra dimensions provide an approach to modifying gravity without abandoning the form of the action proposed in Einstein's general relativity. From a phenomenological point of view, we can avoid constraints coming from standard model observations by considering a brane-world scenario, that is, we are living on a hypersurface (3-D) in a higher-dimensional spacetime. From the theoretical point of view, string theory predicts a boundary layer, a brane, on which the endpoints of open strings are confined \cite{Polchinski:1995mt}. The possibility that we may be living on a brane raises many questions, such as what gravity looks like in this setting. Brane worlds have also been studied in attempts to solve the much-debated hierarchy problem, and in order to understand cosmology, such as inflation and dark energy. In this contribution on the consequences of the brane world in 4-D, we study one of the most famous models, the DGP model \cite{Dvali:2000hr}.

We know that the physics of black holes and gravitational collapse is complicated, especially because the matter is localized on the brane while the gravitational field can access the extra dimension, and also because of the nonlocal effects of the bulk on the brane. Part of the problem is to find a solution on the brane. We cannot necessarily embed this solution into a bulk, but some information on the global solution can be understood and some intuition can be developed. The solution can be smoothly continued into the bulk via the ADM formalism, where the solution on the brane can be considered as initial data. At least a local solution of the bulk exists, even if the global solution is not guaranteed.

The model is defined as an empty five-dimensional space (not necessarily Minkowski) in which all the energy-momentum is localized on the four-dimensional brane. The theory is described by the following action in the vacuum \begin{align} \label{eq:action} \mathcal{S}=M_{(5)}^3\int {\rm d^5x} \sqrt{-g}R +\frac{M_{(4)}^2}{2}\int {\rm d^4x}\sqrt{-h} R, \end{align} where $(g,h)$ are respectively the metrics of the bulk and the brane, and $R$ is the intrinsic curvature in 5-D and 4-D respectively. The Gibbons-Hawking term is implicit in (\ref{eq:action}). Variation of the action gives the following equations of motion \begin{align} \text{Bulk equation:}\quad R_{\mu\nu}&=0,\\ \text{Brane equation:}\quad G_{\mu\nu}&=\frac{1}{r_c}\Bigl(K_{\mu\nu}-Kh_{\mu\nu}\Bigr), \end{align} where $r_c=M_{(4)}^2/2M_{(5)}^3$ is the crossover scale that governs the transition between four-dimensional and five-dimensional behaviour.

Following \cite{Shiromizu:1999wj} and \cite{Kofinas:2002gq}, we can rewrite an equation on the brane in terms of the metric $h$ only. For this we define the tensor \cite{Kofinas:2002gq} \begin{align} L_{\mu\nu}=K_{\mu\nu}-\frac{K}{2}h_{\mu\nu}+\frac{1}{2r_c}h_{\mu\nu}, \end{align} which gives on the brane \begin{align} \label{Eq:brane} G_{\mu\nu}+\frac{3}{2r_c^2}h_{\mu\nu}=\frac{1}{r_c}\Bigl(L_{\mu\nu}+\frac{L}{2}h_{\mu\nu}\Bigr), \end{align} where $L$ is a solution of the following algebraic equation (Gauss equation) \begin{align} \label{Eq:L} L_\mu^{~\alpha}L_{\alpha\nu}-\frac{L^2}{4}h_{\mu\nu}+\frac{3}{4r_c^2}h_{\mu\nu}=-E_{\mu\nu}, \end{align} with $L$ the trace of the tensor and $E$ the electric part of the Weyl tensor. In the following, we will focus on static spherically symmetric solutions in the vacuum of the form \begin{align} \label{eq:metric} {\rm d}s^2=-A(r){\rm d}t^2+\frac{{\rm d}r^2}{B(r)}+r^2{\rm d}\Omega^2. \end{align}
Therefore we can decompose the tensor $E$ irreducibly with respect to a $4$-velocity field $u^\mu$ \cite{Maartens:2000fg,Dadhich:2000am} \begin{align} \label{Eq:E} E_{\mu\nu}=\rho(r)\Bigl(u_{\mu}u_{\nu}+\frac{1}{3}q_{\mu\nu}\Bigr)+P(r)\Bigl(r_{\mu}r_{\nu}-\frac{1}{3}q_{\mu\nu}\Bigr), \end{align} where $q_{\mu\nu}=h_{\mu\nu}+u_{\mu}u_{\nu}$ projects orthogonally to the timelike vector. Here $(\rho,P)$ are respectively an effective energy density and an anisotropic stress on the brane arising from the $5$-D gravitational field.

It is easy to see from (\ref{Eq:brane},\ref{eq:metric}) that $L$ is diagonal, hence we write $L^\mu_{~\nu}=\text{diag} \Bigl(L_0,L_1,L_2,L_3\Bigr)$. From the equations (\ref{Eq:L},\ref{Eq:E}) we have the following algebraic equations \begin{align} \label{Eq:L1} L_1&=\pm\sqrt{L_0^2-\frac{2}{3}(2\rho+P)},\\ \label{Eq:L2} L_2&=L_3=\pm\sqrt{L_0^2-\frac{1}{3}(4\rho-P)}, \end{align} where the equation $L_2=L_3$ comes from Eq.(\ref{Eq:brane}). Also from the brane equation (\ref{Eq:brane}), we have \begin{align} -A\frac{\rm d}{\rm d r}\Bigl(\frac{B}{A}\Bigr)=\frac{r}{r_c}\Bigl(L_1-L_0\Bigr). \end{align}

Hereafter, we will assume that radial photons experience no acceleration, i.e., that the velocity of light in the radial direction remains constant. Therefore we have \cite{Dadhich:2012pd} $A=B$, which implies $L_1=L_0$; hence from (\ref{Eq:L1}) we have $2\rho+P=0$. This constraint between the density and the pressure is the same as in the absence of the induced curvature term \cite{Dadhich:2000am}. Finally, we have from (\ref{Eq:L}) \begin{align} 4L_0^2+2P\pm 8L_0\sqrt{L_0^2+P}=\frac{3}{r_c^2}. \end{align} Following \cite{Kofinas:2002gq}, we define $v=2\pm\sqrt{L_0^2+P}/L_0$. Then, it is straightforward to check that \begin{align} \label{Eq:L02} L_0^2&=\frac{3}{2r_c^2(v^2-3)},\\ \label{Eq:P} P&=\frac{3}{2r_c^2}\frac{(v-1)(v-3)}{v^2-3}. \end{align} We see that we need $v^2>3$. The only undetermined function is $v$; all the other quantities, such as $(\rho,P,K_{\mu\nu})$, are related to $v$.

Considering now the Bianchi identity \begin{align} \nabla_\mu L^\mu_{~\nu}+\frac{1}{2}\nabla_\nu L=0, \end{align} we can close the system of equations and get an equation for $v$ \begin{align} \frac{{\rm d}v}{{\rm d} r}+\frac{2}{3r}(v-3)(v^2-3)=0, \end{align} which gives \begin{align} \label{Eq:v} \frac{r^4}{r_c^2}=Q^2 \frac{|v-\sqrt{3}|^{(\sqrt{3}+1)/2}}{|v+\sqrt{3}|^{(\sqrt{3}-1)/2}|v-3|}, \end{align} where $Q$ is an integration constant. The coefficient $r_c^2$ is fixed in order to recover the result $P\simeq 1/r^4$ \cite{Dadhich:2000am} in the limit $r_c\rightarrow 0$, which would correspond to the Reissner-Nordstr\"om solution on the brane with tidal charge. Eq.(\ref{Eq:v}) gives $v(r)$, which from (\ref{Eq:L02}) gives $L_0(r)$; therefore we can solve the final equation (\ref{Eq:brane}) \begin{align} \frac{rB'(r)+B(r)-1}{r^2}=-\frac{3}{2r_c^2}\pm \frac{\sqrt{3/2}}{r_c^2}\frac{v}{\sqrt{v^2-3}}, \end{align} where the sign $\pm$ reflects the sign of $L_0$.

We have found 3 different solutions depending on the range of $v$. In the first solution we have $v<-\sqrt{3}$, the second solution corresponds to $\sqrt{3}<v<3$ and the last to $v>3$. Accordingly, the range of $r$ will be respectively $r>\sqrt{Qr_c}$, $r>0$ and $r>\sqrt{Qr_c}$. The second and the third solutions are identical except for the range of $r$; hence we keep only the second solution, which covers the full spacetime (brane). The first solution does not cover the full spacetime $r>\sqrt{Qr_c}$ and therefore cannot describe a black hole. This solution will not be studied in this paper.
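Since Eq.(\ref{Eq:v}) defines $v(r)$ only implicitly, in practice it must be inverted numerically on the retained branch; a minimal sketch (ours, with $Q$ and $r_c$ as free inputs) is

\begin{verbatim}
# A numerical sketch inverting Eq. (v) on the physical branch
# sqrt(3) < v < 3, where the right-hand side runs monotonically from 0
# (v -> sqrt(3)) to infinity (v -> 3), so the root is unique.
import numpy as np
from scipy.optimize import brentq

S3 = np.sqrt(3.0)

def rhs(v, Q=1.0):
    return (Q**2 * abs(v - S3)**((S3 + 1) / 2)
            / (abs(v + S3)**((S3 - 1) / 2) * abs(v - 3)))

def v_of_r(r, Q=1.0, rc=1.0, eps=1e-12):
    return brentq(lambda v: rhs(v, Q) - r**4 / rc**2,
                  S3 + eps, 3.0 - eps)
\end{verbatim}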
Hence we have 2 branches of the solution \begin{align} \label{metric} A\equiv B=1-\frac{2m}{r}-\frac{r^2}{2r_c^2}\pm \frac{r^2}{2r_c^2}f(v), \end{align} where $m$ is an integration constant, $v$ is the solution of the algebraic equation (\ref{Eq:v}) and $f$ can be written in terms of Gauss's hypergeometric function, \begin{widetext} \begin{align} f(v)=\sqrt{6}\frac{v-2}{\sqrt{v^2-3}}-\frac{4\sqrt{2}(\sqrt{3}-1)(v-3)^2}{5(v-\sqrt{3})^{3/2}(v+\sqrt{3})^{1/2}}~_2F_1\Bigl[1,\frac{3(3-\sqrt{3})}{8},\frac{9}{4},(\sqrt{3}-1)\frac{v-3}{v-\sqrt{3}}\Bigr]. \end{align} \end{widetext} $f$ is a concave-down, monotonically increasing function of $r$. It is negative for $r\lesssim 0.78 \sqrt{Qr_c}$ and positive otherwise, and $\lim_{r\rightarrow \infty} f=1$.

The horizon structure of the black hole on the brane depends on the branch considered. The negative branch (negative sign in equation (\ref{metric})) has a black hole horizon and a cosmological horizon because of the de Sitter structure at large distances, while the positive branch, which is asymptotically flat, has a single horizon.

It is interesting to study the asymptotic behaviour of this solution. We have at large distances \begin{align} \label{linear} f(r)= 1-\frac{2q^2r_c^2}{r^4}+\frac{2q^4r_c^4}{5r^8}-\frac{4q^6r_c^6}{9r^{12}}+O(\frac{1}{r^{16}}), \end{align} where we have redefined the integration constant $q=\sqrt{3/2}(3-\sqrt{3})^{(\sqrt{3}-1)/4}(3+\sqrt{3})^{-(\sqrt{3}+1)/4} Q$. We see that only the positive branch has a smooth limit as $r_c\rightarrow 0$ (the Randall-Sundrum model limit), and as such we will refer to it as the RS branch. In contrast, the negative branch is not smooth as $r_c\rightarrow 0$ and represents a distinct new feature of DGP; this is the DGP branch, also known as the self-accelerating branch. The RS branch converges to $A=1-2m/r-q^2/r^2$, which is not a Reissner-Nordstr\"om spacetime. In fact the tidal charge always has the same sign, which is physically more natural for a brane solution \cite{Dadhich:2000am}: the tidal charge strengthens the gravitational field. This is why our solution does not have a Cauchy horizon, even in the limit $r_c\rightarrow 0$. Also, as $r_c\rightarrow \infty$, we recover the Schwarzschild solution ($1-2m/r$) for the 2 branches; the solution is the same as in Einstein's theory, and there is no vDVZ \cite{vanDam:1970vg,Zakharov:1970cc} discontinuity. The continuity of the theory is restored because the nonlinear effects were taken into account, while we would conclude that there is a discontinuity if we used the linearized solution at large distances (\ref{linear}).
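The hypergeometric representation is straightforward to evaluate numerically; the sketch below (ours) combines it with the root-finder {\tt v\_of\_r} above to assemble the metric function $A(r)$, with {\tt branch} $=\pm1$ selecting the RS or DGP branch.

\begin{verbatim}
# A sketch of f(v) via scipy's Gauss hypergeometric function and of
# the metric function A(r); v_of_r is defined in the previous sketch.
# On the branch sqrt(3) < v < 3 the 2F1 argument is <= 0.
import numpy as np
from scipy.special import hyp2f1

S3 = np.sqrt(3.0)

def f_of_v(v):
    z = (S3 - 1) * (v - 3) / (v - S3)
    return (np.sqrt(6) * (v - 2) / np.sqrt(v**2 - 3)
            - 4 * np.sqrt(2) * (S3 - 1) * (v - 3)**2
            / (5 * (v - S3)**1.5 * np.sqrt(v + S3))
            * hyp2f1(1.0, 3 * (3 - S3) / 8, 9 / 4, z))

def A(r, m, Q=1.0, rc=1.0, branch=+1):
    v = v_of_r(r, Q, rc)
    return (1 - 2 * m / r - r**2 / (2 * rc**2)
            + branch * r**2 / (2 * rc**2) * f_of_v(v))

print(f_of_v(3 - 1e-9))   # -> 1, i.e. lim_{r -> infinity} f = 1
\end{verbatim}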
At small distances \begin{align} f(r)\simeq -\frac{\alpha}{W^{3 (1+\sqrt{3})/8}}+\frac{\beta}{\sqrt{W}}+\frac{\gamma}{W^{(3\sqrt{3}-5)/8}}, \end{align} where $(\alpha,\beta,\gamma)$ are 3 positive constants \begin{align} \alpha &=\frac{3 \left(2 \sqrt{3}-3\right)^{\frac{3}{8} \left(\sqrt{3}-3\right)} \Gamma \left(\frac{5}{4}\right) \Gamma \left(\frac{1}{8} \left(3 \sqrt{3}-1\right)\right)}{\sqrt{45+26 \sqrt{3}} ~\Gamma \left(\frac{3}{8} \left(3+\sqrt{3}\right)\right)}\simeq 0.76,\nonumber\\ \beta &=\frac{1}{13} \sqrt{90+\frac{111 \sqrt{3}}{2}}\simeq 1.05,\nonumber\\ \gamma &=\frac{3 \sqrt[4]{3} \left(2 \sqrt{3}-3\right)^{\frac{3 \sqrt{3}}{8}-\frac{1}{8}} \Gamma \left(\frac{9}{4}\right) \Gamma \left(\frac{1}{8} \left(3 \sqrt{3}-1\right)\right)}{5 \sqrt{2} \Gamma \left(\frac{3}{8} \left(3+\sqrt{3}\right)\right)}\simeq 0.77,\nonumber \end{align} and $W$ is the Lambert function evaluated at the argument $3^{\sqrt{3}/2}4^{1-\sqrt{3}}(2-\sqrt{3}) \Bigl(\frac{r}{\sqrt{qr_c}}\Bigr)^{4(\sqrt{3}-1)}$. The dominant contribution is given by the first term \begin{align} f(r)\simeq -\delta \frac{(r_c q)^{3/2}}{r^3}, \end{align} with \begin{align} \delta=2 \sqrt{2}\frac{\Gamma \left(\frac{5}{4}\right) \Gamma \left(\frac{1}{8} \left(3 \sqrt{3}-1\right)\right)}{3^{3/8}\Gamma \left(\frac{3}{8} \left(3+\sqrt{3}\right)\right)}\simeq 3.11\nonumber. \end{align}
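The quoted numerical values follow directly from the Gamma-function expressions; a quick check (ours) is

\begin{verbatim}
# A quick numerical check of the constants alpha, beta, gamma, delta
# quoted above.
import numpy as np
from scipy.special import gamma as G

S3 = np.sqrt(3.0)
g1, g2 = G((3 * S3 - 1) / 8), G(3 * (3 + S3) / 8)
alpha = (3 * (2*S3 - 3)**(3 * (S3 - 3) / 8) * G(5/4) * g1
         / (np.sqrt(45 + 26 * S3) * g2))
beta  = np.sqrt(90 + 111 * S3 / 2) / 13
gam   = (3 * 3**0.25 * (2*S3 - 3)**((3*S3 - 1) / 8) * G(9/4) * g1
         / (5 * np.sqrt(2) * g2))
delta = 2 * np.sqrt(2) * G(5/4) * g1 / (3**0.375 * g2)
print(alpha, beta, gam, delta)   # ~ 0.76, 1.05, 0.77, 3.11
\end{verbatim}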
We see that even in the case of a massless black hole, we have a ``mass term'' because of the fifth dimension. Hence we recover a standard result: even for a massless black hole, the behaviour of the solution is $1/r$ at small distances and $1/r^2$ at large distances. In order to keep the effective mass positive in the DGP branch, we impose $\bar{q}<\frac{4}{\delta}\bar{m}$, where $(\bar{m}=m/\sqrt{q r_c},\bar{q}=q/r_c)$. The existence of the black hole is illustrated in Fig.~\ref{fig:f}, from which we see that for a fixed parameter $\bar q$, the mass of the black hole has an upper bound but also a lower bound. We would have a naked singularity for the lightest black holes, which can be seen as an instability of the branch.

\begin{figure} \includegraphics[scale=0.75]{Nbranch} \caption{Existence of the black hole for the DGP branch: The blue part represents the range of the parameters $(\bar{m},\bar{q})$ for which the black hole has 2 horizons. In the grey region, the metric is always negative, and the white region corresponds to a black hole with only a cosmological horizon; because of the negativity of the effective mass, inside the cosmological horizon the solution would be a naked singularity.} \label{fig:f} \end{figure}

The RS branch is much simpler: we have a black hole for all positive parameters $(\bar{m},\bar{q})$, and these parameters play the same role, increasing the position of the horizon and hence its entropy. At large distances, the Newtonian potential is dominant. Depending on the parameters, the situation can be the same at all distances, in which case the solution will be very close to the Schwarzschild solution, except for large values of $\bar{q}$ where the mass of the black hole will be renormalized at small distances. But in the case of the DGP branch, we do not have the same behaviour at large and small distances; hence we have a new distance scale $r_\star$, dubbed the Vainshtein radius, \begin{align} r_\star\simeq (mr_c^2)^{1/3}. \end{align} As we said previously, the mass of the black hole is bounded from below. The existence of the horizon is constrained by $m> \bar q^2 r_c^2/r_\star\simeq \bar q^2 10^{28} M_{\odot}$ if we assume a Vainshtein radius of the order of the galaxy scale and a crossover scale of the order of the Hubble scale. A stellar black hole exists if $\bar q<10^{-14}$; otherwise it will be a naked singularity.

The extrinsic curvature and therefore the curvature scalar can be easily derived from (\ref{Eq:L02},\ref{Eq:P}) \begin{align} R=\frac{3}{r_c^2}\Bigl(2\mp\sqrt{6}\frac{v-1}{\sqrt{v^2-3}}\Bigr), \end{align} which is singular at $r=0$. Also, we can see that $R>0$ for the DGP branch, where it converges to $12/r_c^2$, while $R<0$ for the RS branch, where it goes to zero at infinity. A non-vanishing curvature outside the source usually leads to a screening mechanism, and we have shown previously the absence of the vDVZ discontinuity. Concerning the physical stability of the solutions, it would be interesting to study the violation of the energy conditions if we consider the tensor $K_{\mu\nu}-Kh_{\mu\nu}$ as a source term, and the positivity of the gravitational mass of this spacetime. These particular problems should be addressed separately.\\ In order to study the linear stability of this solution, we follow the Regge-Wheeler formalism \cite{Regge:1957td,Zerilli:1970se} and decompose the metric perturbations according to their transformation properties under two-dimensional rotations. They are classified depending on their transformation properties under parity, namely odd (axial) and even (polar). Using the Regge-Wheeler and Zerilli gauges, one obtains two distinct sets of perturbations: odd and even. For $\ell > 1$, the equation of perturbations takes the form \begin{align} \frac{d^2}{d t^2}\psi-\frac{d^2}{dr^{*2}}\psi+V \psi=S, \end{align} where $S$ is the perturbation of the source term, $r^*$ is the tortoise coordinate defined by ${\rm d}r^{*}={\rm d}r/A$, $\psi$ is a function of the metric perturbations, and $V$ is the Regge-Wheeler potential or the Zerilli potential in the respective case. We have for the Regge-Wheeler potential \begin{align} V_{RW}=&\frac{A \left(\lambda+2 A-r A' \right)}{r^2}-\frac{12 \rho r^2 A}{6 r^2+r_c^2 \left[(r^2 A)''-2\right]},\\ =&A\frac{\lambda-1+3A}{r^2}+\frac{3A}{2r_c^2}\Bigl(1\mp \sqrt{\frac{6}{v^2-3}}\Bigr), \end{align} where $\lambda=(\ell-1)(\ell+2)$. The last form is useful in order to see that the potential is always positive for the 2 branches. The first term of the potential is the standard term in 4 dimensions, and the last term comes from the 5-dimensional effects. We do not write here the Zerilli potential because it is much more complicated, but as in General Relativity, the graph of the Zerilli potential is similar to that of the Regge-Wheeler potential for the same parameters. The positivity of the potential indicates the stability of the spacetime under linear perturbations \cite{Vishveshwara:1970cc} for the 2 branches. It is important to notice that we did not consider the source term in order to study the stability of the theory. In this case, the source term is much more complicated than in General Relativity: in fact it is not localized, since the source is a function of the electric part of the Weyl tensor, which is a nonlocal term. For the dipole perturbation $\ell=1$, we can write $h_{t\phi}=\beta(r)\sin^2(\theta)$ because the time-dependent term can be removed via the gauge freedom.
The equation for $\beta$ takes the form \begin{align} \label{Hill} \frac{\beta''}{\beta} =\frac{2}{r^2}\pm \sqrt{\frac{3}{2}}\frac{v-3}{r_c^2 A \sqrt{v^2-3}}, \end{align} where we have neglected the source term. The solution of this equation represents the spacetime around a slowly rotating black hole on the brane for the DGP model. In the case where $r_c\rightarrow \infty$, which corresponds to the GR limit, we recover the standard result, the Kerr metric at first order in perturbations, \begin{align} h_{t\phi}=-\frac{J}{r}\sin^2(\theta), \end{align} where $J$ is identified with the angular momentum and we have gauged away the unphysical term proportional to $r^2$.

Eq.(\ref{Hill}) has the form of a Hill equation; hence the evolution depends on the sign of the RHS of the equation. In order to integrate the system, we assume a Kerr form of the metric at small distances. For the DGP branch, we have an angular speed $\Omega=-\beta(r)/r^2$ which decreases with the radial distance, but contrary to General Relativity, $\Omega=0$ at a finite distance, smaller than the cosmological horizon. After this critical point, the frame dragging occurs in the opposite direction. In the case of the RS branch, the angular speed depends on the parameters $(\bar m,\bar q)$. For small $\bar q$ we recover the Kerr solution. Once we increase this parameter, $\Omega$ goes to a finite value at infinity, and finally when $\bar q>\bar m$ the angular speed increases after decreasing and also converges to a finite value.
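One way to explore this behaviour numerically is sketched below (ours, reusing {\tt v\_of\_r} and {\tt A} from the sketches above; the parameter values and the sign assignment for the branches are illustrative assumptions, not results):

\begin{verbatim}
# A continuation sketch: integrate the Hill-type dipole equation
# outward from Kerr-like initial data beta ~ -J/r; branch = -1 is our
# assumed sign choice for the DGP branch.  Needs v_of_r and A from
# the earlier sketches.
import numpy as np
from scipy.integrate import solve_ivp

def dipole_rhs(r, y, m, Q, rc, branch):
    beta, dbeta = y
    v = v_of_r(r, Q, rc)
    coef = (2 / r**2 + branch * np.sqrt(1.5) * (v - 3)
            / (rc**2 * A(r, m, Q, rc, branch) * np.sqrt(v**2 - 3)))
    return [dbeta, coef * beta]

J, m, Q, rc, r0 = 1.0, 1.0, 1.0, 50.0, 5.0  # start outside the horizon
sol = solve_ivp(dipole_rhs, (r0, 45.0), [-J / r0, J / r0**2],
                args=(m, Q, rc, -1), rtol=1e-8, dense_output=True)
r = np.linspace(r0, 45.0, 500)
omega = -sol.sol(r)[0] / r**2   # scan for the radius where Omega = 0
\end{verbatim}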
In conclusion, we have derived an exact black hole solution on the brane for the DGP model. This solution recovers the standard results at small and large distances, but also covers the intermediate regime, which was not previously known. The 2 branches depend on 3 parameters: the mass of the black hole, the tidal charge and the crossover scale. We have shown that if we do not consider the perturbations of the bulk (the source term), the solutions are stable under linear perturbations. It would also be interesting to see how the horizon would be affected by the full solution and how the quasinormal modes of the black hole are modified. Finally, we have obtained from the dipole perturbation an approximation of a possible rotating black hole on the brane. For the DGP branch, the metric component $g_{t\phi}$ decreases to zero within a finite range of the radial distance, while at larger distances the frame dragging is reversed; for the RS branch, $g_{t\phi}$ converges to a finite value at infinity. Contrary to General Relativity, the dragging does not disappear completely at large distances. \begin{acknowledgments} It is a pleasure to thank N. Dadhich and M. Sami for useful discussions. I also thank H. Nandan and R. Ul Haq Ansari for their initial collaboration on this work. This research is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No. 10329. \end{acknowledgments}
\section{Introduction} A period of exponential expansion of the early universe driven by the potential energy of a scalar field --- the inflaton --- is an elegant explanation for the flatness, isotropy and homogeneity of the universe today~\cite{Sta80,Gut81,Lin82,Alb82,Lin83}. Furthermore, it provides a very plausible mechanism for generating the nearly scale invariant spectrum of primordial density fluctuations that have been imprinted on the cosmic microwave background (CMB)~\cite{WMA12,Pla13} and have grown into the large scale structure of galaxies~\cite{Lid93}. The nature of the inflaton is, however, still unknown. While a large number of inflationary models that extend the scalar degrees of freedom of the Standard Model (SM) have been proposed (see e.g.\ \cite{Lyt99,Mar13}), the possibility that the SM Higgs boson is the inflaton --- a scenario attractive for its minimality --- still remains viable in the model of Higgs inflation with a non-minimal coupling to gravity~\cite{Bez08}.\footnote{Other proposed models of Higgs inflation make use of special features of the SM potential that develop if the Higgs quartic coupling $\lambda$ runs to very small values. The quasiflat SM potential considered in~\cite{Isi08}, however, predicts too large an amplitude of density fluctuations, while false vacuum inflation~\cite{Mas12a,Mas12b,Mas12c} requires an additional scalar particle to achieve a graceful exit from inflation. Further possibilities, not discussed here, make use of derivative couplings of the Higgs to gravity or other non-renormalizable Higgs couplings~\cite{Ger10,Nak10,Kam11,Kam12,Her12}.} This model of Higgs inflation, based on the work of~\cite{Sal89,Fak90,Kai95,Kom99}, makes use of a large non-minimal gravitational coupling $\xi H^\dag H \mathcal{R}$ between the Higgs doublet $H$ and the Ricci scalar $\mathcal{R}$.\footnote{This is the only local, gauge-invariant interaction with mass dimension four or less that can be added to the SM once gravity is included.} The effect of this coupling is to flatten the SM potential above the scale $M_\text{Pl}/\sqrt{\xi}$, thereby allowing a sufficiently flat region for slow roll inflation. An analysis of the tree-level potential finds that $\xi \simeq 5 \times 10^4\sqrt{\lambda}$ is required to produce the correct amplitude of primordial density fluctuations~\cite{Bez08}, which for $M_h \simeq 125$--126~GeV~\cite{Gia13} gives $\xi \sim 2 \times 10^4$. The predictions for the spectral index and the tensor-to-scalar ratio are also well within the current 1$\sigma$ allowed regions~\cite{WMA12,Pla13}. It has been pointed out, however, that Higgs $\xi$-inflation with the large value $\xi \sim 10^4$ suffers from a serious problem. Perturbative unitarity is violated at the scale $M_\text{Pl}/\xi$, and new physics entering at $M_\text{Pl}/\xi$ to restore unitarity is naively expected to contain new particles and interactions that affect the potential in an uncontrollable way~\cite{Bur09,Bar09,Bur10,Her10}.\footnote{It has been argued that the scale of perturbative unitarity violation for a large background Higgs field is higher than the small background field estimate $M_\text{Pl}/\xi$ and, in particular, does not spoil the perturbative analysis of inflation~\cite{Bez09b,Bez11,Fer11}. In this case, one must make a non-trivial assumption about the new physics sector, namely that the scale of new physics is background dependent~\cite{Ler12}.
In this paper, we make the more conservative working assumption that the scale of new physics is independent of the background Higgs field and therefore must be taken to be the lowest scale of perturbative unitarity violation, $M_\text{Pl}/\xi$.} The self-consistency of the model in the inflationary region $h \gtrsim M_\text{Pl}/\sqrt{\xi}$ is therefore questionable. To address the issue of unitarity violation while preserving the minimality of Higgs inflation, one must make a rather strong assumption that either additional non-renormalizable Higgs interactions accompany the non-minimal coupling and restore unitarity~\cite{Ler10} or that new strong dynamics entering at $M_\text{Pl}/\xi$ restores unitarity in a non-perturbative way~\cite{DeS09,Bez09b,Bez11,Bez11b}. It is unknown whether the former approach can be made consistent with quantum corrections or the effect of additional potential and Yukawa interactions~\cite{Ler12}, while it is unclear whether strong coupling in graviton exchange processes for the latter scenario can unitarize scattering cross sections without requiring new physics~\cite{Ler12}. If the latter scenario is possible, however, an approximate shift symmetry of the potential in the inflationary region $h \gtrsim M_\text{Pl}/\sqrt\xi$ may keep quantum corrections to the potential under control~\cite{Bez11}. The problem of perturbative unitarity violation in Higgs $\xi$-inflation, at least with regard to new physics entering at $M_\text{Pl}/\xi$ below the inflationary scale, is perhaps not as severe as the tree-level estimate of $\xi$ suggests. A Higgs mass $M_h \simeq 125\text{--}126$~GeV is in the region that, for a top quark mass only about 2$\sigma$ below its central value, the effective Higgs quartic coupling $\lambda_\text{eff}(\mu)$ can run to very small (positive) values near the Planck scale~\cite{Bez12,Deg12,Sha10}. The effect of small $\lambda_\text{eff}(\mu)$ near the Planck scale is to reduce the value of $\xi$ necessary for successful inflation~\cite{Bez09a,DeS09,Bez09b} and hence push the scale of perturbative unitarity violation toward the inflationary scale. If inflation with $\xi \sim 1$ is possible for sufficiently small $\lambda_\text{eff}(\mu)$ --- a scenario that is not yet explored --- the problem of perturbative unitarity violation occurring below the inflationary scale can be avoided.\footnote{In this case, note that although the potential during inflation $V^{1/4} \lesssim 2 \times 10^{16}$~GeV is constrained to be sub-Planckian~\cite{WMA12,Pla13}, the non-minimal coupling $\xi H^\dag H \mathcal{R}$ with $\xi \sim 1$ is still relevant to inflation since the Higgs field $h \sim M_\text{Pl}/\sqrt\xi$ is then assumed to be near the Planck scale.} Of course, an investigation of this possibility requires a proper treatment of the RG evolution and effective potential within the framework of Higgs $\xi$-inflation. Extending the analysis of Higgs $\xi$-inflation to higher loop order is not entirely straightforward. While the renormalization group (RG) equations of the SM are perfectly adequate for describing the RG evolution below $M_\text{Pl}/\xi$, there are two ambiguities in the RG evolution above $M_\text{Pl}/\xi$ due to the non-minimal coupling of the Higgs. First, quantum loops involving the physical Higgs field (and not the Nambu-Goldstone bosons present in the Landau gauge) are heavily suppressed in this region~\cite{DeS09,Bez09b}. 
To deal with this, one can either use the chiral electroweak theory (SM with frozen radial Higgs mode) to derive the RG equations above $M_\text{Pl}/\xi$~\cite{Bez09b} or one can simply use the RG equations of the SM with a suppression factor for each Higgs running in a loop~\cite{DeS09,Cla09,Ler09,Ler11}. Second, radiative corrections to the SM potential (in particular the choice of the renormalization scale $\mu(h)$) depend on whether they are computed in the Einstein or Jordan frame~\cite{Bez09a}, and it is unclear which frame should be used without knowledge of physics at the Planck scale. In this paper, we extend the two-loop analysis of Higgs $\xi$-inflation~\cite{Bez09b,DeS09} to include the three-loop SM beta functions for the gauge couplings~\cite{Mil12} as well as the leading three-loop terms for the RG evolution of $\lambda$, the top Yukawa coupling $y_t$, and the Higgs anomalous dimension $\gamma$~\cite{Che12}. For the first time, a complete two-loop insertion of suppression factors for the {\em physical} Higgs loops, which was missing in~\cite{DeS09}, is carried out. The use of these RG equations provides a modest update to the previous analyses of Higgs $\xi$-inflation. The main focus of this paper, however, is to investigate the region of parameter space with $\lambda_\text{eff}(\mu) \ll 1$ near the Planck scale that exists for the recently measured Higgs mass $M_h \simeq 125\text{--}126$~GeV and a top quark mass $M_t \sim 171$~GeV, about 2$\sigma$ below its central value.\footnote{For the top quark mass central value, the SM potential develops an instability at around $10^{11}$~GeV~\cite{Bez12,Deg12}. Since Higgs $\xi$-inflation requires the stability of the potential up to the inflationary scale $M_\text{Pl}/\sqrt{\xi}$, one could interpret this result as disfavouring Higgs $\xi$-inflation at 2$\sigma$. The position advocated here is that a special region of Higgs $\xi$-inflation with $\lambda_\text{eff}(\mu) \ll 1$ exists within only 2$\sigma$ of experimental measurements.} The paper is organized as follows. In section~\ref{sec:review}, we give a brief review of Higgs $\xi$-inflation and the tree-level analysis. In section~\ref{sec:twoloop}, the RG equations and the effective potential relevant for a two-loop analysis of Higgs $\xi$-inflation are presented. The numerical results and inflationary predictions for both the Einstein and Jordan frame renormalization prescriptions, with a particular focus on the small $\lambda_\text{eff}^\text{min}$ region, are given in section~\ref{sec:results}. A summary of the results and the conclusions are given in section~\ref{sec:conclusions}. \section{Tree-level analysis} \label{sec:review} Let us first briefly review Higgs $\xi$-inflation and the tree-level computation of the inflationary predictions. Although the tree-level results will differ from those in the two-loop analysis, many qualitative features of the computation will remain the same. As an example of inflation from a non-minimally coupled scalar, Higgs $\xi$-inflation is characterized by a non-minimal gravitational coupling $\xi H^\dag H \mathcal{R}$ between the Higgs doublet $H$ and the Ricci scalar $\mathcal{R}$. 
The Lagrangian of the model is given by~\cite{Bez08} \begin{equation} \label{eq:lagrangian} \mathcal{L} = \mathcal{L}_\text{SM} - \frac{M^2}{2}\mathcal{R} - \xi H^\dag H \mathcal{R}, \end{equation} where $\mathcal{L}_\text{SM}$ is the SM Lagrangian and $M$ is a mass parameter (the bare Planck mass) that can safely be identified with the present Planck mass value $M_\text{Pl} = \paren{8\pi G_N}^{-1/2} \simeq 2.4 \times 10^{18}$~GeV for $\sqrt{\xi} \ll 10^{17}$. The part of \eqref{eq:lagrangian} that is relevant to inflation gives the action \begin{equation} \label{eq:actionJ} S_J = \int d^4 x \sqrt{-g} \squareparen{-\frac{M_\text{Pl}^2}{2}\paren{1 + \frac{2 \xi H^\dag H}{M_\text{Pl}^2}}\mathcal{R} + \paren{\partial_\mu H}^\dag \paren{\partial^\mu H} - V}, \end{equation} where $V = \lambda\paren{H^\dag H-v^2/2}^2$ is the SM potential and the subscript $J$ denotes the Jordan frame. This is the frame in which the inflationary model is defined. To compute the inflationary observables, it is convenient to first remove the non-minimal coupling to gravity in \eqref{eq:actionJ} by performing the conformal transformation \begin{equation} \label{eq:conformal} g_{\mu \nu} \rightarrow \tilde{g}_{\mu \nu} = \Omega^2 g_{\mu \nu}, \qquad \Omega^2 = 1 + \frac{2\xi H^\dag H}{M_\text{Pl}^2}. \end{equation} The resulting Einstein frame action is given by \begin{equation} \label{eq:actionEfull} S_E = \int d^4 x \sqrt{-\tilde g} \squareparen{-\frac{M_\text{Pl}^2}{2}\tilde{\mathcal{R}} + \frac{1}{\Omega^2}\paren{\partial_\mu H}^\dag \paren{\partial^\mu H} + \frac{3\xi^2}{\Omega^4 M_\text{Pl}^2}\partial_\mu(H^\dag H) \partial^\mu (H^\dag H) - \frac{V}{\Omega^4}}, \end{equation} where $\tilde \mathcal{R}$ is calculated with the metric $\tilde g$. The action \eqref{eq:actionEfull} simplifies greatly in the unitary gauge $H = \frac{1}{\sqrt 2} \paren{\begin{smallmatrix} 0 \\ h \end{smallmatrix}}$, which may be used for the tree-level computation, giving \begin{equation} \label{eq:actionEunitary} S_E = \int d^4 x \sqrt{-\tilde g} \squareparen{-\frac{M_\text{Pl}^2}{2}\tilde{\mathcal{R}} + \frac{1}{2}\paren{\frac{\Omega^2 + 6\xi^2 h^2 / M_\text{Pl}^2}{\Omega^4}}\partial_\mu h \partial^\mu h - \frac{V}{\Omega^4}}, \end{equation} where $V = \frac{\lambda}{4}\paren{h^2-v^2}^2$ and $\Omega^2 = 1 + \xi h^2/M_\text{Pl}^2$. It is also convenient to remove the non-canonical kinetic term for the Higgs field in \eqref{eq:actionEunitary} by changing to a new scalar field $\chi$, defined by \begin{equation} \label{eq:scalarfieldredef} \frac{d\chi}{dh} = \sqrt{\frac{\Omega^2 + 6 \xi^2 h^2/M_\text{Pl}^2}{\Omega^4}}. \end{equation} The Einstein frame action then takes the form \begin{equation} \label{eq:actionE} S_E = \int d^4 x \sqrt{-\tilde g} \squareparen{-\frac{M_\text{Pl}^2}{2}\tilde\mathcal{R} + \frac{1}{2}\partial_\mu \chi \partial^\mu \chi - U(\chi)}, \end{equation} where the potential is given by \begin{equation} \label{eq:potential} U(\chi) = \frac{V}{\Omega^4} = \frac{\lambda(h^2-v^2)^2}{4(1+\xi h^2/M_\text{Pl}^2)^2} \end{equation} with $h = h(\chi)$. It is the flattening of the potential $U(\chi)$ to a constant value $U_0 \equiv \lambda M_\text{Pl}^4/4 \xi^2$ in the region $h \gtrsim M_\text{Pl}/\sqrt \xi$ that allows slow roll inflation to occur. The standard analysis of inflation in the slow roll approximation can be carried out for the field $\chi$ and potential $U(\chi)$. 
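In practice, the field redefinition and the potential are easy to tabulate numerically; a minimal sketch (ours, in units $M_\text{Pl}=1$ and neglecting $v$) is

\begin{verbatim}
# A minimal sketch integrating the field redefinition dchi/dh to
# obtain chi(h) and the Einstein-frame potential U(chi), in Planck
# units (M_Pl = 1) and neglecting v.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def chi_and_U(xi, lam, h_max=10.0, n=4000):
    h = np.linspace(0.0, h_max / np.sqrt(xi), n)
    omega2 = 1 + xi * h**2
    dchi_dh = np.sqrt((omega2 + 6 * xi**2 * h**2) / omega2**2)
    chi = cumulative_trapezoid(dchi_dh, h, initial=0.0)
    U = lam * h**4 / (4 * omega2**2)
    return h, chi, U

h, chi, U = chi_and_U(xi=17000, lam=0.13)
# U flattens toward U0 = lam/(4 xi^2) for h >~ 1/sqrt(xi)
\end{verbatim}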
In the inflationary region $h^2 \gtrsim M_\text{Pl}^2/\xi \gg v^2$, the slow roll parameters for $\xi \gg 1$ can be approximated by~\cite{Bez08b,DeS09} (see~\cite{Kai95} for exact expressions) \begin{align} \epsilon &= \frac{M_\text{Pl}^2}{2}\parenfrac{dU/d\chi}{U}^2 \simeq \frac{4M_\text{Pl}^4}{3\xi^2h^4},\label{eq:epsilon} \\ \eta &= M_\text{Pl}^2\frac{d^2U/d\chi^2}{U} \simeq \frac{4M_\text{Pl}^4}{3\xi^2 h^4}\paren{1-\frac{\xi h^2}{M_\text{Pl}^2}},\label{eq:eta} \\ \zeta^2 &= M_\text{Pl}^4 \frac{\paren{d^3U/d\chi^3}dU/d\chi}{U^2} \simeq \frac{16M_\text{Pl}^6}{9\xi^3 h^6}\paren{\frac{\xi h^2}{M_\text{Pl}^2}-3}.\label{eq:zeta} \end{align} Slow roll ends when either $\epsilon \simeq 1$ or $\abs{\eta} \simeq 1$. For \eqref{eq:epsilon} and \eqref{eq:eta}, this occurs when $\epsilon \simeq 1$ at a field value $h_\text{end} \simeq \paren{4/3}^{1/4} M_\text{Pl}/\sqrt{\xi} \simeq 1.07 M_\text{Pl}/\sqrt{\xi}$. The number of e-folds of inflation as $h$ changes from $h_0$ to $h_\text{end}$ is given by~\cite{Bez09c} \begin{equation} \label{eq:N} N = \int_{h_\text{end}}^{h_0} \frac{1}{M_\text{Pl}^2}\frac{U}{dU/dh}\parenfrac{d\chi}{dh}^2 dh \simeq \frac{3}{4}\left[ \frac{h_0^2 - h_\text{end}^2}{M_\text{Pl}^2/\xi} + \ln \paren{\frac{1+\xi h_\text{end}^2/M_\text{Pl}^2}{1+\xi h_0^2/M_\text{Pl}^2}} \right]. \end{equation} The values of the parameters \eqref{eq:epsilon}--\eqref{eq:zeta} at a particular field value $h_0$, corresponding to the time at which the pivot scale $k_* \simeq 0.002~\text{Mpc}^{-1}$ left the horizon during inflation, can be used to compare with the CMB data. This value of $h_0$ (or equivalently $N$) is a model-dependent quantity that is sensitive to the details of reheating. For Higgs $\xi$-inflation, an analysis of reheating finds that $N \simeq 59$, or equivalently $h_0 \simeq 9.14M_\text{Pl}/\sqrt{\xi}$, is the value at which $k_*$ left the horizon during inflation~\cite{Bez09c,Gar09}. Using \eqref{eq:epsilon} in the WMAP9 normalization $U/\epsilon \simeq \paren{0.0274M_\text{Pl}}^4$~\cite{WMA12}, the required value of $\xi$ is\footnote{The {\it Planck} 2013 normalization $U/\epsilon \simeq \paren{0.0269M_\text{Pl}}^4$~\cite{Pla13} gives $\xi \simeq 18000$.} \begin{equation} \label{eq:xi} \xi \simeq 48000 \sqrt \lambda = 48000 \frac{M_h}{\sqrt 2 v} \simeq 17000. \end{equation} The predictions for the spectral index $n_s$, the tensor-to-scalar ratio $r$, and the running of the spectral index $dn_s/d\ln k$ are given by \begin{align} n_s &= 1 - 6\epsilon + 2\eta \simeq 0.967, \\ r &= 16 \epsilon \simeq 0.0031, \\ \frac{dn_s}{d\ln k} &= 24 \epsilon^2 - 16 \epsilon \eta + 2 \zeta^2 \simeq 5.4 \times 10^{-4}. \end{align} These predictions for $n_s$ and $r$ are well within the current 1$\sigma$ allowed regions~\cite{WMA12,Pla13}, while the prediction for $dn_s/d\ln k$ is consistent with observations at the 1--2$\sigma$ level.
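These numbers are straightforward to reproduce from the approximate expressions above; a short check (ours, in units $M_\text{Pl}=1$) is

\begin{verbatim}
# A numeric check of the tree-level predictions from the large-xi
# approximations, with N ~ 59, i.e. h0 ~ 9.14/sqrt(xi) (M_Pl = 1).
xi = 17000.0
h0 = 9.14 / xi**0.5
x = xi * h0**2                     # ~ 83.5
eps = 4 / (3 * x**2)
eta = eps * (1 - x)
zeta2 = 16 * (x - 3) / (9 * x**3)
print("n_s =", 1 - 6*eps + 2*eta)                      # ~ 0.967
print("r   =", 16 * eps)                               # ~ 0.0031
print("dns/dlnk =", 24*eps**2 - 16*eps*eta + 2*zeta2)  # ~ 5.4e-4
\end{verbatim}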
\section{Two-loop analysis} \label{sec:twoloop} An analysis of Higgs $\xi$-inflation beyond the tree level must include both the running of the couplings and loop corrections to the (effective) potential~\cite{Bez09a,DeS09,Bez09b}. The most significant effect of these higher order corrections comes from the running of the Higgs quartic coupling $\lambda = \lambda(\mu)$. For $M_h \simeq 125$--126~GeV, it is well known that the running of $\lambda(\mu)$ --- or more specifically $\lambda_\text{eff}(\mu)$ --- causes the SM potential to develop an instability below the Planck scale unless the top quark mass is about 2$\sigma$ below its central value~\cite{Bez12,Deg12}. Since Higgs $\xi$-inflation requires the stability of the potential up to the inflationary scale $M_\text{Pl}/\sqrt \xi$, in order to realize this model of inflation one must make the moderate assumption of a top quark mass $M_t \lesssim 171$~GeV. In this case, it has been shown that the small values of $\lambda_\text{eff}(\mu)$ near the Planck scale can significantly reduce the non-minimal coupling $\xi$ required for successful inflation~\cite{Bez09a,DeS09,Bez09b}.\footnote{Actually, the one-loop~\cite{Bez09a} and two-loop~\cite{DeS09,Bez09b} analyses predate the Higgs mass measurement and were carried out to determine the range of $M_h$ allowed for Higgs $\xi$-inflation. In retrospect, however, a Higgs mass near the lower end of the allowed region suggests a value of $\xi \lesssim 10^3$ is required for successful inflation, with the lower limit of $\xi$ unknown.} The reason for this is relatively simple: the tree-level estimate \eqref{eq:xi} shows that it is the combination $\lambda/\xi^2$ that must be small ($\sim 4 \times 10^{-10}$) in order to give the proper normalization of the CMB power spectrum. If $\lambda_\text{eff}(\mu)$ is much smaller in the inflationary region than its tree-level value $\lambda \simeq 0.13$, then $\xi$ must also be smaller than the tree-level estimate $\xi \simeq 18000$. The smaller value of $\xi$ required for successful inflation is particularly important since it is closely related to one of the most significant drawbacks of Higgs $\xi$-inflation: the violation of perturbative unitarity at the scale $M_\text{Pl}/\xi$. For $\xi \rightarrow 1$, this scale is pushed toward the inflationary scale $M_\text{Pl}/\sqrt{\xi}$ and the questionable assumptions of non-renormalizable operators~\cite{Ler10} or new strong dynamics~\cite{DeS09,Bez09b,Bez11,Bez11b} entering to restore unitarity are no longer required.\footnote{Of course, the Higgs field $h$ during inflation becomes trans-Planckian in this case and one must worry about the effects of higher dimensional operators suppressed by the Planck scale, which may spoil the flatness of the potential or the inflationary predictions~\cite{Bez09c}. As remarked in~\cite{DeS09}, however, the same worry applies to many minimal models of inflation, such as $m^2\phi^2$ chaotic inflation.} Since the lower limit of $\xi$ in the case of small $\lambda_\text{eff}(\mu)$ during inflation has not been explored, an important question is whether it is possible to realize Higgs $\xi$-inflation with $\xi \sim 1$ and hence avoid the perturbative unitarity issues with the model. Such a region is, by nature, highly sensitive to the running of $\lambda_\text{eff}(\mu)$ and requires a proper loop analysis within the Higgs $\xi$-inflation framework. To investigate the lower limit of $\xi$ with $\lambda_\text{eff}(\mu) \ll 1$ during inflation, we first describe the RG equations and the two-loop effective potential that are appropriate for Higgs $\xi$-inflation in sections~\ref{sec:RGE} and \ref{sec:potential}, respectively. The analysis of inflation, including the lower limits on $\xi$ and the inflationary predictions, is presented in section~\ref{sec:results}. \subsection{Renormalization group equations} \label{sec:RGE} The modification of the well-known RG equations of the SM for the Higgs $\xi$-inflation scenario has been discussed in~\cite{Bez09a,DeS09,Bez09b,Cla09,Ler09,Ler11}.
Essentially, the scalar propagator of the physical Higgs field, which enters into loop diagram calculations for the RG equations, must be multiplied by the field-dependent factor~\cite{DeS09,Ler09}\footnote{The reason for this suppression is that the canonical momentum of $h$ (which is evaluated in the Einstein frame with a canonical gravity sector) gives a non-standard commutator $[h(\vec x),\dot{h}(\vec y)] = i \hbar s(h)\delta^3(\vec x - \vec y)$ in the Jordan frame after imposing the standard commutation relations $[h(\vec x),\pi(\vec y)] = i\hbar \delta^3(\vec x - \vec y)$.} \begin{equation} \label{eq:s} s(h) = \frac{1+\xi h^2/M_\text{Pl}^2}{1+(1+6\xi)\xi h^2/M_\text{Pl}^2}. \end{equation} For small field values $h \ll M_\text{Pl}/\xi$, $s \simeq 1$ and the RG equations for the SM are perfectly adequate for describing the RG evolution. For large field values $h \gg M_\text{Pl}/\xi$, however, the physical Higgs propagator is suppressed by a factor $s \simeq 1/(1+6\xi)$ and hence the RG equations differ from those of the SM. Two methods of dealing with this effect have been considered in the literature~\cite{DeS09,Bez09b}, leading to somewhat different results. The first method of treating the suppressed Higgs loops, which is described in~\cite{DeS09}, is to insert one suppression factor $s$ into the RG equations of the SM for each off-shell Higgs propagator. Originally this was done by extracting out all Higgs doublet propagators at one-loop order and inserting the appropriate factors of $s$, repeating the process only for obvious terms at two-loop order~\cite{DeS09}. It was later pointed out, however, that only the propagator of the physical Higgs field and not the Nambu-Goldstone bosons that are present in the Landau gauge should come with such a factor~\cite{Bez09b}. The corrected RG equations with systematic insertions of $s$ for all two-loop terms, except for $\beta_\lambda$, are given in~\cite{Ler11}. By using these RG equations in the full two-loop SM effective potential from~\cite{Deg12} (with $m^2 \rightarrow 0$ and $M_h^2 \rightarrow 3s\lambda h^2$) and demanding that the potential be independent of $\mu$, we have been able to extract the two-loop part for $\beta_\lambda$.\footnote{In the process, we believe that two typos in the complete expression for the two-loop SM effective potential have been discovered. In~\cite{Deg12}, the final term of (A.3) should read $-\chi I_{ttg}$ instead of $-\chi I_{ttz}$ and the second last term on the third line of (A.5) should read $-3I_{w00}$ instead of $+I_{w00}$.} A similar procedure can then be used to obtain the two-loop RG equation for the Higgs mass parameter $m^2$ (in the notation of~\cite{Deg12}) with appropriate suppression factors. Although $\beta_{m^2}$ is not actually required for an analysis of Higgs $\xi$-inflation, it can be used to derive the RG equation for $\xi$ through the relation $\beta_\xi = \paren{\xi + 1/6}\gamma_m$, where $\gamma_m = \beta_{m^2}/m^2$~\cite{Ler09}. The complete set of two-loop RG equations with suppressed physical Higgs loops is given in appendix~\ref{app:RGE}. The second method of treating the suppressed Higgs loops is to instead view the effect as a suppression of the effective Higgs coupling to other SM fields and, for large $\xi$, neglect the physical Higgs field altogether in the region $h \gtrsim M_\text{Pl}/\xi$~\cite{Bez09b}. The resulting theory (SM with frozen radial Higgs mode) is known as the chiral electroweak theory and has been studied previously in the literature. 
It is therefore possible to extract one-loop RG equations, which are valid for $\xi \gg 1$, from earlier works such as~\cite{Dut08}. In~\cite{Bez09b}, however, the RG equations for $\lambda$, $y_t$, and $\xi$ derived in this way differ from \eqref{eq:betalambda}, \eqref{eq:betayt}, and \eqref{eq:betaxi} with $s = 0$. A closer look reveals two sources (though not necessarily errors) for this discrepancy. First, the equation for the running of $v^2$ used in~\cite{Bez09b}, which was first given in~\cite{Dut08}, differs from the running of the SM Higgs field $h^2$ in the Landau gauge, \begin{equation} \label{eq:betah2} 16\pi^2\mu\frac{\partial}{\partial \mu}h^2 = \paren{\frac{3}{2}\gp{2}+\frac{9}{2}g^2-6y_t^2}h^2. \end{equation} In the usual SM case, the running of the Higgs vacuum expectation value $v^2$ and the Higgs field $h^2$ are both gauge-dependent quantities~\cite{Buc95} and have the same running~\cite{Ara92}. Although it has been argued that $v^2$ in the chiral electroweak theory is a gauge-invariant parameter and therefore its running should also be gauge invariant, we have found it difficult to reproduce the chiral electroweak theory result using Feynman diagrams and understand why its running differs from the running of $h^2$ in the SM. In any case, if eq.~(5.6) of~\cite{Bez09b} is replaced by the similar expression \eqref{eq:betah2}, the resulting one-loop equation for $\beta_{y_t}$ agrees with \eqref{eq:betayt} for $s = 0$. Second, $\beta_\lambda$ is derived in \cite{Bez09b} by demanding that the one-loop effective potential be independent of $\mu$, where the one-loop potential does not include the usual contribution from the Nambu-Goldstone bosons~\cite{Deg12} \begin{equation} \label{eq:deltaU1} \Delta U_1 = \frac{3M_G^4}{64\pi^2}\paren{\ln\frac{M_G^2}{\mu^2}-\frac{3}{2}}, \end{equation} where $M_G^2 = \lambda h^2$. Although the Goldstone boson contribution to the effective potential is strongly suppressed for prescription~I (see section~\ref{sec:potential}), the result of excluding this term in deriving $\beta_\lambda$ is equivalent to suppressing some Feynman diagrams with off-shell Goldstone boson propagators; that is, the $\paren{6+18s^2}\lambda^2$ term in \eqref{eq:betalambda} disappears entirely. While this difference is small in the region $h \gtrsim M_\text{Pl}/\xi$ (numerically it is smaller than the two-loop correction to $\beta_\lambda$), if the contribution~\eqref{eq:deltaU1} is included in eq.~(4.1) of~\cite{Bez09b} the resulting one-loop equation for $\beta_\lambda$ agrees with \eqref{eq:betalambda} for $s = 0$. Note that the one-loop equation for $\beta_\xi$ in~\cite{Bez09b} still differs from \eqref{eq:betaxi} even after accounting for these changes. Specifically, the latter has a factor of $\xi + 1/6$ instead of $\xi$ and an additional term $\paren{6+6s}\lambda$ compared to the former. Again, these differences in $\beta_\xi$ are small (typically below the size of the two-loop correction to $\beta_\xi$) since we always have $\xi \gg 1/6$ and since $\lambda$ is small in the region $M_\text{Pl}/\xi \lesssim h \lesssim M_\text{Pl}/\sqrt\xi$ over which $\xi$ runs. It is also worth mentioning that the second method includes the effects of additional counterterms taken from the chiral electroweak theory~\cite{Dut08} that arise to cancel divergences in the non-renormalizable SM sector without a Higgs field. 
These effects appear through the additional couplings $\alpha_0$ and $\alpha_1$ that modify the renormalization of the Z boson mass~\cite{Bez09b} and contribute to the effective potential at the two-loop level. Numerically, however, the Z boson mass contribution at the two-loop level, and hence this effect, is subleading~\cite{Deg12}. The two methods of treating the suppressed Higgs loops for $h \gtrsim M_\text{Pl}/\xi$ are therefore quite similar, at least for large $\xi$. The first method uses $s$ factors to smoothly interpolate between the SM-like RG evolution at low energies and the RG evolution with suppressed physical Higgs propagators at high energies, while the second method models this transition as an abrupt change at $M_\text{Pl}/\xi$.\footnote{A smooth interpolation is preferred, but for $\xi \gg 1$ the function $s(h)$ changes rapidly in the region $h \sim M_\text{Pl}/\xi$ and so the modification of the numerics is negligible~\cite{Bez09b}.} Since the $s$-factor treatment also handles the case $\xi \sim 1$, we adopt the first method~\cite{DeS09,Cla09,Ler09,Ler11} and use a suppression factor $s$ for each off-shell Higgs propagator in the SM RG equations for our analysis of Higgs $\xi$-inflation. Despite the different treatments of the RG equations described above, the numerical differences are small enough that a two-loop analysis in the region $h \gtrsim M_\text{Pl}/\xi$ should be justified. With the recent SM calculation of the three-loop beta functions for the gauge couplings~\cite{Mil12} and the leading three-loop terms for $\beta_\lambda$, $\beta_{y_t}$, and $\gamma$~\cite{Che12}, it is relatively simple to include these contributions in the RG equations \eqref{eq:betalambda}--\eqref{eq:gamma} so that the Higgs $\xi$-inflation analysis matches the NNLO analysis of~\cite{Deg12} for $h \lesssim M_\text{Pl}/\xi$ (see appendix~\ref{app:RGE}). Note that we do not attempt to insert the appropriate factors of $s$ into these expressions since the corrections would be smaller than the uncertainty in the RG equations for $h \gtrsim M_\text{Pl}/\xi$. For a complete description of the RG evolution in Higgs $\xi$-inflation, the equations \eqref{eq:betalambda}--\eqref{eq:gamma} with the three-loop corrections \eqref{eq:deltabetalambda}--\eqref{eq:deltagamma} must be supplemented by values of the SM couplings at the electroweak scale and the value of $\xi$ at some high scale, say $M_\text{Pl}/\xi$. Appropriate initial values for the SM couplings can be found in~\cite{Deg12}. For the central values of $\alpha_Y^{-1}(M_Z)$ and $\alpha_2^{-1}(M_Z)$, the initial values of the gauge couplings $g^\prime$ and $g$ are \begin{align} \label{eq:ICgp} g^\prime(M_Z) &= \sqrt{\frac{4\pi}{98.35}} \simeq 0.3575, \\ \label{eq:ICg} g(M_Z) &= \sqrt{\frac{4\pi}{29.587}} \simeq 0.65171, \end{align} where $M_Z = 91.1876$~GeV. For the strong gauge coupling $g_s$, the initial value depends more sensitively on the uncertainty in $\alpha_s(M_Z) = 0.1184 \pm 0.0007$. It is given by \begin{equation} \label{eq:ICgs} g_s(M_t) = 1.1645 + 0.0031\paren{\frac{\alpha_s(M_Z)-0.1184}{0.0007}} -0.00046\paren{\frac{M_t}{\text{GeV}}-173.15}, \end{equation} where $M_t$ is the top quark pole mass determined from experiment.
The initial value of the top quark Yukawa coupling $y_t$ is \begin{align} \label{eq:ICyt} y_t(M_t) &= 0.93587 + 0.00557\paren{\frac{M_t}{\text{GeV}}-173.15}-0.00003\paren{\frac{M_h}{\text{GeV}}-125}\nonumber \\ &\quad -0.00041\paren{\frac{\alpha_s(M_Z)-0.1184}{0.0007}} \pm 0.00200_\text{th}, \end{align} where $M_h$ is the Higgs pole mass, while the initial value of the Higgs quartic coupling $\lambda$ is \begin{equation} \label{eq:ICy} \lambda(M_t) = 0.12577 + 0.00205\paren{\frac{M_h}{\text{GeV}}-125} - 0.00004\paren{\frac{M_t}{\text{GeV}}-173.15} \pm 0.00140_\text{th}. \end{equation} Note that the theoretical uncertainty for $\lambda(M_t)$ in~\eqref{eq:ICy} is equivalent to an uncertainty in the Higgs pole mass of $\pm 0.7$~GeV~\cite{Deg12}. This means, in particular, that for the measured Higgs mass $M_h = 125.7 \pm 0.4$~GeV~\cite{Baa13} it is quite reasonable to use values of $M_h \simeq 124$--127~GeV in \eqref{eq:ICyt} and \eqref{eq:ICy}. For the non-minimal coupling $\xi$, we are (a priori) free to choose its initial value $\xi_0$ at some high scale. We take the scale to be $M_\text{Pl}/\xi_0$ so that, by definition, \begin{equation} \label{eq:ICxi} \xi(M_\text{Pl}/\xi_0) = \xi_0. \end{equation} The RG equations \eqref{eq:betalambda}--\eqref{eq:deltagamma} with the initial values \eqref{eq:ICgp}--\eqref{eq:ICxi} are therefore the ones we use to describe the RG evolution of the couplings for Higgs $\xi$-inflation. \subsection{Two-loop effective potential} \label{sec:potential} The effective potential for Higgs $\xi$-inflation, like the RG equations, differs from the well-known SM result~\cite{For92,Deg12} due to the suppression of the physical Higgs propagators. As described in~\cite{Bez09a}, however, the effective potential cannot be fixed unambiguously; there are two inequivalent renormalization prescriptions depending on whether quantum corrections to the potential are computed in the Einstein frame (prescription~I)~\cite{Bez08} or the Jordan frame (prescription~II)~\cite{Bar08}. Without knowing the behaviour of the quantum theory at the Planck scale, it is unclear which prescription should be used. The former prescription has been connected to ideas of a possible quantum scale invariance~\cite{Sha09a,Sha09b,Bez13} while~\cite{Bar08} has argued that the latter prescription is correct because the Jordan frame is the one in which physical distances are measured. For sufficiently large $\lambda_\text{eff}(\mu)$, the running of $\lambda_\text{eff}(\mu)$ during inflation is small and the choice of renormalization prescription is irrelevant from a practical point of view~\cite{Bez09a,Bez09b,Deg12}. For the small $\lambda_\text{eff}(\mu)$ allowed by the recent Higgs mass measurement, however, the choice of renormalization prescription can significantly affect the behaviour of $\lambda_\text{eff}(\mu)$ and hence the potential during inflation. Both renormalization prescriptions must therefore be considered. For prescription~I, the tree-level SM potential \begin{equation} \label{eq:Vtree} V_0(h) = \frac{\lambda}{4}\paren{h^2-v^2}^2 \simeq \frac{\lambda}{4}h^4 \end{equation} is first rewritten in the Einstein frame ($h = h(\chi)$) using \eqref{eq:potential}, giving \begin{equation} \label{eq:Utree} U_0(\chi) = \frac{\lambda h^4}{ 4 \Omega^4}. \end{equation} Note that the $v^2$ term in \eqref{eq:Vtree} has been safely neglected in the inflationary region $h^2 \gtrsim M_\text{Pl}^2/\xi \gg v^2$.
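For orientation, a standard manipulation puts \eqref{eq:Utree} in its familiar exponentially flat large-field form: using $\xi h^2/M_\text{Pl}^2 = \Omega^2 - 1$ exactly and $\chi \simeq \sqrt{3/2}\,M_\text{Pl}\ln\Omega^2$ for $\xi \gg 1$, \begin{equation} U_0(\chi) = \frac{\lambda M_\text{Pl}^4}{4\xi^2}\paren{1-\Omega^{-2}}^2 \simeq \frac{\lambda M_\text{Pl}^4}{4\xi^2}\paren{1-e^{-\sqrt{2/3}\,\chi/M_\text{Pl}}}^2, \end{equation} which makes explicit both the flattening responsible for slow roll and the asymptotic height $U_0 \rightarrow \lambda M_\text{Pl}^4/4\xi^2$ that enters the CMB normalization (see section~\ref{sec:review}).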
The one-loop radiative corrections induced by the fields of the SM then take the Coleman-Weinberg form~\cite{Deg12}\footnote{Up to corrections from the time-dependence of the background Higgs field as it rolls down its potential. Such corrections have been considered in~\cite{Moo11,Geo12} for the simpler Abelian Higgs model but have not yet been studied for the Higgs $\xi$-inflation model. Analyzing these corrections goes beyond the scope of this paper.} \begin{align} U_1(\chi) &= \frac{1}{16\pi^2} \left[ \frac{3 M_W^4}{2} \paren{\ln \frac{M_W^2}{\mu^2} - \frac{5}{6} } + \frac{3 M_Z^4}{4} \paren{\ln \frac{M_Z^2}{\mu^2} - \frac{5}{6} } - 3 M_t^4 \paren{ \ln \frac{M_t^2}{\mu^2} - \frac{3}{2} } \right. \nonumber \\ &\quad + \left. \frac{M_h^4}{4} \paren{\ln \frac{M_h^2}{\mu^2} - \frac{3}{2} } + \frac{3M_G^4}{4} \paren{\ln \frac{M_G^2}{\mu^2} - \frac{3}{2} } \right], \label{eq:Uoneloop} \end{align} where the particle masses $M_W$, $M_Z$, $M_t$, $M_h$, and $M_G$ are computed from the tree-level potential~\eqref{eq:Utree}, giving~\cite{Bez08,Bez09a,Sal13}\footnote{We obtain this result by expanding $H = \frac{1}{\sqrt 2} \paren{\begin{smallmatrix} 0 \\ h \end{smallmatrix}} + \paren{\begin{smallmatrix} \hat{G}^+ \\ ( \hat{h} + i \hat{G}^0 ) / \sqrt 2 \end{smallmatrix}}$ in the full expression for the tree-level potential $U_0 = \lambda(H^\dag H)^2/\Omega^4 = \lambda(H^\dag H)^2/(1+2\xi H^\dag H / M_\text{Pl}^2)^2$ to quadratic order in the fields $\hat{h}$, $\hat{G}^+$, $\hat{G}^0$, where $h$ is the classical background value of the Higgs field $\hat{h}$~\cite{Sal13}.} \begin{alignat}{2} M_W^2 &= \frac{g^2 h^2}{4 \Omega^2}, &\qquad M_Z^2 &= \frac{\paren{g^2 + \gp{2}}h^2}{4\Omega^2}, \qquad M_t^2 = \frac{y_t^2 h^2}{2 \Omega^2}, \nonumber \\ \label{eq:massesI} M_h^2 &= \frac{3s\lambda h^2}{\Omega^4}\parenfrac{1-\xi h^2/M_\text{Pl}^2}{1+\xi h^2/M_\text{Pl}^2}, &\qquad M_G^2 &= \frac{\lambda h^2}{\Omega^4}. \end{alignat} Note that the particle masses $M_W^2$, $M_Z^2$, and $M_t^2$ in \eqref{eq:massesI} differ from the flat space results by the conformal factor $\Omega^2 = 1 + \xi h^2/M_\text{Pl}^2$ that appears in the denominator, while the physical Higgs mass $M_h^2$ and Goldstone boson mass $M_G^2$ contain additional factors. With the exception of the suppression factor $s = s(h)$ in the physical Higgs mass, the appearance of these additional factors in $M_h^2$ and $M_G^2$ is due to using the asymptotically flat tree-level potential~\eqref{eq:Utree} to determine particle masses rather than the Jordan frame potential~\eqref{eq:Vtree}. These additional factors lead to a suppression of the physical Higgs and Goldstone boson contributions to the effective potential (relative to those from $W$, $Z$, and $t$) during inflation for prescription~I, as found in~\cite{Bez08,Bez09a,Sal13}. The two-loop radiative corrections $U_2(\chi)$ can easily be found by using the modified particle masses \eqref{eq:massesI} in the two-loop SM result of~\cite{Deg12}, but due to the rather long and unenlightening form of this expression we do not reproduce it here. 
The RG-improved effective potential is then determined from $U_\text{eff}(\chi) = U_0 + U_1 + U_2$ in the usual way by using the RG equations from appendix~\ref{app:RGE} to run the couplings and making the replacement $h \rightarrow e^{\Gamma(\mu)} h$, where \begin{equation} \label{eq:anomalousdimension} \Gamma(\mu) = -\int_{M_t}^{\mu} \gamma(\mu^\prime) d \ln \mu^\prime \end{equation} and $\gamma = -d\ln h/d\ln\mu$ is the anomalous dimension of the Higgs field~\cite{For93}.\footnote{Note the difference in sign between the definition of $\gamma$ here and the definition of $\gamma$ in~\cite{Deg12}.} The effective Higgs quartic coupling $\lambda_\text{eff}(\mu)$ is then defined through \begin{equation} \label{eq:Ueff} U_\text{eff}(\chi) \equiv \frac{\lambda_\text{eff}(\mu)h^4}{4\Omega^4}, \end{equation} where all couplings in \eqref{eq:Ueff} are evaluated at some renormalization scale $\mu$. The dependence of the effective potential on the scale $\mu$ is spurious, but to minimize the logarithms from higher loop corrections it is appropriate to take $\mu = \kappa h / \Omega$ proportional to the background mass of a vector boson or top quark~\cite{Bez09b}. For simplicity, we choose the constant of proportionality to be $\kappa = 1$. For prescription~II, quantum corrections to the potential~\eqref{eq:Vtree} are computed in the Jordan frame before transforming to the Einstein frame. In this case, the one-loop radiative corrections to the effective potential take the form~\cite{Ler09} \begin{align} U_1(\chi) &= \frac{1}{16\pi^2\Omega^4} \left[ \frac{3 M_W^4}{2} \paren{\ln \frac{M_W^2}{\mu^2} - \frac{5}{6} } + \frac{3 M_Z^4}{4} \paren{\ln \frac{M_Z^2}{\mu^2} - \frac{5}{6} } - 3 M_t^4 \paren{ \ln \frac{M_t^2}{\mu^2} - \frac{3}{2} } \right. \nonumber \\ &\quad + \left. \frac{M_h^4}{4} \paren{\ln \frac{M_h^2}{\mu^2} - \frac{3}{2} } + \frac{3M_G^4}{4} \paren{\ln \frac{M_G^2}{\mu^2} - \frac{3}{2} } \right], \end{align} where the particle masses $M_W^2$, $M_Z^2$, $M_t^2$, $M_h^2$, and $M_G^2$ appear without the conformal factor $\Omega^2$ or additional factors in their denominators, \begin{alignat}{2} M_W^2 &= \frac{g^2 h^2}{4}, &\qquad M_Z^2 &= \frac{\paren{g^2 + \gp{2}}h^2}{4}, \qquad M_t^2 = \frac{y_t^2 h^2}{2}, \nonumber \\ \label{eq:massesII} M_h^2 &= 3s\lambda h^2, &\qquad M_G^2 &= \lambda h^2. \end{alignat} The two-loop radiative corrections $U_2(\chi)$ can be found by using the particle masses~\eqref{eq:massesII} in the two-loop SM result~\cite{Deg12} and dividing the expression by the conformal factor $\Omega^4$. The effective Higgs quartic coupling $\lambda_\text{eff}(\mu)$ is again defined through~\eqref{eq:Ueff}, but in this case taking the renormalization scale to be proportional to the background mass of a vector boson or top quark requires $\mu = \kappa h$. For simplicity, we again choose $\kappa = 1$. 
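To make the two prescriptions concrete, the sketch below evaluates the one-loop potential $U_0 + U_1$ in both cases with all couplings frozen at fixed, purely illustrative values (the numbers are placeholders, not fitted inputs); the actual analysis instead runs the couplings with the RG equations of appendix~\ref{app:RGE}, adds the two-loop piece $U_2$, and applies the RG improvement $h \rightarrow e^{\Gamma(\mu)}h$ described above. The real part of the potential is kept when $M_h^2 < 0$.
\begin{verbatim}
# One-loop effective potential for prescriptions I and II (sketch; M_Pl = 1).
import numpy as np

def s_factor(h, xi):
    # suppression factor s(h) of eq. (s)
    return (1 + xi*h**2)/(1 + (1 + 6*xi)*xi*h**2)

def U(h, lam=0.01, yt=0.5, g=0.52, gp=0.36, xi=1000.0, prescription=1):
    Om2 = 1 + xi*h**2
    if prescription == 1:   # Einstein-frame masses of eq. (massesI), mu = h/Omega
        mu2 = h**2/Om2
        MW2 = g**2*h**2/(4*Om2)
        MZ2 = (g**2 + gp**2)*h**2/(4*Om2)
        Mt2 = yt**2*h**2/(2*Om2)
        Mh2 = 3*s_factor(h, xi)*lam*h**2/Om2**2*(1 - xi*h**2)/(1 + xi*h**2)
        MG2, pref = lam*h**2/Om2**2, 1.0
    else:                   # Jordan-frame masses of eq. (massesII), mu = h,
        mu2 = h**2          # with an overall factor 1/Omega^4 on U_1
        MW2, MZ2, Mt2 = g**2*h**2/4, (g**2 + gp**2)*h**2/4, yt**2*h**2/2
        Mh2, MG2, pref = 3*s_factor(h, xi)*lam*h**2, lam*h**2, 1/Om2**2
    # Coleman-Weinberg term n M^4/(64 pi^2)(ln M^2/mu^2 - c); |M^2| keeps the
    # real part when M_h^2 < 0 in the inflationary region
    cw = lambda M2, n, c: n*M2**2*(np.log(abs(M2)/mu2) - c)/(64*np.pi**2)
    U1 = pref*(cw(MW2, 6, 5/6) + cw(MZ2, 3, 5/6) + cw(Mt2, -12, 3/2)
               + cw(Mh2, 1, 3/2) + cw(MG2, 3, 3/2))
    return lam*h**4/(4*Om2**2) + U1

h = 10.0/np.sqrt(1000.0)    # deep in the inflationary region, h >> M_Pl/sqrt(xi)
print(f"U_I = {U(h, prescription=1):.3e},  U_II = {U(h, prescription=2):.3e}")
\end{verbatim}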
In practice, the most significant difference between the effective potentials for the two renormalization prescriptions is the functional dependence $\mu = \mu(h)$.\footnote{The additional suppression of the physical Higgs and Goldstone boson masses for prescription~I is relatively minor since these masses, and hence their contributions to the effective potential, are small compared to $M_W$, $M_Z$, and $M_t$ for the small $\lambda$ in the inflationary region.} For prescription~I, $\mu = h / \Omega$ approaches a constant value in the inflationary region $h \gtrsim M_\text{Pl}/\sqrt \xi$ (and hence so do the couplings $g(\mu)$, $g^\prime(\mu)$, etc.\ in \eqref{eq:massesI}) while for prescription~II the renormalization scale $\mu = h$ does not. As a result, the effective potential for prescription~I approaches a constant value in the inflationary region (even after including radiative corrections) while the effective potential for prescription~II, due to the continued running of the couplings, does not. This difference, as we will see, can have a large impact on Higgs $\xi$-inflation and its predictions for small $\lambda_\text{eff}(\mu)$. \section{Numerical results} \label{sec:results} For a fixed Higgs mass $M_h$, top quark mass $M_t$, strong coupling $\alpha_s(M_Z)$, and non-minimal coupling $\xi_0$, it is straightforward to numerically solve the RG equations \eqref{eq:betalambda}--\eqref{eq:deltagamma} with initial conditions \eqref{eq:ICgp}--\eqref{eq:ICxi} and use the effective potential $U(\chi)$ (for either prescription~I or II) to compute the inflationary parameters. However, since the focus of this paper is on the region of parameter space with $\lambda_\text{eff}(\mu) \ll 1$, we instead trade the parameter $M_t$ for $\lambda_\text{eff}^\text{min} \equiv \min \{ \lambda_\text{eff}(\mu) \}$. Intuitively, this can be understood as adjusting the top quark mass $M_t$ to yield the desired $\lambda_\text{eff}^\text{min}$ for a fixed choice of $M_h$, $\alpha_s(M_Z)$, and $\xi_0$. Figure~\ref{fig:Mt} shows that the special region $\lambda_\text{eff}^\text{min} \simeq 0$ exists for a top quark mass $M_t \sim 171$~GeV, about 2--3$\sigma$ below its central value. Since values of $\lambda_\text{eff}^\text{min} \sim 0.01$ are typical within the experimental and theoretical uncertainty of the various parameters, a fine-tuning of some combination of parameters is necessary to achieve $0 < \lambda_\text{eff}^\text{min} \lesssim 0.01$.\footnote{In~\cite{Sha10} it is argued that a UV fixed point in an asymptotically safe theory of gravity may ensure very small values of $\lambda(\mu)$ near the Planck scale. In this case, fine-tuning may only be necessary for values of $\lambda_\text{eff}^\text{min}$ smaller than the typical size of the shift in $\lambda_\text{eff}(\mu)$ due to radiative corrections to the effective potential, $\delta \lambda_\text{eff}(\mu \sim M_\text{Pl}) \sim 4 \times 10^{-4}$.} Note that negative values of $\lambda_\text{eff}^\text{min}$, as well as sufficiently small positive values, cause the effective potential to develop a second minimum below the inflationary scale and hence spoil Higgs $\xi$-inflation. We therefore restrict ourselves to the region $0 < \lambda_\text{eff}^\text{min} \lesssim 0.01$ in which the effective potential is stable. \begin{figure} \begin{center} \includegraphics[scale=1]{Mt.pdf} \end{center} \caption{Values of $\lambda_\text{eff}^\text{min}$ as a function of $M_t$ for fixed $\alpha_s(M_Z) = 0.1184$ and $\xi_0 = 1000$.
The four solid (dashed) curves correspond to a Higgs mass $M_h$ of 124, 125, 126, and 127~GeV from bottom to top for renormalization prescription~I~(II). The vertical dashed and dotted lines give the central value and $\pm 2\sigma$ range for $M_t$~\cite{Deg12}. A shift in $\alpha_s(M_Z)$ of $\pm 1\sigma$ ($\pm 0.0007$) roughly corresponds to a shift in $M_h$ of $\pm 0.5$~GeV while changing $\xi_0$ by an order of magnitude has little effect.} \label{fig:Mt} \end{figure} The non-minimal coupling $\xi_0$ is not actually a free parameter, of course, but must be chosen to give the correct normalization of the CMB power spectrum (see section~\ref{sec:review}). For a fixed $M_h$ and $\alpha_s(M_Z)$, the procedure for determining the inflationary predictions for a particular choice of $\lambda_\text{eff}^\text{min}$ and renormalization prescription is as follows: \begin{enumerate} \item Choose a value of $\xi_0$. Adjust the top quark mass $M_t$ to give the desired value of $\lambda_\text{eff}^\text{min}$ when solving the RG equations. For $\lambda_\text{eff}^\text{min} \lesssim 0.01$, this may involve fine-tuning $M_t$. \item Use the effective potential $U(\chi)$ (for prescription~I or II) to compute the inflationary parameters and determine $U(h_0)/\epsilon(h_0)$ at a field value $h_0$ corresponding to $N = 59$ e-folds before the end of inflation. \item Repeat the steps above for different values of $\xi_0$ until the correct normalization $U/\epsilon \simeq \paren{0.0274M_\text{Pl}}^4$ is achieved.\footnote{Note that prescription~II with sufficiently small $\lambda_\text{eff}^\text{min}$ can have two solutions for $\xi_0$. The large $\xi_0$ solution, however, predicts $n_s > 1.02$ and is therefore inconsistent with observations~\cite{WMA12,Pla13}.} \item Compute the inflationary predictions for the spectral index $n_s$, the tensor-to-scalar ratio $r$, and the running of the spectral index $dn_s/d\ln k$. \end{enumerate} We discuss the numerical results for prescriptions~I and II separately. \subsection{Inflationary predictions for prescription I} For prescription~I, the results for $\xi_0$ and the inflationary predictions for $n_s$ and $r$ (as a function of $\lambda_\text{eff}^\text{min}$) are presented in figure~\ref{fig:resultsI}. The running of the spectral index $dn_s/d\ln k$ always remains small, within the range $(5.0\text{--}5.6) \times 10^{-4}$. \begin{figure} \begin{center} \includegraphics[scale=1]{xiI.pdf} \includegraphics[scale=1]{nsrI.pdf} \end{center} \caption{Numerical results for the non-minimal coupling $\xi_0$ and inflationary predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ as a function of $\lambda_\text{eff}^\text{min}$ for prescription~I. The four solid curves correspond to a Higgs mass $M_h$ of 124, 125, 126, and 127~GeV from left to right while the dashed lines give the tree-level predictions. A shift in $\alpha_s(M_Z)$ of $\pm 2\sigma$ ($\pm 0.0014$) roughly corresponds to a shift in $M_h$ of $\mp 0.5$~GeV. Changing the number of e-folds from $N = 59$ to 62 shifts the tree-level predictions by a small (calculable) amount but does not change the qualitative behaviour of the curves about the tree-level predictions.} \label{fig:resultsI} \end{figure} Let us first discuss the non-minimal coupling $\xi_0$. 
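It is instructive to see the shooting procedure above in its simplest form first. The sketch below implements steps 1--4 at tree level, where $\lambda_\text{eff}^\text{min}$ reduces to a constant $\lambda$ and everything follows from eqs.~\eqref{eq:epsilon}, \eqref{eq:eta}, \eqref{eq:N}, and \eqref{eq:Utree} (an illustration only; the actual analysis uses the running couplings and the loop-corrected potential).
\begin{verbatim}
# Tree-level analogue of the shooting procedure: scan xi until U/epsilon at
# N = 59 matches the WMAP9 normalization.  Units M_Pl = 1; lambda constant.
import numpy as np
from scipy.optimize import brentq

N_TARGET, NORM = 59.0, 0.0274**4          # U/eps = (0.0274 M_Pl)^4

def observables(lam, xi):
    h_end = (4.0/3.0)**0.25/np.sqrt(xi)   # epsilon ~ 1, end of slow roll
    # invert eq. (N) for the field value h0 at N = 59 e-folds
    efolds = lambda h0: 0.75*(xi*(h0**2 - h_end**2)
                              + np.log((1 + xi*h_end**2)/(1 + xi*h0**2)))
    h0 = brentq(lambda h: efolds(h) - N_TARGET, h_end, 100/np.sqrt(xi))
    Om2 = 1 + xi*h0**2
    U = lam*h0**4/(4*Om2**2)              # tree-level potential, eq. (Utree)
    eps = 4.0/(3*xi**2*h0**4)             # eq. (epsilon)
    eta = eps*(1 - xi*h0**2)              # eq. (eta)
    return U/eps, 1 - 6*eps + 2*eta, 16*eps

lam = 0.13                                # tree-level quartic coupling
xi = brentq(lambda x: observables(lam, x)[0] - NORM, 1e2, 1e6)
_, ns, r = observables(lam, xi)
print(f"xi = {xi:.0f}, n_s = {ns:.4f}, r = {r:.4f}")
# -> xi ~ 17000, n_s ~ 0.967, r ~ 0.0031, reproducing eq. (xi)
\end{verbatim}
This reproduces the tree-level values of section~\ref{sec:review} and provides the baseline against which the loop-level results below should be compared.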
Figure~\ref{fig:resultsI} shows that the value of $\xi_0$ required for the CMB normalization deviates from the tree-level estimate $\xi_0 \simeq 48000\sqrt{\lambda_\text{eff}^\text{min}}$ as $\lambda_\text{eff}^\text{min}$ decreases below about $10^{-4}$. In particular, $\xi_0$ reaches a minimum value of $\xi_0 \sim 400$ at $\lambda_\text{eff}^\text{min} \sim 10^{-4.4}$ and then begins to increase. This behaviour can be traced to the rapid decrease in $\epsilon$ (and hence the tensor-to-scalar ratio $r = 16\epsilon$) over this range, which causes $U/\epsilon$ to increase despite smaller values of $\lambda_\text{eff}^\text{min}$. A larger non-minimal coupling $\xi_0$ is therefore required to give the correct CMB normalization. This result demonstrates that the sharp decrease in $\xi_0$ seen in~\cite{Bez09a} and figure~4 of~\cite{Bez09b} for prescription~I does not continue indefinitely but only allows $\xi_0$ as small as about 400. The violation of perturbative unitarity at the scale $M_\text{Pl}/\xi_0 \ll M_\text{Pl} / \sqrt{\xi_0}$ therefore remains a problem for Higgs $\xi$-inflation in the small $\lambda_\text{eff}^\text{min}$ region. For sufficiently small $\lambda_\text{eff}^\text{min}$ (e.g.\ $\lesssim 10^{-4.6}$), no solutions for $\xi_0$ are possible since the effective potential develops a second minimum and hence spoils the Higgs $\xi$-inflation scenario.\footnote{In this case, the effective potential rises to a local maximum and then decreases slowly to a constant value as $h \rightarrow \infty$. The shape of the potential may be suitable for a sort of false vacuum inflation in which the Higgs field can start with any value $h \gtrsim M_\text{Pl}/\sqrt{\xi}$, but an analysis of this case goes beyond the scope of this paper.} Figure~\ref{fig:resultsI} also shows small deviations in the inflationary predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$. As $\lambda_\text{eff}^\text{min}$ decreases below about $10^{-3.5}$, the spectral index rises to about 0.970 from its tree-level prediction of 0.967 before decreasing rapidly, while the tensor-to-scalar ratio drops quickly below its tree-level prediction of 0.0031. Although a similarly rapid change in $n_s$ and $r$ can be seen in~\cite{Bez09a,Bez09b} as the Higgs mass approaches values corresponding to $\lambda_\text{eff}^\text{min} \simeq 0$, the results presented here (as a function of $\lambda_\text{eff}^\text{min}$) provide a much clearer picture of Higgs $\xi$-inflation in this now experimentally favoured region. From a practical point of view, we see that the deviations of $n_s$ and $r$ from the tree-level predictions are sufficiently small that they would be difficult to distinguish from the tree-level results observationally. Consequently, for all allowed values $10^{-4.6} \lesssim \lambda_\text{eff}^\text{min} \lesssim 10^{-2}$, the predictions for $n_s$ and $r$ are well within the current 1$\sigma$ limits~\cite{WMA12,Pla13}. The small prediction for $dn_s/d\ln k \sim 5 \times 10^{-4}$ is also consistent with observations at the 1--2$\sigma$ level~\cite{WMA12,Pla13}. \subsection{Inflationary predictions for prescription II} For prescription~II, there are two disjoint regions of $\lambda_\text{eff}^\text{min}$ that can lead to acceptable inflation: one with larger values $\lambda_\text{eff}^\text{min} \gtrsim 10^{-3.3}\text{--}10^{-2.3}$ (depending on $M_h$) and one with smaller values $\lambda_\text{eff}^\text{min} \sim 10^{-4}$. 
The results for $\xi_0$ and the inflationary predictions for $n_s$ and $r$ are quite different for these two regions and are presented in figures~\ref{fig:resultsII} and \ref{fig:lowII}, respectively. \begin{figure} \begin{center} \includegraphics[scale=1]{xiII.pdf} \includegraphics[scale=1]{nsrII.pdf} \end{center} \caption{Numerical results for the non-minimal coupling $\xi_0$ and inflationary predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ in the larger $\lambda_\text{eff}^\text{min}$ region for prescription~II. The four solid curves correspond to a Higgs mass $M_h$ of 124, 125, 126, and 127~GeV from left to right while the dashed lines give the tree-level predictions. A shift in $\alpha_s(M_Z)$ of $\pm 2\sigma$ ($\pm 0.0014$) roughly corresponds to a shift in $M_h$ of $\mp 0.5$~GeV. Changing the number of e-folds from $N = 59$ to 62 shifts the tree-level predictions by a small (calculable) amount but does not change the qualitative behaviour of the curves about the tree-level predictions.} \label{fig:resultsII} \end{figure} Let us first consider the region of larger values of $\lambda_\text{eff}^\text{min}$, which is the only one that has been considered previously in the literature~\cite{DeS09,Bez09a,Bez09b}. Figure~\ref{fig:resultsII} shows that the required value of $\xi_0$ in this region behaves similarly to that of prescription~I except that the minimum value of $\xi_0$ --- if it can be reached without the potential developing a second minimum --- occurs at larger $\lambda_\text{eff}^\text{min}$ (i.e.\ $\lambda_\text{eff}^\text{min} \gtrsim 10^{-3}$). This difference is due to the stronger effect of the running of $\lambda_\text{eff}(\mu)$ for prescription~II. Specifically, the running of $\lambda_\text{eff}(\mu)$ to its minimum value overcomes the flattening of the potential in the inflationary region more quickly than for prescription~I, and hence causes the effective potential to develop a second minimum for more moderate values of $\lambda_\text{eff}^\text{min}$. As a result, a non-minimal coupling only as small as $\xi_0 \sim 2000$--4000 (depending on $M_h$) is allowed for this region of $\lambda_\text{eff}^\text{min}$. Similar lower limits for $\xi_0$, as well as the qualitative rise in $\xi_0$ as $\lambda_\text{eff}^\text{min}$ decreases, have been found in~\cite{Bez09a,Bez09b} for prescription~II. Again, these values of $\xi_0$ are not small enough to prevent the perturbative unitarity violation at $M_\text{Pl}/\xi_0$ from occurring well below the inflationary scale. Figure~\ref{fig:resultsII} also shows that the predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ \emph{decrease} from their tree-level values as $\lambda_\text{eff}^\text{min} \rightarrow 0$. The decrease observed here (similar to prescription~I) is consistent with the results of~\cite{Bez09a,Bez09b} rather than with the increase observed in~\cite{DeS09}. Also note that the variation in $n_s$ over the allowed range of $\lambda_\text{eff}^\text{min}$ is larger for prescription~II than for prescription~I. Since a deviation from the tree-level prediction of $\Delta n_s \gtrsim 0.01$ should be visible by {\it Planck}~\cite{Bur10b}, it may therefore be possible to connect a measurement of the spectral index with the RG evolution of $\lambda_\text{eff}(\mu)$ near the Planck scale for prescription~II. The running of the spectral index $dn_s/d\ln k$ always remains quite small, within the range $(4.5\text{--}6.4) \times 10^{-4}$. 
While the results of the larger $\lambda_\text{eff}^\text{min}$ region for prescription~II are qualitatively similar to those for prescription~I, prescription~II also allows a region of smaller $\lambda_\text{eff}^\text{min}$ and $\xi_0$ with distinct inflationary predictions. The existence of this region, which has not been considered in the literature before, can be understood as follows. For typical Higgs $\xi$-inflation with large $\lambda_\text{eff}^\text{min}$, the slow roll parameter $\epsilon$ decreases rapidly in the inflationary region (see eq.~\eqref{eq:epsilon}) and the required $N = 59$ e-folds of inflation are produced quickly (see eq.~\eqref{eq:N}). For smaller $\lambda_\text{eff}^\text{min}$ and $\xi_0$, however, there is a region of parameter space in which the running of $\lambda_\text{eff}(\mu)$ causes $\epsilon$ to increase before the $N=59$ e-folds are reached. The inflationary observables are then computed at a field value $h_0$ in a qualitatively different region of parameter space with larger $\epsilon$, leading to distinct predictions. \begin{figure} \begin{center} \includegraphics[scale=1]{pred.pdf} \end{center} \caption{Predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ for Higgs $\xi$-inflation with prescription~II and $\lambda_\text{eff}^\text{min} \sim 10^{-4}$. The solid blue (brown) curve gives the results for a Higgs mass $M_h = 124$~GeV (124.5~GeV) while the lower and upper shaded regions correspond to a shift in $\alpha_s(M_Z) = 0.1184$ of up to $\pm 2\sigma$ ($\pm 0.0014$), respectively. The marked points along the solid curves indicate values of $(\lambda_\text{eff}^\text{min},\xi_0)$. Results are shown with the marginalized joint 68\% and 95\% confidence level regions from {\it Planck} 2013~\cite{Pla13}.} \label{fig:lowII} \end{figure} Figure~\ref{fig:lowII} gives the numerical results for $\xi_0$ and the inflationary predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ in this region. The results are shown together with the most recent constraints from {\it Planck}~\cite{Pla13}. Since many well-motivated models with Higgs $\xi$-inflation contain additional degrees of freedom that can contribute to the effective number of neutrino species $N_\text{eff}$ (e.g.\ the $\nu$MSM~\cite{Asa05,Sha05} with 3 light sterile neutrinos), it is most appropriate to compare the results with the $\Lambda$CDM+$r$+$N_\text{eff}$ data. It can be seen that, for $\lambda_\text{eff}^\text{min} \sim 10^{-3.9}$ and $M_h \simeq 124$~GeV, there is a region of Higgs $\xi$-inflation that is consistent with {\it Planck} at the 2--3$\sigma$ level.\footnote{Recall that even with the Higgs mass measurement of $M_h = 125.7 \pm 0.4$~GeV~\cite{Baa13}, using $M_h \simeq 124$~GeV in the RG evolution is still quite reasonable due to the theoretical uncertainty in determining $\lambda$ at the electroweak scale.} This region, though marginally disfavoured, is important for two reasons. First, unlike for the larger $\lambda_\text{eff}^\text{min}$ region, the tensor-to-scalar ratio $r \gtrsim 0.15$ in this region is quite large and would be visible by {\it Planck}~\cite{Bur10b}. It is therefore possible that the tensor modes from Higgs $\xi$-inflation could be detected in the {\it Planck} polarization data.\footnote{The running of the spectral index in this region is also much larger (and negative) than typically found in Higgs $\xi$-inflation: $dn_s/d\ln k \sim -0.002$ and $-0.008$ for $M_h = 124$ and 124.5~GeV, respectively.
A large running of the spectral index relaxes the constraints on $r$ from {\it Planck}~\cite{Pla13} and could open up even more of the small $\lambda_\text{eff}^\text{min}$ region for detection.} Second, the non-minimal coupling $\xi_0 \sim 90$ required in this region is about an order of magnitude smaller than previously considered in the literature. Although still not small enough to address the problem of perturbative unitarity violation occurring below the inflationary scale, it provides the lower limit on $\xi_0$ that is acceptable for Higgs $\xi$-inflation. Smaller non-minimal couplings, including $\xi_0 \sim 1$, seem generally unattainable because they require $\lambda_\text{eff}^\text{min} \lesssim 10^{-6}$ to give the correct CMB normalization, which ultimately causes the effective potential to develop a second minimum before the inflationary scale. \section{Conclusions} \label{sec:conclusions} Higgs $\xi$-inflation is an attractive model of inflation since it does not require scalar degrees of freedom in addition to those of the SM. For a large non-minimal coupling $\xi$, however, the violation of perturbative unitarity at the scale $M_\text{Pl}/\xi \ll M_\text{Pl}$ threatens the self-consistency of the model in the inflationary region. In this paper we have investigated the possibility that a Higgs mass $M_h \simeq 125$--126~GeV --- a mass for which the effective Higgs quartic coupling $\lambda_\text{eff}(\mu)$ runs to very small values near the Planck scale --- may significantly reduce the size of $\xi$ required for inflation and address the perturbative unitarity violation problem. This possibility, like the Higgs $\xi$-inflation scenario in general, requires a top quark mass $M_t \sim 171$~GeV, about 2$\sigma$ below its central value. To investigate this possibility we have updated the two-loop analysis of Higgs $\xi$-inflation to include the three-loop SM beta functions for the gauge couplings as well as the leading three-loop terms for the RG evolution of $\lambda$, the top Yukawa coupling $y_t$, and the Higgs anomalous dimension $\gamma$. We have also included, for the first time, a complete two-loop insertion of suppression factors for the physical Higgs loops in the RG equations. The two-loop SM effective potential with particle masses modified appropriately for Higgs $\xi$-inflation has been used to match the level of the RG equations. We have found that successful inflation in the region $\lambda_\text{eff}(\mu) \ll 1$ requires smaller $\xi$ than previously considered in the literature, but even with a fine-tuning of parameters to give arbitrarily small $\lambda_\text{eff}^\text{min}$ it is not possible to achieve $\xi \sim 1$ and prevent the violation of perturbative unitarity below the inflationary scale. Specifically, we have found that the Einstein frame renormalization prescription (prescription~I) allows a non-minimal coupling as small as $\xi \sim 400$ for $\lambda_\text{eff}^\text{min} \sim 10^{-4.4}$ without the potential developing a second minimum and hence spoiling inflation. The predictions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$ remain close to their tree-level values in this case and are within the 1$\sigma$ allowed region from CMB measurements. For the Jordan frame renormalization prescription (prescription~II), there are two distinct regions of $\lambda_\text{eff}^\text{min}$ that can lead to successful inflation. 
The larger $\lambda_\text{eff}^\text{min}$ region behaves similarly to prescription~I and allows a non-minimal coupling as small as $\xi \sim 2000$ without the potential developing a second minimum. The smaller $\lambda_\text{eff}^\text{min}$ region, which has not been considered in the literature before, requires $\xi \sim 90$ and predicts an observable tensor-to-scalar ratio $r \gtrsim 0.15$ for $\lambda_\text{eff}^\text{min} \sim 10^{-3.9}$. Smaller non-minimal couplings, including $\xi \sim 1$, seem generally unattainable since they require $\lambda_\text{eff}^\text{min} \lesssim 10^{-6}$ to give the correct CMB normalization, which ultimately causes the effective potential to develop a second minimum before the inflationary scale. \acknowledgments I am grateful to Graham Ross for much valuable input and to Subir Sarkar for helpful discussions. I would also like to thank Andrei Linde, Mikhail Shaposhnikov, and Alberto Salvio for valuable correspondence. This work was supported by the European Commission under the Marie Curie Initial Training Network UNILHC 237920 (Unification in the LHC era). Contents reflect only the author's views and not the views of the European Commission.
\subsection{Introduction} In the physics of elementary particles we observe a wide separation of scales, such that at each scale a finite set of operators effectively describes the physical processes, but often the microscopic origin of the scale separation remains unexplained. A well-known example is the electroweak theory, which describes electroweak processes extremely well, with no sign of deviation at colliders. The origin of the electroweak scale, namely the vacuum expectation value of the Higgs field, $v_{\rm ew}=246~{\rm GeV}$, is, however, yet to be uncovered. One of the attractive ideas to explain the vast hierarchy of scales in quantum field theory is to generate it dynamically. A prominent example is quantum chromodynamics (QCD). Classically, QCD is scale-invariant, but quantum effects dynamically create the scale $\Lambda_{\rm QCD}$ by the so-called dimensional transmutation of the running coupling; this scale is determined experimentally to be about $220~{\rm MeV}$. Furthermore, in the infrared (IR) the gauge coupling becomes so strong that QCD undergoes a confinement phase transition, generating the vacuum energy ${\cal E}_{\rm vac}\sim -\Lambda_{\rm QCD}^4$~\cite{Campostrini:1989uj}. The salient feature of the spontaneous generation of vacuum energy in a (quasi) scale-invariant theory is the appearance of a light dilaton in the low-energy spectrum~\cite{Ellis:1984jv,Yamawaki:1985zg,Bando:1986bg}, namely the Nambu-Goldstone (NG) boson associated with the spontaneous breaking of scale symmetry. Recently a model of near conformal dynamics that generates a large separation of scales has been proposed to explain naturally the light Higgs and, at the same time, the dark matter as a very light dilaton~\cite{Hong:2017smd}. If near conformal dynamics is responsible for the light Higgs, there will be other light states in addition to the dilaton, which might exhibit interesting signatures at colliders~\cite{Davoudiasl:2009cd,Agashe:2020wph}. A recent lattice study shows that the ground-state glueball behaves like a dilaton, exhibiting a universal scaling law in confining gauge theories~\cite{Hong:2017suj}. Lattice studies further show that the next-lightest state in pure Yang-Mills (YM) theories is the spin-2 glueball, $2^{++}$, whose mass may be related to that of the spin-0 ground-state glueball, or the dilaton, in a universal way for all gauge groups~\cite{Bennett:2020hqd}. In this paper, we calculate the one-loop scattering amplitudes of dilatons and show how the violation of perturbative unitarity in the scattering amplitude is postponed by an additional massive spin-2 particle until the energy approaches the mass of the spin-2 state. We find that the unitarity argument requires the mass of the new, heavier spin-2 state to be of the order of the dilaton decay constant, which is consistent with the universal behavior of the mass predicted in~\cite{Bennett:2020hqd}. The mass ratio between the two lowest-lying states hence measures the degree of conformality of their microscopic theory. \subsection{Dilaton scattering amplitudes} When the scale symmetry is spontaneously broken, as in the confining phase of YM theories, the dilatation current creates the dilaton, $\sigma$, out of the vacuum by the Goldstone theorem: \begin{equation} \left<0\right|{\cal D}_{\mu}\left|\sigma(p)\right>=-if_Dp_{\mu}e^{-ip\cdot x}\,, \end{equation} where $f_D$ is the dilaton decay constant and ${\cal D}_{\mu}$ is the dilatation current.
Because of the scale anomaly, the dilatation current is not conserved, \begin{equation} \partial^{\mu}\left<{\cal D}_{\mu}\right>=\left<T_{\mu}^{\mu}\right>=4\,{\cal E}_{\rm vac}\,, \label{vac} \end{equation} generating a mass, and more generally a potential, for the dilaton. The low energy effective Lagrangian for the dilaton that is consistent with the (anomalous) scale symmetry can be written, keeping only the terms with the fewest derivatives, as \begin{equation} {\cal L}_{\rm eff}=\frac12\partial_{\mu}\chi\partial^{\mu}\chi-V_A(\chi)\,, \label{eff} \end{equation} where $\chi=f_De^{\sigma/f_D}$ describes the small fluctuations of the order parameter of the scale symmetry, namely the trace of the energy-momentum tensor, around the vacuum defined in~Eq.\,(\ref{vac}), \begin{equation} T_{\mu}^{\mu}\approx 4\,{\cal E}_{\rm vac}\left(\frac{\chi}{f_D}\right)^4\,. \end{equation} Being a NG boson, the dilaton transforms nonlinearly under the scale transformation $x\to x^{\prime}=e^{\alpha}x$, \begin{equation} \sigma\to \sigma+\alpha\,f_D\,. \end{equation} The dilaton potential, which is not invariant under the scale transformation, is generated by the scale anomaly and also by other possible explicit symmetry-breaking terms in the original theory; we ignore the latter in the present discussion. Matching the scale anomaly, Eq.\,(\ref{vac}), one finds the dilaton potential to be~\cite{Migdal:1982jp} \begin{equation} V_A(\chi)=\left|{\cal E}_{\rm vac}\right|\left(\frac{\chi}{f_D}\right)^4\left[4 \ln\left(\frac{\chi}{f_D}\right)-1\right]\,. \end{equation} Expanding the effective Lagrangian,~Eq.\,(\ref{eff}), in powers of $\sigma/f_D$, one therefore gets \begin{equation} {\cal L}_{\rm eff}=\frac12\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac12m_D^2\sigma^2+\frac{\sigma}{f_D}\left(\partial_{\mu}\sigma\right)^2-\frac{4m_D^2}{3f_D}\sigma^3+\frac{\sigma^2}{f_D^2}\left(\partial_{\mu}\sigma\right)^2-2\frac{m_D^2}{f_D^2}\sigma^4+\cdots\,, \end{equation} where the ellipsis denotes higher order terms and the dilaton mass is $m_D^2=16\left|{\cal E}_{\rm vac}\right|/f_D^2$, as given by the partially conserved dilatation current (PCDC) relation~\cite{Choi:2012kx}. By construction, the effective Lagrangian works well at low energy, consistent with the current algebra of the microscopic theory~\cite{Coleman:1969sm,Callan:1969sn}. As the energy increases, however, it approaches a cutoff above which the effective theory is no longer valid. Beyond the cutoff the theory exhibits unphysical behaviors, such as the violation of unitarity in scattering amplitudes, signaling the existence of a new state. To see this, let us consider the $2\to2$ dilaton scattering process \begin{equation} \sigma(p_1)+\sigma(p_2)\to \sigma(p_3)+\sigma(p_4)\,. \end{equation} \begin{figure}[t] \centering \begin{minipage}[c]{\linewidth} \centering \includegraphics[scale=0.33]{d_tree} \end{minipage} \caption{Tree level diagrams for the $2\to2$ scattering amplitude of dilatons, denoted by broken lines: (a) $s$-channel, (b) $t$-channel, (c) $u$-channel and (d) the contact term. } \label{tree} \end{figure} At the tree level, shown in Fig.~\ref{tree}, the amplitude becomes \begin{equation} {\cal A}_{\sigma}^{\rm tree}=-\frac{1}{f_D^2}\left[\frac{(s-6m_D^2)^2}{s-m_D^2}+\frac{(t-6m_D^2)^2}{t-m_D^2}+\frac{(u-6m_D^2)^2}{u-m_D^2} \right]-\frac{48 m_D^2}{f_D^2}\,, \end{equation} where the Mandelstam variables are $s=(p_1+p_2)^2$, $t=(p_3-p_1)^2$, and $u=(p_4-p_1)^2$.
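Each pole term can be reduced with the exact partial-fraction identity (for $x = s, t, u$) \begin{equation} \frac{(x-6m_D^2)^2}{x-m_D^2} = x - 11m_D^2 + \frac{25m_D^4}{x-m_D^2}\,, \end{equation} so that the sum over the three channels equals $s+t+u-33m_D^2$ up to terms of order $m_D^4/x$; this is the small algebraic step behind the high-energy limit quoted next.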
For high energy, $s,t,u\gg m_D^2$, the tree amplitude is found to be well behaved, since $s+t+u=4m_D^2$; \begin{equation} {\cal A}_{\sigma}^{\rm tree}\approx-\frac{1}{f_D^2}\left(s+t+u+15m_D^2\right)=-\frac{19m_D^2}{f_D^2}\,. \end{equation} \begin{figure}[H] \centering \begin{minipage}[c]{\linewidth} \centering \includegraphics[scale=0.33]{d_one-loop} \end{minipage} \caption{$s$-channel one-loop amplitudes: (a) box diagram, (b) fish diagram. The $t$-channel and $u$-channel amplitudes are obtained by swapping $p_3\leftrightarrow p_2$ and $p_3\leftrightarrow p_4$ for each diagram. } \label{box} \end{figure} Because of the symmetry under the exchange of the external dilatons, the leading non-trivial amplitude comes from one loop, which is also consistent with the scale anomaly argument in~\cite{Komargodski:2011vj}. The one-loop box diagram from Fig.~\ref{box}\,(a), added together with its counterparts in the $t$ and $u$ channels, grows quadratically at high energy; in the minimal subtraction scheme we find \begin{equation} {\cal A}_{\sigma}^{\rm box}=\frac{1}{32\pi^2f_D^4}\left(s^2+t^2+u^2\right)\left[2+\ln\left(\frac{4\pi^2\mu^2}{m_D^2}\right)\right]+{\cal O}\left(\frac{m_D^4}{f_D^4}\right)\,, \label{eq:box} \end{equation} while the fish diagram in Fig.~\ref{box}\,(b), summed together with the other two channels, turns out to be finite, \begin{equation} {\cal A}_{\sigma}^{\rm fish}=-\frac{26m_D^4}{f_D^4\pi^2}\cdot\ln\left(\frac{4\pi^2\mu^2}{m_D^2}\right)\,. \end{equation} \subsection{The spin-2 glueballs} In pure YM theory, the lightest state is known to be the spin-0 glueball, $0^{++}$, corresponding to small fluctuations of the trace of the energy-momentum tensor of YM theory. Since the (anomalous) scale symmetry is spontaneously broken in the confined phase, the lightest $0^{++}$ state may be identified as the dilaton of pure YM theory~\cite{Hong:2017suj}. The lattice study further shows that the next-to-lightest state is the spin-2 state $2^{++}$, with a possibly universal ratio of its mass to that of the lightest glueball~\cite{Bennett:2020hqd}. Therefore, as the energy increases, the dilaton-dilaton scattering process will produce the spin-2 state, which may tame the bad high-energy behavior of the scattering amplitude. Since the coupling of the spin-2 glueball to dilatons should, for consistency, preserve diffeomorphism invariance, the spin-2 state couples to the energy-momentum tensor. The interaction Lagrangian density is therefore given at leading order in perturbation theory as \begin{equation} {\cal L}_{\rm int}=-\kappa \,h_{\mu\nu}T_D^{\mu\nu}\,, \end{equation} where $\kappa$ is the universal coupling of the spin-2 state and $T_D^{\mu\nu}$ denotes the dilaton energy-momentum tensor. The propagator of the massive spin-2 field with mass $m_G$ is given as \begin{equation} \int_xe^{ip\cdot x}\left<0\right|T\left\{h_{\mu\nu}(x)h_{\alpha\beta}(0)\right\}\left|0\right>=\frac{iP_{\mu\nu\alpha\beta}}{p^2-m_G^2+im_G\Gamma_G+i\epsilon}\,, \end{equation} where $2P_{\mu\nu\alpha\beta}={\tilde\eta}_{\mu\alpha}{\tilde\eta}_{\nu\beta}+{\tilde\eta}_{\mu\beta}{\tilde\eta}_{\nu\alpha}-\frac{2}{3}{\tilde\eta}_{\mu\nu}{\tilde\eta}_{\alpha\beta}$ with ${\tilde\eta}_{\mu\nu}=\eta_{\mu\nu}-p_{\mu}p_{\nu}/m_G^2$ and $\eta_{\mu\nu}$ is the Minkowski metric. The decay width of the massive spin-2 state is given approximately at leading order in $\kappa$ as~\cite{Agashe:2020wph} \begin{equation} \Gamma_G\sim \kappa^2\frac{m_G^3}{960\pi}\,.
\end{equation} The decay width is quite narrow, and negligible even for a rather strong coupling $\kappa\, m_G\sim 1$. With the massive spin-2 glueball among the intermediate states, the dilaton scattering amplitude is modified. The tree-level amplitude receives extra contributions mediated by the spin-2 glueball (Fig.~\ref{graviton}), which we find, neglecting the decay width, to be \begin{equation} {\cal A}_{\sigma}^{G}=-\frac{\kappa^2}{2}\left[\frac{-\frac23s^2+t^2+u^2}{s-m_G^2+i\epsilon}+\frac{s^2-\frac23t^2+u^2}{t-m_G^2+i\epsilon}+\frac{s^2+t^2-\frac23u^2}{u-m_G^2+i\epsilon}\right]\,. \end{equation} \begin{figure}[t] \centering \begin{minipage}[c]{\linewidth} \centering \includegraphics[scale=0.33]{d_graviton} \end{minipage} \caption{Tree level diagrams for $2\to2$ scattering, mediated by the spin-2 glueball, denoted as spring-like lines: (a) $s$-channel, (b) $t$-channel, (c) $u$-channel. } \label{graviton} \end{figure} Combining all the diagrams (tree, one-loop, and spin-2 mediated), the dilaton scattering amplitude for $s^2\gg t^2,u^2$ becomes, taking $m_D\approx0$ and $\Gamma_G\approx0$, \begin{equation} {\cal A}_{\sigma}\approx\frac{s^2}{16\pi^2f_D^4}+\frac{\kappa^2}{3}\frac{s^2}{s-m_G^2}\,, \end{equation} where we have absorbed into $f_D$ the logarithmic correction in Eq.~(\ref{eq:box}). \begin{figure}[t] \centering \begin{minipage}[c]{\linewidth} \centering \includegraphics[scale=0.36]{unitarity} \end{minipage} \caption{The unitarity bound, $\left|{\cal A}_{\sigma}\right|\le1$, for the $2\to2$ scattering amplitude of dilatons as a function of $s$ for $s^2\gg t^2,u^2$: each curve corresponds to the inclusion of an intermediate spin-2 state of mass $m_G^2=4\pi\eta f_D^2$ for $\eta=1.6, 2, 2.5, \infty$, with the coupling $\kappa^2 =0.5/m_G^{2}$ as an example. $\eta\to\infty$ corresponds to the decoupling limit.} \label{unitarity} \end{figure} We plot the amplitude in Fig.~\ref{unitarity}. We see that perturbative unitarity is violated at $s=4\pi f_D^2$ in $2\to2$ dilaton scattering if the spin-2 state is not included. Its inclusion, however, ameliorates the violation.\footnote{In the case of $\pi\pi$ scattering, the inclusion of the spin-1 state, namely the $\rho$ meson, similarly improves the unitarity~\cite{Sannino:1995ik}.} Since the microscopic theory is unitary, the next-lightest states should appear near the cutoff of the effective theory to restore unitarity. As shown in Fig.~\ref{unitarity}, the unitarity behavior is indeed improved beyond the cutoff when an intermediate spin-2 state of mass around $\sqrt{4\pi}f_D$ is included in the scattering process. \subsection{Universal mass ratio} We have seen that a spin-2 state of mass around the cutoff scale, $m_G\sim\sqrt{4\pi}f_D$, should appear in order to cure the unitarity violation in the dilaton effective theory. The unitarity argument then suggests the mass ratio to be \begin{equation} R\equiv \frac{m_{G}}{m_{D}}\sim \frac{f_D^2}{\left|{\cal E}_{\rm vac}\right|^{1/2}}\,. \end{equation} The vacuum energy density, ${\cal E}_{\rm vac}$, measures the size of the scale-symmetry breaking and defines the infrared (IR) scale of the theory, $\Lambda_{\rm IR}$, while $f_D$ defines the scale, $\Lambda_{\rm SB}$, at which the scale symmetry is spontaneously broken\,\cite{Hashimoto:2010nw}.\footnote{The PCDC relation, $f_D^2m_{D}^2=-16\,{\cal E}_{\rm vac}$, associates the dilaton mass with the vacuum energy, the explicit scale-symmetry breaking term.
When the explicit-breaking term becomes vanishingly small, the ground state is almost degenerate along the scale transformation and the dilaton becomes almost massless. The dilaton decay constant, however, should remain finite in that limit, which shows that $f_D$ and ${\cal E}_{\rm vac}$ are two independent quantities.} In pure YM theory the two scales are expected to be close to each other because the theory admits only a single scale, uniquely given by the renormalization group equation. Indeed, the lattice study shows that $R\simeq 1.4$ for pure YM theory in $3+1$ dimensions, while models of its gravity dual give $\sqrt{2}\lesssim R\lesssim 1.74$~\cite{Bennett:2020hqd}. In general, however, these two scales do not have to be of the same order. In fact, in theories of near conformal dynamics they are predicted to be widely separated, following the Miransky or Berezinskii-Kosterlitz-Thouless (BKT) scaling \begin{equation} \Lambda_{\rm IR}=\Lambda_{\rm SB}\exp\left(-\frac{c}{\sqrt{\alpha_*/\alpha_c-1}}\right)\,, \end{equation} where $c$ is an ${\cal O}(1)$ constant, $\alpha_*$ is a parameter of the theory, and $\alpha_c$ is the critical point of the phase transition. In the case of the Banks-Zaks theory~\cite{Banks:1981nn}, $\alpha_*$ is the would-be IR fixed point of the $\beta$ function, $\alpha_c$ is the critical coupling for the chiral symmetry breaking, the constant $c=\pi$, and the IR scale, $\left|{\cal E}_{\rm vac}\right|^{1/4}$, is given by the dynamically generated fermion mass due to the chiral symmetry breaking, $\Lambda_{\rm IR}\simeq m_{\rm dyn}$~\cite{Miransky:1996pd,Choi:2012kx,Hong:2013eta}. By a suitable choice of the number of fermions and colors~\cite{Gies:2005as,Kaplan:2009kr}, or by turning on extra gauge interactions~\cite{Hong:2017smd}, one could make $\alpha_*$ very close to the critical coupling $\alpha_c$, so that the separation of the scales can be as wide as possible. As anticipated in~\cite{Athenodorou:2016ndx}, we find that the unitarity argument shows that the mass ratio between the lowest spin-2 state and the dilaton measures the scale separation of the near conformal dynamics, \begin{equation} R\equiv\frac{m_G}{m_{D}}\sim \exp\left(\frac{2c}{\sqrt{\alpha_*/\alpha_c-1}}\right)\,. \label{ratio} \end{equation} If, therefore, one discovers both the new massive spin-2 state and the dilaton, one may be able to discern the correct conformal dynamics responsible for them. \subsection{Results and Discussion} In this paper, we have calculated in perturbation theory the $2\to2$ dilaton scattering amplitudes from the dilaton effective theory and found that the one-loop amplitude violates the unitarity bound at energy $E\sim \sqrt{4\pi}f_D$. Such a violation should be interpreted as a signal for an additional state, the next-lightest resonance, in the effective theory. We show that the spin-2 state, $2^{++}$, of mass around $\sqrt{4\pi}f_D$ improves the unitarity behavior of the dilaton scattering amplitude near the cutoff of the effective theory. The mass ratio between the dilaton and the spin-2 state, obtained from the unitarity argument, captures the degree of conformality of the microscopic theory, Eq.\,(\ref{ratio}). The heavier the ground-state spin-2 is, compared to the dilaton, the more conformal the microscopic theory becomes. For instance, in pure YM theory, the lattice study shows the ratio is about 1.4~\cite{Bennett:2020hqd}, showing that the scale symmetry is badly broken in pure YM theory.
\subsection{Results and Discussion} In this paper, we have calculated in perturbation theory the $2\to2$ dilaton scattering amplitudes in the dilaton effective theory, finding that the one-loop amplitude violates the unitarity bound at the energy $E\sim \sqrt{4\pi}f_D$. Such a violation should be interpreted as a signal for an additional state, the next-lightest resonance, in the effective theory. We show that a spin-2 state, $2^{++}$, of mass around $\sqrt{4\pi}f_D$ improves the unitarity behaviour of the dilaton scattering amplitude near the cutoff of the effective theory. The mass ratio between the dilaton and the spin-2 state, obtained from the unitarity argument, captures the degree of conformality of the microscopic theory, Eq.\,(\ref{ratio}). The heavier the ground-state spin-2 is, compared to the dilaton, the more conformal the microscopic theory becomes. For instance, in pure YM theory the lattice study shows the ratio to be about 1.4~\cite{Bennett:2020hqd}, indicating that the scale symmetry is badly broken there. This is expected, because in pure YM theory, without fermions to screen the color charges, the $\beta$ function departs from zero rapidly. However, in theories like Banks-Zaks, where the $\beta$ function is almost zero over a wide range of scales near the IR, the ratio $R$ could be quite large if the chiral symmetry breaking occurs near the quasi IR fixed point, $\alpha_*-\alpha_c\ll\alpha_*$. \subsection{Acknowledgements} We thank S.H. Im and J.-W. Lee for useful discussions. This work was supported by a 2-Year Research Grant of Pusan National University.
\section{\label{s1}Introduction} The quark model provides a good description of both the ground states and some excited states of baryons. However, several resonances that are predicted by this model have not yet been observed, and hence there is an intense experimental effort underway to find these missing states~\cite{P1}. The coupling of baryons to conventional production channels ({\it e.g.} $\gamma$-nucleon) can be quite small, while their coupling to the $gg$ gluons produced in charmonium decays ({\it e.g.} $\psi$ or $\chi_{cJ}$ decays) could be larger. For this reason, charmonium decay is a promising process to study excited nucleons and hyperons~\cite{P2}.\par The BES Collaboration has reported a study of $J/\psi\rightarrow\bar{p}K^{+}\Lambda+c.c.$ and $\psi(3686)\rightarrow\bar{p}K^{+}\Lambda+c.c.$ decays~\cite{P3}, in which a threshold enhancement in the $\bar{p}\Lambda$ mass spectrum was observed. Throughout this paper, the inclusion of charge conjugate channels is implied. The BESIII Collaboration also reported a study of $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\Lambda$ \cite{P4}, where a near-threshold enhancement in the mass spectrum of $\bar{p}\Lambda$ was observed in $\chi_{c0}$ decay. This enhancement may be interpreted as a quasibound dibaryon state, as an enhancement due to final-state interaction, or simply as an interference effect of high-mass $N^{*}$ and $\Lambda^{*}$ states~\cite{P4}. The study of the resonant structures in the similar decay modes $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ may help in the understanding of the $\bar{p}\Lambda$ threshold structure. \par Until now, no experimental results exist concerning the decays $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$. In this analysis, the branching fractions (BFs) of $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ ($J$ = 0, 1, 2) and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ are measured for the first time with a data sample of $448.1\times10^{6}$ $\psi(3686)$ events \cite{P5}. Moreover, possible substructures in the invariant mass spectra of $\bar{p}K^{*+}$, $K^{*+}\Lambda$, and $\bar{p}\Lambda$ are investigated. \section{\label{s2}BESIII DETECTOR AND MONTE CARLO SIMULATION} The Beijing Electron Positron Collider II (BEPCII) is a double-ring $e^{+}e^{-}$ collider running at center-of-mass energies ranging from 2.0 to $4.6~\rm{GeV}$. The BESIII detector \cite{P6} at BEPCII, with a geometrical acceptance of $93\%$ of the $4\pi$ solid angle, operates in a magnetic field of 1.0 T provided by a superconducting solenoid magnet. The detector is composed of a helium-based main drift chamber (MDC), a plastic-scintillator time-of-flight (TOF) system, a CsI(Tl) electromagnetic calorimeter (EMC) and a resistive plate chamber (RPC)-based muon chamber (MUC). The spatial resolution of the MDC is better than 130 $\mu$m, the charged-track momentum resolution is $0.5\%$ at 1 GeV/$c$, and the energy-loss ($dE/dx$) resolution is better than $6\%$ for electrons from Bhabha events. The time resolution of the TOF is 80 ps (110 ps) in the barrel (endcaps). The energy resolution of the EMC at 1.0 GeV is $2.5\%$ ($5\%$) in the barrel (endcaps). The position resolution in the MUC is better than 2 cm.\par Simulated Monte Carlo (MC) events are used to determine the detection efficiency, optimize selection criteria and estimate the level of contamination from background processes.
The {\sc geant}{\footnotesize 4}-based~\cite{P7} simulation package {\sc boost} includes a geometric and material description of the BESIII detector, the detector response and digitization models, and also tracks the running conditions and performance of the detector. The production of $\psi(3686)$ events is simulated with {\sc kkmc} \cite{P8}, where the known decay modes are generated by {\sc evtgen}~\cite{P9,P13} with their BFs taken from the Particle Data Group (PDG) \cite{P10}, and the remaining unknown decays are generated by {\sc lundcharm} \cite{P11}. Exclusive MC samples of $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ are generated to determine detection efficiencies. In the signal MC simulation, the angular distribution of the decay $\psi(3686)\rightarrow\gamma\chi_{cJ}$ has the form $1+\alpha \cos^{2}\theta$ with $\alpha=1$, $-1/3$, and $1/13$ for $J=0$, 1, 2, respectively, where $\theta$ is the photon polar angle \cite{P12}. The weak decay of $\Lambda$ is generated with a model that includes parity violation. Other relevant decays are generated with {\sc besevtgen} \cite{P13} with a uniform distribution in phase space.\par \section{\label{s3}Analysis of $\bm{\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda}$} \subsection{\label{s3.1}Event selection} The process $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ is reconstructed with $\Lambda\rightarrow p\pi^{-}$, $K^{*+}\rightarrow K^{+}\pi^{0}$, and $\pi^{0}\rightarrow\gamma\gamma$. Events are required to have at least two positive and two negative charged tracks. For each charged track, the polar angle in the MDC must satisfy $|\cos\theta|< 0.93$. The combined TOF and $dE/dx$ information is used to form particle identification (PID) confidence levels for the pion, kaon and proton hypotheses. Each track is assigned the particle hypothesis with the highest confidence level. The identified $\bar{p}$ and $K^{+}$ candidates are further required to have their point of closest approach to the interaction point (IP) within $\pm$1 cm in the plane perpendicular to the beam direction and within $\pm$10 cm along the beam direction. A common vertex constraint is applied to all $p\pi^{-}$ pairs assumed to arise from a $\Lambda$ decay, and the production vertex of the $\Lambda$ candidates is constrained to the interaction point. Only $dE/dx$ information is used for the PID of the $p$ and $\pi^{-}$ candidates in $\Lambda$ decays, because many of these particles do not reach the TOF on account of their low momentum. Photon candidates are required to have an energy deposition greater than 25 MeV in the barrel EMC ($|\cos\theta|<0.8$) and 50 MeV in the end-cap EMC ($0.86<|\cos\theta|<0.92$). To exclude showers from charged tracks, the angle between the direction of the photon and the nearest charged track is required to be greater than $5^{\circ}$. In addition, the angle between the direction of the photon and the anti-proton is required to be greater than $10^{\circ}$ to suppress background from anti-proton annihilation in the detector.
The measured EMC time is required to be between 0 and 700 ns after the event start time, to suppress electronic noise and energy depositions unrelated to the event.\par To improve the mass resolution, the selected photon, anti-proton, kaon, and $\Lambda$ candidates are subjected to a five-constraint (5C) kinematic fit under the hypothesis of $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\pi^{0}\Lambda$, with the invariant mass of the two photons being constrained to the $\pi^{0}$ mass. The $\chi^{2}$ of the 5C fit is required to be less than 70. For events with more than one combination satisfying this requirement, only the combination with the smallest $\chi^{2}$ is accepted. To veto background events from $\psi(3686)\rightarrow\bar{p}K^{+}\pi^{0}\Lambda$ and $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\Lambda$, an alternative 5C (4C) kinematic fit is performed under the hypothesis of $\psi(3686)\rightarrow\bar{p}K^{+}\pi^{0}\Lambda$ ($\gamma\bar{p}K^{+}\Lambda$). We further require the confidence level of the kinematic fit for the $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\pi^{0}\Lambda$ assignment to be larger than those for the $\psi(3686)\rightarrow\bar{p}K^{+}\pi^{0}\Lambda$ and $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\Lambda$ hypotheses.\par The $K^{+}\pi^{0}$ invariant mass distribution is shown in Fig.\ \ref{fig2}(a), where a clear $K^{*+}$ structure can be seen. The $K^{*+}$ candidates are selected by requiring $|M_{K^{+}\pi^{0}}-M_{K^{*+}}|<0.1~{\rm GeV}/c^{2}$, where $M_{K^{*+}}$ is the nominal mass of the $K^{*+}$ meson \cite{P10}. The $K^{*+}$ sidebands, also indicated in Fig.\ \ref{fig2}(a), are chosen to be $1.1 <M_{K^{+}\pi^{0}}<1.2~{\rm GeV}/c^{2}$ and $0.65 <M_{K^{+}\pi^{0}}<0.75~{\rm GeV}/c^{2}$. Figure\ \ref{fig2}(b) shows the $M_{p\pi^{-}}$ distribution, from which $\Lambda$ candidates are selected by requiring $|M_{p\pi^{-}}-M_{\Lambda}|<6~{\rm MeV}/c^{2}$, where $M_{\Lambda}$ is the nominal $\Lambda$ mass \cite{P10}. Background events from $\psi(3686)\rightarrow J/\psi\pi^{0}\pi^{0}$, $J/\psi\rightarrow\bar{p}K^{+}\Lambda$ are rejected by requiring $|M_{\bar{p}K^{+}\Lambda}-M_{J/\psi}|>0.05~{\rm GeV}/c^{2}$, where $M_{J/\psi}$ is the nominal $J/\psi$ mass \cite{P10}. To remove the background from the cascade decay $\psi(3686)\rightarrow \bar{p}K^{+}\Sigma^{0}$, $\Sigma^{0}\rightarrow\gamma\Lambda$, the additional selection requirement $M_{\gamma\Lambda}>1.21$~GeV/$c^{2}$ is applied.
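For illustration only, the windows and vetoes above can be written as boolean masks over arrays of candidate invariant masses; the following Python sketch uses approximate nominal masses and hypothetical candidate arrays, and is of course not part of the analysis software.
\begin{verbatim}
# Sketch of the mass windows and vetoes of this section, applied to
# placeholder arrays of candidate masses (GeV/c^2). Nominal masses are
# approximate PDG values, not the exact numbers used in the analysis.
import numpy as np

M_KST, M_LAM, M_JPSI = 0.892, 1.1157, 3.0969

def pass_selection(m_kpi0, m_ppi, m_pbarKLam, m_gamLam):
    kstar  = np.abs(m_kpi0 - M_KST) < 0.100        # K*+ signal window
    lam    = np.abs(m_ppi - M_LAM) < 0.006         # Lambda signal window
    jpsi   = np.abs(m_pbarKLam - M_JPSI) > 0.050   # J/psi pi0 pi0 veto
    sigma0 = m_gamLam > 1.21                       # Sigma0 -> gamma Lambda veto
    return kstar & lam & jpsi & sigma0

def kstar_sideband(m_kpi0):
    """K*+ sideband regions used later for the background estimate."""
    return (((m_kpi0 > 0.65) & (m_kpi0 < 0.75)) |
            ((m_kpi0 > 1.10) & (m_kpi0 < 1.20)))
\end{verbatim}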
\par After applying these requirements, $\chi_{cJ}$ signals are clearly seen in the invariant mass spectrum of $\bar{p}K^{*+}\Lambda$, as shown in Fig.~\ref{fig3}. The mass windows used to select the $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$ candidates correspond to about three times the $\chi_{cJ}$ width convolved with the mass resolution, and are 3.35--3.48, 3.49--3.53, and 3.53--3.59~GeV/$c^{2}$, respectively. The invariant mass spectra of the $\bar{p}K^{*+}$, $\bar{p}\Lambda$, and $K^{*+}\Lambda$ combinations and the corresponding Dalitz plots are shown in Fig.~\ref{fig4} for each $\chi_{cJ}$ state. No significant substructure is seen in the Dalitz plots of the $\bar{p}K^{*+}\Lambda$ distributions. In order to search for the near-threshold structure in $M_{\bar{p}\Lambda}$ observed in Ref.\ \cite{P4} in the decay $\chi_{c0}\rightarrow\bar{p}K^{+}\Lambda$, fits are performed on $M_{\bar{p}\Lambda}$ in which the structure is described by a weighted Breit-Wigner resonance with parameters fixed to those reported in Ref.\ \cite{P4}. These fits return a statistical significance for the structure of 2.1$\sigma$, 2.5$\sigma$, and 1.9$\sigma$ for the $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$ states, respectively. \par \begin{figure}{} \centering \subfigure{ \centering \includegraphics[width=1.5in]{fig2a.eps} \put(-30,60){(a)} } \subfigure{ \centering \includegraphics[width=1.5in]{fig2b.eps} \put(-30,60){(b)} } \caption{Invariant mass distribution of (a) $K^{+}\pi^{0}$ and (b) $p\pi^{-}$. The solid arrows indicate the mass windows used as the selection criteria in the analysis. The dashed arrows indicate the sideband regions.}\label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{fig4.eps}\\ \caption{Invariant mass spectrum of $\bar{p}K^{*+}\Lambda$. The three arrow-pairs indicate, from left to right, the mass windows for $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$, respectively.}\label{fig3} \end{figure} \begin{figure*}{} \centering \subfigure{ \centering \includegraphics[width=2.0in]{fig3a.eps} \put(-35,100){(a)} } \subfigure{ \centering \includegraphics[width=2.0in]{fig3b.eps} \put(-35,100){(b)} } \subfigure{ \centering \includegraphics[width=2.0in]{fig3c.eps} \put(-35,100){(c)} } \caption{The Dalitz plots of $\bar{p}K^{*+}\Lambda$ for $\chi_{c0}$ (a), $\chi_{c1}$ (b), and $\chi_{c2}$ (c).}\label{fig4} \end{figure*} \subsection{\label{s3.2}Background study} From a study of an inclusive MC sample of $506\times10^{6}$ $\psi(3686)$ events, background events with a fake $\Lambda$ are found to be accompanied by a fake $K^{*+}$. The background can therefore be categorized into the following four types: (1) events with a genuine $K^{*+}$ and a fake $\chi_{cJ}$ ($K^{*}$, non-$\chi_{cJ}$); (2) events with a genuine $\chi_{cJ}$ and a fake $K^{*}$ ($\chi_{cJ}$, non-$K^{*}$); (3) events with fake $K^{*}$ and $\chi_{cJ}$ candidates (non-$K^{*}$, non-$\chi_{cJ}$); (4) events containing a genuine $K^{*+}$ and a genuine $\chi_{cJ}$ ($K^{*}$, $\chi_{cJ}$). The contributions from the first three categories can be estimated by performing a two-dimensional (2-D) fit to the distribution of $M_{K^{+}\pi^{0}}$ versus $M_{\bar{p}K^{*+}\Lambda}$. The fourth type of background comes mainly from the processes $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda\rightarrow\gamma\gamma\bar{p}K^{+}\Lambda$, $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda\rightarrow\gamma\bar{p}K^{*+}\gamma p\pi^{-}$, $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\gamma J/\psi\rightarrow\gamma\gamma\bar{p}K^{*+}\Lambda$ and $\psi(3686) \rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Sigma^{0}$. The first two of these contributions are negligible, on account of the low BFs of radiative $K^{*+}$ and $\Lambda$ decays. The level of contamination from the other two modes is assessed by applying the selection to samples of exclusive MC events. For the normalization procedure, the BF of $\psi(3686)\rightarrow\gamma\chi_{cJ},\chi_{cJ}\rightarrow\gamma J/\psi,J/\psi\rightarrow\bar{p}K^{*+}\Lambda$ is estimated to be less than $10^{-5}$, which implies a negligible background of less than one event from this source. The normalized number of $\psi(3686)\rightarrow\gamma\chi_{cJ},\chi_{cJ}\rightarrow\bar{p}K^{*+}\Sigma^{0}$ background events is estimated to be 11.7$\pm$3.5, 5.1$\pm$2.3, and 4.8$\pm$2.6 for $\chi_{cJ}$ ($J$=0, 1, 2), where the relative BFs used to calculate these yields are estimated from dedicated studies with the same data sample.
\par To investigate possible background from continuum processes, the same selection criteria are applied to a data sample of 2.93 fb$^{-1}$ \cite{P15} collected at $\sqrt{s}=3.773$ GeV. After normalizing to the integrated luminosity of the $\psi(3686)$ data sample, 20.1$\pm$4.1 events survive and no peak is found in the mass spectrum of $M_{\bar{p}K^{*+}\Lambda}$. As a cross-check, the selection is also performed on a data sample of 44.5 pb$^{-1}$ collected at $\sqrt s=3.65$ GeV. Only one event survives, which corresponds to 14 events when normalized to the integrated luminosity of the $\psi(3686)$ data sample, consistent with the result of the first study. In the BF measurement, any continuum contribution is included among the other sources of non-peaking background, and the total is estimated by the 2-D fit described below. \subsection{\label{s3.3}Branching fraction measurement of $\bm{\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda}$} The distribution of $M_{K^{+}\pi^{0}}$ versus $M_{\bar{p}K^{*+}\Lambda}$ is shown in Fig.\ \ref{fig5}. An unbinned extended maximum-likelihood 2-D fit is performed on this distribution to determine the number of ($K^{*+}$, $\chi_{cJ}$) events. The composite probability density function (PDF) is constructed as follows: \begin{equation} \begin{split} F&=N_{\rm sig}^{\rm obs}\times(F^{K^{*}}_{\rm sig}\cdot F^{\chi_{cJ}}_{\rm sig}) \\ &+N_{\rm bkg}^{\chi_{cJ},{\rm non}-K^{*}}\times(F^{{\rm non}-K^{*}}_{\rm bkg}\cdot F^{\chi_{cJ}}_{\rm sig})\\ &+N_{\rm bkg}^{K^{*},{\rm non}-\chi_{cJ}}\times(F^{{\rm non}-\chi_{cJ}}_{\rm bkg}\cdot F^{K^{*}}_{\rm sig})\\ &+N_{\rm bkg}^{{\rm non}-K^{*}\chi_{cJ}}\times(F^{{\rm non}-K^{*}}_{\rm bkg}\cdot F^{{\rm non}-\chi_{cJ}}_{\rm bkg}). \end{split} \end{equation} Here, $N_{\rm sig}^{\rm obs}$, $N_{\rm bkg}^{\chi_{cJ},{\rm non}-K^{*}}$, $N_{\rm bkg}^{K^{*},{\rm non}-\chi_{cJ}}$, and $N_{\rm bkg}^{{\rm non}-K^{*}\chi_{cJ}}$ are the numbers of ($K^{*}$, $\chi_{cJ}$) signal events and of ($\chi_{cJ}$, non-$K^{*}$), ($K^{*}$, non-$\chi_{cJ}$), and (non-$K^{*}$, non-$\chi_{cJ}$) background events, respectively. The shape of the $K^{*+}$ resonance, $F^{K^{*}}_{\rm sig}$, is described by a $P$-wave Breit-Wigner (BW) function \cite{P16} convolved with a double-Gaussian function ($DG$) accounting for the detector resolution, the parameters of which are determined from MC simulation. The definition of $F^{K^{*}}_{\rm sig}$ is \begin{equation} F^{K^{*}}_{\rm sig}(s)=\frac{M\Gamma(s)}{(s^{2}-M^{2})^{2}+M^{2}\Gamma(s)^{2}}\otimes DG(s), \end{equation} where $\Gamma(s)=\Gamma(\frac{M}{s})^{2}(\frac{q}{q_{0}})^{2L+1}$, $s$ is the invariant mass of the $K^{+}\pi^{0}$ pair, $M$ and $\Gamma$ are the $K^{*+}$ mass and width~\cite{P10}, $q$ is the $K^{+}$ momentum in the $K^{*+}$ rest frame, $q_{0}$ is the value of $q$ at $s=M$, and $L=1$ is the relative orbital angular momentum of the $K^{+}\pi^{0}$ system. The background distribution of the fake $K^{*+}$ contribution, $F^{{\rm non}-K^{*}}_{\rm bkg}$, is described by a truncated polynomial function $F^{{\rm non}-K^{*}}_{\rm bkg}(s)=(s-m_{t})^{a}e^{-bs-cs^{2}}$ \cite{P16}, where $m_{t}$ is the threshold mass for $K^{+}\pi^{0}$ and $a$, $b$, $c$ are free parameters. The shape of the $\chi_{cJ}$ signal is described by \begin{equation} F^{\chi_{cJ}}_{\rm sig}=E_{\gamma}^{3}\cdot f(E_{\gamma})\cdot BW(m)\cdot \frac{B_{l}(Q)}{B_{l}(Q_{0})}\otimes G(m;\mu,\sigma).
\end{equation} Here $E_{\gamma}^{3}$ is an E1 radiative-transition factor and $f(E_{\gamma})=\frac{E_{0}^{2}}{E_{\gamma}E_{0}+(E_{\gamma}-E_{0})^{2}}$ is a damping factor~\cite{P17}, where $E_{\gamma}$ is the energy of the radiative photon in the $\psi(3686)$ rest frame and $E_{0}=\frac{M_{\psi(3686)}^{2}-M_{\chi_{cJ}}^{2}}{2M_{\psi(3686)}}$ is its peak energy. In the relativistic BW function $BW(m)$, the mass and width of the $\chi_{cJ}$ are fixed to the PDG \cite{P10} values. The Blatt-Weisskopf barrier factor \cite{P18} $B_{l}(Q)$ is a function of $Q$, which is the momentum of either the radiative photon or the $\chi_{cJ}$ in the $\psi(3686)$ rest frame; $Q_{0}$ is the value of $Q$ at $m=M_{\chi_{cJ}}$, where $m$ is the invariant mass of the $\bar{p}K^{*+}\Lambda$ combination. Finally, $G(m;\mu,\sigma)$ is a modified Gaussian function parameterizing the instrumental mass resolution, taking the form~\cite{P19} \begin{equation} G(m;\mu,\sigma)=\frac{1}{\sqrt{2\pi}\sigma} e^{-(|\frac{m-\mu}{\sigma}|)^{1+\frac{1}{1+|\frac{m-\mu}{\sigma}|}}}, \end{equation} where the parameters are determined from MC simulation. The shape of fake $\chi_{cJ}$ candidates, $F^{{\rm non}-\chi_{cJ}}_{\rm bkg}$, is described by an ARGUS \cite{P20} function. The fit yields $254\pm35$ ($K^{*+}$, $\chi_{c0}$) events with a statistical significance of 7.2$\sigma$, $328\pm36$ ($K^{*+}$, $\chi_{c1}$) events with a statistical significance of 11.6$\sigma$, and $476\pm52$ ($K^{*+}$, $\chi_{c2}$) events with a statistical significance of 15.2$\sigma$. The statistical significance is determined from the change of the log-likelihood value and of the number of degrees of freedom when the fit is performed with and without the signal component. The 2-D histogram sampled from the composite PDF and the projections of the fit on the $M_{K^{+}\pi^{0}}$ and $M_{\bar{p}K^{*+}\Lambda}$ distributions are shown in Fig.\ \ref{fig5}. The BF of $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ is calculated by \begin{equation}\label{2} \begin{split} \mathcal{B}&=\frac{N_{\rm sig}^{\rm obs}-N_{\rm bkg}}{\epsilon\cdot N_{\psi(3686)}\cdot \mathcal{B}(\psi(3686)\rightarrow\gamma\chi_{cJ})}\\ &\times\frac{1}{\mathcal{B}(\Lambda\rightarrow p\pi^{-})\cdot\mathcal{B}(K^{*+}\rightarrow K^{+}\pi^{0})\cdot\mathcal{B}(\pi^{0}\rightarrow\gamma\gamma)}, \end{split} \end{equation} where $N_{\rm sig}^{\rm obs}$ is the number of signal events returned by the 2-D fit; $N_{\rm bkg}=11.7\pm3.5$, $5.1\pm2.3$, and $4.8\pm2.6$ are the numbers of ($K^{*}$, $\chi_{c0}$), ($K^{*}$, $\chi_{c1}$), and ($K^{*}$, $\chi_{c2}$) peaking background events, respectively, as reported in Sec.\ \ref{s3.2}; $N_{\psi(3686)}=(448.1\pm2.9)\times10^{6}$ is the number of $\psi(3686)$ events \cite{P5}; and $\epsilon$ is the detection efficiency determined from MC simulation, found to be $(5.51\pm0.05)\%$, $(7.07\pm0.06)\%$, and $(6.31\pm0.06)\%$ for the $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$ signals, respectively. The BFs $\mathcal{B}(\psi(3686)\rightarrow\gamma\chi_{cJ})$, $\mathcal{B}(\Lambda\rightarrow p\pi^{-})$, $\mathcal{B}(K^{*+}\rightarrow K^{+}\pi^{0})$, and $\mathcal{B}(\pi^{0}\rightarrow\gamma\gamma)$ are taken from Ref.~\cite{P10}. The BFs of $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ are measured to be $(4.8\pm0.7)\times10^{-4}$ for the $\chi_{c0}$ mode, $(5.0\pm0.5)\times10^{-4}$ for the $\chi_{c1}$ mode, and $(8.2\pm0.9)\times10^{-4}$ for the $\chi_{c2}$ mode, where the uncertainties are statistical only.
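As a rough numerical cross-check of Eq.~(\ref{2}), the following sketch recomputes the $\chi_{c0}$ BF from the quoted yields; the daughter BFs below are approximate placeholder values rather than the exact PDG inputs of the analysis, so it is only expected to reproduce the quoted number approximately.
\begin{verbatim}
# Back-of-the-envelope check of the chi_c0 branching fraction.
# The daughter branching fractions are approximate placeholders.
import math

N_sig, dN_sig = 254.0, 35.0   # 2-D fit yield and statistical error
N_bkg, dN_bkg = 11.7, 3.5     # peaking background (Sec. III.B)
eff     = 0.0551              # detection efficiency from signal MC
N_psi   = 448.1e6             # number of psi(3686) events
B_gchi0 = 0.099               # B(psi(3686) -> gamma chi_c0), approx.
B_Lam   = 0.639               # B(Lambda -> p pi-), approx.
B_Kst   = 1.0 / 3.0           # B(K*+ -> K+ pi0), isospin estimate
B_pi0   = 0.988               # B(pi0 -> gamma gamma), approx.

denom = eff * N_psi * B_gchi0 * B_Lam * B_Kst * B_pi0
bf  = (N_sig - N_bkg) / denom
dbf = bf * math.hypot(dN_sig, dN_bkg) / (N_sig - N_bkg)  # stat. only
print(f"B(chi_c0 -> pbar K*+ Lambda) ~ ({bf*1e4:.1f} +/- {dbf*1e4:.1f}) e-4")
# -> roughly (4.7 +/- 0.7) x 10^-4, close to the quoted (4.8 +/- 0.7) x 10^-4
\end{verbatim}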
\begin{figure*} \centering \subfigure{ \centering \includegraphics[width=2.2in]{fig6.eps} \put(-80,70){(a)} }\hspace{-0.08in} \subfigure{ \centering \includegraphics[width=2.2in]{fig17.eps} \put(-80,70){(b)} } \subfigure{ \centering \includegraphics[width=2.2in]{fig8a_n.eps} \put(-30,80){(c)} } \subfigure{ \centering \includegraphics[width=2.2in]{fig8b_n.eps} \put(-30,80){(d)} } \caption{(a) Distribution of $M_{K^{+}\pi^{0}}$ versus $M_{\bar{p}K^{*+}\Lambda}$ from data. The three boxes indicate, from left to right, the signal regions of $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$, respectively. (b) 2-D histogram sampled from the composite PDF of the 2-D fit. (c) and (d) are projections of the 2-D fit on the distributions of $M_{K^{+}\pi^{0}}$ and $M_{\bar{p}K^{*+}\Lambda}$, respectively. The dots with error bars are data; the solid curves show the fit result; the long-dashed curves are the ($K^{*+}, \chi_{cJ}$) signal; the short-dashed curves are the ($K^{*+}$, non-$\chi_{cJ}$) background; the dot-dashed curves are the ($\chi_{cJ}$, non-$K^{*+}$) background; and the dotted curves are the (non-$K^{*+}$, non-$\chi_{cJ}$) background.}\label{fig5} \end{figure*} \section{\label{s4}Study of $\bm{\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda}$} \subsection{\label{s4.1}Event Selection} Events are selected containing at least two photons, one $\bar{p}$, one $K^+$, and one $\Lambda$ candidate, identified using the same criteria as employed in the $\psi(3686)\rightarrow\gamma\bar{p}K^{*+}\Lambda$ analysis. The selected particles are subjected to a 5C kinematic fit under the hypothesis of $\psi(3686)\rightarrow\bar{p}K^{+}\pi^{0}\Lambda$, with the invariant mass of the two photons constrained to the $\pi^{0}$ mass. The $\chi^{2}$ of the 5C fit is required to be less than 100. For events with more than one combination meeting this requirement, only the combination with the smallest $\chi^{2}$ is retained for further analysis. To veto backgrounds from $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\pi^{0}\Lambda$ and $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\Lambda$, an alternative 5C (4C) kinematic fit is performed under the $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\pi^{0}\Lambda$ ($\gamma\bar{p}K^{+}\Lambda$) hypothesis. We further require that the confidence level of the kinematic fit for the $\psi(3686)\rightarrow\bar{p}K^{+}\pi^{0}\Lambda$ assignment is larger than those of the $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\pi^{0}\Lambda$ and $\psi(3686)\rightarrow\gamma\bar{p}K^{+}\Lambda$ hypotheses. \par The distribution of $M_{K^{+}\pi^{0}}$ versus $M_{p\pi^{-}}$ is shown in Fig.\ \ref{fig9}(a), where $K^{*+}$ and $\Lambda$ signals are visible. The $\Lambda$ candidates are selected by requiring $|M_{p\pi^{-}}-M_{\Lambda}|<6~{\rm MeV}/c^{2}$ and $K^{*+}$ candidates are selected by requiring $|M_{K^{+}\pi^{0}}-M_{K^{*+}}|<0.1~{\rm GeV}/c^{2}$. The $K^{*+}$ sidebands are defined to be $1.1<M_{K^{+}\pi^{0}}<1.2~{\rm GeV}/c^{2}$ and $0.65<M_{K^{+}\pi^{0}}<0.75~{\rm GeV}/c^{2}$. The distribution of $M_{p\pi^{-}}$ for events within the $K^{*+}$ signal region is shown in Fig.\ \ref{fig9}(b). The mass spectra of $\bar{p}K^{*+}$, $\bar{p}\Lambda$, $K^{*+}\Lambda$, and the Dalitz plot after the application of all selection criteria are shown in Fig.\ \ref{fig11}.
A near-threshold structure in $M_{\bar{p}\Lambda}$ is fitted with a 1.7$\sigma$ significance, using the same parameterization as in the $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ analysis.\par \begin{figure} \centering \subfigure{ \centering \includegraphics[width=1.5in]{fig11.eps} \put(-28,73){(a)} }\hspace{-0.05in} \subfigure{ \centering \includegraphics[width=1.5in]{fig10.eps} \put(-28,73){(b)} } \caption{(a) Distribution of $M_{K^{+}\pi^{0}}$ versus $M_{p\pi^{-}}$. The box indicates the signal region. (b) Invariant mass distribution of $p\pi^{-}$. The arrows indicate the mass window used in the selection.}\label{fig9} \end{figure} \begin{figure*}{} \centering \subfigure{ \centering \includegraphics[width=2.2in]{fig12a.eps} \put(-25,90){(a)} } \subfigure{ \centering \includegraphics[width=2.2in]{fig12b.eps} \put(-110,90){(b)} } \subfigure{ \centering \includegraphics[width=2.2in]{fig12c.eps} \put(-25,90){(c)} } \subfigure{ \centering \includegraphics[width=2.2in]{fig16_n.eps} \put(-35,90){(d)} } \caption{Invariant mass spectra of (a) $M_{\bar{p}K^{*+}}$, (b) $M_{\bar{p}\Lambda}$, and (c) $M_{K^{*+}\Lambda}$. The dots with error bars are data. The shaded histograms are the background from the inclusive MC sample. The dashed lines are the background estimated from the $K^{*+}$ sidebands, normalized to the signal region. The solid lines are the sum of the phase-space MC sample and the non-$K^{*+}$ background, normalized to the signal yields. (d) Dalitz plot of $\bar{p}K^{*+}\Lambda$.}\label{fig11} \end{figure*} \subsection{\label{s4.2}Background study} From a study of an inclusive MC sample of $506\times10^{6}$ $\psi(3686)$ events, background events with a fake $\Lambda$ are again found to be accompanied by a fake $K^{*+}$. The sources of background can be categorized into two types: peaking background events with genuine $K^{*+}$ mesons in the final state, and non-peaking background events with fake $K^{*+}$ candidates. The non-peaking background can be estimated from a fit to the $M_{K^{+}\pi^{0}}$ spectrum. The major peaking backgrounds are found to be: $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ ($J$=0, 1, 2) and $\psi(3686)\rightarrow\bar{p}K^{*+}\Sigma^{0},\Sigma^{0}\rightarrow\gamma\Lambda$. Corresponding exclusive MC samples are generated for further studies. The selection criteria are applied to these exclusive MC samples and the numbers of surviving events are normalized using the BFs of the relevant decay processes. The normalized number of $\psi(3686)\rightarrow\bar{p}K^{*+}\Sigma^{0}$ background events is 5.2$\pm$1.1, and the expected numbers of $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ ($J$=0, 1, 2) background decays are 1.9$\pm$0.3, 4.5$\pm$0.5 and 8.8$\pm$1.0, respectively. A data sample of 2.93 fb$^{-1}$ \cite{P15} collected at $\sqrt{s}=3.77~{\rm GeV}$ is used to investigate possible background from continuum processes. After normalizing to the integrated luminosity of the $\psi(3686)$ data sample, 164.1$\pm$9.5 events survive and a clear $K^{*+}$ peak is found in the $K^{+}\pi^{0}$ mass spectrum. This background yield is cross-checked by repeating the procedure on the data sample of 44.5 pb$^{-1}$ \cite{P21} collected at $\sqrt{s}=3.65$ GeV, and a compatible result of 207$\pm$61 events is obtained after normalization.
\subsection{\label{s4.3}Branching fraction measurement of $\bm{\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda}$} An unbinned maximum-likelihood fit is performed to the distribution of $M_{K^{+}\pi^{0}}$ (Fig.\ \ref{fig15}) to extract the number of $K^{*+}$ signal events. The $K^{*+}$ signal shape is described by a $P$-wave BW function convolved with a double-Gaussian function, and the background shape is described by a truncated polynomial function. The definitions of these functions are the same as those introduced in Sec.\ \ref{s3.3}. The fit result is shown in Fig.\ \ref{fig15}. The BF of $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ is calculated according to \begin{equation}\label{3} \begin{split} \mathcal{B}&=\frac{N_{\rm sig}^{\rm obs}-N_{\rm bkg}}{\epsilon\cdot N_{\psi(3686)}\cdot \mathcal{B}(\Lambda\rightarrow p\pi^{-})}\\ &\times\frac{1}{\mathcal{B}(K^{*+}\rightarrow K^{+}\pi^{0})\cdot\mathcal{B}(\pi^{0}\rightarrow\gamma\gamma)}, \end{split} \end{equation} where $N_{\rm sig}^{\rm obs}= 1011\pm60$ is the number of $K^{*+}$ signal events obtained from the fit, $N_{\rm bkg}=20.4\pm1.6$ is the number of peaking background events reported in Sec.\ \ref{s4.2}, and $\epsilon=(14.0\pm0.1)\%$ is the detection efficiency estimated from MC simulation. The BF $\mathcal{B}(\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda)$ is measured to be $(6.3\pm0.5)\times10^{-5}$, where the uncertainty is statistical only. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{fig15b.eps}\\ \caption{Invariant-mass spectrum of $K^{+}\pi^{0}$, showing the fit result. The dots with error bars are data and the solid curve shows the fit. The short-dashed curve is the $K^{*+}$ signal and the long-dashed curve is the non-peaking background.}\label{fig15} \end{figure} \section{\label{s5}Systematic uncertainties} \begin{table*}[t] \centering \caption{Summary of systematic uncertainties (in \%) in the measured BFs of $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$.}\label{table3} \begin{tabular}{p{4.cm}p{2.cm}p{2.cm}p{2.cm}c} \hline \hline Source &\multicolumn{3}{c}{$\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$} &$\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$\\ \hline &$\chi_{c0}$ &$\chi_{c1}$ &$\chi_{c2}$\\ \hline MDC Tracking &4.0 &4.0 &4.0 &4.0\\ PID efficiency &4.0 &4.0 &4.0 &4.0\\ Photon detection &3.0 &3.0 &3.0 &2.0\\ $\Lambda$ mass window &0.1 &0.1 &0.1 &0.1\\ Kinematic fit &0.1 &0.5 &0.2 &1.4\\ Fit range &5.9 &2.1 &2.0 &3.0\\ Signal shape &4.9 &3.8 &4.1 &3.4\\ Background shape &1.3 &2.0 &0.7 &1.1\\ Number of $\psi(3686)$ events &0.7 &0.7 &0.7 &0.7\\ $\mathcal{B}(\Lambda\rightarrow p\pi^{-})$ &0.8 &0.8 &0.8 &0.8\\ $\mathcal{B}(\psi(3686)\rightarrow\gamma\chi_{cJ})$ &2.0 &2.5 &2.1 &--\\ \hline Total &10.3 &8.5 &8.2 &7.8\\ \hline \hline \end{tabular} \end{table*} Systematic uncertainties on the BF measurements arise from a variety of sources: \textit{Tracking efficiency}. The uncertainty due to the data-MC difference in the tracking efficiency is 1\% for each charged track coming from a primary vertex, according to a study of $J/\psi\rightarrow K^{*}\bar{K}$ and $J/\psi\rightarrow p\bar{p}\pi^{+}\pi^{-}$ events. For each track from $\Lambda$, the uncertainty is also 1\%, from an analysis of $J/\psi\rightarrow \bar{p}K^{+}\Lambda$ events~\cite{P4}.\par \textit{PID efficiency}. The selection requires tracks to be identified as $p$, $\bar{p}$, $K^{+}$, or $\pi^{-}$.
The PID efficiency has been investigated using control samples of $J/\psi\rightarrow K^{0}_{S}K^{\pm}\pi^{\mp}$ and $J/\psi\rightarrow p\bar{p}\pi^{+}\pi^{-}$~\cite{P22,P23}. The uncertainty is assigned to be 1\% per charged track.\par \textit{Photon detection efficiency}. The photon detection efficiency was studied in the analysis of $J/\psi\rightarrow\rho\pi$ decays~\cite{P22}. The difference in the detection efficiency between data and MC simulation is taken as the systematic uncertainty from this source, and $1\%$ is assigned for each photon.\par \textit{$\Lambda$ mass window.} The systematic uncertainty from the requirement on the $\Lambda$ signal region is estimated by smearing the $p\pi^{-}$ invariant mass in the signal MC sample with a Gaussian function to compensate for the resolution difference between data and MC simulation. The smearing parameters are determined by fitting the $\Lambda$ distribution in data with the MC shape convolved with a Gaussian function. The difference in the detection efficiency determined from the signal MC sample with and without the extra smearing is taken as the systematic uncertainty. \par \textit{Kinematic fit.} The systematic uncertainty due to the kinematic fit is estimated by correcting the helix parameters of charged tracks according to the method described in Ref.\ \cite{P24}. The differences in the detection efficiency between the MC samples with and without this correction are taken as the uncertainties, which are $0.1\%$, $0.5\%$, and $0.2\%$ for $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ ($J$=0, 1, 2) and $1.4\%$ for $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$.\par \textit{Fit range}. To estimate the systematic uncertainty due to the fit range, several alternative fits with different ranges are performed. The largest resulting difference in the BF is assigned as the systematic uncertainty.\par \textit{Signal shape}. To estimate the uncertainty due to the choice of signal shape, the $K^{*+}$ and $\chi_{cJ}$ signal line shapes are replaced by MC-derived shapes in alternative fits, and the resulting differences in the BFs are assigned as systematic uncertainties.\par \textit{Background shape}. In the measurements of $\mathcal{B}(\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda)$ and $\mathcal{B}(\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda)$, the $\chi_{cJ}$ background shape is described by an ARGUS function and the $K^{*+}$ background shape is described by a second-order truncated polynomial function. To estimate the systematic uncertainty due to the choice of background shape, an alternative fit is performed in which the ARGUS function is replaced with a second-order Chebychev polynomial function and the $K^{*+}$ background is described with a third-order truncated polynomial. The change in the measured BF is assigned as the corresponding systematic uncertainty.\par \textit{Others.} The uncertainty due to the number of $\psi(3686)$ events is $0.7\%$ \cite{P5}. The systematic uncertainties associated with the intermediate-decay BFs of $\psi(3686)\rightarrow\gamma\chi_{cJ}$ and $\Lambda \rightarrow p\pi^{-}$ are taken from the PDG~\cite{P10}.\par The above systematic uncertainties are summarized in Table\ \ref{table3}. The total systematic uncertainty is calculated by assuming the individual components to be independent and adding their magnitudes in quadrature.
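As a quick numerical check of this prescription, the $\chi_{c0}$ column of Table~\ref{table3} can be combined as follows (a one-line sketch; the entries are simply those of the table):
\begin{verbatim}
# Quadrature sum of the chi_c0 systematic uncertainties of Table 3 (in %).
import math

chic0 = [4.0, 4.0, 3.0, 0.1, 0.1, 5.9, 4.9, 1.3, 0.7, 0.8, 2.0]
print(f"total = {math.sqrt(sum(x * x for x in chic0)):.1f}%")  # -> 10.3%
\end{verbatim}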
\par \section{\label{s6}Results and Summary} \begin{table}[H] \centering \caption{The BFs of $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$, $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$, and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$, where the first uncertainties are statistical and the second ones systematic.}\label{table5} \begin{tabular}{lcc} \hline \hline Decay channel & &Branching fraction\\ \hline $\psi(3686)\rightarrow\gamma\chi_{c0}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ & &$(4.7\pm0.7\pm0.5)\times10^{-5}$ \\ \hline $\psi(3686)\rightarrow\gamma\chi_{c1}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ & &$(4.8\pm0.5\pm0.4)\times10^{-5}$ \\ \hline $\psi(3686)\rightarrow\gamma\chi_{c2}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ & &$(7.8\pm0.9\pm0.6)\times10^{-5}$ \\ \hline $\chi_{c0}\rightarrow\bar{p}K^{*+}\Lambda$ & &$(4.8\pm0.7\pm0.5)\times10^{-4}$ \\ \hline $\chi_{c1}\rightarrow\bar{p}K^{*+}\Lambda$ & &$(5.0\pm0.5\pm0.4)\times10^{-4}$ \\ \hline $\chi_{c2}\rightarrow\bar{p}K^{*+}\Lambda$ & &$(8.2\pm0.9\pm0.7)\times10^{-4}$ \\ \hline $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ & &$(6.3\pm0.5\pm0.5)\times10^{-5}$\\ \hline \hline \end{tabular} \end{table} The processes $\psi(3686)\rightarrow\gamma\chi_{cJ}\rightarrow\gamma\bar{p}K^{*+}\Lambda$ and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$ are observed for the first time, using $448.1\times10^{6}$ $\psi(3686)$ events collected with the BESIII detector. Measurements of $\mathcal{B}(\psi(3686)\rightarrow\gamma\chi_{cJ})\cdot\mathcal{B}(\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda)$ and $\mathcal{B}(\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda)$ are performed, and the results are listed in Table\ \ref{table5}. For the processes $\chi_{cJ}\rightarrow\bar{p}K^{*+}\Lambda$ ($J$=0, 1, 2) and $\psi(3686)\rightarrow\bar{p}K^{*+}\Lambda$, no significant substructure is observed in the invariant-mass spectra of $\bar{p}K^{*+}$ and $K^{*+}\Lambda$. The $\bar{p}\Lambda$ mass spectrum is also compatible with the absence of substructure, although fits for possible excesses in the threshold region return results of around two-sigma significance in each of the four cases. The new measurements provide more information for understanding the mechanisms of charmonium decays. \par \begin{acknowledgements} The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11335008, 11425524, 11625523, 11635010, 11735014; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1532257, U1532258, U1732263; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contract No. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No.
DPT2006K-120470; National Science and Technology fund; The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contract No. DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt. \end{acknowledgements}
\section{Introduction} \label{sect:intro} Reionization of the intergalactic medium (IGM) is a crucial epoch in the history of the universe, when the neutral gas produced at recombination is ionized by the Ultra-Violet (UV) radiation emitted by the first luminous sources. After this stage, the IGM contains a small amount of neutral gas, responsible for the absorption features in the spectra of background objects. Reionization is still a poorly understood process, because of the unknown nature of the ionizing sources and of the complex physical mechanisms producing the radiation emission. However, as suggested by recent observations, it turns out to be better described as a spatially inhomogeneous and non-instantaneous phase \citep[see, e.g.,][and references therein]{Ciardi2005}. While the Lyman-$\alpha$ transmitted flux in the high-$z$ quasar (QSO) spectra found by the Sloan Digital Sky Survey suggests that the end of reionization is at $z\sim 6$ \citep{fan2001,becker2001,white2003b,fan2006}, the latest results from the Cosmic Microwave Background (CMB) polarization provide a Thomson optical depth of $\tau=0.09\pm 0.03$, which requires the completion of the reionization process at $z\sim 10$ \citep{spergel2007}. On the other hand, the IGM temperature measurements at $z<4$ \citep{hui2003} show that the reionization epoch occurs at $6<z<10$, while the lack of evolution in the Lyman-$\alpha$ galaxy luminosity function at $5.7\la z\la 6.5$ suggests that probably half of the IGM is ionized at $z\sim 6.5$ \citep{malhotra2004}. More recent estimates, based on high-$z$ HIRES QSOs, seem instead to argue against sudden changes in the IGM properties due to late reionization at $z\sim 6$ \citep{Becker:2006qj}; it is worth stressing, however, that the use of QSO near zones to probe the IGM ionization fraction could still be problematic \citep[see, e.g.,][]{Bolton2007}. Future observations, such as Lyman-$\alpha$ galaxy surveys \citep{kashicawa2006}, measurements of the CMB polarization and of the Sunyaev-Zeldovich effect by the Planck satellite, and, more importantly, neutral hydrogen 21 cm observations with new-generation telescopes (LOFAR, SKA), will provide further information to constrain the reionization scenario. In recent years, several analytic, semi-analytic and numerical models \citep[see, e.g.][]{wyithe2003,barkana2004, haiman2003,madau2004,choudhury2006, gnedin2000,ciardi2003b,Wyithe2006,iliev2007} have been developed in order to describe how the first sources of UV radiation impact the IGM. The basic idea of these approaches is to model the relation between the HII regions and the ionizing sources, which allows one to describe the morphology of the bubbles, together with the galactic physics, such as gas cooling, star formation, radiative feedback and radiative transfer. Despite some controversial results, the reionization process fits reasonably well in a ``standard'' $\Lambda$CDM cosmology, i.e. in a model where cold dark matter (CDM) and baryon density fluctuations grow in a flat universe dominated at late epochs by a dark energy component consisting of a cosmological constant. The latter is characterized by a constant equation of state $p=w\rho c^{2}$, with $w=-1$. At present, the best probe of dark energy is provided by SNe of type Ia. However, even including the most recent high$-z$ observations \citep[see, e.g.,][]{riess2007}, it is still not possible to obtain a high-precision determination of the dark energy equation of state.
Tight constraints could be obtained only by assuming strong and unjustified priors on its evolution. While observations suggest that $w\sim -1$ at late epochs, the time evolution of the dark energy component is essentially unconstrained. The so-called quintessence models, where $w$ varies in time, are not excluded. In CDM cosmologies with dynamic dark energy, the main consequence of having $w>-1$ at high $z$ is the earlier growth of the matter fluctuations. This determines a higher number density of haloes in a quintessence universe than in the standard cosmology at a fixed epoch \citep[see, e.g.][]{Maio2006}. Therefore, the presence of a dark energy component might affect the reionization process, requiring a different ionization efficiency to fully ionize the IGM at a given redshift. In this work, we use an analytic approach to investigate how reionization proceeds in quintessence cosmologies. In doing this, we consider two different cosmological models, for which we assume that the redshift dependence of the equation-of-state parameter $w(z)$ follows from the self-interacting potentials proposed by \citet{peebles2003} and \citet{brax2000}. The predicted scenario, obtained by ``painting'' the evolution of the HII regions through the \cite[][hereafter F05]{furlanetto2005} analytic model, is compared to that expected for a $\Lambda$CDM universe. The paper is organized as follows. In Section \ref{sect:qd} we briefly outline the quintessence cosmologies considered here. In Section \ref{sect:model} we review the main features of the F05 model. Section \ref{sect:res} contains the results on the evolution of the ionized bubbles and their properties. The final Section \ref{sect:conclu} summarises our main conclusions. \section{Probing the dynamic quintessence cosmology} \label{sect:qd} The main aim of this paper is to investigate the process of cosmic reionization in quintessence cosmologies and compare it to the scenario predicted for a standard flat $\Lambda$CDM universe. Thus, the $\Lambda$CDM cosmology will be our reference case, for which we assume that the contributions to the present density parameter from the cosmological constant, matter, and baryons are $\Omega_{0\Lambda}=0.7$, $\Omega_{\rm 0m}=0.3$ and $\Omega_{\rm 0b}=0.046$, respectively; the Hubble constant is $H_{0}=70$ km/s/Mpc (i.e. $h=0.7$ in units of 100 km/s/Mpc). We also fix the normalization of the power spectrum of the matter fluctuations according to $\sigma_{8}=0.9$, and the spectral index is $n=1$. These parameters are in agreement with the WMAP first-year data \citep{spergel2003}. We recall that the more recent analysis of the WMAP three-year data \citep{spergel2007} suggests slightly different values (in particular a lower $\sigma_{8}$ and a smaller $\Omega_{\rm 0m}$). Our parameter choice, made to allow a direct comparison with the similar analysis of F05, can have a small quantitative effect on some of the results, but cannot alter the general conclusions of our analysis, which is aimed at discussing the expected differences between models where the dark energy component is provided by a cosmological constant or by a dynamic quintessence. \begin{figure} \begin{center} \includegraphics[width=8.5cm, height=8.5cm]{fig1} \end{center} \caption{ Redshift evolution of the cosmic equation-of-state parameter $w$ (top panel) and of the growth factor, given in terms of $D(z)(1+z)$ and normalized to its value at the present time (bottom panel).
Different lines refer to the $\Lambda$CDM model (solid line), the RP model (dashed line) and the SUGRA model (dotted-dashed line).} \label{fig:1b} \end{figure} The dark energy models we consider are cosmological scenarios in which the dynamic dark energy component is characterized by a self-interacting scalar field $\Phi$, evolving under the effects of the potential $V(\Phi)$. Here we summarize the main features of these models \citep[see][for more details]{peebles2003}. The potential has to satisfy the Klein-Gordon equation: \begin{equation}\label{eq:1b} \ddot{\Phi}+3H(z)\dot{\Phi}+\frac{\partial V}{\partial\Phi}=0\ , \end{equation} where $H(z)$ represents the redshift evolution of the Hubble parameter, given by the usual Friedmann equation. \begin{figure*} \includegraphics[width=15cm, height=5cm]{fig2} \caption{ The halo distribution at different epochs: redshifts $z=12$, $z=9$, and $z=6$ are shown in the panels, from left to right. For each cosmological model, the halo mass function is computed consistently with the PS74 formalism and written in terms of the halo radius. Different curves refer to the $\Lambda$CDM model (solid line), the RP model (dashed line) and the SUGRA model (dotted-dashed line).} \label{fig:2b} \end{figure*} The corresponding $z$-evolution of the quintessence parameter $w$ is provided by the equation of state \begin{equation}\label{eq:2b} w\equiv \frac{p}{\rho c^{2}}=\frac{\frac{\dot{\Phi}^{2}} {2}-V(\Phi)}{\frac{\dot{\Phi}^{2}}{2}+V(\Phi)}\ , \end{equation} such that $w\rightarrow-1$ if $\dot{\Phi}^{2}\ll V(\Phi)$. Many analytic expressions have been proposed for the potential $V(\Phi)$. They reproduce the present dark energy amount simply by tuning their amplitude through the initial conditions, and they determine different redshift evolutions of $w$. In our analysis, we use the potentials proposed by \citet{peebles2003} (RP hereafter), \begin{equation}\label{eq:3b} V(\Phi)=\frac{k}{\Phi^{\alpha}}\ , \end{equation} and by \citet{brax2000} \begin{equation}\label{eq:4b} V(\Phi)=k\Phi^{-\alpha}\exp\Big(\frac{\Phi^{2}}{2}\Big)\ , \end{equation} as suggested by supergravity theory (SUGRA hereafter); $k$ has the dimension of a mass raised to the power $(\alpha+4)$. The values of $k$ and $\alpha$ are fixed by assuming a flat universe with a dark energy contribution to the present density parameter $\Omega_{\rm 0de}=0.7$, and $w_{0}\equiv w(z=0)=-0.85$ and $-0.83$ for the SUGRA and RP potentials, respectively. These choices for $w_0$, even if only marginally consistent with the observational constraints \citep[see, e.g.][]{riess2007}, have been made with the purpose of emphasizing the differences with respect to the $\Lambda$CDM model. The remaining parameters have been fixed to those of the $\Lambda$CDM model: $h=0.7$, $\sigma_8=0.9$ and $n=1$. Note that the quintessence models considered here do not violate the constraints recently obtained from the WMAP three-year data \citep{spergel2007} for the electron scattering optical depth, \begin{equation}\label{eq:5b} \tau_{e}=\int_{0}^{z_{r}}n_{e}(z)\sigma_{T}\frac{dL(z)}{dz}dz\ , \end{equation} computed at the reionization epoch, assumed to be $z_{r}=6$. In the previous equation, $\sigma_{T}$ represents the Thomson scattering cross-section, and $n_{e}(z)$ and $L(z)$ are the electron density and the comoving distance, respectively. In more detail, for the $\Lambda$CDM, RP and SUGRA models we estimate $\tau_{e}=0.132, 0.130$ and $0.125$, which agree (at the 1.5$\sigma$ level) with the WMAP optical depth.
Among all the possible dark energy models, we concentrate on these two because they have been accurately investigated by other authors from the theoretical point of view, and their impact on observational quantities has been extensively addressed \citep[see, e.g.,][]{dolag2004,meneghetti2005}. Furthermore, their deviations from the $\Lambda$CDM behaviour are larger at high redshifts, which is the interesting regime for the reionization analysis made here. This is evident from the time evolution of the equation-of-state parameter $w$, shown in the upper panel of Fig.~\ref{fig:1b}. Though the RP and SUGRA models display similar values of $w$ today, strong differences appear at high redshifts. Parametrizing the evolution of $w$ in terms of the expansion factor $a$ as $w(a)=w_{0}+w_{a}(1-a)$, we find that the RP and SUGRA models can be fitted by $w_{a}\sim 0.08$ and $0.55$, respectively. These values are still consistent with the present observational constraints. For example, \citet{Liddle2006}, combining data from the CMB, SNIa and baryonic acoustic oscillations, found $w_{a}=0.0\pm 0.8$. Of course, our results will depend on the considered dark energy scenarios: models having parameters more similar to (different from) the $\Lambda$CDM cosmology (for which $w_0=-1$, $w_a=0$) would show smaller (larger) effects on the observables discussed here. These high-redshift differences affect the initial phases of the reionization process. In particular, we expect that at the redshifts of interest $(z\ga 6)$ the RP model behaves as an `intermediate case' between SUGRA and $\Lambda$CDM. Note that in quintessence cosmologies the growth factor $D(z)$ is larger than in the standard cosmology, as shown in the lower panel of Fig.~\ref{fig:1b}, where the quantity $D(z)(1+z)$ has been normalized to its value at the present time. Thus, the main effects of having $w > -1$ at high redshifts are an earlier formation of structures and a higher abundance of haloes than in the $\Lambda$CDM cosmology at a fixed epoch \citep[see the discussion in][]{Maio2006}. As an illustrative example, in Fig.~\ref{fig:2b} we show the distribution of the dark matter halo sizes as predicted by the \citet{press1974} theory (PS74) for the $\Lambda$CDM (solid line), RP (dashed line) and SUGRA (dotted-dashed line) cosmologies at three different redshifts. Indeed, at the same cosmological epoch, the halo distribution is dominated by larger objects in the quintessence cases, in particular in the SUGRA model. This can strongly affect the ionization process of the universe. Indeed, numerical simulations \citep{ciardi2003,sokasian2003,sokasian2004} showed that reionization is an `inside-out' phenomenon, i.e. it begins in overdense regions and expands into underdense regions: this property predominantly originates from the fast increase of the abundance of ionizing sources. However, we remark that a slower (or vanishing) evolution of the source population would cause a rapid escape of the ionizing photons towards the external underdense regions: in this case reionization would be an outside-in process.
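To make the comparison of the growth histories concrete, the following minimal sketch (in Python) integrates the standard linear-growth equation, ${\rm d}^2D/{\rm d}a^2 + (3/a + {\rm d}\ln E/{\rm d}a)\,{\rm d}D/{\rm d}a = \tfrac{3}{2}\Omega_{\rm 0m}D/(a^5E^2)$, for the CPL fits quoted above. It reproduces the behaviour of the lower panel of Fig.~\ref{fig:1b} only qualitatively, since the actual models evolve $w$ through the scalar-field equations rather than the CPL form.
\begin{verbatim}
# Linear growth factor for w(a) = w0 + wa*(1 - a) dark energy (CPL form):
# LCDM (-1, 0), RP (-0.83, 0.08) and SUGRA (-0.85, 0.55), Omega_0m = 0.3.
import numpy as np
from scipy.integrate import solve_ivp

OM = 0.3

def E2(a, w0, wa):
    """Dimensionless H^2(a); the CPL dark-energy density is analytic."""
    g = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return OM * a ** -3 + (1.0 - OM) * g

def growth(a_out, w0, wa):
    def rhs(a, y):
        D, Dp = y
        e2 = E2(a, w0, wa)
        # d(ln E)/da from a central difference of E^2
        dlnE = (E2(1.0001 * a, w0, wa) - E2(0.9999 * a, w0, wa)) \
               / (0.0002 * a) / (2.0 * e2)
        return [Dp, -(3.0 / a + dlnE) * Dp + 1.5 * OM * D / (a ** 5 * e2)]
    a0 = 1e-3                      # deep matter era: D ~ a
    sol = solve_ivp(rhs, (a0, 1.0), [a0, 1.0], t_eval=a_out, rtol=1e-8)
    return sol.y[0]

a_out = np.array([1 / 13.0, 1 / 10.0, 1 / 7.0, 1.0])   # z = 12, 9, 6, 0
for name, w0, wa in (("LCDM", -1.0, 0.0), ("RP", -0.83, 0.08),
                     ("SUGRA", -0.85, 0.55)):
    D = growth(a_out, w0, wa)
    print(name, "D(z)/D(0) at z = 12, 9, 6:", np.round(D[:3] / D[3], 4))
\end{verbatim}
The quintessence models give larger $D(z)/D(0)$ at high redshift, i.e. structures form earlier when the power spectrum is normalized today.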
\section{An analytic approach to cosmic reionization} \label{sect:model} To investigate how reionization occurs, we use the analytic approach proposed by F05. In this section we review the main features of the model; for further details we refer to the original paper and to \citep[][hereafter F04]{furlanetto2004a}. \subsection{Evolution of bubbles without recombination} The evolution of the ionized bubbles is determined by the hierarchical growth of matter fluctuations. This can be described by making use of the extended PS74 formalism \citep{lacey1993}. A caveat has to be kept in mind about the use of the Press \& Schechter mass function, which is known not to work well for rare haloes at high redshift. In particular, numerical simulations \citep[see, e.g.,][]{iliev2006b,reed2007, lukic2007} show that it underestimates their abundance by a significant factor, with non-negligible effects on the bubble distribution. However, we recall that the extended formalism has not been developed for mass functions different from PS74; for this reason we have to rely on it, even if this can affect some of the following results. In the PS74 scenario, at a fixed cosmological epoch, an ionized bubble grows around a galaxy of mass $m_{\rm gal} \ge m_{\rm min}(z)$, where $m_{\rm min}(z)$ represents the virial mass corresponding to $T=10^{4}$K, the temperature at which hydrogen cooling becomes efficient. The mass associated to the ionized region is $m_{HII}=\zeta m_{\rm gal}$, where $\zeta$ represents the ionization efficiency of the galaxy (here assumed to be constant), which depends on the star formation rate, on the escape fraction of photons and on the number of HII recombinations. Since each region is thought to be isolated, it must contain enough collapsed mass to fully ionize the inner gas. Thus $f_{\rm coll}(\delta,m) \ge \zeta^{-1}$, where $f_{\rm coll}$ is the collapsed volume fraction of a region of mass $m \ge m_{\rm min}$ with an inner overdensity $\delta$. In the F04 formalism, this leads to the following condition for the overdensity inside a bubble of a given mass $m$ (in Lagrangian space): \begin{equation} \label{eq:1c} \delta_{m} \ge \delta_{x}(m,z) \equiv \delta_{c}(z)-\sqrt{2}K(\zeta)[\sigma^{2}_{\rm min}- \sigma^{2}(m)]^{1/2}\ , \end{equation} where $K(\zeta) \equiv {\rm erf}^{-1}(1-\zeta^{-1})$, $\sigma^{2}(m)$ is the variance of the density fluctuations smoothed on the scale $m$, and $\sigma^{2}_{\rm min} \equiv \sigma^{2}(m_{\rm min})$. As shown in F04, $\delta_{x}$ represents the ionization threshold for the density fluctuations in Lagrangian space and it is assumed to be a linear barrier with respect to $\sigma^{2}(m)$: $\delta_{x}(m,z) \sim B(m,z)=B_{0}(z)+B_{1}(z)\sigma^{2}(m)$. Hence it is possible to obtain an analytic expression for the distribution of the bubbles with mass in the range $m\pm {\rm d}m/2$: \begin{equation} \label{eq:2c} n(m,z)=\sqrt{\frac{2}{\pi}}\frac{\bar{\rho}}{m^{2}}\Big\vert\frac{{\rm d}\ln\sigma}{{\rm d}\ln m}\Big\vert\frac{B_{0}(z)} {\sigma(m)}\exp\Bigg[-\frac{B^{2}(m,z)}{2\sigma^{2}(m)}\Bigg]\ , \end{equation} where $\bar{\rho}$ is the mean comoving matter density of the universe.
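A toy numerical evaluation of Eq.~(\ref{eq:2c}) can help build intuition. In the Python sketch below, $\sigma(m)$ is modelled as a simple power law, and $\zeta$, $\sigma_{\rm min}$ and $\delta_c(z)$ are illustrative placeholder values; the barrier coefficients follow from the Taylor expansion of $\delta_x$ about $\sigma^2=0$, $B_0=\delta_c-\sqrt{2}K\sigma_{\rm min}$ and $B_1=K/(\sqrt{2}\sigma_{\rm min})$, as in F04.
\begin{verbatim}
# Toy bubble mass function n(m,z) with the linear barrier B = B0 + B1 s^2.
# All numbers below are illustrative placeholders, not fitted values.
import numpy as np
from scipy.special import erfinv

zeta      = 40.0                       # ionization efficiency (toy)
K         = erfinv(1.0 - 1.0 / zeta)   # K(zeta) = erf^{-1}(1 - 1/zeta)
sigma_min = 3.5                        # sigma(m_min) (toy)
delta_c   = 10.0                       # ~1.686/D(z); smaller = later epoch

B0 = delta_c - np.sqrt(2.0) * K * sigma_min
B1 = K / (np.sqrt(2.0) * sigma_min)

alpha = 0.25                           # toy slope: sigma(m) ~ m^-alpha
m = np.logspace(0, 6, 400)             # bubble mass in units of m_min
sigma = sigma_min * m ** -alpha
B = B0 + B1 * sigma ** 2
# m^2 n(m,z)/rho_bar: mass fraction per ln m locked in bubbles of mass m
f_lnm = (np.sqrt(2.0 / np.pi) * alpha * (B0 / sigma)
         * np.exp(-B**2 / (2.0 * sigma**2)))
print(f"characteristic bubble mass ~ {m[np.argmax(f_lnm)]:.3g} m_min")
\end{verbatim}
Lowering $\delta_c$ (i.e. moving to later epochs) shifts the peak of the distribution to larger masses, which is the qualitative behaviour of the growing bubbles in F04.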
In a similar way, adopting the \citet{lacey1993} formalism, we can write the merger rate of the HII regions as: \begin{eqnarray}\label{eq:3c} \frac{{\rm d}^{2} p(m_{1},m_{T},t)}{{\rm d} m_{2}{\rm d} t}&=&\sqrt{\frac{2}{\pi}}\frac{1}{t} \Big\vert\frac{{\rm d}\ln B}{{\rm d}\ln t}\Big\vert\Big\vert\frac{{\rm d}\ln\sigma_{T}}{{\rm d}\ln m_{T}}\Big\vert\times\nonumber\\ &&\Big(\frac{1}{m_{T}}\Big)\frac{B(m_{T},z)} {\sigma_{T}(1-\sigma^{2}_{T}/\sigma^{2}_{1})^{3/2}}\times\nonumber\\ &&\exp\Bigg[-\frac{B_{0}^{2}(z)}{2} \Bigg(\frac{1}{\sigma^{2}_{T}}-\frac{1}{\sigma^{2}_{1}}\Bigg)\Bigg]\ , \end{eqnarray} where ${\rm d}^{2} p(m_{1},m_{T},t)/{\rm d} m_{2}{\rm d} t$ is the probability per unit time that a given halo of mass $m_{1}$ merges with a halo of mass $m_{2}=m_{T}-m_{1}$. From equation (\ref{eq:3c}), it is possible to define the merger kernel \begin{equation}\label{eq:14c} Q(m_{1},m_{2},t)\equiv \frac{1}{n(m_{2},t)} \frac{{\rm d}^{2} p(m_{1},m_{T},t)}{{\rm d} m_{2}{\rm d}t}\ , \end{equation} which represents the rate at which each region of mass $m_{1}$ merges with a region of mass $m_{2}$. Since this quantity suffers from some limitations, because the asymmetry in its arguments becomes important for large masses, the use of the symmetrized kernel $Q_{sym}(m_{1},m_{2})\equiv 1/2[Q(m_{1},m_{2})+Q(m_{2},m_{1})]$ is preferred for estimating the merger rate of the bubbles. This allows us to define the fractional volume accretion for a bubble of mass $m_{1}$ that merges with a mass $m_{2}$: \begin{eqnarray}\label{eq:15c} V(m_{1})^{-1}\frac{{\rm d}V}{{\rm d}z}&\equiv& \frac{V(m_{2})}{V(m_{1})}m_{2}n(m_{2},z)\nonumber\\ &&\times Q_{sym}(m_{1},m_{2},t)\Big\vert\frac{{\rm d}t}{{\rm d}z}\Big\vert. \end{eqnarray} Finally, we recall that the global ionized fraction can be calculated as $\bar{x}_{i}=\zeta f_{\rm coll,g}(z)$, where $f_{\rm coll,g}$ is the global collapsed volume fraction. \subsection{The recombination-limit effects} Up to now, the recombination limit has been neglected. As a bubble grows, the photons propagate more deeply into the neutral IGM, and both the clumpiness and the recombination rate of the ionized gas increase. The IGM density distribution and its ionization state can be described using the analytic model of \citet{miralda2000} (MHR00). Analysing numerical simulations at $z < 4$, they found an analytic expression for the volume-weighted distribution of the gas density: \begin{equation}\label{eq:4c} P_{V}(\Delta)=A_{0}\Delta^{-\beta}\exp\Bigg[-\frac{(\Delta^{-2/3}-C_{0})^{2} }{2(2\delta_{0}/3)^{2}}\Bigg]\ , \end{equation} where $\Delta\equiv \rho/\bar{\rho}$ and $\delta_{0}$ is the r.m.s. of the density fluctuations smoothed on the Jeans mass at fixed $z$, so that $\delta_{0}\propto (1+z)^{-1}$; $A_{0}$ and $C_{0}$ represent normalization constants and $\beta$ can be set equal to 2.5, as predicted for isothermal spheres. The ionization state of the IGM is determined by a density threshold $\Delta_{i}$ such that the gas with $\Delta < \Delta_{i}$ is totally ionized and the gas with $\Delta > \Delta_{i}$ is neutral. Under this assumption, the recombination rate can be written as \begin{equation}\label{eq:5c} A(\Delta_{i})=A_{u}\int^{\Delta_{i}}_{0}P_{V}(\Delta)\Delta^{2}{\rm d}\Delta \equiv A_{u}C\ , \end{equation} where $C$ represents the clumping factor and $A_{u}$ is the recombination rate per hydrogen atom in gas at the mean density.
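The normalization constants $A_0$ and $C_0$ can be fixed numerically by requiring that the total volume and mass fractions integrate to unity; a minimal sketch (in Python), assuming $\beta=2.5$ and the commonly used MHR00 scaling $\delta_0\simeq 7.61/(1+z)$, is:
\begin{verbatim}
# Fix A0 and C0 of the MHR00 distribution P_V(Delta) by imposing unit
# volume and mass fractions, then evaluate the clumping factor C(Delta_i).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

z, beta = 6.0, 2.5
d0 = 7.61 / (1.0 + z)            # assumed MHR00 scaling of delta_0

def P_V(D, A0, C0):
    return A0 * D**-beta * np.exp(-(D**(-2.0 / 3.0) - C0)**2
                                  / (2.0 * (2.0 * d0 / 3.0)**2))

def constraints(p):
    A0, C0 = p
    args = dict(limit=200, points=(0.1, 1.0, 10.0))
    vol  = quad(lambda D: P_V(D, A0, C0),     1e-4, 1e4, **args)[0]
    mass = quad(lambda D: D * P_V(D, A0, C0), 1e-4, 1e4, **args)[0]
    return [vol - 1.0, mass - 1.0]

A0, C0 = fsolve(constraints, [0.5, 0.9])
D_i = 50.0                        # example ionization threshold Delta_i
C   = quad(lambda D: D**2 * P_V(D, A0, C0), 1e-4, D_i, limit=200)[0]
F_V = quad(lambda D: P_V(D, A0, C0),        1e-4, D_i, limit=200)[0]
print(f"A0 = {A0:.3f}, C0 = {C0:.3f}, C({D_i:g}) = {C:.2f}, F_V = {F_V:.4f}")
\end{verbatim}
The ionized volume fraction $F_V(\Delta_i)$ computed here also enters the mean-free-path relation, Eq.~(\ref{eq:6c}) below.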
In the F05 model, $A_{u}$ is assumed consistently with the A-case of the MHR00 model \citep[see also][]{miralda2003}: $A_{u}\propto \alpha_{A}(T)$, where $\alpha_{A}(10^{4}\,{\rm K})=4\times 10^{-13}$ cm$^{3}$s$^{-1}$. The MHR00 model also provides a relationship between the mean free path $\lambda_{i}$ of the photons and the ionized fraction of gas $F_{V}(\Delta_{i})$, namely: \begin{equation}\label{eq:6c} \lambda_{i}=\lambda_{0}[1-F_{V}(\Delta_{i})]^{-2/3}\ , \end{equation} where $\lambda_{0}$ is a normalization constant such that $\lambda_{0}H(z)=60$ km s$^{-1}$ at $z < 4$. In the following, we assume that the mean free path derived in the MHR00 model can also be used in the quintessence models, since the properties of this function should reflect the gas properties at the Jeans scale, which is much smaller than the scales we are interested in. However, this approximation deserves further investigation with suitable hydrodynamical simulations. To consider the recombination process it is necessary to relate the recombination rate to the smoothed matter overdensity. In doing this, it must be remarked that the main effect of an inhomogeneous gas distribution is an increased gas clumpiness and hence an increased HII recombination rate. As a consequence, $A_{u}\propto(1+\delta)$. When a bubble grows, the ionizing photons must be able to reach its edge, so the threshold must satisfy the condition $\lambda_{i}(\Delta_{i}) \ge R$, which sets $\Delta_{i}$. However, at the same time, the high inner gas clumpiness causes an increase of the recombination rate and the photons can be absorbed inside the bubble before reaching the edge. Then, for a growing region, the ionization rate has to be larger than the recombination rate at all times: \begin{equation}\label{eq:17c} \zeta\frac{{\rm d}f_{\rm coll}(\delta,R)}{{\rm d}t}>A_{u}C(R)(1+\delta)\ , \end{equation} where $C(R)$ is computed as in equation (\ref{eq:5c}) for $R=\lambda_{i}$. The recombination barrier is obtained by searching for the minimum $\delta$ in the Lagrangian space that satisfies equation (\ref{eq:17c}) at each given mass. The recombination process affects the geometry of the bubbles. When the ionizing photons are totally absorbed by the inner recombination, the HII region stops growing and reaches a maximum size $R_{\rm max}$ that, in a $\Lambda$CDM universe, slowly increases with decreasing redshift, as shown by F05. In the excursion-set formalism, the recombination limit has a deep impact on the distribution of the bubbles. Assuming that the recombination barrier is a vertical line crossing $\delta_{x}$ at $R_{\rm max}$, the trajectories such that $\delta(R_{\rm max})< B(R_{\rm max},z)$ will be incorporated into HII regions with $m < m_{\rm max}$ and the mass function reads: \begin{equation}\label{eq:7c} n_{\rm rec}(m,z)=\int^{B(R_{\rm max})}_{-\infty}p(\delta\vert R_{\rm max}) n(m,z\vert\delta,m_{\rm max},z){\rm d}\delta\ .
\end{equation} In the previous equation $p(\delta\vert R_{\rm max})$ represents the probability distribution at the scale $R_{\rm max}$ for a Gaussian density field \begin{equation}\label{eq:8c} p(\delta\vert R_{\rm max})= \frac{1}{\sqrt{2\pi}\sigma_{\rm max}}\exp\Bigg(-\frac{\delta^{2}}{2\sigma^{2}_{\rm max}}\Bigg)\ , \end{equation} and $n(m,z\vert\delta,m_{\rm max},z)$ is the conditional mass function for a random walk that begins at $(\delta,\sigma^{2}_{\rm max})$ \begin{eqnarray}\label{eq:9c} n(m,z\vert\delta,m_{\rm max},z)&=&\sqrt{\frac{2}{\pi}}\frac{\bar{\rho}}{m^{2}}\Big\vert \frac{{\rm d}\ln\sigma}{{\rm d}\ln m}\Big\vert\times\nonumber\\ &&\frac{\sigma^{2}[B(m_{\rm max},z)-\delta]}{(\sigma^{2}-\sigma^{2}_{\rm max})^{3/2}}\times\nonumber\\ &&\exp\Bigg\{-\frac{[B(m,z)-\delta]^{2}}{2(\sigma^{2}-\sigma^{2}_{\rm max})}\Bigg\} \ , \end{eqnarray} where $\sigma_{\rm max}\equiv \sigma(R_{\rm max})$. Since every trajectory lying above the ionization barrier at $R_{\rm max}$ belongs to a bubble with $R=R_{\rm max}$, the distribution of such Str\"omgren regions can be obtained from equation (\ref{eq:8c}): \begin{equation}\label{eq:10c} N_{\rm rec}=\frac{\bar{\rho}}{2m_{\rm max}}{\rm erfc}\Bigg[\frac{B(R_{\rm max},z)}{\sqrt{2}\sigma_{\rm max}}\Bigg]\ . \end{equation} After the saturation, bubbles can grow only by merging. For a single point of the IGM, reionization ends when it is incorporated in a recombination-limited region, since the ionizing background increases only slightly after further mergers of bubbles. Thus `overlap' is a local phenomenon. The volume fraction of the IGM in bubbles with $R > R_{\rm max}$ is provided by the PS74 formalism using the ionization barrier and turns out to be: \begin{equation}\label{eq:11c} x_{\rm rec}=\int^{+\infty}_{m_{\rm max}}n(m,z)V(m){\rm d}m\ , \end{equation} where $V(m)$ is the volume of the ionized region. \subsection{The Lyman-$\alpha$ flux transmission} The morphology of bubbles affects the absorption of the Lyman-$\alpha$ flux emitted from high-redshift sources. Indeed, large ionized regions allow the transmission of the Lyman-$\alpha$ forest because the neutral IGM is far enough from the emitting sources to reduce the Lyman-$\alpha$ `damping wing' absorption, as pointed out by \citet{furlanetto2004c}. As shown by F05, since the recombination process affects the late stages of reionization, the way the bubbles saturate can be constrained through the Lyman flux transmission. Therefore the observed transmitted flux from high-$z$ galaxies can constrain the predicted evolution of HII regions. Including the recombination limit in the F05 model allows one to write the probability of having an optical depth smaller than $\tau_{i}$ for the $i$-th transition, determined by the distribution of the IGM and by the morphology of the ionized bubbles, as follows: \begin{equation}\label{eq:12c} P(<\tau_{i})=\int^{+\infty}_{m_{HII\rm min}}n(m,z)\frac{m}{\bar{\rho}}{\rm d}m \int^{\Delta_{\rm max}}_{0}P_{V}(\Delta){\rm d}\Delta\ , \end{equation} where $m_{HII\rm min}$ is the minimum mass of the HII regions and $\Delta_{\rm max}$ is the maximum density for which $\tau<\tau_{i}$. The analytic expression for the inner overdensity of each bubble is obtained by assuming the polytropic equation of state $T=T_{0}\Delta^{\gamma}$, with $T_{0}=10^{4}$K, and ionization equilibrium inside each bubble.
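As a worked intermediate step (our addition): ionization equilibrium balances photoionizations and recombinations per hydrogen atom, \begin{displaymath} x_{HI}\,\Gamma=\chi_{e}\,\bar{n}_{H}\Delta\,\alpha(T)\,x_{HII}\ , \end{displaymath} so that, for an almost fully ionized bubble ($x_{HII}\simeq 1$), one immediately obtains the expression given below.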
Under these assumptions, the neutral hydrogen fraction can be written as a function of the matter overdensity $\Delta$: \begin{equation}\label{eq:18c} x_{HI}=\frac{\chi_{e}\bar{n}_{H}\alpha(T)}{\Gamma}\Delta\ , \end{equation} where $\chi_{e}$ is the correction for the singly-ionized helium and $\Gamma$ is the ionizing rate per hydrogen atom. It mainly depends on the total photon emissivity $\epsilon_{T}$ and on the mean free path $\lambda_{i}$ as \begin{equation}\label{eq:19c} \Gamma\propto\lambda_{i}\epsilon_{T}\Big(\frac{\eta}{3+\eta}\Big)\ . \end{equation} At the end of reionization $\epsilon_{T}\propto \zeta {\rm d}f_{\rm coll,g}/{\rm d}t$, $\eta=3/2$ if a starburst spectrum is assumed and $\lambda_{i}$ is set to the minimum of the bubble radius and $R_{\rm max}$. Moreover, $P_{V}(\Delta)$ is assumed to be independent of the bubble morphology, which could be a good approximation at the end of reionization, although high-resolution simulations are needed to test it. Hence, the relation between the local overdensity and the IGM optical depth is: \begin{eqnarray}\label{eq:13c} \Delta(\tau_{i})&=&\Bigg\{170\frac{\eta}{3+\eta}\frac{\alpha_{A}(10^{4}{\rm K})}{\alpha_{A}(T_{0})}h(z)\Bigg(\frac{\lambda_{i}}{\rm Mpc}\Bigg)\times\nonumber\\ &&\zeta\Big\vert\frac{{\rm d}f_{\rm coll}}{{\rm d}z}\Big\vert\Bigg(\frac{\tau_{i}} {\tau_{GP,i}}\Bigg)\Bigg\}^{1/(2-0.7\gamma)}\ , \end{eqnarray} where $\tau_{GP,i}$ is the Gunn \& Peterson optical depth for the $i$-th transition. Note that in the equation above the value of the Hubble parameter $h(z)$ is taken consistently from the different cosmological models. Finally, the probability for the inhomogeneous IGM to have a given optical depth $\tau$ is obtained by substituting $\Delta_{\rm max}$ in equation (\ref{eq:12c}). A warning has to be kept in mind regarding the application of our model to derive the Lyman fluxes: the description of the recombination process relies on some simplifying assumptions, like the abrupt change of the IGM ionization state as a function of its density \citep[see, e.g., discussion in][]{miralda2000}. We expect, however, that a more realistic treatment would change our predictions in the same way for the different cosmological models. For this reason, we prefer to present our results in terms of ratios between the fluxes derived for the quintessence models and those predicted for the $\Lambda$CDM model. \section{Results and discussion} \label{sect:res} In this section we present the main results of the application of the previous model to cosmological scenarios including a dynamic quintessence (see Section \ref{sect:qd}). Notice that F04 and F05 apply their model to the `concordance' $\Lambda$CDM model only. First of all, we compute the minimum collapsed mass at each cosmological epoch from the mass-temperature relation proposed by \citet{barkana2001}: \begin{eqnarray}\label{eq:1d} T_{\rm vir}&=&1.98\times10^{4}\Big(\frac{\mu}{0.6}\Big) \Big(\frac{M}{10^{8}h^{-1}M_{\odot}}\Big)^{2/3}\times\nonumber\\ &&\Big[\frac{\Omega_{\rm m}}{\Omega_{\rm m}^{z}}\frac{\Delta_{c}} {18\pi^{2}}\Big]^{1/3}\Big(\frac{1+z}{10}\Big) {\rm K}\ , \end{eqnarray} where $T_{\rm vir}$ is the virial temperature of a halo having mass $M$ and $\mu$ is the mean molecular weight of its inner gas. In order to take into account the fact that the IGM inside an HII region is not totally ionized because of the ionization equilibrium assumption, here we prefer to set $\mu$ to the mean of the values discussed in \citet{barkana2001}.
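As an illustrative worked example (our numbers, not quoted in the text: we assume $\Omega_{\rm m}=0.3$, so that $\Omega_{\rm m}^{z}\simeq 1$ and $\Delta_{c}\simeq 18\pi^{2}$ at high redshift), inverting equation (\ref{eq:1d}) with $T_{\rm vir}=10^{4}$K, $\mu=0.6$ and $z=8$ gives $m_{\rm min}\simeq 8\times 10^{7}h^{-1}M_{\odot}$; since at fixed $T_{\rm vir}$ the mass scales as $\mu^{-3/2}$, a different choice of $\mu$ shifts $m_{\rm min}$ accordingly.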
In any case, we checked that fixing the molecular weight to the value corresponding to a fully ionized IGM ($\mu=0.6$) does not significantly affect the model predictions on the observables discussed at the end of this section. For example, using $\mu=0.6$ changes the Ly-$\alpha$ flux transmission by $\sim 1\%$ only, irrespective of the considered cosmological model. The values for $\Delta_c$ and for the redshift evolution of $\Omega_{\rm m}(z)$ are computed consistently for the different cosmologies. The presence of a dynamic quintessence component affects the hierarchic evolution of the ionized regions. Indeed, the earlier growth of matter perturbations causes, at a fixed epoch, the formation of larger ionized regions in the quintessence cosmology. Another important consequence is that, even if the assumption of a linear ionization barrier is still correct, the barrier is lower than in the $\Lambda$CDM universe. This is evident in Fig.~\ref{fig:1d}, where we show the behaviour of the ionization barrier in the three different cosmological models, computed at $z=8$, assuming $\zeta=6$. \begin{figure} \begin{center} \includegraphics[angle=0, height=4cm, width=7cm]{fig3} \caption{The ionization barrier is here shown at $z=8$, assuming an ionization efficiency $\zeta=6$. For each cosmological model, the thick line represents $\delta_{x}$ as defined in equation (\ref{eq:1c}), while the thin line is its linear approximation. The curves refer to the $\Lambda$CDM model (solid line), RP model (dashed line) and SUGRA model (dotted-dashed line).} \label{fig:1d} \end{center} \end{figure} \begin{figure*} \includegraphics[angle=0, height=5cm, width=15cm]{fig4} \caption{ The morphology of the HII regions. The probability distribution of the bubble sizes is computed here neglecting recombination. In each panel (corresponding to different cosmological models), the curves refer to different epochs: $z=8.8$ (dotted line), 8.6 (dotted-dashed line), 8.4 (short-dashed line), 8.2 (long-dashed line) and 8 (solid line). The corresponding values for the ionized volume fraction $\bar{x}_{i}$, computed assuming an ionization efficiency $\zeta=6$, are also reported.} \label{fig:2d} \end{figure*} This implies a higher probability that matter fluctuations go beyond the ionization threshold. Consequently the density of bubbles increases, resulting in a different morphology of HII regions in the three cosmologies considered, as shown in Fig.~\ref{fig:2d}. The earlier growth of the matter fluctuations in the quintessence models causes a faster evolution of the mass function with respect to the $\Lambda$CDM universe and a remarkable increase of the density of the largest regions. As a consequence, at the same cosmological epoch, we obtain a higher ionized fraction $\bar{x}_{i}$ in the quintessence cases. As an illustrative example, we observe that at $z=8.8$, reionization is in its initial phases in the `standard' universe ($\bar{x}_{i}=0.35$) while for the SUGRA cosmology this epoch corresponds to its late stages, $\bar{x}_{i}=0.65$, almost a factor $\sim 2$ larger than in the `standard' case. Similarly, at $z=8$ reionization is in its final stages in the SUGRA universe, $\bar{x}_{i}=0.82$, while $\bar{x}_{i}=0.47$ for the `concordance' model. We recall that the model we adopt, being based on the extended PS74 formalism, can account only in a very approximate way, most often in the linear limit, for the source clustering, which is expected to have important effects on the morphology of the HII regions.
Since the clustering amplitude depends on the source abundances, which are different in the models considered here, our results could be affected by this bias. Work is in progress to properly address this problem by improving our modelling with suitable numerical simulations. \begin{figure*} \begin{center} \includegraphics[angle=0, height=5cm, width=15cm]{fig5} \caption{ The bubble morphology. The evolution of the bubble radius as a function of $\bar{x}_{i}$ is shown here neglecting recombination (left panel) and in the recombination limit (right panel). Solid, dotted and dotted-dashed lines are for $\Lambda$CDM, RP and SUGRA models, respectively. A complete reionization at $z=6$ is assumed here. } \label{fig:3d} \end{center} \end{figure*} \begin{figure*} \includegraphics[angle=0, height=5cm, width=15cm]{fig6} \caption{The hierarchic growth of bubbles. In each panel (corresponding to different cosmological models), we show the fractional volume accretion rate of an HII region having mass $M_{1}=10^{14}M_{\odot}$ that merges with a mass corresponding to a given radius $R$. The results, computed at $z=13$, are obtained assuming different ionization efficiencies: $\zeta=10$ (dotted line), $\zeta=20$ (dashed line) and $\zeta=30$ (solid line).} \label{fig:4d} \end{figure*} In Fig.~\ref{fig:3d} we show the different characteristic sizes, $R_{\rm char}$, of the ionized regions, as obtained by fixing $\zeta$ such that reionization ends at $z=6$ (note that $\zeta$ is not the same for the three models). Neglecting the recombination limit, the effects of the quintessence are evident at the early stages of reionization, since $R_{\rm char}$ (representing the radius at which the bubble distribution peaks) is larger in the standard universe than in the RP and SUGRA models. For example, at $\bar{x}_{i}\simeq 0.2$, $R_{\rm char}=0.4$ and $0.6$ Mpc for SUGRA and $\Lambda$CDM, respectively. The difference then becomes progressively smaller as reionization proceeds. This is caused by the different saturation regime of the IGM in the quintessence cosmologies. Ionized regions are smaller in the RP and SUGRA models at the beginning of reionization and their sizes evolve faster than in the $\Lambda$CDM universe due to the presence of large neutral voids around them, reaching the characteristic radius of the standard model in the final stages of reionization. Since the HII regions grow only by merging after the recombination limit, it is interesting to investigate how the dynamic dark energy component affects the bubbles' merger rates. As an example, Fig.~\ref{fig:4d} shows the merger probability of a region with mass $M=10^{14} M_{\odot}$ at $z=13$ for the $\Lambda$CDM, RP and SUGRA universes. As in the $\Lambda$CDM case, also in the quintessence cosmologies the evolution of the bubbles is dominated by merging events between large systems, in particular in the late stages of the reionization process. The main difference is given by the size of the regions involved: for the RP and SUGRA models the merger probability is higher, with bubbles that can be even one order of magnitude larger than in the standard universe. \begin{figure*} \includegraphics[angle=0,height=9cm, width=8.3cm]{fig7} \caption{ The recombination limit. In each panel (corresponding to different cosmological models), we show the ionization barrier, computed as in F04 (thin curves), and the recombination barrier, computed as in F05 (thick curves).
The results are shown at three different stages of reionization: $\bar{x}_{i}=0.49$ (dotted lines), $\bar{x}_{i}=0.82$ (dashed lines) and $\bar{x}_{i}=0.95$ (solid lines) at $z=6$.} \label{fig:5d} \end{figure*} \begin{figure*} \includegraphics[angle=0,height=5cm,width=15cm]{fig8} \caption{ The bubble size distribution considering the recombination process. The mass distributions show that in the recombination limit (thick curves) bubbles pile up at $R=R_{\rm max}$, while neglecting recombination (thin curves) they can reach sizes larger than $R_{\rm max}$. In each panel (corresponding to different cosmological models), the different curves refer to the distribution of the HII regions at different stages of reionization: $\bar{x}_{i}=0.51$ (dotted line), $\bar{x}_{i}=0.68$ (short-dashed line), $\bar{x}_{i}=0.84$ (long-dashed line), $\bar{x}_{i}=0.92$ (solid line) assumed for $z=8$. The filled points represent the volume fraction in bubbles with $R=R_{\rm max}$ in the recombination limit.} \label{fig:6d} \end{figure*} \begin{figure*} \includegraphics[angle=0,height=5cm,width=15cm]{fig9} \caption{The ionized volume fraction. In each panel (corresponding to different cosmological models), the curves refer to the volume fraction contained in recombination-limited bubbles during the overlap phase at different redshifts: $z=6$ (solid line), $z=9$ (dashed line) and $z=12$ (dotted line). The results are computed assuming the MHR00 model with $\lambda_{0}=60$ km s$^{-1}$.} \label{fig:7d} \end{figure*} We can now investigate how the recombination limit due to the IGM clumpiness affects the geometry and the evolution of the ionized regions in the quintessence universes. As already mentioned, in doing this we assume that the results of the MHR00 simulations obtained for a standard cosmology are still valid for a dynamic dark energy-dominated universe, since we do not expect large differences for the IGM volume-weighted density distribution. As discussed above, the dark energy component causes the matter fluctuations to grow earlier. Hence, since the recombination rate depends on the inner overdensity of the bubbles, recombination is already strong at smaller scales in the quintessence universes, compared to the $\Lambda$CDM case. Thus, the HII regions reach the equilibrium on scales smaller than in the standard model. This is clear in Fig.~\ref{fig:5d}, where we present three different stages of reionization. While the ionization threshold does not significantly change between the $\Lambda$CDM and SUGRA models, the recombination barrier $\delta_{\rm rec}$ extends to smaller scales in the quintessence universes. This effect is more prominent at the late stages of the reionization process, since the bubbles reach the equilibrium on scales of the order of $20-30$ Mpc in the $\Lambda$CDM universe instead of the $\sim 10$ Mpc predicted for SUGRA. In dynamic dark energy universes, the `earlier' (in terms of comoving scales) recombination barrier implies a smaller value of $R_{\rm max}$, computed as the crossing point of $\delta_{x}$ and $\delta_{\rm rec}$. The same trend was already evident in Fig.~\ref{fig:3d}, where the assumed values for $\zeta$ were such that $\bar{x}_{i}(z=6)=1$. A peculiarity with respect to the standard model is the discontinuous recombination barrier found for the SUGRA cosmology, which crosses the ionization threshold more than once.
To avoid further complications to the model, we choose to set $R_{\rm max}$ to the mean of the values obtained at the crossing points (which are in any case very close to each other). \begin{figure} \includegraphics[angle=0,height=9cm,width=8cm]{fig10} \caption{ The Lyman transitions. The probability distribution of the IGM optical depth $p(\tau)$ for the quintessence models is compared to that in the `standard' universe ($p(\tau)_{\Lambda CDM}$). Results for the RP and SUGRA models are shown in the panels in the left and right columns, respectively, and refer to Lyman-$\alpha$, Lyman-$\beta$ and Lyman-$\gamma$, in the different panels, from top to bottom. The curves are computed in the case of complete reionization ($\bar{x}_{i}=0.95$) at $z=6$ for different values of $R_{\rm max}$: 10 Mpc (dotted line), 20 Mpc (dotted-dashed line), 30 Mpc (short-dashed line), 60 Mpc (long-dashed line) and 600 Mpc (solid line).} \label{fig:8d} \end{figure} Fig.~\ref{fig:6d} shows an important effect of the smaller maximum radius on the bubble distribution, illustrated at $z=8$ for different stages of reionization. As a result, the HII regions with $M<M_{\rm max}$ tend to pile up on $\la 10$ Mpc scales in the SUGRA universe, in particular at the end of reionization. Furthermore, the drop of the ionization threshold causes an increase of the volume fraction contained in the recombination-limited regions. As mentioned before, for a random point in the IGM, the reionization process can be considered complete when the point joins a region with $M=M_{\rm max}$. Then, since $R_{\rm max}$ is smaller in the quintessence cosmologies, the fraction of points belonging to a region with $M>M_{\rm max}$ increases and the volume fraction inside bubbles larger than $M_{\rm max}$ grows. As shown in Fig.~\ref{fig:7d}, $x_{\rm rec}$ increases moving from the $\Lambda$CDM to the SUGRA model as reionization proceeds, implying different `epochs of overlap'. In this case, $x_{\rm rec}\sim 0.5$ is reached earlier in the quintessence models than in the standard universe. This effect is analogous to that discussed by F05 for a small mean free path of the ionizing photons. As illustrated before, the bubble morphology and the IGM ionization state affect the IGM optical depth $\tau$ and consequently the transmission of the Lyman-$\alpha$ flux. To investigate how the IGM optical depth distribution depends on $R_{\rm max}$ in the quintessence models, we compute the probability distribution $p(\tau)$ of the IGM optical depth for the Lyman-$\alpha,\beta,\gamma$ transmission following equation (\ref{eq:12c}). In Fig.~\ref{fig:8d} we compare $p(\tau)$ for the RP and SUGRA scenarios to that obtained considering the $\Lambda$CDM model, $p(\tau)_{\scriptscriptstyle\rm \Lambda CDM}$, for different values of $R_{\rm max}$. For small values of $\tau$, the transmission probability is lower in the quintessence models than in the $\Lambda$CDM case. We find that the trends with $R_{\rm max}$ for the Lyman-$\beta$ (central panels) and Lyman-$\gamma$ (lower panels) are analogous to those for the Lyman-$\alpha$ one (upper panels). \section{Conclusions} \label{sect:conclu} The purpose of this work is to give a picture of the reionization epoch in universes dominated at late epochs by a dynamic dark energy component, tracing the evolution of the HII regions with an analytic approach based on the hierarchic growth of matter fluctuations.
In doing this, we consider two cosmological models in which the dark energy density varies with time, driven by the \citet{peebles2003} and \citet{brax2000} potentials. Then we use the analytic approach proposed by F05 to outline the main differences between the evolution of bubbles in the quintessence models and in the standard $\Lambda$CDM cosmology. Our results can be summarized as follows. \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item The growth of density fluctuations occurs earlier and the ionization barrier $\delta_{x}$ is lower in the RP and SUGRA universes compared to the $\Lambda$CDM one. This causes a strong increase of the abundance of high-density regions with respect to the $\Lambda$CDM case at the same epoch. \item Neglecting the recombination limit, the characteristic size of the HII regions is smaller in the RP and SUGRA cases at the early stages of reionization, but the difference weakens as reionization proceeds. \item In the recombination limit, the earlier growth of the matter fluctuations causes the increase of the IGM clumpiness and the inner recombination of the bubbles becomes more efficient. As a consequence, the HII regions reach the ionization equilibrium on slightly smaller scales and the bubble abundance tends to increase. The IGM volume fraction contained in bubbles larger than $R_{\rm max}$ increases, requiring an earlier `epoch of overlap' in the quintessence universes compared to $\Lambda$CDM. \item The main effect on the high-$z$ QSO radiation transmission due to the different evolution of the HII regions is the lower probability of having a small Lyman optical depth (i.e. a high transmission) in the RP and SUGRA cosmologies compared to the $\Lambda$CDM model. \end{enumerate} \section*{Acknowledgements} We acknowledge financial contribution from contracts ASI-INAF I/023/05/0, ASI-INAF I/088/06/0 and INFN PD51. We thank Steven Furlanetto, Peng Oh, James Bolton, Enzo Branchini and Micol Bolzonella for useful discussions. We thank the anonymous referee for her/his constructive comments.
\newcommand{\mysection}[1]{\stepcounter{section} \section*{\normalsize\bf\thesection.~#1} } \begin{document} \thispagestyle{empty} \section*{\Large\bf Heavy Tails in Multi-Server Queue\footnote{Supported by EPSRC grant No.~R58765/01, INTAS Project No.~00-265 and RFBR grant No.~02-01-00358}\\[2mm] \normalsize\rm SERGUEI FOSS \hfill [email protected]\\ {\it Department of Actuarial Mathematics and Statistics, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, Scotland}\\ DMITRY KORSHUNOV \hfill [email protected]\\ \it Sobolev Institute of Mathematics, Novosibirsk 630090, Russia } \vspace{5mm} {\small{\bf Abstract.} In this paper, the asymptotic behaviour of the distribution tail of the stationary waiting time $W$ in the $GI/GI/2$ FCFS queue is studied. Under subexponential-type assumptions on the service time distribution, bounds and sharp asymptotics are given for the probability ${\bf P}\{W>x\}$. We also obtain asymptotics for the distribution tail of a stationary two-dimensional workload vector and of a stationary queue length. These asymptotics depend heavily on the traffic load.\\[5mm] {\bf Keywords:} FCFS multi-server queue, stationary waiting time, large deviations, long-tailed distribution, subexponential distribution. } \mysection{Introduction} It is well known (see, for example, [\ref{P}, \ref{Ver}, \ref{APQ}]) that in the stable single server {\it first-come-first-served} queue $GI/GI/1$ with typical interarrival time $\tau$ and typical service time $\sigma$ the tail of the stationary waiting time $W$ is related to the service time distribution tail $\overline B(x)={\bf P}\{\sigma>x\}$ via the equivalence \begin{eqnarray}\label{W.single} {\bf P}\{W>x\} &\sim& \frac{1}{{\bf E}\tau-{\bf E}\sigma} \int_x^\infty \overline B(y)\,dy \quad\mbox{ as }x\to\infty, \end{eqnarray} provided that the {\it integrated tail distribution} $B_I$, defined by its tail \begin{eqnarray*} \overline B_I (x) &\equiv& \min\Bigl(1,\ \int_x^\infty \overline B(y)\,dy \Bigr), \ \ x>0, \end{eqnarray*} is {\it subexponential}. As usual we say that a distribution $G$ on ${\bf R}^+$ is subexponential (belongs to the class $\mathcal S$) if $\overline{G*G}(x)\sim2\overline G(x)$ as $x\to\infty$. The converse assertion is also true, that is, the equivalence (\ref{W.single}) implies the subexponentiality of $B_I$; see [\ref{P}, Theorem 1] for the case of a Poisson arrival stream and [\ref{K}, Theorem 1] for the general case. In this paper we consider the $GI/GI/s$ FCFS queue which goes back to Kiefer and Wolfowitz [\ref{KW55}]. We have $s$ identical servers, i.i.d.\ interarrival times $\{\tau_n\}$ with finite mean $a={\bf E}\tau_1$, and i.i.d.\ service times $\{\sigma_n\}$ with finite mean $b={\bf E}\sigma_1$. The sequences $\{\tau_n\}$ and $\{\sigma_n\}$ are mutually independent. The system is assumed to be {\it stable}, i.e., $\rho\equiv b/a\in(0,s)$. We are interested in the asymptotic tail behaviour of the stationary waiting time distribution ${\bf P}\{W>x\}$ as $x\to\infty$. It was realized recently (see, for example, existence results for moments in [\ref{S}], [\ref{SS}]; an asymptotic hypothesis in [\ref{W}]; asymptotic results for fluid queues fed by heavy-tailed on-off flows in [\ref{BMZ}]) that the heaviness of the stationary waiting time tail depends substantially on the load $\rho$ in the system. More precisely, it depends on $\rho$ via the value of $k\in\{0,1,\ldots,s-1\}$ for which $k\le\rho<k+1$.
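To orient the reader (an illustration we add, anticipating the two-server results below): for $s=2$, the value $k=0$ corresponds to the maximal stability case $\rho<1$, where two large service times are needed to produce a large delay and the tail of $W$ turns out to be of the order of $(\overline B_I(x))^2$, while $k=1$ corresponds to the minimal stability case $1<\rho<2$, where a single large service time suffices and the tail is of the order of $\overline B_I(cx)$ for a suitable constant $c>1$.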
In particular, Whitt conjectured that \begin{eqnarray*} {\bf P}\{W>x\} &\sim& \gamma\left(\int_{\eta x}^\infty\overline B(y)dy\right)^{s-k} \quad \mbox{ as } x\to\infty, \end{eqnarray*} ``where $\gamma$ and $\eta$ are positive constants (as functions of $x$)'' [sic, [\ref{W}]]. In the present paper we show that, in general, the tail behaviour of $W$ is more complicated. Let $R(w)=(R_1(w),\ldots,R_s(w))$ be the operator on ${\bf R}^s$ which orders the coordinates of $w\in{\bf R}^s$ in ascending order, i.e., $R_1(w)\le\cdots\le R_s(w)$. Then the residual work vector $W_n=(W_{n1},\ldots,W_{ns})$ which the $n$th customer observes just upon its arrival satisfies the celebrated Kiefer--Wolfowitz recursion: $W_1=i\cdot0$, \begin{eqnarray*} W_{n+1} &=& R((W_{n1}+\sigma_n-\tau_{n+1})^+,(W_{n2}-\tau_{n+1})^+, \ldots, (W_{ns}-\tau_{n+1})^+)\\ &=& R(W_n+e_1\sigma_n-i\tau_{n+1})^+, \end{eqnarray*} where $e_1=(1,0,\ldots,0)$, $i=(1,\ldots,1)$ and $w^+=(\max(0,w_1),\ldots,\max(0,w_s))$. The value of $W_{n1}$ is the delay which customer $n$ experiences. In particular, the stationary waiting time $W$ is the weak limit of $W_{n1}$. The process $W_n$ is a Markov chain in ${\bf R}^s$. It is well known that, for general multi-dimensional Markov chains, large deviation problems are very difficult to solve even for stationary distributions. Usually they can be solved in low dimensions only, 2 or 3 at most, see [\ref{IMSh}, \ref{BMumn}]. Almost all known results are derived for the so-called Cram\'er case which corresponds to light-tailed distributions of jumps. In the heavy-tailed case almost nothing is known for general multi-dimensional Markov chains. The process $W_n$ provides a particular but very important example of a Markov chain in ${\bf R}^s$, even if we are interested only in the first component $W_{n1}$. As follows from our analysis, the case $s=2$ can be treated in detail. The stability condition for this particular case is $b<2a$. One of the following cases can occur: (i) the maximal stability case when $b<a$; (ii) the intermediate case when $b=a$; (iii) the minimal stability case when $b\in(a,2a)$. We find the exact asymptotics for ${\bf P}\{W>x\}$ in the maximal and minimal stability cases. We also describe the most probable way for the occurrence of large deviations. In the intermediate case, we only provide upper and lower bounds. Then we study the asymptotics for the tail of the distribution of a stationary two-dimensional workload vector and give comments on the tail asymptotics of the stationary queue length. For $s>2$, the stability condition is $b<sa$. We hope that, for $s>2$, direct modifications of our arguments may lead to exact asymptotics in two particular cases when either $b<a$ (the maximal stability) or $b \in ((s-1)a, sa)$ (the minimal stability). However, one has to overcome many extra technicalities for that. Insofar as the case $b\in[a,(s-1)a]$ is concerned, we are extremely sceptical about the possibility of obtaining any sharp tail asymptotics in explicit form. For the two-server queue, in the maximal stability case, we prove the following: \begin{Theorem}\label{th.2.max} Let $s=2$ and $b<a$. When the integrated tail distribution $B_I$ is subexponential, the tail of the stationary waiting time satisfies the asymptotic relation, as $x\to\infty$, \begin{eqnarray*} {\bf P}\{W>x\} &\sim& \frac{1}{a(2a-b)}\Bigl[(\overline B_I (x))^2 +b\int_0^\infty \overline B_I(x+ya)\overline B(x+y(a-b))dy\Bigr].
\end{eqnarray*} \end{Theorem} The proof follows by combining the lower bound given in Theorem \ref{th.2.max.lower} (Section \ref{sec.2.max.lower}) and the upper bound given in Theorem \ref{th.2.max.upper} (Section \ref{sec.2.max.upper}). Simpler lower and upper bounds for ${\bf P}\{W>x\}$ are given in the following \begin{Corollary}\label{cor.2.max} Under the conditions of Theorem {\rm\ref{th.2.max},} \begin{eqnarray*} \frac{2a+b}{2a^2(2a-b)} &\le& \liminf_{x\to\infty} \frac{{\bf P}\{W>x\}}{(\overline B_I (x))^2} \le \limsup_{x\to\infty} \frac{{\bf P}\{W>x\}}{(\overline B_I (x))^2} \le \frac{1}{2a(a-b)}. \end{eqnarray*} \end{Corollary} In our opinion, in Theorem \ref{th.2.max} it is possible to obtain a compact expression for the tail asymptotics of ${\bf P}\{W>x\}$ only in the regularly varying case. A distribution $G$ (or its tail $\overline{G}$) is {\it regularly varying} at infinity with index $\gamma >0$ (belongs to the class ${\cal RV}$), if $\overline{G}(x)>0$ for all $x$ and, for any fixed $c>0$, $\overline{G}(cx)/\overline{G}(x)\to c^{-\gamma}$ as $x\to\infty$. \begin{Corollary}\label{cor.2.max.reg} Let $b<a$ and the tail distribution $\overline B$ of service time be regularly varying with index $\gamma>1$. Then \begin{eqnarray*} {\bf P}\{W>x\} &\sim& c'(\overline B_I(x))^2, \end{eqnarray*} where \begin{eqnarray*} c' &=& \frac{1}{a(2a-b)}\Bigl[1+b(\gamma-1) \int_0^\infty\frac{dz}{(1+za)^{\gamma-1}(1+z(a-b))^\gamma}\Bigr]. \end{eqnarray*} \end{Corollary} Recall the definitions of a number of classes of heavy-tailed distributions. A distribution $G$ is {\it long-tailed} (belongs to the class ${\mathcal L}$) if $\overline{G}(x)>0$ for all $x$ and, for any fixed $t$, \begin{eqnarray*} \frac{\overline G(x+t)}{\overline G(x)} &\to& 1 \ \ \mbox{ as }x\to\infty. \end{eqnarray*} A distribution $G$ belongs to the class ${\cal IRV}$ of {\it intermediate regularly varying distributions} if $\overline{G}(x) >0$ for all $x$ and $$ \lim_{c\downarrow 1}\liminf_{x\to\infty} \frac{\overline{G}(cx)}{\overline{G}(x)} =1. $$ Clearly, ${\cal RV} \subset {\cal IRV}$. In the minimal stability case, we prove the following \begin{Theorem}\label{th.2.min.reg} Let $s=2$ and $a<b<2a$, $B\in{\mathcal S}$ and $B_I\in{\cal IRV}$. Then \begin{eqnarray*} {\bf P}\{W>x\} &\sim& \frac{1}{2a-b} \overline B_I\Bigl(\frac{b}{b-a}x\Bigr) \quad\mbox{ as }x\to\infty. \end{eqnarray*} \end{Theorem} The proof is given in Section \ref{min.stab.proof} and is based on the lower and upper bounds stated in Theorems \ref{th.2.min.lower} and \ref{min_upper}, respectively. One can provide simple sufficient conditions for $B\in{\mathcal S}$ and $B_I\in{\cal IRV}$. Let ${\cal D}$ be the class of all distributions $G$ on ${\bf R}^+$ such that $\overline{G}(x) >0$ for all $x$ and $\liminf_{x\to\infty}\overline G(2x)/\overline G(x)>0$. Then the following are known: (i) ${\cal RV } \subset {\cal IRV} \subset ({\cal L} \bigcap {\cal D})\subset{\mathcal S}$; (ii) if $B\in {\cal D}$ has a finite first moment, then $B_I\in{\cal IRV}$ (see e.g. [\ref{BoF}]). Therefore, if $B\in {\mathcal L} \bigcap {\cal D}$ and has a finite first moment, then $B$ satisfies the conditions of Theorem \ref{th.2.min.reg}. Note that the converse is not true, in general: there exists a distribution $B\in{\mathcal S}$ with a finite first moment such that $B_I\in{\cal IRV}$, but $B\notin{\mathcal L} \bigcap{\cal D}$ (see Example 2 in [\ref{DFK}, Section 6]). The paper is organized as follows. Section \ref{preliminaries} contains some auxiliary results.
In Section \ref{sec.2.max.lower}, we formulate and prove a result concerning a lower bound for ${\bf P}\{W>x\}$ in the maximal stability case. The corresponding upper bound is given in Section \ref{sec.2.max.upper}. Sections \ref{sec.2.min.lower}, \ref{sec.2.min.upper}, and \ref{min.stab.proof} deal, respectively, with lower bounds, upper bounds, and asymptotics for ${\bf P}\{W>x\}$ in the minimal stability case. In Section \ref{workload}, we prove further results related to the joint distribution of a stationary workload vector. Comments on the asymptotics for a stationary queue length distribution may be found in Section \ref{stat.q.l}. A number of upper and lower bounds for ${\bf P}\{W>x\}$ in the $s$-server queue are proposed in Remarks \ref{rem.s.min}, \ref{rem.s.min.up}, \ref{sserverlower}, and \ref{sserverupper}. \mysection{Preliminaries\label{preliminaries}} {\bf \ref{preliminaries}.1. Reduction to deterministic input stream case in assertions associated with upper bounds.} Consider a general $GI/GI/s$ queue. Take any $a'\in(b/s,a)$. Consider an auxiliary $D/GI/s$ system with the same service times $\{\sigma_n\}$ and deterministic interarrival times $\tau_n'\equiv a'$: $W'_1=0$ and \begin{eqnarray*} W'_{n+1} &=& R(W_n'+e_1\sigma_n-ia')^+. \end{eqnarray*} Let $W'$ be a stationary waiting time in this auxiliary system. \begin{Lemma}\label{upper.bound} If ${\bf P}\{W'>x\} \le \overline G(x)$ for some long-tailed distribution $G$, then \begin{eqnarray*} \limsup_{x\to\infty} \frac{{\bf P}\{W>x\}}{\overline G(x)} &\le& 1. \end{eqnarray*} \end{Lemma} \proof. Denote $\xi_n=a'-\tau_n$. Put $M_0 = 0$ and, for $n\ge1$, \begin{eqnarray*} M_n &=& \max\{0,\ \xi_n,\ \xi_n+\xi_{n-1},\ \ldots,\ \xi_n+\cdots+\xi_1\}\\ &=& \max (0, \xi_n+M_{n-1}) = (\xi_n+M_{n-1})^+. \end{eqnarray*} First, we use induction to prove the inequality \begin{eqnarray}\label{upper.via.M} W_n &\le& W'_n+iM_n\quad\mbox{ a.s.} \end{eqnarray} Indeed, for $n=1$ we have $0\le0+iM_1$. Assume the inequality is proved for some $n$; we prove it for $n+1$. Indeed, \begin{eqnarray*} W_{n+1} &=& R(W_n+e_1\sigma_n-i\tau_{n+1})^+\\ &\le& R(W_n'+iM_n+e_1\sigma_n-i\tau_{n+1})^+\\ &=& R(W_n'+e_1\sigma_n-ia'+i(M_n+\xi_{n+1}))^+. \end{eqnarray*} Since $(u+v)^+\le u^++v^+$, \begin{eqnarray*} W_{n+1} &\le& R(W_n'+e_1\sigma_n-ia')^+ +i(M_n+\xi_{n+1})^+ \equiv W_{n+1}'+iM_{n+1}, \end{eqnarray*} and the proof of (\ref{upper.via.M}) is complete. Let $M$ be the weak limit for $M_n$ which exists due to ${\bf E}\xi_1=a'-a<0$ and the Strong Law of Large Numbers. The following stochastic equality holds: \begin{eqnarray*} M &=_{\rm st}& \max\{0,\ \xi_1,\ \xi_1+\xi_2,\ \ldots,\ \xi_1+\cdots+\xi_n,\ \ldots\}. \end{eqnarray*} Since the random variable $\xi_1$ is bounded from above (by $a'$), there exists $\beta>0$ such that ${\bf E}e^{\beta\xi_1}=1$. Then by the Cram\'er estimate (see, for example, [\ref{Cr}, Section 5]), for any $x$, \begin{eqnarray}\label{exp.bound.for.M} {\bf P}\{M>x\} &\le& e^{-\beta x}. \end{eqnarray} The inequality (\ref{upper.via.M}) implies that $W \le_{\rm st} W'+M$, where $W'$ and $M$ are independent. Let a random variable $\eta$ have distribution $G$ and be independent of $M$. Since $\eta\ge_{\rm st}W'$, we have $W \le_{\rm st} \eta+M$. Therefore, for any $h>0$, \begin{eqnarray*} {\bf P}\{W>x\} &\le& \int_0^{x-h} {\bf P}\{M>x-y\}{\bf P}\{\eta\in dy\} +{\bf P}\{\eta>x-h\}\\ &\le& \int_0^{x-h} e^{-\beta(x-y)} G(dy)+\overline G(x-h), \end{eqnarray*} by (\ref{exp.bound.for.M}).
Integrating by parts yields \begin{eqnarray*} \int_0^{x-h} e^{-\beta(x-y)} G(dy) &=& -e^{-\beta(x-y)} \overline G(y)\Big|_0^{x-h} +\beta \int_0^{x-h} \overline G(y) e^{-\beta(x-y)}dy\\ &\le& e^{-\beta x} +\beta \int_h^x \overline G(x-y) e^{-\beta y}dy. \end{eqnarray*} The distribution $G$ is long-tailed, thus, for any $\varepsilon>0$ there exists $x(\varepsilon)$ such that \begin{eqnarray*} \overline G(x-1) &\le& \overline G(x)e^\varepsilon \end{eqnarray*} for any $x\ge x(\varepsilon)$. Hence, there exists $c(\varepsilon)<\infty$ such that \begin{eqnarray*} \overline G(x-y) &\le& c(\varepsilon)\overline G(x)e^{\varepsilon y} \end{eqnarray*} for any $x\ge x(\varepsilon)$ and any $y\in[0,x]$. Take $\varepsilon=\beta/2$. Then \begin{eqnarray*} \int_h^x \overline G(x-y) e^{-\beta y}dy &\le& c(\varepsilon)\overline G(x)\int_h^x e^{-\beta y/2}dy \le \frac{c(\varepsilon)}{\beta/2}\overline G(x) e^{-\beta h/2}. \end{eqnarray*} Hence, \begin{eqnarray*} {\bf P}\{W>x\} &\le& e^{-\beta x} +2c(\varepsilon)\overline G(x) e^{-\beta h/2} +\overline G(x-h). \end{eqnarray*} Taking into account also that $\overline G(x-h)\sim\overline G(x)$ for any fixed $h>0$, we obtain \begin{eqnarray*} \limsup_{x\to\infty} \frac{{\bf P}\{W>x\}}{\overline G(x)} &\le& 2c(\varepsilon) e^{-\beta h/2}+1. \end{eqnarray*} Letting $h\to\infty$ yields the conclusion of the Lemma. {\bf \ref{preliminaries}.2. Reduction to deterministic input stream case in assertions associated with lower bounds.} Take any $a'>a$. As in the previous subsection, consider an auxiliary $D/GI/s$ system with the same service times $\{\sigma_n\}$ and deterministic interarrival times $\tau_n'\equiv a'$. Let $W'$ be a stationary waiting time in this auxiliary system. \begin{Lemma}\label{lower.bound} If ${\bf P}\{W'>x\} \ge \overline G(x)$ for some long-tailed distribution $G$, then \begin{eqnarray*} \liminf_{x\to\infty} \frac{{\bf P}\{W>x\}}{\overline G(x)} &\ge& 1. \end{eqnarray*} \end{Lemma} \proof. Put $\xi_n=\tau_n-a'$, $M_0 = 0$ and \begin{eqnarray*} M_n &=& \max\{0,\ \xi_n,\ \xi_n+\xi_{n-1},\ \ldots,\ \xi_n+\cdots+\xi_1\} =(M_{n-1}+\xi_n)^+. \end{eqnarray*} For any $n\ge1$, the following inequality holds: \begin{eqnarray}\label{lower.via.M} W_n &\ge& W'_n-iM_n. \end{eqnarray} Indeed, by the same induction argument, \begin{eqnarray*} W_{n+1} &=& R(W_n+e_1\sigma_n-i\tau_{n+1})^+\\ &\ge& R(W_n'-iM_n+e_1\sigma_n-i\tau_{n+1})^+\\ &=& R(W_n'+e_1\sigma_n-ia'-i(M_n+\xi_{n+1}))^+. \end{eqnarray*} Since $(u-v)^+\ge u^+-v^+$, \begin{eqnarray*} W_{n+1} &\ge& R(W_n'+e_1\sigma_n-ia')^+ -i(M_n+\xi_{n+1})^+ \equiv W_{n+1}'-iM_{n+1}, \end{eqnarray*} and the proof of (\ref{lower.via.M}) is complete. Let $M$ be the weak limit for $M_n$ which exists due to ${\bf E}\xi_1=a-a'<0$ and the Strong Law of Large Numbers. The inequality (\ref{lower.via.M}) implies that $W \ge_{\rm st} W'-M$ where $W'$ and $M$ are independent. Therefore, for any $h>0$, \begin{eqnarray*} {\bf P}\{W>x\} &\ge& {\bf P}\{W'>x+h\} {\bf P}\{M\le h\} \ge\overline G(x+h) {\bf P}\{M\le h\}. \end{eqnarray*} The distribution $G$ is long-tailed, thus $\overline G(x+h)\sim\overline G(x)$ for any fixed $h>0$ and \begin{eqnarray*} \liminf_{x\to\infty} \frac{{\bf P}\{W>x\}}{\overline G(x)} &\ge& {\bf P}\{M\le h\}. \end{eqnarray*} Letting $h\to\infty$, we obtain the desired estimate from below. {\bf \ref{preliminaries}.3. Adapted versions of the Law of Large Numbers.} It is well known that obtaining lower bounds under heavy-tail assumptions usually requires some variant of the Law of Large Numbers.
Here we provide such a tool for the two-server queue. \begin{Lemma}\label{SLLN.max} Let $(\xi_n,\eta_n)$, $n=1$, $2$, \ldots, be independent identically distributed pairs of random variables. Let the two-dimensional Markov chain $V_n=(V_{n1},V_{n2})$, $n=1$, $2$, \ldots, be defined in the following way: $V_1$ has an arbitrary distribution and \begin{eqnarray*} V_{n+1} &=& \left\{ \begin{array}{lll} V_n+(\xi_n,\eta_n),\ &{\rm if}&\ V_{n1}\le V_{n2},\\ V_n+(\eta_n,\xi_n),\ &{\rm if}&\ V_{n1}>V_{n2}. \end{array} \right. \end{eqnarray*} If ${\bf E}\eta_1<{\bf E}\xi_1$, then the following convergence in probability holds: \begin{eqnarray*} \frac{V_n}{n} &\to& \Bigl(\frac{{\bf E}\xi_1+{\bf E}\eta_1}{2},\ \frac{{\bf E}\xi_1+{\bf E}\eta_1}{2}\Bigr) \ \ \mbox{ as }n\to\infty. \end{eqnarray*} \end{Lemma} \proof. Since $V_{n+1,1}+V_{n+1,2}=V_{n1}+V_{n2}+\xi_n+\eta_n$, by the Law of Large Numbers \begin{eqnarray}\label{sum.of.VV} \frac{V_{n1}+V_{n2}}{n} &\to& {\bf E}\xi_1+{\bf E}\eta_1 \quad\mbox{ as }n\to\infty. \end{eqnarray} Define a Markov chain $U_n=V_{n2}-V_{n1}$. If $U_n\ge0$, then $U_{n+1}-U_n=\eta_n-\xi_n$, while if $U_n<0$, then $U_{n+1}-U_n=\xi_n-\eta_n=-(\eta_n-\xi_n)$, so $U_n$ is an oscillating random walk. Since ${\bf E}\xi_1>{\bf E}\eta_1$, the mean drift of the chain $U_n$ is negative on the positive half-line and is positive on the negative half-line. Therefore, for any sufficiently large $A$, the set $[-A,A]$ is positive recurrent for this Markov chain. In particular, the distributions of $U_n$ are tight. Hence, $U_n/n \to 0$ in probability as $n\to\infty$. Together with (\ref{sum.of.VV}), it implies the desired assertion of the Lemma. The classical Law of Large Numbers and Lemma \ref{SLLN.max} imply the following \begin{Corollary}\label{SLLN.max.2} Let ${\bf E}\eta_1<{\bf E}\xi_1<0$ and $\varepsilon>0$. Then \begin{eqnarray*} {\bf P}\{V_{n1}>0,\,V_{n2}>0\,|\,V_1=(v_1,v_2)\} &\to& 1 \end{eqnarray*} as $N\to\infty$ uniformly in $n\ge N$ and in $(v_1,v_2)$ on the set $$ \Bigl\{v_1, v_2>n(|{\bf E}\xi_1|+\varepsilon),\ v_1+v_2>n(|{\bf E}\xi_1|+|{\bf E}\eta_1|+\varepsilon)\Bigr\}. $$ \end{Corollary} \begin{Corollary}\label{SLLN.max.3} Let ${\bf E}\eta_1<{\bf E}\xi_1<0$ and $\varepsilon>0$. Then \begin{eqnarray*} {\bf P}\{V_{n1}>0,\,V_{n2}>0\,|\,V_1=(v_1,v_2)\} &\to& 0 \end{eqnarray*} as $N\to\infty$ uniformly in $n\ge N$ and in $(v_1,v_2)$ on the complementary set $$ \overline{\{v_1>n(|{\bf E}\xi_1|-\varepsilon),\ v_2>n(|{\bf E}\xi_1|-\varepsilon),\ v_1+v_2>n(|{\bf E}\xi_1|+|{\bf E}\eta_1|-\varepsilon)\}}. $$ \end{Corollary} \begin{Corollary}\label{SLLN.max.4} Let ${\bf E}\eta_1<0$, ${\bf E}\xi_1>0$, ${\bf E}\eta_1+{\bf E}\xi_1<0$ and $\varepsilon>0$. Then \begin{eqnarray*} {\bf P}\{V_{n1}>x,\,V_{n2}>x\,|\,V_1=(v_1,v_2)\} &\to& 1 \end{eqnarray*} as $x$, $N\to\infty$ uniformly in $n\ge N$ and in $(v_1,v_2)$ on the set $$ \Bigl\{v_1>x-n({\bf E}\xi_1-\varepsilon),\ v_2>2x+n(|{\bf E}\xi_1+{\bf E}\eta_1|+\varepsilon)\Bigr\}. $$ \end{Corollary} \mysection{The maximal stability case: a lower bound \label{sec.2.max.lower}} \begin{Theorem}\label{th.2.max.lower} Assume $b\in (0,a)$. Let the integrated service time distribution $B_I$ be long-tailed. Then the tail of the stationary waiting time $W$ admits the following estimate from below: as $x\to\infty$, \begin{equation} \label{firstlb} {\bf P}\{W>x\} \ge \frac{1+o(1)}{a(2a-b)} \Biggl[(\overline B_I (x))^2+ b\int_0^\infty \overline B_I (x+ya)\overline B(x+y(a-b))dy\Biggr].
\end{equation} \end{Theorem} \remark\label{rem.rou} From (\ref{firstlb}), one can get the lower bound in Corollary \ref{cor.2.max}. Namely, replace $\overline{B}(x+y(a-b))$ by the smaller term $\overline{B}(x+ya)$ in the integral in the RHS of (\ref{firstlb}). Then the new integral is equal to $b(\overline{B}_I(x))^2/2a$, and the lower bound follows since $$ \frac{1}{a(2a-b)} \Bigl(1+\frac{b}{2a}\Bigr) = \frac{2a+b}{2a^2(2a-b)}. $$ \remark\label{rem.s.min} By use of the Strong Law of Large Numbers, one can get the following result for the $s$-server queue, $s\ge 2$. If $b<a$, then there exists a constant $K\equiv K(a,b,s)$ such that $$ {\bf P}\{W>x\} \ge (K+o(1))(\overline{B}_I(x))^s. $$ We start with some auxiliary results. The proof of the theorem is given in subsection \ref{sec.2.max.lower}.3. {\bf \ref{sec.2.max.lower}.1. An integral equality.} \begin{Lemma}\label{calcul.1} Let $f(y)$ be an integrable function. Put $f_I(y)\equiv\int_y^\infty f(z)dz$. Then, for any positive $\alpha$ and $\beta$, $\alpha>\beta$, \begin{eqnarray*} J &\equiv& \int_0^\infty \int_0^\infty f(\alpha y{+}\beta z)f(\beta y{+}\alpha z)dydz\\ && \hspace{30mm} =\ \frac{(f_I(0))^2}{\alpha^2{-}\beta^2} - \frac{2\beta}{\alpha^2{-}\beta^2} \int_0^\infty f_I(\alpha u)f(\beta u)du. \end{eqnarray*} \end{Lemma} \proof. Put $u=\alpha y{+}\beta z$ and $v=\beta y{+}\alpha z$. Then \begin{eqnarray*} J &=& \frac{1}{\alpha^2{-}\beta^2} \int_0^\infty f(u)du \int_{\beta u/\alpha}^{\alpha u/\beta} f(v)dv\\ &=& \frac{1}{\alpha^2{-}\beta^2} \int_0^\infty f(u)f_I(\beta u/\alpha)du -\frac{1}{\alpha^2{-}\beta^2} \int_0^\infty f(u)f_I(\alpha u/\beta)du\\ &=& \frac{\alpha}{\alpha^2{-}\beta^2} \int_0^\infty f(\alpha u)f_I(\beta u)du -\frac{\beta}{\alpha^2{-}\beta^2} \int_0^\infty f(\beta u)f_I(\alpha u)du. \end{eqnarray*} Integration by parts yields \begin{eqnarray*} \int_0^\infty f_I(\beta u)f(\alpha u)du &=& \frac{(f_I(0))^2}{\alpha}- \frac{\beta}{\alpha} \int_0^\infty f_I(\alpha u)f(\beta u)du. \end{eqnarray*} By substituting this equality into the previous one, we arrive at the conclusion of the Lemma. {\bf \ref{sec.2.max.lower}.2. Some calculations with two big service times.} Fix $\varepsilon>0$ and put $b'=b-\varepsilon$. For $k$ and $l$, $k<l\le n$, define the events $A_{nkl}$ and $C_{nkl}$ by the equalities \begin{eqnarray*} A_{nkl} &=& \Bigl\{\sigma_k>x+(l-k)a+(n-l)(a-b'),\ \sigma_l>x+(n-l)(a-b'),\\ && \hspace{40mm} \sigma_k+\sigma_l>2x+(l-k)a+(n-l)(2a-b')\Bigr\} \end{eqnarray*} and \begin{eqnarray*} C_{nkl} &=& \bigcap_{\stackrel{j=1}{j\ne k,l}}^n\Bigl\{ \sigma_j\le x+(n-j)(a-b')\Bigr\}. \end{eqnarray*} Note that the events $A_{nkl}\cap C_{nkl}$ are disjoint for different pairs $(k,l)$. Due to the existence of ${\bf E}\sigma$, uniformly in $n\ge1$ and $k<l\le n$, \begin{eqnarray}\label{slln.for.C} {\bf P}\{\overline C_{nkl}\} &\le& \sum_{j=0}^\infty {\bf P}\{\sigma_1>x+j(a-b')\} \to 0 \quad\mbox{ as }x\to\infty. \end{eqnarray} \begin{Lemma}\label{int.2} Assume $b\in (0,a)$. Let the integrated tail distribution $B_I$ be long-tailed. Then, for any fixed $N\ge1$ and for any $\varepsilon >0$, as $x\to\infty$, \begin{eqnarray*} \lim_{n\to\infty} \sum_{\stackrel{k,l=1}{k<l}}^{n-N} {\bf P}\{A_{nkl}\} &\sim& \frac{1}{a(2a-b')} \Biggl[(\overline B_I (x))^2 + b'\int_0^\infty \overline B_I(x+ya)\overline B(x+y(a-b'))dy \Biggr]. \end{eqnarray*} \end{Lemma} \proof.
Put \begin{eqnarray*} A_{kl}' &=& \{\sigma_1>x+ka+l(a{-}b'),\ \sigma_2>x+l(a{-}b'),\ \sigma_1+\sigma_2>2x+ka+l(2a{-}b')\}, \end{eqnarray*} so that ${\bf P}\{A_{nkl}\}={\bf P}\{A'_{l-k,n-l}\}$ and \begin{eqnarray}\label{throw.A.prime} \lim_{n\to\infty}\sum_{\stackrel{k,l=1}{k<l}}^{n-N} {\bf P}\{A_{nkl}\} &=& \lim_{n\to\infty}\sum_{l=N}^{n-1}\sum_{k=1}^{n-l-1} {\bf P}\{A_{kl}'\} = \sum_{l=N}^\infty\sum_{k=1}^\infty {\bf P}\{A_{kl}'\}. \end{eqnarray} Consider also the events \begin{eqnarray*} A(y,z) &=& \{\sigma_1>x{+}ya{+}z(a{-}b'),\ \sigma_2>x{+}z(a{-}b'),\ \sigma_1{+}\sigma_2>2x{+}ya{+}z(2a{-}b')\}, \end{eqnarray*} which satisfy $A(k,l)=A'_{kl}$. Since the probability ${\bf P}\{A(y,z)\}$ is non-increasing in $y$ and $z$, we have the inequalities \begin{eqnarray}\label{diff.i-.i+} I_-\equiv\int_N^\infty\int_1^\infty {\bf P}\{A(y,z)\}dydz &\le& \sum_{l=N}^\infty\sum_{k=1}^\infty {\bf P}\{A_{kl}'\} \nonumber\\ &\le& \int_0^\infty\int_0^\infty {\bf P}\{A(y,z)\}dydz \equiv I_+. \end{eqnarray} The values of integrals $I_-$ and $I_+$ are close to each other in the following sense: \begin{eqnarray*} \lefteqn{I_+-I_-}\\ &\le& \int_0^N\int_0^\infty {\bf P}\{A(y,z)\}dydz + \int_0^\infty\int_0^1 {\bf P}\{A(y,z)\}dydz\\ &\le& N{\bf P}\{\sigma_2>x\} \int_0^\infty{\bf P}\{\sigma_1>x+ya\}dy +{\bf P}\{\sigma_1>x\} \int_0^\infty{\bf P}\{\sigma_1>x+z(a{-}b')\}dz. \end{eqnarray*} Recall that the distribution $\overline B_I(x)$ is long-tailed, which is equivalent to $\overline B(x)=o(\overline B_I(x))$. Therefore, as $x\to\infty$, \begin{eqnarray*} I_+-I_- &\le& \frac{N+1}{a-b'}\overline B(x) \overline B_I(x) =o((\overline B_I(x))^2). \end{eqnarray*} Now it follows from (\ref{diff.i-.i+}) that, as $x\to\infty$, \begin{eqnarray}\label{integral.repr} \sum_{l=N}^\infty\sum_{k=1}^\infty {\bf P}\{A_{kl}'\} &=& \int_0^\infty\int_0^\infty {\bf P}\{A(y,z)\}dydz +o((\overline B_I(x))^2). \end{eqnarray} Further, \begin{eqnarray*} \lefteqn{{\bf P}\{A(y,z)\}}\\ &=& \overline B(x+ya+za) \overline B(x+z(a-b'))\\ && +\ {\bf P}\Bigl\{x+ya+z(a-b')<\sigma_1\le x+ya+za, \ \sigma_2>x+z(a-b'),\\ && \hspace{60mm} \sigma_1+\sigma_2> 2x+ya+z(2a-b')\Bigr\}\\ &=& \overline B(x+ya+za) \overline B(x+z(a-b'))\\ && +\ {\bf P}\Bigl\{x+ya+z(a{-}b')<\sigma_1\le x+ya+za,\ \sigma_1+\sigma_2> 2x+ya+z(2a{-}b')\Bigr\}\\ &\equiv& \overline B(x+ya+za) \overline B(x+z(a-b'))+Q(y,z), \end{eqnarray*} since the event $\{\sigma_1\le x+ya+za,\sigma_1+\sigma_2>2x+ya+z(2a-b')\}$ implies the event $\{\sigma_2>x+z(a-b')\}$. Consequently, integrating over $y$ and $z$, we obtain \begin{eqnarray*} \int_0^\infty\int_0^\infty \overline B(x{+}ya{+}za) \overline B(x{+}z(a{-}b')) dydz &=& \frac{1}{a} \int_0^\infty \overline B_I (x{+}za) \overline B(x{+}z(a{-}b'))dz. \end{eqnarray*} By the total probability formula, \begin{eqnarray*} Q(y,z) &=& \int_0^{zb'} {\bf P}\{\sigma_1\in x+ya+z(a{-}b')+dt\} {\bf P}\{\sigma_2>x+za-t\}\\ &=& \int_0^{zb'} \overline B(x+za-t) B(x+ya+z(a{-}b')+dt). \end{eqnarray*} The integration against $y$ leads to the equalities \begin{eqnarray*} \int_0^\infty Q(y,z)dy &=& \frac{1}{a} \int_0^{zb'} \overline B(x+za-t) B_I(x+z(a-b')+dt)\\ &=& \frac{1}{a} \int_0^{zb'} \overline B(x+za-t) \overline B(x+z(a-b')+t)dt\\ &=& \frac{b'}{a} \int_0^z \overline B(x+za-tb') \overline B(x+z(a-b')+tb')dt.
\end{eqnarray*} Integrating against $z$, we obtain: \begin{eqnarray*} \int_0^\infty \int_0^\infty Q(y,z) dydz &=& \frac{b'}{a} \int_0^\infty\int_0^z \overline B(x+za-tb') \overline B(x+z(a-b')+tb')dtdz\\ &=& \frac{b'}{a} \int_0^\infty \int_t^\infty \overline B(x+za-tb') \overline B(x+z(a-b')+tb')dzdt\\ &=& \frac{b'}{a} \int_0^\infty \int_0^\infty \overline B(x+za+t(a{-}b')) \overline B(x+z(a{-}b')+ta)dzdt. \end{eqnarray*} By Lemma \ref{calcul.1} with $f(y)=\overline B(x+y)$, $\alpha=a$, and $\beta=a-b'$, the latter integral is equal to \begin{eqnarray*} \frac{1}{a(2a-b')} (\overline B_I (x))^2 -\frac{2(a-b')}{a(2a-b')} \int_0^\infty \overline B_I (x+ya) \overline B(x+y(a-b'))dy. \end{eqnarray*} Putting everything together into (\ref{integral.repr}), we obtain the following equivalence, as $x\to\infty$: \begin{eqnarray*} \sum_{l=1}^\infty \sum_{k=1}^\infty {\bf P}\{A_{kl}'\} &\sim& \frac{1}{a(2a-b')} (\overline B_I(x))^2\\ && \hspace{10mm} +\frac{b'}{a(2a-b')} \int_0^\infty \overline B_I(x+ya) \overline B(x+y(a-b'))dy, \end{eqnarray*} which due to (\ref{throw.A.prime}) completes the proof of the Lemma. {\bf \ref{sec.2.max.lower}.3. Proof of Theorem \ref{th.2.max.lower}.} If $\overline B_I (x)$ is long-tailed, then the function in $x$ \begin{eqnarray*} (\overline B_I (x))^2+ b\int_0^\infty \overline B_I (x+ya)\overline B(x+y(a-b))dy \end{eqnarray*} is long-tailed as well. Indeed, for any fixed $t$, we have, as $x\to\infty$, \begin{eqnarray*} \int_0^\infty \overline B_I (x{+}t{+}ya) \overline B(x{+}t{+}y(a{-}b))dy &\sim& \int_0^\infty \overline B_I (x{+}ya) \overline B(x{+}t{+}y(a{-}b))dy. \end{eqnarray*} Integrating by parts, we get the following equality for the RHS integral \begin{eqnarray*} \lefteqn{-\frac{1}{a-b}\overline B_I (x{+}ya) \overline B_I(x{+}t{+}y(a{-}b))\Bigr|_0^\infty -\int_0^\infty \overline B(x{+}ya) \overline B_I (x{+}t{+}y(a{-}b))dy}\\ && \hspace{30mm} \sim \frac{1}{a-b}(\overline B_I (x))^2 -\int_0^\infty \overline B(x{+}ya) \overline B_I (x{+}y(a{-}b))dy\\ && \hspace{60mm} = \int_0^\infty \overline B_I(x{+}ya) \overline B (x{+}y(a{-}b))dy. \end{eqnarray*} So, we can apply Lemma \ref{lower.bound}, and it is sufficient to prove the lower bound of Theorem \ref{th.2.max.lower} for the queueing system $D/GI/2$ with deterministic input stream. Let the interarrival times $\tau_n$ be deterministic, i.e., $\tau_n\equiv a$. Then the event $A_{nkl}$ implies the event \begin{eqnarray*} \lefteqn{\Bigl\{W_{k+1,2}>x+(l{-}k)a+(n{-}l)(a{-}b')-a,\ W_{l+1,1}>x+(n{-}l)(a{-}b')-a,}\\ && \hspace{35mm} W_{l+1,1}+W_{k+1,2}>2x+(l{-}k)a+(n{-}l)(2a{-}b')-2a\Bigr\}, \end{eqnarray*} which implies \begin{eqnarray*} \Bigl\{W_{l+1,2}, W_{l+1,1}>x+(n{-}l)(a{-}b'){-}a, \ W_{l+1,1}{+}W_{l+1,2}>2x+(n{-}l)(2a{-}b'){-}2a\Bigr\}. \end{eqnarray*} Thus, by Corollary \ref{SLLN.max.2} (with $\xi=\sigma-\tau$ and $\eta=-\tau$), there exists $N$ such that \begin{eqnarray}\label{corr.to.corr2} {\bf P}\{W_n>x \mid A_{nkl}\} &\ge& 1-\varepsilon \end{eqnarray} for any $n>N$ and $k<l<n-N$. Taking into account that the events $A_{nkl}\cap C_{nkl}$ are disjoint for distinct pairs $(k,l)$, we obtain the following estimates: \begin{eqnarray*} {\bf P}\{W_n>x\} &\ge& \sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{W_n>x,A_{nkl},C_{nkl}\}\\ &\ge& \sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{W_n>x,A_{nkl}\} -\sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{A_{nkl},\overline C_{nkl}\}.
\end{eqnarray*} Since the events $A_{nkl}$ and $C_{nkl}$ are independent, \begin{eqnarray*} {\bf P}\{W_n>x\} &\ge& \sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{W_n>x,A_{nkl}\}-\sup_{kl} {\bf P}\{\overline C_{nkl}\} \sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{A_{nkl}\}\\ &=& \sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{W_n>x \mid A_{nkl}\}{\bf P}\{A_{nkl}\} -o(1)\sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{A_{nkl}\} \end{eqnarray*} as $x\to\infty$ uniformly in $n$, by (\ref{slln.for.C}). Together with (\ref{corr.to.corr2}) it implies that, for sufficiently large $x$ and $n>N$, \begin{eqnarray*} {\bf P}\{W_n>x\} &\ge& (1-2\varepsilon)\sum_{k=1}^{n-N} \sum_{l=k+1}^{n-N} {\bf P}\{A_{nkl}\}. \end{eqnarray*} Letting now $n\to\infty$, we derive from Lemma \ref{int.2} the following lower bound, for all sufficiently large $x$: \begin{eqnarray*} {\bf P}\{W>x\} &\ge& \frac{1-3\varepsilon}{a(2a-b')} \Bigl[(\overline B_I(x))^2 + b'\int_0^\infty \overline B_I(x+ya) \overline B(x+y(a-b'))dy\Bigr]. \end{eqnarray*} Note that, for any $b'<b<a$, $$ \int_0^\infty \overline B_I(x+ya) \overline B(x+y(a-b'))dy \ge \frac{a-b}{a-b'} \int_0^\infty \overline B_I(x+ya) \overline B(x+y(a-b))dy. $$ We complete the proof of the Theorem by letting $\varepsilon\downarrow 0$. \mysection{The maximal stability case: an upper bound \label{sec.2.max.upper}} \begin{Theorem}\label{th.2.max.upper} Assume $b\in (0,a)$. Suppose that the distribution $B_I$ is subexponential. Then, as $x\to\infty$, \begin{eqnarray*} {\bf P}\{W>x\} &\le& \frac{1{+}o(1)}{a(2a{-}b)} \Biggl[(\overline B_I(x))^2 + b \int_0^\infty \overline B_I(x{+}ya) \overline B(x{+}y(a{-}b))dy\Biggr]. \end{eqnarray*} \end{Theorem} By Lemma \ref{upper.bound}, it is sufficient to prove this upper bound for the queueing system $D/GI/2$ with deterministic input stream. So, let the interarrival times $\tau_n$ be deterministic, i.e., $\tau_n\equiv a$. Let $\sigma_n^{(1)}$ and $\sigma_n^{(2)}$, $n\ge1$, be independent random variables with common distribution $B$. In this section, we define the service times $\sigma_n$ recursively. For that, we have to associate workloads with servers. Put $U_1=(U_{1,1},U_{1,2})=(0,0)$ and introduce the recursion \begin{eqnarray}\label{UU} U_{n+1} = (U_n + e_{\alpha_n}\sigma_n - i a)^+ \end{eqnarray} where $\alpha_n=1$ if $U_{n,1}<U_{n,2}$ and $\alpha_n=2$ if $U_{n,1}>U_{n,2}$. If $U_{n,1}=U_{n,2}$, then $\alpha_n$ takes values $1$ and $2$ with equal probabilities independently of everything else. Note that $W_n = R(U_n)$ a.s. for any $n=1$, 2, \ldots. Now we can define $\sigma_n$ by induction. Indeed, $\alpha_1$ is chosen at random from the set $\{1 , 2\}$, by the convention above, since $U_{1,1}=U_{1,2}$. Put $\sigma_1 = \sigma_1^{(\alpha_1)}$. Then $U_2$ is defined by recursion (\ref{UU}) with $n=1$. Assume that $U_n$ is defined for some $n>1$. Then $\alpha_n$ is defined, too. Put $\sigma_n= \sigma_n^{(\alpha_n)}$ and determine $U_{n+1}$ by (\ref{UU}). Due to the symmetry, for any $n$, \begin{eqnarray}\label{symmetry} {\bf P}\{\alpha_n=1\} &=& {\bf P}\{\alpha_n=2 \}=1/2. \end{eqnarray} Consider two auxiliary $D/GI/1$ queueing systems which work in parallel: at any time instant $T_n = na$, $n=1$, 2, \ldots, one customer arrives in the first queue and one in the second. Service times in queue $i=1,2$ are equal to $\sigma_n^{(i)}$. Denote by $W_n^{(i)}$, $i=1$, $2$, the waiting times in the $i$th queue and put $W_1^{(1)}=W_1^{(2)}=0$. Since $b<a$, both queues are stable. Let $W^{(i)}$ be a stationary waiting time in the $i$th queue.
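The construction above is straightforward to simulate. The following minimal Python sketch (our addition, not part of the proof; the Pareto service-time distribution and all numerical values of $a$, $b$ and $x$ are illustrative assumptions) implements recursion (\ref{UU}) together with the two auxiliary single-server queues and checks, along the way, the pathwise comparison stated next. \begin{verbatim}
import random

a, x, n_steps = 1.0, 5.0, 10**5   # interarrival time, tail level, horizon

def sigma():
    # Pareto service time with index 2.5 and mean b = 0.6 < a (assumption).
    return 0.36 * (1.0 - random.random())**(-1.0 / 2.5)

U = [0.0, 0.0]                    # two-server workloads, recursion (UU)
W1 = W2 = 0.0                     # the two auxiliary D/GI/1 queues
hits = 0
for _ in range(n_steps):
    s1, s2 = sigma(), sigma()
    if U[0] < U[1]:
        k = 0                     # alpha_n = 1: server 1 is less loaded
    elif U[1] < U[0]:
        k = 1                     # alpha_n = 2: server 2 is less loaded
    else:
        k = random.randint(0, 1)  # tie broken with equal probabilities
    U[k] += s1 if k == 0 else s2  # sigma_n = sigma_n^{(alpha_n)}
    U = [max(u - a, 0.0) for u in U]
    W1 = max(W1 + s1 - a, 0.0)    # Lindley recursion in queue 1
    W2 = max(W2 + s2 - a, 0.0)    # Lindley recursion in queue 2
    w = min(U)                    # waiting time W_{n+1} = R_1(U_{n+1})
    assert w <= min(W1, W2) + 1e-9   # the comparison stated next
    hits += (w > x)

print('empirical P{W > x} ~', hits / n_steps)
\end{verbatim}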
By monotonicity, with probability 1, \begin{eqnarray}\label{W.le.min.W1.W2.prest} W_n &\le& \min\,(W_n^{(1)},\,W_n^{(2)}) \end{eqnarray} for any $n\ge1$. Hence, \begin{eqnarray}\label{W.le.min.W1.W2} W &\le& \min\,\{W^{(1)},\,W^{(2)}\}. \end{eqnarray} \begin{Lemma}\label{max_upper_independence} The waiting times $\{W_n^{(1)}\}$ and $\{W_n^{(2)}\}$ are independent. \end{Lemma} \proof. The assertion follows from the observation that the input (deterministic) stream and service times in the first queue do not depend on the input (also deterministic) stream and service times in the second one. Provided $B_I$ is a subexponential distribution, we thus have \begin{eqnarray}\label{W1.sim.BI} {\bf P}\{W^{(i)}>x\} &\sim& \frac{1}{a-b}\overline B_I(x) \quad\mbox{ as } x\to\infty. \end{eqnarray} Then Lemma \ref{max_upper_independence} together with (\ref{W.le.min.W1.W2}) implies the following simple upper bound: \begin{eqnarray}\label{W.le.W1sq} \limsup_{x\to\infty} \frac{{\bf P}\{W>x\}}{(\overline B_I(x))^2} &\le& \frac{1}{(a-b)^2}. \end{eqnarray} \remark\label{rem.s.min.up} For a $GI/GI/s$ queue with $b<a$ and subexponential distribution $B_I$, similar arguments lead to $$ \limsup_{x\to\infty} \frac{{\bf P}\{W>x\}}{(\overline{B}_I(x))^s} \le \frac{1}{(a-b)^s}. $$ Introduce the events, for $k<n$, \begin{eqnarray*} A_{nk}^{(1)} &=& \{\sigma_k^{(1)}>x+(n-k)(a-b)\},\\ A_{nk}^{(2)} &=& \{\sigma_k^{(2)}>x+(n-k)(a-b)\}. \end{eqnarray*} \begin{Lemma}[See also {[}\ref{BaF}, Theorem 5{]}] \label{upper_1server} Provided the distribution $B_I$ is subexponential, for any fixed $N$, \begin{eqnarray*} \limsup_{n\to\infty} {\bf P}\Bigl\{W^{(1)}_n>x,\, \bigcap_{k=1}^{n-N}\overline{A^{(1)}_{nk}}\Bigr\} &=& o(\overline B_I (x)) \quad\mbox{ as }x\to\infty. \end{eqnarray*} \end{Lemma} \proof. For any $\delta>0$, consider the disjoint events \begin{eqnarray*} C_{nk}^{(1)} &=& \Bigl\{\Bigl\{\sigma_k^{(1)}>x+(n-k)(a-b+\delta)\Bigr\}\cap \bigcap_{\stackrel{j=1}{j\neq k}}^{n-1}\Bigl\{\sigma_j^{(1)}\le x+(n-j)(a-b)\Bigr\}\Bigr\}. \end{eqnarray*} Due to the Law of Large Numbers, there exists $M>N$ such that \begin{eqnarray*} {\bf P}\{W^{(1)}_n>x\mid C_{nk}^{(1)}\} &\ge& 1-\delta \end{eqnarray*} for any $n\ge M$ and $k\le n-M$ and, by the limit relation (\ref{slln.for.C}), \begin{eqnarray*} {\bf P}\{C_{nk}^{(1)}\} &\ge& (1-\delta){\bf P}\{\sigma_k^{(1)}>x+(n-k)(a-b+\delta)\}. \end{eqnarray*} The events $C_{nk}^{(1)}$, $k\le n-M$, are disjoint, hence, \begin{eqnarray*} {\bf P}\Bigl\{W^{(1)}_n>x,\,\bigcup_{k=1}^{n-M} C^{(1)}_{nk}\Bigr\} &=& \sum_{k=1}^{n-M} {\bf P}\{W^{(1)}_n>x,\,C^{(1)}_{nk}\}\\ &\ge& (1-\delta)^2\sum_{k=M}^{n-1} {\bf P}\{\sigma_1^{(1)}>x+k(a-b+\delta)\}. \end{eqnarray*} The latter implies the following lower bound: \begin{eqnarray*} \liminf_{n\to\infty} {\bf P}\Bigl\{W^{(1)}_n>x,\,\bigcup_{k=1}^{n-M} C^{(1)}_{nk}\Bigr\} &\ge& (1-\delta)^2\sum_{k=M}^\infty \overline B(x+k(a-b+\delta))\\ &\sim& \frac{(1-\delta)^2}{a-b+\delta}\overline B_I(x) \end{eqnarray*} as $x\to\infty$. Since $A^{(1)}_{nk}\supseteq C^{(1)}_{nk}$ and since $M>N$ and $\delta>0$ can be chosen arbitrarily, \begin{eqnarray*} \liminf_{n\to\infty} {\bf P}\Bigl\{W^{(1)}_n>x,\,\bigcup_{k=1}^{n-N} A^{(1)}_{nk}\Bigr\} &\ge& \frac{1+o(1)}{a-b}\overline B_I(x) \quad\mbox{ as }x\to\infty. \end{eqnarray*} Together with (\ref{W1.sim.BI}), it implies the assertion of the Lemma. \proof\ of Theorem \ref{th.2.max.upper} continued.
Estimate (\ref{W.le.min.W1.W2.prest}) and Lemma \ref{max_upper_independence} imply \begin{eqnarray*} {\bf P}\Bigl\{W_n>x,\, \bigcap_{k=1}^{n-N}\overline{A^{(1)}_{nk}}\cup \bigcap_{l=1}^{n-N}\overline{A^{(2)}_{nl}}\Bigr\} &\le& {\bf P}\Bigl\{W^{(1)}_n>x,\,W^{(2)}_n>x,\, \bigcap_{k=1}^{n-N}\overline{A^{(1)}_{nk}}\cup \bigcap_{l=1}^{n-N}\overline{A^{(2)}_{nl}}\Bigr\}\\ &\le& {\bf P}\Bigl\{W^{(1)}_n>x,\, \bigcap_{k=1}^{n-N}\overline{A^{(1)}_{nk}}\Bigr\} {\bf P}\{W^{(2)}_n>x\}\\ && + {\bf P}\{W^{(1)}_n>x\} {\bf P}\Bigl\{W^{(2)}_n>x,\, \bigcap_{l=1}^{n-N}\overline{A^{(2)}_{nl}}\Bigr\}. \end{eqnarray*} Applying now Lemma \ref{upper_1server} and relation (\ref{W1.sim.BI}), we conclude that, as $x\to\infty$, \begin{eqnarray*} \limsup_{n\to\infty} {\bf P}\Bigl\{W_n>x,\, \bigcap_{k=1}^{n-N}\overline{A_{nk}^{(1)}}\cup \bigcap_{l=1}^{n-N}\overline{A_{nl}^{(2)}}\Bigr\} &=& o((\overline B_I(x))^2). \end{eqnarray*} Since \begin{eqnarray*} \bigcap_{k=1}^{n-N}\overline{A_{nk}^{(1)}}\cup \bigcap_{l=1}^{n-N}\overline{A_{nl}^{(2)}} &=& \bigcap_{k,l=1}^{n-N}\Bigl(\overline{A_{nk}^{(1)}}\cup \overline{A_{nl}^{(2)}}\Bigr) = \overline{\bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\Bigr)}, \end{eqnarray*} we obtain the equivalent relation, as $x\to\infty$, \begin{eqnarray}\label{upper_2server} \limsup_{n\to\infty} {\bf P}\Biggl\{W_n>x,\, \overline{\bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\Bigr)}\Biggr\} &=& o((\overline B_I(x))^2). \end{eqnarray} Fix $\varepsilon>0$ and put $b'=b+\varepsilon$. For any $n$ and $k\le l\le n$, define \begin{eqnarray*} D^{(1)}_{nk} &=& \{\sigma^{(1)}_k>x+(l-k)a+(n-l)(a-b')\},\\ D^{(2)}_{nl} &=& \{\sigma^{(2)}_l>x+(n-l)(a-b')\},\\ D_{nkl} &=& \{\sigma^{(1)}_k+\sigma^{(2)}_l>2x+(l-k)a+(n-l)(2a-b')\}. \end{eqnarray*} For any $n$ and $l\le k\le n$, define \begin{eqnarray*} D^{(1)}_{nk} &=& \{\sigma^{(1)}_k>x+(n-k)(a-b')\},\\ D^{(2)}_{nl} &=& \{\sigma^{(2)}_l>x+(k-l)a+(n-k)(a-b')\},\\ D_{nkl} &=& \{\sigma^{(1)}_k+\sigma^{(2)}_l>2x+(k-l)a+(n-k)(2a-b')\}. \end{eqnarray*} Denote \begin{eqnarray*} F_{nkl} &=& D^{(1)}_{nk}\cap D^{(2)}_{nl}\cap D_{nkl}. \end{eqnarray*} We can derive an upper bound on the probability of the event $\{W_n>x\}$ as follows: \begin{eqnarray}\label{P1.P2.P3.dec} \lefteqn{{\bf P}\{W_n>x\}}\nonumber\\ &\le& {\bf P}\Bigl\{W_n>x,\,\bigcup_{k,l=1}^{n-N} F_{nkl}\Bigr\} + {\bf P}\Bigl\{W_n>x,\, \overline{\bigcup_{k,l=1}^{n-N} F_{nkl}},\, \bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\Bigr)\Bigr\}\nonumber\\ && + {\bf P}\Bigl\{W_n>x,\, \overline{\bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\Bigr)}\Bigr\}\nonumber\\ &\equiv& P_{n1}+P_{n2}+P_{n3}. \end{eqnarray} Here the first term is not greater than \begin{eqnarray}\label{dec.of.p1} P_{n1} &\le& {\bf P}\Bigl\{W_n>x,\, \bigcup_{\stackrel{k,l=1}{k<l}}^{n-1} F_{nkl}\Bigr\} + {\bf P}\Bigl\{W_n>x,\, \bigcup_{\stackrel{k,l=1}{k>l}}^{n-1} F_{nkl}\Bigr\} + {\bf P}\Bigl\{W_n>x,\, \bigcup_{k=1}^{n-1}F_{nkk}\Bigr\}\nonumber\\ &\equiv& P_{n11}+P_{n12}+P_{n13}. \end{eqnarray} The third probability is negligible in the sense that \begin{eqnarray}\label{p13} P_{n13} &\le& {\bf P}\Bigl\{\bigcup_{k=1}^{n-1} (D^{(1)}_{nk}\cap D^{(2)}_{nk})\Bigr\} \le\sum_{k=1}^{n-1} {\bf P}\{D^{(1)}_{nk}\} {\bf P}\{D^{(2)}_{nk}\}\nonumber\\ &\le& \overline B(x)\sum_{k=1}^\infty \overline B(x+k(a-b-\varepsilon))\nonumber\\ &\le& \overline B(x)\overline B_I(x)/(a-b-\varepsilon) = o((\overline B_I(x))^2) \end{eqnarray} as $x\to\infty$, since $\overline B(x)=o(\overline B_I (x))$. 
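Let us indicate briefly why $\overline B(x)=o(\overline B_I(x))$; this is a standard consequence of the long-tailedness of $B_I$ (which follows from its subexponentiality): by monotonicity of $\overline B$,
\begin{eqnarray*}
\overline B(x) \le \int_{x-1}^{x}\overline B(t)\,dt = \overline B_I(x-1)-\overline B_I(x) = o(\overline B_I(x)) \quad\mbox{ as }x\to\infty,
\end{eqnarray*}
since $\overline B_I(x-1)\sim\overline B_I(x)$.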
The first probability in (\ref{dec.of.p1}) admits the following upper bound: \begin{eqnarray*} P_{n11} &\le& \sum_{k=1}^{n-1} {\bf P}\Bigl\{W_n>x,\,D^{(1)}_{nk}, \alpha_k=1, \bigcup_{l=k+1}^{n-1}(D^{(2)}_{nl}\cap D_{nkl})\Bigr\}\\ && + \sum_{k=1}^{n-1} {\bf P}\Bigl\{W_n>x,\,D^{(1)}_{nk}, \alpha_k=2, \bigcup_{l=k+1}^{n-1}(D^{(2)}_{nl}\cap D_{nkl})\Bigr\} \equiv\Sigma_1+\Sigma_2. \end{eqnarray*} For $\Sigma_1$, we have the following inequality and equalities: \begin{eqnarray}\label{sigma.1} \Sigma_1 &\le& \sum_{\stackrel{k,l=1}{k<l}}^{n-1} {\bf P}\Bigl\{D^{(1)}_{nk}, \alpha_k=1,D^{(2)}_{nl},D_{nkl}\Bigr\}\nonumber\\ &=& \sum_{\stackrel{k,l=1}{k<l}}^{n-1} {\bf P}\{\alpha_k=1\} {\bf P}\Bigl\{D^{(1)}_{nk},D^{(2)}_{nl},D_{nkl}\Bigr\} = \frac{1}{2}\sum_{\stackrel{k,l=1}{k<l}}^{n-1} {\bf P}\{F_{nkl}\}, \end{eqnarray} by independence of the event $\{ \alpha_k=1 \}$ from $D^{(1)}_{nk}$, $D^{(2)}_{nl}$ and $D_{nkl}$ and by the symmetry (\ref{symmetry}). The sum $\Sigma_2$ is not greater than \begin{eqnarray*} \Sigma_2 &\le& \sum_{k=1}^{n-1} {\bf P}\Bigl\{W_n>x,\,D^{(1)}_{nk}, \alpha_k=2\Bigr\}\\ &=& \sum_{k=1}^{n-1} {\bf P}\{D^{(1)}_{nk}\} {\bf P}\Bigl\{W_n>x,\, \alpha_k=2\Bigr\}\le {\bf P}\{W_n>x\} \sum_{k=1}^{n-1} {\bf P}\{D^{(1)}_{nk}\}. \end{eqnarray*} Hence, $\Sigma_2=o({\bf P}\{W_n>x\})$ as $x\to\infty$ uniformly in $n$. Combining the latter fact with estimate (\ref{sigma.1}) for $\Sigma_1$, we get \begin{eqnarray}\label{p11} P_{n11} &\le& \frac{1}{2}\sum_{\stackrel{k,l=1}{k<l}}^{n-1} {\bf P}\{F_{nkl}\}+o({\bf P}\{W_n>x\}). \end{eqnarray} Taking into account the equality $P_{n11}=P_{n12}$, we obtain from (\ref{dec.of.p1}), (\ref{p13}) and (\ref{p11}) the following estimate: \begin{eqnarray*} P_{n1} &\le& \sum_{\stackrel{k,l=1}{k<l}}^{n-1} {\bf P}\{F_{nkl}\}+o((\overline B_I(x))^2) \end{eqnarray*} as $x\to\infty$ uniformly in $n$. Now applying the calculations of Subsection \ref{sec.2.max.lower}.3 we can write down the following estimate, as $x\to\infty$: \begin{eqnarray}\label{P1.dec} \limsup_{n\to\infty} P_{n1} &\le& \frac{1+o(1)}{a(2a-b')}\Bigl[(\overline B_I (x))^2 +b'\int_0^\infty\overline B_I(x+ya)\overline B(x+y(a-b'))dy\Bigr]. \nonumber\\ \end{eqnarray} It is proved in (\ref{upper_2server}) that, uniformly in $n$, \begin{eqnarray}\label{P2.dec} P_{n3} &=& o((\overline B_I(x))^2) \quad\mbox{ as }x\to\infty. \end{eqnarray} We have \begin{eqnarray*} \overline{\bigcup_{k,l=1}^{n-N} F_{nkl}} \cap \bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\Bigr) &\subseteq& \bigcup_{k,l=1}^{n-N}\Bigl(A_{nk}^{(1)}\cap A_{nl}^{(2)}\cap\overline F_{nkl}\Bigr). \end{eqnarray*} Thus, \begin{eqnarray}\label{P3.dec.pr} P_{n2} &\le& \sum_{k,l=1}^{n-N} {\bf P}\{W_n>x \mid A_{nk}^{(1)},A_{nl}^{(2)},\overline F_{nkl}\} {\bf P}\{A_{nk}^{(1)}\cap A_{nl}^{(2)}\}. \end{eqnarray} Conditioning on $W_{k1}$ and $W_{l2}$ yields, for any $w>0$, \begin{eqnarray*} {\bf P}\{W_n>x \mid A_{nk}^{(1)},A_{nl}^{(2)},\overline F_{nkl}\} &\le& {\bf P}\{W_n>x \mid W_{k1}\le w,W_{l2}\le w, A_{nk}^{(1)},A_{nl}^{(2)},\overline F_{nkl}\}\\ && + {\bf P}\{W_{k1}>w\}+{\bf P}\{W_{l2}>w\}. \end{eqnarray*} Since $b<2a$, the two-server queue is stable and, in particular, the sequence of distributions of the random variables $(W_{n1},W_{n2})$ is tight. This means that, for any fixed $\varepsilon>0$, there exists $w$ such that, for any $k\ge0$ and $l\ge0$, \begin{eqnarray*} {\bf P}\{W_{k1}>w\} &\le& \varepsilon \quad \mbox{ and }\quad {\bf P}\{W_{l2}>w\} \le \varepsilon.
\end{eqnarray*} Also, from the stability and from Corollary \ref{SLLN.max.3}, for any fixed $\varepsilon>0$ and $w>0$, there exists $N$ such that, for any $n\ge N$ and $k$, $l\le n-N$, \begin{eqnarray*} {\bf P}\{W_n>x \mid W_{k1}\le w,W_{l2}\le w, A_{nk}^{(1)},A_{nl}^{(2)},\overline F_{nkl}\} &\le& \varepsilon. \end{eqnarray*} Combining these estimates we obtain from (\ref{P3.dec.pr}), \begin{eqnarray*} P_{n2} &\le& 3\varepsilon \sum_{k,l=1}^{n-N} {\bf P}\{A_{nk}^{(1)}\cap A_{nl}^{(2)}\} =3\varepsilon \Bigl(\sum_{k=1}^{n-N} {\bf P}\{A_{nk}^{(1)}\}\Bigr)^2. \end{eqnarray*} Hence, \begin{eqnarray}\label{P3.dec} P_{n2} &\le& 3\varepsilon \Bigl(\sum_{k=1}^\infty \overline B(x+k(a-b'))\Bigr)^2 \le \frac{3\varepsilon}{(a-b')^2}(\overline B_I(x))^2. \end{eqnarray} Since the choice of $\varepsilon>0$ is arbitrary, relations (\ref{P1.P2.P3.dec})--(\ref{P2.dec}) and (\ref{P3.dec}) imply the conclusion of Theorem \ref{th.2.max.upper}. \mysection{The minimal stability case: lower bounds\label{sec.2.min.lower}} \begin{Theorem}\label{th.2.min.lower} Let $b\in(a,2a)$ and let the integrated tail distribution $B_I$ be long-tailed. Then the tail of the stationary waiting time satisfies the following inequality, for any fixed $\delta>0$: \begin{eqnarray*} {\bf P}\{W>x\} &\ge& \frac{1+o(1)}{2a-b} \overline B_I\Bigl(\frac{b+\delta}{b-a}x\Bigr) \quad \mbox{ as }x\to\infty. \end{eqnarray*} \end{Theorem} Notice that if $b\in(a,2a)$ then $\frac{b}{b-a}>2$. \remark\label{sserverlower} By use of similar arguments, one can get the following result for an $s$-server queue, $s\ge2$: if the integrated tail distribution $B_I$ is long-tailed and $b\in((s-1)a,sa)$, then, for any $\delta>0$, \begin{eqnarray*} {\bf P}\{W>x\} &\ge& \frac{1+o(1)}{sa-b} \overline B_I \left(\frac{(s-1)b-s(s-2)a+\delta}{b-(s-1)a}x\right) \quad \mbox{ as }x\to\infty. \end{eqnarray*} Theorem \ref{th.2.min.lower} implies the following \begin{Corollary}\label{min_lower.reg} Assume that $B_I \in {\cal IRV}$. Then, as $x\to\infty$, \begin{eqnarray*} {\bf P}\{W>x\} &\ge& \frac{1+o(1)}{2a-b} \overline B_I\Bigl(\frac{b}{b-a}x\Bigr). \end{eqnarray*} \end{Corollary} In the case $b\in[a,2a)$, one can also derive a lower bound similar to (\ref{firstlb}). More precisely, assume $b\in[a,2a)$ and introduce another two-server queue with the same service times and with inter-arrival times $\tau'_n=c\tau_n$, where $c>b/a$. For this queue, denote by $W'$ a stationary waiting time of a typical customer. Due to monotonicity, ${\bf P}\{W'>x\} \le {\bf P}\{W>x\}$ for all $x$. Applying Theorem \ref{th.2.max.lower} and Remark \ref{rem.rou}, we get the following lower bound for the case $b\in[a,2a)$: if the integrated tail distribution $B_I$ is long-tailed, then, for any $c>b/a$, \begin{eqnarray}\label{anotherlower} {\bf P}\{W>x\} &\ge& (1+o(1))\frac{2ca+b}{2c^2a^2(2ca-b)}(\overline B_I (x))^2. \end{eqnarray} \proof\ of Theorem \ref{th.2.min.lower}. By Lemma \ref{lower.bound}, it is sufficient to prove the lower bound for the queueing system $D/GI/2$ with deterministic input stream. Let the interarrival times $\tau_n$ be deterministic, i.e., $\tau_n\equiv a$. For any $\delta>0$, set $\varepsilon=\frac{\delta(b-a)}{a+\delta}$. Put $b'=b-\varepsilon$ and $N=\frac{x}{b'-a}$. For any $k\in[1,n-N]$, consider the events \begin{eqnarray*} A_{nk} &=& \{\sigma_k>2x+(2a-b')(n-k)\},\\ C_{nk} &=& \bigcap_{\stackrel{l=1}{l\ne k}}^n \{\sigma_l\le 2x+(2a-b')(n-l)\}.
\end{eqnarray*} Since ${\bf E}\sigma$ is finite, \begin{eqnarray}\label{c.to.1} {\bf P}\{\overline C_{nk}\} &\le& \sum_{l=1}^\infty {\bf P}\{\sigma_1>2x+(2a-b')l\} = O(\overline B_I (2x)) \to 0 \end{eqnarray} as $x\to\infty$ uniformly in $n\ge1$ and $k\le n$. Since the events $A_{nk}\cap C_{nk}$, $k\in[1,n]$, are disjoint, we obtain \begin{eqnarray*} {\bf P}\{W_n>x\} &\ge& \sum_{k=1}^{n-N}{\bf P}\{W_n>x,A_{nk},C_{nk}\}\\ &\ge& \sum_{k=1}^{n-N}{\bf P}\{W_n>x,A_{nk}\}- \sum_{k=1}^{n-N}{\bf P}\{A_{nk},\overline C_{nk}\}. \end{eqnarray*} Since the events $A_{nk}$ and $C_{nk}$ are independent, we get \begin{eqnarray}\label{pequ} {\bf P}\{W_n>x\} &\ge& \sum_{k=1}^{n-N}{\bf P}\{W_n>x,A_{nk}\}- \sup_{k\le n}{\bf P}\{\overline C_{nk}\} \sum_{k=1}^{n-N}{\bf P}\{A_{nk}\}\nonumber\\ &=& \sum_{k=1}^{n-N}{\bf P}\{W_n>x\mid A_{nk}\} {\bf P}\{A_{nk}\}- o(1)\sum_{k=1}^{n-N}{\bf P}\{A_{nk}\} \end{eqnarray} as $x\to\infty$ uniformly in $n\ge1$, by (\ref{c.to.1}). The event $A_{nk}$ implies the event $$ W_{k+1,2}>2x+(2a-b')(n-k)-a. $$ Thus, it follows from Corollary \ref{SLLN.max.4} that \begin{eqnarray*} {\bf P}\{W_n>x \mid A_{nk}\} &\to& 1 \end{eqnarray*} as $x\to\infty$ uniformly in $n$ and $k\le n-N$. Therefore, we can derive from (\ref{pequ}) the estimate \begin{eqnarray*} {\bf P}\{W>x\}=\lim_{n\to\infty} {\bf P}\{W_n>x\} &\ge& (1-\varepsilon)\lim_{n\to\infty} \sum_{k=1}^{n-N}{\bf P}\{A_{nk}\}\\ &=& (1-\varepsilon)\sum_{k=N}^\infty \overline B(2x+(2a-b')k), \end{eqnarray*} which is valid for all sufficiently large $x$. Since the tail $\overline B_I(v)$ is long-tailed, \begin{eqnarray*} \sum_{k=N}^\infty \overline B(2x+(2a-b')k) &\sim& \frac{1}{2a-b'} \overline B_I(2x+(2a-b')N)\\ &=& \frac{1}{2a-b'} \overline B_I \Bigl(\frac{b'}{b'-a}x\Bigr) = \frac{1}{2a-b'} \overline B_I \Bigl(\frac{b+\delta}{b-a}x\Bigr) \end{eqnarray*} as $x\to\infty$. The proof is complete. \mysection{The minimal stability case: an upper bound} \label{sec.2.min.upper} \begin{Theorem} \label{min_upper} Assume $b\in[a,2a)$. Let both $B$ and $B_I$ be subexponential distributions. Then the tail of the stationary waiting time satisfies the following inequality, as $x\to\infty${\rm:} \begin{eqnarray*} {\bf P}\{W\ge x\} &\le& \frac{1+o(1)}{2a-b} \overline{B}_I(2x). \end{eqnarray*} \end{Theorem} \remark\label{sserverupper} By use of the same arguments, one can get the following result for any $s$-server queue, $s\ge2$: if $B_I\in{\mathcal S}$ and $b<sa$, then \begin{eqnarray*} {\bf P}\{W>x\} &\le& \frac{1+o(1)}{sa-b} \overline{B}_I (sx) \quad \mbox{ as }x\to\infty \end{eqnarray*} provided that either (i) $\sigma_1\ge (s-1)a$ a.s., or (ii) $B\in{\mathcal S}$. \remark \label{FChupper} For an $s$-server queue, Foss and Chernova [\ref{FCh}] have proposed another way of obtaining upper bounds; it is based on comparison with a queue with the so-called {\it cyclic} service discipline. \proof\ of Theorem \ref{min_upper}. From Lemma \ref{SLLN.max}, it is sufficient to consider the case of constant interarrival times $\tau_n\equiv a$ only. Put $M_{n,0}=0$ and $$ M_{n,i+1} = (M_{n,i}+\sigma_{n+i}-a)^+. $$ Since $b>a$, $M_{0,n}\to\infty$ a.s. as $n\to\infty$ and, due to the Law of Large Numbers, \begin{equation} \label{SSL} \frac{M_{0,n}}{n} \to b-a\quad \mbox{a.s.} \end{equation} and in mean. Note that ${\bf E}M_{0,n}\ge n(b-a)$, since $M_{0,n}\ge\sigma_0+\ldots+\sigma_{n-1}-na$. For any given $\varepsilon>0$, choose an integer $L>0$ such that \begin{equation}\label{LL} \frac{{\bf E} M_{0,L}}{L} \in [b-a, b-a+\varepsilon ). 
\end{equation} Consider any initial workload vector $W_0=(W_{0,1}, W_{0,2}) \ge 0$. Put $Z_n = W_{n,1}+W_{n,2}$. Since the increments of the minimal coordinate of the waiting time vector are not greater than the increments of $M_{0,n}$, \begin{eqnarray*} W_{n,1}-W_{0,1} &\le& M_{0,n}\quad\mbox{ for any }n. \end{eqnarray*} Hence, provided $W_{n,2}\ge a$, we have the inequality \begin{eqnarray*} Z_{n+1}-Z_n &\le& M_{0,n+1}-M_{0,n}-a. \end{eqnarray*} If $Z_0\ge 2aL$, then $W_{0,2}\ge aL$ and, for $n=0$, \ldots, $L-1$, $W_{n,2}\ge a(L-n)\ge a$. Therefore, if $Z_0 \ge 2aL$, then \begin{eqnarray*} Z_L &\le& Z_0+M_{0,L}-aL. \end{eqnarray*} Monotonicity implies, for any initial vector $W_0$, \begin{eqnarray*} Z_L &\le& \max \{2aL,Z_0\}+M_{0,L}-aL. \end{eqnarray*} Thus, the following inequalities are valid for any $n$: \begin{eqnarray}\label{recu} Z_{(n+1)L} &\le& \max \{2aL,Z_{nL}\}+M_{nL,L}-aL. \end{eqnarray} Consider a single-server queue with i.i.d.\ service times $\widehat{\sigma}_n= M_{nL,L}$ and constant inter-arrival times $\widehat\tau_n=La$ and denote by $\widehat W_n$ the waiting time of the $n$th customer. This queue is stable since $\widehat b\equiv{\bf E}\widehat \sigma_1<aL\equiv\widehat a$. Put $\widehat W_0=0$. Assuming that $Z_0=0$, we can derive from (\ref{recu}) the following bounds: for all $n=0$, 1, \ldots, \begin{eqnarray}\label{Z.nL.W} Z_{nL} &\le& 2aL+\widehat W_n \quad \mbox{a.s.} \end{eqnarray} Denote $\overline G(x)={\bf P}\{\widehat\sigma_0>x\}$. We show that the integrated tail distribution $G_I$ is subexponential. We need to consider only the case $L>1$. Note first that \begin{equation}\label{first} \sigma_0+\ldots + \sigma_{L-1}-La \le \widehat\sigma_0 \leq \sigma_0+\ldots+\sigma_{L-1} \quad \mbox{a.s.} \end{equation} Since the distribution of $\sigma_1$ is assumed to be subexponential, the asymptotics for the lower and upper bounds in the latter inequalities are the same: as $x\to\infty$, \begin{equation}\label{second} {\bf P}\Bigl\{\sum_0^{L-1}\sigma_i-La>x\Bigr\} \sim {\bf P}\Bigl\{\sum_0^{L-1}\sigma_i>x\Bigr\} \sim L\overline B(x). \end{equation} Therefore, the tail $\overline G(x)$ has the same asymptotics and $G$ is a subexponential distribution. Thus, \begin{eqnarray}\label{G_I.via.B_I} \overline G_I(x) &=& \int_x^\infty \overline G(y)dy \sim L\overline B_I(x), \end{eqnarray} and $G_I$ is a subexponential distribution, too. Thus, by the classical result (\ref{W.single}) for the single-server queue, the steady-state distribution of the waiting time $\widehat W_n$ satisfies the following relations, as $x\to\infty$: \begin{eqnarray}\label{asy.for.hat.W} \lim_{n\to\infty}{\bf P}\{\widehat W_n>x\} &\sim& \frac{1}{\widehat a{-}\widehat b}\overline G_I(x) \le \frac{1}{(2a{-}b{-}\varepsilon)L}\overline G_I(x) \sim \frac{1}{2a{-}b{-}\varepsilon}\overline B_I(x), \end{eqnarray} by (\ref{LL}) and (\ref{G_I.via.B_I}). Since $Z_n=W_{n,1}+W_{n,2}\ge 2W_{n,1}$, \begin{eqnarray*} {\bf P}\{W>x\} &=& {\bf P}\{2W>2x\} \le \lim_{n\to\infty}{\bf P}\{Z_{nL}>2x\}. \end{eqnarray*} Now it follows from (\ref{Z.nL.W}) and (\ref{asy.for.hat.W}) that \begin{eqnarray*} {\bf P}\{W>x\} &\le& \lim_{n\to\infty}{\bf P}\{\widehat W_n>2x-2aL\}\\ &\le& \frac{1+o(1)}{2a-b-\varepsilon}\overline B_I(2x-2aL) \sim \frac{1}{2a-b-\varepsilon}\overline B_I(2x), \end{eqnarray*} since $B_I$ is long-tailed. Letting $\varepsilon\downarrow 0$ concludes the proof. \mysection{The minimal stability case: exact asymptotics} \label{min.stab.proof} In this Section, we prove Theorem \ref{th.2.min.reg}.
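To illustrate the gap between the bounds of the two preceding Sections which this Section closes, consider, as a hypothetical example, a regularly varying integrated tail $\overline B_I(x)=x^{-\gamma}\ell(x)$ with $\gamma>0$ and $\ell$ slowly varying, so that $B_I\in{\cal IRV}$. Then Corollary \ref{min_lower.reg} and Theorem \ref{min_upper} read
\begin{eqnarray*}
\frac{1+o(1)}{2a-b}\Bigl(\frac{b-a}{b}\Bigr)^{\gamma}x^{-\gamma}\ell(x)
\ \le\ {\bf P}\{W>x\} \ \le\
\frac{1+o(1)}{2a-b}\,2^{-\gamma}x^{-\gamma}\ell(x),
\end{eqnarray*}
and the two constants differ by the factor $\bigl(\frac{b}{2(b-a)}\bigr)^{\gamma}>1$, since $\frac{b}{b-a}>2$ for $b\in(a,2a)$. Theorem \ref{th.2.min.reg} shows that it is the lower bound which captures the exact asymptotics.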
First note that, as follows from (\ref{anotherlower}), the tail ${\bf P}\{W>x\}$ may, in general, be heavier than the expression in Theorem \ref{th.2.min.reg}. For instance, this happens if \begin{equation}\label{ppp} \overline B_I\Bigl(\frac{b}{b-a}x\Bigr) =o(\overline B_I^2(x)) \quad \mbox{ as }x\to\infty. \end{equation} Assume $b\in(a,2a)$ and consider, for example, a service time distribution with the Weibull integrated tail $\overline B_I(x) = e^{-x^\beta}$, $\beta\in(0,1)$. Then (\ref{ppp}) holds if $\bigl(\frac{b}{b-a}\bigr)^\beta>2$. \proof\ of Theorem \ref{th.2.min.reg}. Since $B_I\in{\cal IRV}$, both the lower bound in Theorem \ref{th.2.min.lower} and the upper bound in Theorem \ref{min_upper} are of the same order, \begin{eqnarray}\label{2x.frac} \overline B_I(2x) &=& O\Bigl(\overline B_I\Bigl(\frac{b}{b-a}x\Bigr)\Bigr). \end{eqnarray} We use the notation from the previous Section. In particular, we fix $\varepsilon>0$ and choose $L$ satisfying (\ref{LL}). For any constant $c\ge0$, (\ref{first}) implies \begin{eqnarray*} \bigcup_{i=0}^{L-1} \{\sigma_{kL+i}>x+La+(L-i)c\} &\subseteq& \bigg\{\sum_{i=0}^{L-1} \sigma_{kL+i}-La>x\bigg\} \subseteq \{\widehat\sigma_k>x\}. \end{eqnarray*} Therefore, from (\ref{first}) and (\ref{second}), \begin{equation}\label{oo} {\bf P}\Bigl\{\{\widehat\sigma_k>x\} \setminus \bigcup_{i=0}^{L-1}\{\sigma_{kL+i}>x+La+(L-i)c\}\Bigr\} = o(\overline B(x)). \end{equation} Take $c=(\widehat a-\widehat b)/L$. By (\ref{Z.nL.W}), \begin{eqnarray*} {\bf P}\{W>x\} &=& \lim_{n\to\infty} {\bf P}\{W_{nL,1}>x\} = \lim_{n\to\infty} {\bf P}\{W_{nL,1}>x,\widehat W_n>2x-2aL\}. \end{eqnarray*} Standard arguments describing how large deviations occur in the single-server queue $\widehat W_n$ imply the relation \begin{eqnarray*} {\bf P}\{W>x\} &=& \lim_{n\to\infty} \sum_{k=0}^{n-1} {\bf P}\{W_{nL,1}>x, \widehat\sigma_k > 2x+(n-k)(\widehat a-\widehat b)\} +o(\overline B_I(2x))\\ &=& \lim_{n\to\infty} \sum_{i=0}^{nL-1} {\bf P}\{W_{nL,1}>x,\sigma_i>2x+(nL-i)c\} +o(\overline B_I(2x)), \end{eqnarray*} by (\ref{oo}). Now it follows from (\ref{LL}) that \begin{eqnarray*} \lefteqn{{\bf P}\{W>x\} \le \lim_{n\to\infty} \sum_{i=0}^{nL-1} {\bf P}\{W_{nL,1}>x,\sigma_i>2x{+}(nL{-}i)(2a{-}b{-}\varepsilon)\}+ o(\overline B_I(2x))}\\ &\le& \lim_{n\to\infty} \sum_{j=1}^{nL} {\bf P}\{W_{nL,1}>x,\sigma_{nL-j}>2x{+}j(2a{-}b{+}\varepsilon)\} +\varepsilon O(\overline B_I(2x))+o(\overline B_I(2x))\\ &=& \lim_{n\to\infty} \Biggl(\sum_{j=1}^{N(1-\varepsilon)} +\sum_{j=N(1-\varepsilon)}^{nL}\Biggr)+ \varepsilon O(\overline B_I(2x)) \equiv \lim_{n\to\infty}(\Sigma_1+\Sigma_2) +\varepsilon O(\overline B_I(2x)), \end{eqnarray*} where $N=x/(b-a)$. The second term admits the following estimate: \begin{eqnarray*} \Sigma_2 &\le& \sum_{j=N(1-\varepsilon)}^\infty {\bf P}\{\sigma_1>2x+j(2a-b)\}\\ &\sim& \frac{1}{2a-b} \overline B_I(2x+N(1-\varepsilon)(2a-b)) =\frac{1}{2a-b} \overline B_I\Bigl(\frac{b}{b-a}x -\varepsilon\frac{2a-b}{b-a}x\Bigr). \end{eqnarray*} It follows from $B_I\in {\cal IRV}$ that, for any $\delta >0$, there exists $\varepsilon>0$ such that \begin{eqnarray*} \Sigma_2 &\le& \frac{1}{2a-b} \overline B_I\Bigl(\frac{b}{b-a}x\Bigr) +\delta\overline B_I(2x), \end{eqnarray*} which coincides, up to the term $\delta\overline B_I(2x)$, with the lower bound in Theorem \ref{th.2.min.lower}. Now consider the first term $\Sigma_1$. Since the queue is stable, one can choose $K>0$ such that ${\bf P}\{W_{n,2}\le K\}\ge 1-\varepsilon$ for all $n$.
Then \begin{eqnarray*} \Sigma_1 &\le& \sum_{j=1}^{N(1-\varepsilon)} {\bf P}\{W_{nL-j,2}>K,\sigma_{nL-j}>2x+(2a-b+\varepsilon)j\}\\ && + \sum_{j=1}^{N(1-\varepsilon)} {\bf P}\{W_{nL,1}>x,W_{nL-j,2}\le K, \sigma_{nL-j}>2x+(2a-b+\varepsilon)j\}\\ &\equiv& \Sigma_{1,1}+\Sigma_{1,2}. \end{eqnarray*} We have \begin{eqnarray*} \Sigma_{1,1} &=& \sum_{j=1}^{N(1-\varepsilon)} {\bf P}\{W_{nL-j,2}>K\}{\bf P}\{\sigma_1>2x+(2a-b+\varepsilon)j\}\\ &\le& \varepsilon \sum_{j=1}^\infty {\bf P}\{\sigma_1>2x+(2a-b)j\} \le \frac{\varepsilon}{2a-b} \overline B_I(2x). \end{eqnarray*} Note that if $W_{nL-j,2}\le K$, then $W_{nL,1}\le K+M_{nL-j+1, j-1}$. Therefore, \begin{eqnarray*} \Sigma_{1,2} &\le& \sum_{j=1}^{N(1-\varepsilon)} {\bf P}\{\sigma_{nL-j}>2x+(2a-b)j, K+M_{nL-j+1,j-1}>x\}\\ &=& \sum_{j=1}^{N(1-\varepsilon)} {\bf P}\{\sigma_{nL-j}>2x+(2a-b)j\} {\bf P}\{K+M_{0,j-1}>x\}. \end{eqnarray*} Since the sequence $M_{0,j}$ is stochastically increasing, \begin{eqnarray*} \Sigma_{1,2} &\le& {\bf P}\{K+M_{0,N(1-\varepsilon)}>x\} \sum_{j=1}^\infty {\bf P}\{\sigma_1>2x+(2a-b)j\}\\ &\le& {\bf P}\{M_{0,N(1-\varepsilon)}>x-K\} \frac{1}{2a-b} \overline B_I(2x). \end{eqnarray*} Since $$ \frac{x-K}{N(1-\varepsilon)} \to \frac{b-a}{1-\varepsilon} > b-a\quad\mbox{ as } x\to\infty, $$ we have by (\ref{SSL}) $$ {\bf P}\{M_{0,N(1-\varepsilon)}>x-K\} = {\bf P}\Bigl\{\frac{M_{0,N(1-\varepsilon)}}{N(1-\varepsilon)} > \frac{x-K}{N(1-\varepsilon)}\Bigr\} \to 0. $$ Thus, we have shown that the upper bound for ${\bf P}\{W>x\}$ is not bigger than the lower bound in Theorem \ref{th.2.min.lower} plus a term of order $$ (\varepsilon+\delta)O(\overline B_I(2x)) \le (\varepsilon+\delta) O\Bigl(\overline B_I\Bigl(\frac{bx}{b-a}\Bigr)\Bigr) $$ due to (\ref{2x.frac}). Since $\varepsilon>0$ and $\delta>0$ may be chosen as small as we please, the proof of Theorem \ref{th.2.min.reg} is complete. \mysection{Tail asymptotics for the two-dimensional workload vector}\label{workload} Denote by $W^0=(W_1^0,W_2^0)$ the weak limit of the vectors $W_n$ as $n\to\infty$. Clearly, $W=W_1^0$. {\bf \ref{workload}.1. Maximal stability case.} First, we obtain simple lower and upper bounds which are equivalent up to constant factors. Second, we give (without proof) a result related to the exact asymptotics. \begin{Theorem}\label{bounds.joint.max} Let $b<a$ and $B_I\in{\mathcal L}$. Then, as $x$, $y\to\infty$, $x\le y$, \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &\ge& \frac{1+o(1)}{a^2}\overline B_I(x)\overline B_I(y). \end{eqnarray*} If, in addition, $B_I\in{\mathcal S}$, then \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &\le& \frac{2+o(1)}{(a-b)^2}\overline B_I(x)\overline B_I(y). \end{eqnarray*} \end{Theorem} \proof. Fix $\varepsilon>0$ and put $a'=a+\varepsilon$. For $k$, $l\le n$, $k\neq l$, define the events $A_{nkl}$ and $C_{nkl}$ by the equalities \begin{eqnarray*} A_{nkl} &=& \Bigl\{\sigma_k>x+(n-k)a',\ \sigma_l>y+(n-l)a'\Bigr\} \end{eqnarray*} and \begin{eqnarray*} C_{nkl} &=& \bigcap_{\stackrel{j=1}{j\ne k,l}}^n\Bigl\{ \sigma_j\le x+(n-j)a'\Bigr\}. \end{eqnarray*} Note that the events $A_{nkl}\cap C_{nkl}$ are disjoint for different pairs $(k,l)$ and \begin{eqnarray*} {\bf P}\{W_{n1}>x,W_{n2}>y\} &\ge& \sum_{k=1}^n \sum_{l=k+1}^n {\bf P}\{W_{n1}>x,W_{n2}>y,A_{nkl},C_{nkl}\}.
\end{eqnarray*} Then the same calculations as in Subsection \ref{sec.2.max.lower}.3 imply the estimate, as $x$, $y\to\infty$, \begin{eqnarray*} {\bf P}\{W_{n1}>x,W_{n2}>y\} &\ge& (1+o(1))\sum_{k=1}^{n-1} \sum_{l=k+1}^{n-1} \overline B(x+(n-k)a')\overline B(y+(n-l)a')\\ &=& (1+o(1))\sum_{k=1}^{n-1} \sum_{l=1}^{n-k-1} \overline B(x+ka')\overline B(y+la'). \end{eqnarray*} Hence, \begin{eqnarray*} {\bf P}\{W_1^0>x,W_2^0>y\} &\ge& (1+o(1))\sum_{k=1}^\infty \sum_{l=1}^\infty \overline B(x+ka')\overline B(y+la') \sim \overline B_I(x)\overline B_I(y)/a'^2. \end{eqnarray*} Since $\varepsilon>0$ is arbitrary, the lower bound follows. We proceed to the upper bound. Due to the construction of the majorant $(W_n^{(1)},W_n^{(2)})$ in Section \ref{sec.2.max.upper}, we have the inequality \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &\le& \lim_{n\to\infty}\Bigl[ {\bf P}\{W_n^{(1)}>x, W_n^{(2)}>y\} +{\bf P}\{W_n^{(1)}>y, W_n^{(2)}>x\}\Bigr]\\ &=& 2\lim_{n\to\infty} {\bf P}\{W_n^{(1)}>x\}{\bf P}\{W_n^{(2)}>y\}. \end{eqnarray*} Together with (\ref{W1.sim.BI}) it implies the desired upper bound. Theorem \ref{bounds.joint.max} is proved. We now turn to the exact asymptotics. The result is given below; its proof is rather involved and will be presented in another paper. Denote $$ R(x,y) = \overline B_I(x)\overline B_I(y) + b \int_0^\infty \overline B_I(y+za)\overline B(x+z(a-b)) dz. $$ Recall that Theorem 1 states that ${\bf P}\{W_1^0>x\} \sim R(x,x)/(a(2a-b))$ given $B_I\in {\mathcal S}$. \begin{Theorem} \label{joint.max} Assume $b<a$ and $B_I\in {\mathcal S}$. Let $x, y\to \infty$, $x\le y$. Then \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &\sim& \frac{1}{a(2a-b)}R(y,y)+\frac{1}{a^2}(R(x,y)-R(y,y)). \end{eqnarray*} \end{Theorem} {\bf \ref{workload}.2. Minimal stability case.} We prove the following \begin{Theorem} \label{joint.min} Assume $a<b<2a$, $B\in {\mathcal S}$, and $B_I\in {\cal IRV}$. Let $x, y\to \infty$ in such a way that $y/x\to c \in [1,\infty ]$. Then \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &\sim& \frac{1}{a} \overline B_I \Bigl(y\Bigl(1+\frac{a}{c(b-a)}\Bigr)\Bigr) +\frac{b-a}{a(2a-b)}\overline B_I \Bigl(y\frac{b}{b-a}\Bigr). \end{eqnarray*} \end{Theorem} \proof. Start with the case $c=\infty$. From Theorem 10 in [\ref{BaF}], one can get the following: \begin{Corollary} \label{maxx} Assume $b\in (a,2a)$. If $B\in {\mathcal S}$ and $B_I\in {\mathcal S}$, then, as $y\to\infty$, \begin{eqnarray*} {\bf P}\{W_2^0>y\} &\sim& \frac{1}{a}\overline B_I (y) +\frac{b-a}{a(2a-b)} \overline B_I \Bigl(y\frac{b}{b-a}\Bigr). \end{eqnarray*} \end{Corollary} It is clear that $$ {\bf P}\{W_1^0>x, W_2^0>y\} \le {\bf P}\{W_2^0>y\}. $$ On the other hand, for any $N=1$, 2, \ldots, \begin{eqnarray*} {\bf P}\{W_1^0>x, W_2^0>y\} &=& \lim_{n\to\infty} {\bf P}\{W_{n,1}>x,W_{n,2}>y\}\\ &\ge& \lim_{n\to\infty} {\bf P}\Bigl\{W_{n-N,2}>y+Na, \sum_{j=n-N}^{n-1} (\sigma_j-\tau_j)>x\Bigr\}\\ &=& \lim_{n\to\infty} {\bf P}\{W_{n-N,2}>y+Na\} {\bf P}\Bigl\{\sum_{j=1}^N (\sigma_j-\tau_j)>x\Bigr\}. \end{eqnarray*} Fix $\varepsilon>0$. Put $N=N(x)=x(1+\varepsilon)/(b-a)$. Then, by the Law of Large Numbers, $$ {\bf P}\Bigl\{\sum_{j=1}^N(\sigma_j-\tau_j)>x \Bigr\} \ge 1-\varepsilon $$ for all sufficiently large $x$ and, as $n\to\infty$, $$ {\bf P}\{W_{n-N,2}>y+Na\} \to {\bf P}\{W_2^0>y+Na\}. $$ Since $B_I\in {\cal IRV}$, $$ {\bf P}\{W_2^0>y+Na\}\sim {\bf P}\{W_2^0>y\} \quad \mbox{as} \quad y\to\infty. $$ By letting $\varepsilon \to 0$, we get the result. Now consider the case $c<\infty$. If $c=1$, then the result follows from Theorem \ref{th.2.min.reg}. Let $c\in(1,\infty)$.
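As a quick consistency check for the case $c=1$ just mentioned, take $y=x$ in the statement of Theorem \ref{joint.min}; the following is elementary arithmetic with the constants:
\begin{eqnarray*}
\frac{1}{a}\,\overline B_I\Bigl(x\Bigl(1+\frac{a}{b-a}\Bigr)\Bigr)
+\frac{b-a}{a(2a-b)}\,\overline B_I\Bigl(x\frac{b}{b-a}\Bigr)
&=& \Bigl(\frac1a+\frac{b-a}{a(2a-b)}\Bigr)\overline B_I\Bigl(\frac{bx}{b-a}\Bigr)\\
&=& \frac{1}{2a-b}\,\overline B_I\Bigl(\frac{bx}{b-a}\Bigr),
\end{eqnarray*}
since $1+\frac{a}{b-a}=\frac{b}{b-a}$ and $\frac1a+\frac{b-a}{a(2a-b)}=\frac{(2a-b)+(b-a)}{a(2a-b)}=\frac{1}{2a-b}$. Since $W_1^0\le W_2^0$, the event $\{W_1^0>x,\,W_2^0>x\}$ coincides with $\{W_1^0>x\}$, and this is exactly the asymptotics of Theorem \ref{th.2.min.reg}.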
We give here only a sketch of the proof, linking it to the proof of Theorem \ref{th.2.min.reg}. Since $$ {\bf P}\{W_1^0>y\} \le {\bf P}\{W_1^0>x, W_2^0>y\} \le {\bf P}\{W_1^0>x\} $$ and $$ {\bf P}\{W_1^0>y\} \sim {\bf P}\{W_1^0>cx\} \ge (\kappa+o(1)){\bf P}\{W_1^0>x\} $$ where $\kappa=\inf_t \overline B_I(ct)/\overline B_I(t)>0$, one can get from the proof of Theorem \ref{th.2.min.reg} the following equivalences: for $N_x=x/(b-a)$, $N_y=y/(b-a)$, and for $\varepsilon\in(0,1-1/\sqrt c)$, \begin{eqnarray*} \lefteqn{{\bf P}\{W_1^0>x, W_2^0>y\}}\\ &=& \lim_{n\to\infty} \sum_{i=1}^{n-N_x(1-\varepsilon)} {\bf P}\{W_{n,1}>x, W_{n,2}>y, \sigma_i > 2x+(n{-}i)(2a{-}b)\} +\varepsilon O(\overline B_I(2x))\\ &=& \lim_{n\to\infty}\Biggl( \sum_{i=1}^{n-N_y(1+\varepsilon)} +\sum_{i=n-N_y(1+\varepsilon)}^{n-N_x(1-\varepsilon)}\Biggr) +\varepsilon O(\overline B_I(2x)) \equiv (\Sigma_1+\Sigma_2) +\varepsilon O(\overline B_I(2x)). \end{eqnarray*} Choose $K>0$ such that ${\bf P}\{W_{n,2}>K\}\le\varepsilon$ for all $n$. Then \begin{eqnarray*} \Sigma_2 &=& \lim_{n\to\infty} \sum_{i=n-N_y(1+\varepsilon)}^{n-N_x(1-\varepsilon)} {\bf P}\{W_{i,2}\le K, W_{n,1}>x, W_{n,2}>y, \sigma_i >2x+(n{-}i)(2a{-}b)\}\\ &&\hspace{100mm} + \varepsilon O(\overline B_I(2x)). \end{eqnarray*} From Lemma 2 and its Corollaries, \begin{eqnarray*} \Sigma_2 &=& (1+o(1))\sum_{j=N_x(1-\varepsilon )}^{N_y(1+\varepsilon )} {\bf P}\{\sigma_1>y+ja\}+\varepsilon O(\overline B_I(x))\\ &=& \frac{1+o(1)}{a}\Bigl(\overline B_I \Bigl(y +\frac{x(1-\varepsilon)a}{b-a} \Bigr) - \overline B_I \Bigl(\frac{y(b+\varepsilon a)}{b-a}\Bigr) \Bigr)+\varepsilon O(\overline B_I(x))\\ &=& \frac{1+o(1)}{a}\Bigl(\overline B_I \Bigl(y\Bigl(1+\frac{a}{c(b-a)}\Bigr)\Bigr) - \overline B_I\Bigl(\frac{yb}{b-a}\Bigr)\Bigr) + (\varepsilon+\delta) O(\overline B_I(x)), \end{eqnarray*} due to $B_I\in{\cal IRV}$. From Lemma 3 and its Corollaries, one can also conclude that, for $i< n-N_y(1+\varepsilon )$, if $\sigma_i<2y+(n-i)(2a-b-\varepsilon)$ and $W_{i,2} \le K$, then, with probability close to one, both coordinates of the vector $(W_{n,1},W_{n,2})$ take values less than $y$ for all sufficiently large $n$. On the other hand, if $\sigma_i > 2y+ (n-i) (2a-b+\varepsilon )$, then, with probability close to one, $y < W_{n,1} \le W_{n,2}$. Therefore, \begin{eqnarray*} \Sigma_1 &=& (1+o(1)) \sum_{j=N_y(1+\varepsilon)}^\infty {\bf P}\{\sigma_1>2y+j(2a-b)\} + \varepsilon O(\overline B_I(x))\\ &=& \frac{1+o(1)}{2a-b}\overline B_I \Bigl(\frac{yb}{b-a}\Bigr) + \varepsilon O(\overline B_I(x)). \end{eqnarray*} Summing up the terms and letting $\varepsilon$ and $\delta \to 0$ concludes the proof. \mysection{Comments on stationary queue length}\label{stat.q.l} Let $Q_n$ be the queue length seen by arriving customer $n$, and $Q$ its stationary version in discrete time (i.e.\ Palm-stationary). Due to the distributional Little's law, $$ {\bf P}\{Q>n\} = {\bf P}\{W>T_n\} $$ where $W$ is the stationary waiting time, $T_n=\tau_1+\ldots+\tau_n$, and $W$ and $T_n$ are independent. When the distribution of $W$ is long-tailed, the asymptotics for ${\bf P}\{W>T_n\}$, $n\to\infty$, have been found in [\ref{AKS}] and in [\ref{FK}]. If, in addition, $\tau_n$ has a non-lattice distribution, there exists a stationary distribution $G$ for the queue length in continuous time. Then, from Lemma 1 in [\ref{FK}], $$ \overline G(n) \sim {\bf P}\{Q>n\} \quad \mbox{as} \quad n\to\infty.
$$ \section*{\normalsize Acknowledgment} The authors gratefully acknowledge helpful discussions with Onno Boxma and Bert Zwart, and comments from Daryl Daley. \section*{\normalsize References} \newcounter{bibcoun} \begin{list}{[\arabic{bibcoun}]}{\usecounter{bibcoun}\itemsep=0pt} \small \item\label{APQ} S. Asmussen, {\it Applied Probability and Queues}, 2nd ed. (Springer, New York, 2003). \item\label{AKS} S. Asmussen, C. Kl\"uppelberg and K. Sigman, Sampling at subexponential times, with queueing applications, Stoch. Process. Appl. 79 (1999) 265--286. \item\label{BaF} F. Baccelli and S. Foss, Moments and tails in monotone-separable stochastic networks, Ann. Appl. Probab. 14 (2004) 612--650. \item\label{BMumn} A. A. Borovkov and A. A. Mogul'skii, Large deviations for Markov chains in the positive quadrant, Russ. Math. Surv. 56 (2001) 803--916. \item\label{BMZ} S. Borst, M. Mandjes and A. P. Zwart, Exact asymptotics for fluid queues fed by heavy-tailed On-Off flows, Ann. Appl. Probab. 14 (2004) 903--957. \item\label{BoF} O. J. Boxma, S. G. Foss, J.-M. Lasgouttes and R. Nunez Queija, Waiting time asymptotics in the single server queue with service in random order, Queueing Systems 46 (2004) 35--73. \item\label{BDZ} O. J. Boxma, Q. Deng and A. P. Zwart, Waiting-time asymptotics for the $M/G/2$ queue with heterogeneous servers, Queueing Systems 40 (2002) 5--31. \item\label{Cr} H. Cram\'er, {\it Collective Risk Theory} (Esselte, Stockholm, 1955). \item\label{DFK} D. Denisov, S. Foss and D. Korshunov, Tail asymptotics for the supremum of a random walk when the mean is not finite, Queueing Systems 46 (2004) 15--33. \item\label{FCh} S. G. Foss and N. I. Chernova, On optimality of FCFS discipline in multi-channel queueing systems and networks, Siberian Math. J. 42 (2001) 372--385. \item\label{FK} S. Foss and D. Korshunov, Sampling at a random time with a heavy-tailed distribution, Markov Processes and Related Fields 6 (2000) 643--658. \item\label{IMSh} I. A. Ignatyuk, V. Malyshev and V. Scherbakov, Boundary effects in large deviation problems, Russ. Math. Surv. 49 (1994) 41--99. \item\label{KW55} J. Kiefer and J. Wolfowitz, On the theory of queues with many servers, Trans. Amer. Math. Soc. 78 (1955) 1--18. \item\label{K} D. Korshunov, On distribution tail of the maximum of a random walk, Stochastic Process. Appl. 72 (1997) 97--103. \item\label{P} A. G. Pakes, On the tails of waiting-time distributions, J. Appl. Probab. 12 (1975) 555--564. \item\label{S} A. Scheller-Wolf, Further delay moment results for FIFO multiserver queues, Queueing Systems 34 (2000) 387--400. \item\label{SS} A. Scheller-Wolf and K. Sigman, Delay moments for FIFO $GI/GI/s$ queues, Queueing Systems 25 (1997) 77--95. \item\label{Ver} N. Veraverbeke, Asymptotic behavior of Wiener-Hopf factors of a random walk, Stochastic Process. Appl. 5 (1977) 27--37. \item\label{W} W. Whitt, The impact of a heavy-tailed service-time distribution upon the $M/GI/s$ waiting-time distribution, Queueing Systems 36 (2000) 71--87. \end{list} \end{document}
\section{Introduction} Let $X_{ij},{ }1\le i\le p, 1\le j\le n$, be independent random variables with $\mathbb{E} X_{ij}=0$ and $\mathbb{E} X_{ij}^2=1$ and $\bold X_p=\Big(X_{ij}\Big)_{\{1\le i\le p,{ } 1\le j\le n\}}$. Denote by $\lambda_1\le\ldots\le \lambda_p$ the eigenvalues of the symmetric matrix $$ \bold W:=\bold W_p:=\frac1n \bold X _p\bold X_p^T $$ and define its empirical distribution by $$ F_p(x)=\frac1p\sum_{k=1}^pI_{\{\lambda_k\le x\}}, $$ where $I_{\{B\}}$ denotes the indicator of an event $B$. We shall investigate the rate of convergence of the expected spectral distribution $\mathbb{E} F_p(x)$ as well as $F_p(x)$ to the Marchenko-Pastur distribution function $F_y(x)$ with density $$ f_y(x)=\frac1{2xy\pi}\sqrt{(b-x)(x-a)}I_{\{[a,b]\}}(x)+ I_{\{[1,\infty)\}}(y)(1-y^{-1})\delta(x), $$ where $y\in (0,\infty)$ and $a=(1-\sqrt y)^2$, $b=(1+\sqrt y)^2$. Here we denote by $\delta(x)$ the Dirac delta-function and by $I_{\{[a,b]\}}(x)$ the indicator function of the interval $[a,b]$. As in Marchenko and Pastur \cite{M-P:67} and Pastur \cite{P:73}, assume that $X_{ij}$, $i,j\ge1$, are independent identically distributed random variables such that $$ \mathbb{E} X_{ij}=0,\qquad \mathbb{E} X_{ij}^2=1 \quad \text{ and }\qquad \mathbb{E} |X_{ij}|^4<\infty,\qquad\text{ for all } i,j. $$ Then $\mathbb{E} F_p\to F_y$ and $F_p\to F_y$ in probability, where $y=\lim_{n\to\infty}y_p\in(0,\infty)$ and $y_p:=p/n$. In what follows we write $y:=y_p:=p/n$. We introduce the following distance between the distributions $\mathbb{E} F_p(x)$ and $F_{y}(x)$ $$ \Delta_p:=\sup_x|\mathbb{E} F_p(x)-F_{y}(x)| $$ as well as another distance between the distributions $F_p(x)$ and $F_{y}(x)$ $$ \Delta_p^*:=\mathbb{E} \sup_x|F_p(x)-F_{y}(x)|. $$ We shall use the notation $\xi_n=O_P(a_n)$ if, for any $\varepsilon>0$, there exists an $L>0$ such that $\mathbb{P}\{|\xi_n| \ge L a_n\}\le \varepsilon$. Note that, for any $L>0$, $$ \mathbb{P}\{\sup_x|F_p(x)-F_{y}(x)|\ge L\}\le \frac{\Delta_p^*}{L}. $$ Hence bounds for $\Delta_p^*$ provide bounds for the rate of convergence in probability of the quantity $\sup_x|F_p(x)-F_{y}(x)|$ to zero. Using our techniques, it is straightforward though technical to prove that the rate of almost sure convergence is at least $O(n^{-1/2+\epsilon})$, for any $ \epsilon >0$. In view of the length of the proofs for the results stated above we refrain from including those details in this paper as well. Bai \cite{Bai:93} proved that $\Delta_p=O(n^{-\frac14})$, assuming $\mathbb{E} X_{ij}=0$, $\mathbb{E} X_{ij}^2=1$, $\sup\limits_n \sup\limits_{i,j} \mathbb{E} X_{ij}^4\bold I_{\{|X_{ij}|>M\}}\to 0,\quad\text{as } M\to\infty$, and $$ y\in(\theta,\Theta)\text{ such that }0<\theta<\Theta<1 \text{ or } 1<\theta<\Theta<\infty. $$ If $y$ is close to $1$ the limit density and the Stieltjes transform of the limit density have a singularity. In this case the investigation of the rate of convergence is more difficult. Bai \cite{Bai:93} has shown that, if $0<\theta\le y_p\le \Theta<\infty$, then $ \Delta_p=O(n^{-\frac5{48}}) $. Recently Bai et al. \cite{Bai:03} have shown for $y_p$ equal to $1$ or asymptotically near $1$ that $\Delta_p=O(n^{-\frac 1 {8}})$ (see also \cite{Bai:05}). It is clear that the case $y_p \approx 1$ requires different techniques. Results of the authors \cite{GT:05} show that, for Gaussian random variables $X_{ij}$, the rate $\Delta_p=O(n^{-1})$ is actually the correct rate of approximation, including the case $y=1$.
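To illustrate why the case $y$ close to $1$ is delicate, consider the boundary case $y=1$. Then $a=0$, $b=4$, and the density above becomes
\begin{equation*}
f_1(x)=\frac1{2\pi x}\sqrt{x(4-x)}=\frac1{2\pi}\sqrt{\frac{4-x}{x}},\qquad x\in(0,4],
\end{equation*}
which has a singularity of order $x^{-1/2}$ at the left edge of the support; accordingly, the Stieltjes transform of $F_1$ is unbounded near the origin, and arguments based on Stieltjes transforms require additional care there.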
By $C$ (with an index or without it) we shall denote generic absolute constants, whereas $C(\,\cdot\,,\,\cdot\,)$ will denote positive constants depending on arguments. Introduce the notation, for $k\ge 1$, $$ M_k:=M_k^{(n)}:=\sup_{1\le j,k\le n}\mathbb{E}|X_{jk}|^k. $$ Our main results are the following \begin{thm} \label{thm1.1} Let $1\ge y>\theta>0$, for some positive constant $\theta$. Assume that $\mathbb{E} X_{jk}=0$, $\mathbb{E}|X_{jk}|^2=1$, and \begin{equation} M_4:=\sup_{1\le j,k\le n}\mathbb{E}|X_{jk}|^4<\infty. \end{equation} Then there exists a positive constant $C(\theta)>0$ depending on $\theta$ such that $$ \Delta_p\le C(\theta)\,M_4^{\frac12} n^{-1/2}. $$ \end{thm} \begin{thm} \label{thm1.2} Let $1\ge y>\theta>0$, for some positive constant $\theta$. Assume that $\mathbb{E} X_{jk}=0$, $\mathbb{E}|X_{jk}|^2=1$, and $$ M_{12}:=\sup_{1\le j,k\le n}\mathbb{E}|X_{jk}|^{12}<\infty. $$ Then there exists a positive constant $C(\theta)>0$ depending on $\theta$ such that $$ \Delta_p^*=\mathbb{E} \sup_x|F_p(x)-F_y(x)| \le C(\theta)M_{12}^{\frac16}\, n^{-1/2}. $$ \end{thm} We shall prove the same result for the following class of sparse matrices. Let $\varepsilon_{jk}$, $j=1,\ldots,n$, $k=1,\ldots,p$, denote Bernoulli random variables which are independent in aggregate and independent of $(X_{jk})$ with $p_n:=\mathbb{P}\{\varepsilon_{jk}=1\}$. Consider the matrix $\bold X^{(\varepsilon)}=\frac1{\sqrt{np_n}}(\varepsilon_{jk}X_{jk})$. Let $\lambda_1^{(\varepsilon)},\ldots,\lambda_p^{(\varepsilon)}$ denote the (complex) eigenvalues of the matrix $\bold X^{(\varepsilon)}$ and denote by $F_p^{(\varepsilon)}(x)$ the empirical spectral distribution function of the matrix $\bold X^{(\varepsilon)}$, i.e. \begin{equation} F_p^{(\varepsilon)}(x):=\frac1p\sum_{j=1}^pI_{\{\lambda_j^{(\varepsilon)}\leq x\}}. \end{equation} \begin{thm}\label{sparse} Let $X_{jk}$ be independent random variables with \begin{equation}\notag \mathbb{E} X_{jk}=0,\qquad \mathbb{E} |X_{jk}|^2=1,\quad \text{ and}\quad \mathbb{E} |X_{jk}|^{4}<\infty. \end{equation} Assume that $np_n\to\infty$ as $n\to\infty$. Then \begin{equation} \Delta_n^{(\varepsilon)}:=\sup_x|\mathbb{E} F_p^{(\varepsilon)}(x)-\mathbb{E} F_p(x)|\le CM_4^{1/2}(np_n)^{-\frac12}. \end{equation} \end{thm} We have developed a new approach to the investigation of convergence of spectra of sample covariance matrices based on the so-called Hadamard matrices. Note that our approach allows us to obtain a bound on the rate of convergence to the Marchenko-Pastur distribution uniformly in $1\ge y\ge\theta$ (including $y=1$). In this paper we give the proof of Theorem \ref{thm1.1} only. To prove Theorems \ref{thm1.2} and \ref{sparse} it is enough to repeat the proofs of Theorem 1.2 and Corollary 1.3 in \cite{GT:03} with inessential changes. \section{ Inequalities for the distance between distributions via Stieltjes transforms.} We define the Stieltjes transform $s(z)$ of a random variable $\xi$ with distribution function $F(x)$ (equivalently, the Stieltjes transform of the distribution function $F(x)$) by $$ s(z):=\mathbb{E}\frac1{\xi-z}=\int_{-\infty}^{\infty}\frac1{x-z}d\,F(x), \quad z=u+iv,\quad v>0. $$ \begin{lem} Let $F$ and $G$ be distribution functions such that \begin{equation} \int_{-\infty}^{\infty}|F(x)-G(x)|\,dx<\infty. \end{equation} Denote their Stieltjes transforms by $s(z)$ and $t(z)$ respectively. Assume that the distribution $G(x)$ has support contained in the bounded interval $I=[a,b]$.
Assume that there exists a positive constant $c_g$ such that \begin{equation} \sup_x\frac d{dx}G(x)\le c_g. \end{equation} Let $v>0$. Then there exist some constants $C_1(c_g),\, C_2(c_g),\, C_3(c_g)$ depending only on $c_g$, such that \begin{align} \Delta(F,G)&:=\sup_{x}\, |F(x)-G(x)| \\&\le \, C_1\, \sup_{x\in I} \, |\text{\rm{Im}}\,\Big(\int_{-\infty}^x(s(z)-t(z))\, du\Big)|+ C_2\, v , \end{align} where $z=u+iv$. \end{lem} For a proof see Lemma 2.1 in G\"otze and Tikhomirov \cite{GT:03}. \begin{cor} The following inequality holds, for any $0<v<V$, \begin{align} \Delta(F,G)\le &C_1\int_{-\infty}^{\infty}|s(u+iV)-t(u+iV)|\, du+ C_2\, v\\& +C_1\sup_{x\in I} \left|{\mathrm{Re}\;\!}\,\left\{\int_{v}^V (s(x+iu)-t(x+iu))du\right\}\right|. \end{align} \end{cor} \section{ The main Lemma} Let $\xi\ge 0$ be a nonnegative random variable with distribution function $F(x)$. Let $\varkappa$ be a Rademacher random variable, independent of $\xi$, taking the values $\pm 1$ with probability $1/2$ each. Consider the random variable $\widetilde\xi:=\varkappa\sqrt{\xi}$ and denote its distribution function by $\widetilde F(x)$. For any $x$, we have \begin{equation} \widetilde F(x)=\frac12(1+\text{\rm sgn}x\,F(x^2)) \end{equation} This equality implies that \begin{equation} \widetilde p(x):=\frac d{dx}\widetilde F(x)=|x|p(x^2), \end{equation} where \begin{equation} p(x)=\frac d{dx}F(x). \end{equation} For the Marchenko--Pastur distribution with parameter $y\in(0,1]$, we have \begin{equation} \widetilde p_y(x)=|x|p_y(x^2)=\frac1{2\pi y|x|}\sqrt{(x^2-a)(b-x^2)}. \end{equation} It is straightforward to check that, for $y\in(0,1]$, \begin{equation}\label{density} \sup_x\widetilde p_y(x)\le \frac1{\pi \sqrt y(1+\sqrt y)}. \end{equation} Note also that the distribution $\widetilde F_y(x)$ has support contained in the union of the intervals $[-(1+\sqrt y),-(1-\sqrt y)]\cup[(1-\sqrt y),(1+\sqrt y)]$. Introduce the following matrix \begin{equation} \bold H:=\left(\begin{matrix}{\bold O\quad\bold X}\\{\bold X^*\quad\bold O}\end{matrix}\right), \end{equation} where $\bold O$ denotes the zero matrix. Consider the resolvent matrix \begin{equation} \bold R(z)=(\bold H-z\bold I)^{-1}, \end{equation} where $\bold I$ denotes the identity matrix of order $n+p$. Let $s_y(z)$ denote the Stieltjes transform of the Marchenko--Pastur distribution function with parameter $y$. Denote by $\widetilde s_y(z)$ the Stieltjes transform of the distribution function $\widetilde F_y(x)$. It is straightforward to check that \begin{equation} \widetilde s_y(z)=zs_y(z^2). \end{equation} For the Stieltjes transform of the expected spectral distribution function of the sample covariance matrix $s_p(z)$ and its ``symmetrization'' $\widetilde s_p(z)$ we have \begin{equation} \widetilde s_p(z)=zs_p(z^2). \end{equation} From the equation for $s_y(z)$ \begin{equation} s_y(z)=-\frac1{z+y-1+yzs_y(z)} \end{equation} it follows that \begin{equation} \widetilde s_y(z)=-\frac1{z+y\widetilde s_y(z)+\frac{y-1}z}. \end{equation} By inversion of the partitioned matrix formula (see \cite{Horn}, p.
18, Section 0.7.3), we have \begin{equation} \bold R(z)=\left(\begin{matrix}{z(\bold X\bold X^*-z^2\bold I_n)^{-1}\quad \bold X(\bold X^*\bold X-z^2\bold I_p)^{-1}}\\{(\bold X^*\bold X-z^2\bold I_p)^{-1}\bold X^*\quad z(\bold X^*\bold X-z^2\bold I_p)^{-1}}\end{matrix}\right). \end{equation} This equality implies that \begin{equation} \widetilde s_p(z)=\frac1n\sum_{j=1}^n\mathbb{E} R_{jj}(z)=\frac1{n}\sum_{j=1}^p\mathbb{E} R_{j+n,j+n}(z)+\frac{y-1}{z} \end{equation} and \begin{equation} \frac1{n}\sum_{j=1}^nR_{j,j}(z)=y\,\frac1p\sum_{j=1}^pR_{j+n,j+n}(z)+\frac{y-1}{z}. \end{equation} For the reader's convenience we state here two lemmas which follow from Schur's complement formula (see, for example, \cite{GT:03}). Let $\bold A=\Big(a_{kj}\Big)$ denote a matrix of order $n$ and $\bold A_k$ denote the principal sub-matrix of order $n-1$, i.e. $\bold A_k$ is obtained from $\bold A$ by deleting the $k$-th row and the $k$-th column. Let $\bold A^{-1}=\Big(a^{jk}\Big)$. Let $\bold a_k'$ denote the vector obtained from the $k$-th row of $\bold A$ by deleting the $k$-th entry and $\bold b_k$ the vector from the $k$-th column by deleting the $k$-th entry. Let $\bold I$ with subindex or without denote the identity matrix of corresponding size. \begin{lem}\label{Lemma 3.1} Assume that $\bold A$ and $\bold A_k$ are nonsingular. Then we have $$ a^{kk}=\frac1{a_{kk}-\bold a'_k\bold A_k^{-1}\bold b_k}. $$ \end{lem} \begin{lem}\label {Lemma 3.2} Let $z=u+iv$, and $\bold A$ be an $n\times n$ symmetric matrix. Then \begin{align} \mathrm{Tr}\;\! (\bold A-z\bold I_n)^{-1}- \mathrm{Tr}\;\! (\bold A_k-z\bold I_{n-1})^{-1} &=\frac{1+\bold a_k'(\bold A_k-z\bold I_{n-1})^{-2}\bold a_k}{a_{kk}-z- \bold a_k'(\bold A_k-z\bold I_{n-1})^{-1}\bold a_k}\notag\\&=(1+\bold a_k'(\bold A_k-z\bold I_{n-1})^{-2}\bold a_k)\,a^{kk}. \end{align} and $$ \Big|\mathrm{Tr}\;\! \Big(\bold A-z\bold I_n\Big)^{-1}-\mathrm{Tr}\;\!\Big(\bold A_k-z\bold I_{n-1}\Big) ^{-1}\Big| \le v^{-1}. $$ \end{lem} Applying Lemma \ref{Lemma 3.1} to the matrix $\bold H-z\bold I$ we may write, for $j=1,\ldots,n$, \begin{align}\label{ineq3.1} R_{j,j}&=-\frac1{z+y\widetilde s_p(z)+\frac{y-1}z-\varepsilon_j}=-\frac1{z+y\widetilde s_p(z)+\frac{y-1}z}\notag\\&+ \frac{\varepsilon_j}{(z+y\widetilde s_p(z)+\frac{y-1}z)(z+y\widetilde s_p(z)+\frac{y-1}z-\varepsilon_j)}\notag\\&= -\frac1{z+y\widetilde s_p(z)+\frac{y-1}z}\left(1-\varepsilon_jR_{j,j}\right), \end{align} where \begin{equation}\label{rep5.1} \varepsilon_j=\varepsilon_j^{(1)}+\varepsilon_j^{(2)}+\varepsilon_j^{(3)}+\varepsilon_j^{(4)} \end{equation} with \begin{align} \varepsilon_j^{(1)}&=\frac1p\sum_{1\le k\ne l\le p}X_{jk}X_{jl}^*R^{(j)}_{k+n,l+n},\quad \varepsilon_j^{(2)}=\frac1p\sum_{k=1}^p(|X_{j,k}|^2-1)R^{(j)}_{k+n,k+n}\notag\\ \varepsilon_j^{(3)}&=\frac1p\sum_{k=1}^pR^{(j)}_{k+n,k+n}-\frac1p\sum_{k=1}^pR_{k+n,k+n},\quad \varepsilon_j^{(4)}=\frac{1}p\sum_{k=1}^pR_{k+n,k+n}-\frac{1}p\mathbb{E}\left(\sum_{k=1}^pR_{k+n,k+n}\right).\notag \end{align} This implies that \begin{equation}\label{qq} \widetilde s_p(z)=-\frac1{z+y\widetilde s_p(z)+\frac{y-1}z}+\delta_p(z), \end{equation} where \begin{equation} \delta_p(z)=\frac1{n\;(z+y\widetilde s_p(z)+\frac{y-1}z)}\sum_{j=1}^n\varepsilon_jR_{jj}. \end{equation} Throughout this paper we shall consider $z=u+iv$ with $a\le |u|\le b$ and $0<v<C$. The main result of this Section is \begin{lem}\label{Lemma 3.4} Let $$ \text{\rm{Im}}\,\Big\{y\delta_p(z)+z +\frac{y-1}z\Big\}\ge 0. $$ Then $$ \left|z+\frac{y-1}z+y\widetilde s_p(z)\right|\ge \sqrt y.
$$ \end{lem} \begin{proof} From representation (\ref{qq}) it follows that \begin{equation} \mathrm{Im}\;\!\left\{ys_p(z)+z+\frac{y-1}z\right\}=\frac{\mathrm{Im}\;\!\left\{ys_p(z)+z+\frac{y-1}z\right\}} {|ys_p(z)+z+\frac{y-1}z|^2}+\mathrm{Im}\;\!\{\delta_p(z)+z+\frac{y-1}z\}. \end{equation} This equality concludes the proof. \end{proof} \section{Bounds for $\delta_p(z)$} We start from the simple bound for the $\delta_p(z)$. \begin{lem}\label{Lem3.0}Under the conditions of Theorem \ref{thm1.1} the following bound holds for $1\ge v\ge CM^{1/2}n^{-1/2}$ \begin{equation} |\delta_p(z)|\le \frac 1{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}\frac C{nv^4}. \end{equation} \end{lem} \begin{proof} Note that \begin{equation} |\delta_p(z)|\le \frac 1{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}(\frac1n\sum_{j=1}^n|\mathbb{E} \varepsilon_{j}|+\frac1n\sum_{j=1}^n\mathbb{E} \varepsilon_{j}|^2|R_{j,j}|). \end{equation} Using inequalities (\ref{ineq01}), (\ref{ineq01a}), (\ref{ineq01b}), and (\ref{ineq01c}) below and inequality $|R_{j,j}|\le 1/v$, we get \begin{align} |\delta_p(z)|&\le \frac 1{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}(\frac1{nv}+\frac1{nv}\sum_{j=1}^n\mathbb{E} |\varepsilon_{(j)}|^2\notag\\ &\le \frac 1{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}(\frac1{nv}+ \frac C{nv^3}) \end{align} Thus the Lemma is proved. \end{proof} In this Section we give bounds for remainder term $\delta_p(z)$ in the equation (\ref{qq}).We first start with bounds assuming that there exist positive constants $a_1$, $a_2$ such that \begin{equation}\label{con5.2} a_1\le\left|z+\frac{y-1}z+ys_p(z)\right|\le a_2. \end{equation} \begin{lem}\label{lem:4.1} There exists a positive absolute constant $C$ such that, for $v\ge cn^{-1}$ with some other positive absolute constant $c$, \begin{equation}\label{ineq01} \mathbb{E}|\varepsilon_j^{(1)}|^2\le \frac{C(1+|s_p(z)|)}{nv} \end{equation} \begin{equation}\label{ineq01a} \mathbb{E}|\varepsilon_j^{(2)}|^2\le \frac{C(1+|s_p(z)|)}{nv} \end{equation} and \begin{equation}\label{ineq02} \mathbb{E}|\varepsilon_j^{(1)}|^4\le \frac{CM_4^2(1+|\widetilde s_p(z)|)}{n^2v^2}. \end{equation} \end{lem} \begin{proof} Consider inequality (\ref{ineq01}). We have \begin{equation}\label{ineq4.5} \mathbb{E}|\varepsilon_j^{(1)}|^2\le\frac2{p^2}\sum_{k,l=1}^p\mathbb{E}|R^{(j)}_{k,l}|^2\le \frac1{p^2}\mathbb{E}\mathrm{Tr}\;\!\bold R^{(j)}(\bold R^{(j)})^*\le \frac2{p^2v}\mathbb{E}\mathrm{Im}\;\!\mathrm{Tr}\;\!\bold R^{(j)}. \end{equation} Applying Lemma \ref{Lemma 3.2}, we get \begin{equation} |\mathrm{Tr}\;\!\bold R-\mathrm{Tr}\;\!\bold R^{(j)}|\le 1/v. \end{equation}Note that \begin{equation} \frac1{2n}\mathbb{E}\mathrm{Im}\;\!\mathrm{Tr}\;\! \bold R(z)\le (1+y)|\widetilde s_p(z)|+\left|\mathrm{Im}\;\!\left\{\frac{1-y}{z}\right\}\right|. \end{equation} It is straighforward to check that \begin{equation} \left|\mathrm{Im}\;\!\left\{\frac{1-y}{z}\right\}\right|\le 1 \end{equation} The last inequalities together conclude the proof of inequality (\ref{ineq01}). The proof of inequality (\ref{ineq01a}) is similar. Furthermore, \begin{equation} \mathbb{E}|\varepsilon_j^{(1)}|^4\le\frac {CM_4^2}{p^4}\mathbb{E}\left(\sum_{k,l=1}^p|R^{(j)}_{k,l}|^2\right)^2 \le\frac {CM_4^2}{p^2v^2}\mathbb{E}\left(\frac1p\,\mathrm{Im}\;\!\mathrm{Tr}\;\!\bold R^{(j)}\right)^2. \end{equation} Similar to inequality (\ref{ineq01}) we get \begin{equation} \mathbb{E}|\varepsilon_j^{(1)}|^4\le \frac {CM_4^2(1+|\widetilde s_p(z)|)^2}{p^2v^2} \end{equation} Thus the Lemma is proved. 
\end{proof} \begin{lem}\label{lem4.1} For any $j=1,\ldots, n$ the following inequality \begin{equation}\label{ineq01b} |\varepsilon_j^{(3)}|\le \frac1{nv} \end{equation} holds. \end{lem} \begin{proof} The result follows immediately from Lemma \ref{Lemma 3.2} with $\bold A= \bold H$. \end{proof} \begin{lem}\label{lem:4.2} The following bound holds for all $v>0$ \begin{equation}\label{ineq01c} \mathbb{E}|\varepsilon_j^{(4)}|^2\le \frac 4{nv^2}. \end{equation} There exist positive constants $c$ and $C$ depending on $a_1$ and $a_2$ such that for any $v\ge cn^{-\frac12}$ \begin{equation}\label{ineq10} \mathbb{E}|\varepsilon_j^{(4)}|^2\le \frac {CM_4(1+|\widetilde s_p(z)|)}{n^2v^3} \end{equation} and \begin{equation}\label{110} \mathbb{E}|\varepsilon_j^{(4)}|^3\le \frac {CM_4(1+|\widetilde s_p(z)|)}{n^{\frac52}v^4} \end{equation} and \begin{equation}\label{ineq11} \mathbb{E}|\varepsilon_j^{(4)}|^4\le \frac {CM_4(1+|\widetilde s_p(z)|)}{n^3v^5}. \end{equation} \end{lem} \begin{proof} Note that \begin{equation} \varepsilon_j^{(4)}=\frac1p\Bigl(\sum_{k=1}^pR_{k+n,k+n}-\mathbb{E}\sum_{k=1}^pR_{k+n,k+n}\Bigr)= \frac1{2p}\bigl(\mathrm{Tr}\;\!\bold R(z)-\mathbb{E}\,\mathrm{Tr}\;\!\bold R(z)\bigr). \end{equation} Let $\mathbb{E}_k$ denote the conditional expectation given $X_{lm},\ 1\le l\le k; \ 1\le m\le p$. Then \begin{equation}\label{ineq1} \mathbb{E}|\varepsilon_j^{(4)}|^2\le\frac1{p^2}\sum_{k=1}^n\mathbb{E}|\gamma_k|^2, \end{equation} where \begin{equation} \gamma_k=\mathbb{E}_{k}(\mathrm{Tr}\;\!\bold R)-\mathbb{E}_{k-1}(\mathrm{Tr}\;\!\bold R). \end{equation} Since $\mathbb{E}_{k}\mathrm{Tr}\;\!\bold R^{(k)}=\mathbb{E}_{k-1}\mathrm{Tr}\;\!\bold R^{(k)}$ we have \begin{equation} \gamma_k=\mathbb{E}_{k}\sigma_k-\mathbb{E}_{k-1}\sigma_k, \end{equation} where \begin{equation} \sigma_k=(\mathrm{Tr}\;\!\bold R-\mathrm{Tr}\;\!\bold R^{(k)}) . \end{equation} According to Lemma \ref{Lemma 3.2}, we may represent $\sigma_k$ as follows \begin{equation} \sigma_k=\sigma_k^{(1)}+\sigma_k^{(2)}+\sigma_k^{(3)}, \end{equation} where \begin{align} \sigma_k^{(1)}&=\frac{1+\frac1p\sum_{r=1}^n\sum_{s=1}^pX_{kr}\overline X_{ks}(\bold R^{(k)})^2_{rs}}{z+y\widetilde s_p(z)+\frac{y-1}z}\notag\\ \sigma_k^{(2)}&=\frac{\varepsilon_k\sigma_k}{z+y\widetilde s_p(z)+\frac{y-1}z}\notag\\ \sigma_k^{(3)}&=\frac{\frac1p\left(\sum_{r=1}^n\sum_{s=1}^pX_{kr}\overline X_{ks}(\bold R^{(k)})^2_{rs}-\mathrm{Tr}\;\! (\bold R^{(k)})^2\right)}{z+y\widetilde s_p(z)+\frac{y-1}z}\notag. \end{align} Since \begin{equation} \mathbb{E}_{k}\sigma_k^{(1)}=\mathbb{E}_{k-1}\sigma_k^{(1)}, \end{equation} we get \begin{equation}\label{ineq4} \mathbb{E}|\gamma_k|^2\le 2(\mathbb{E}|\sigma_k^{(2)}|^2+\mathbb{E}|\sigma_k^{(3)}|^2)\le C(\frac 1{v^2} \mathbb{E}|\varepsilon_k|^2+\mathbb{E}|\sigma_k^{(3)}|^2). \end{equation} By definition of $\varepsilon_k$, we have \begin{equation} \mathbb{E}|\varepsilon_k|^2\le 4\mathbb{E}|\varepsilon_k^{(1)}|^2+4\mathbb{E}|\varepsilon_k^{(2)}|^2+4\mathbb{E}|\varepsilon_k^{(3)}|^2 +4\mathbb{E}|\varepsilon_k^{(4)}|^2. \end{equation} According to Lemmas \ref{lem:4.1} and \ref{lem4.1}, we have \begin{equation}\label{ineq5} \mathbb{E}|\varepsilon_k|^2\le \frac{C(1+|\widetilde s_p(z)|)}{nv}+4\mathbb{E}|\varepsilon_k^{(4)}|^2. \end{equation} Furthermore, \begin{equation}\label{ineq6} \mathbb{E}|\sigma_k^{(3)}|^2\le \frac C{n^2v^3}\mathbb{E}\,\mathrm{Im}\;\!\mathrm{Tr}\;\! \bold R^{(k)}\le \frac{C(1+|\widetilde s_p(z)|)}{nv^3}.
\end{equation} Inequalities (\ref{ineq4}), (\ref{ineq5}) and (\ref{ineq6}) together imply that \begin{equation}\label{ineq7} \mathbb{E}|\gamma_k|^2\le \frac{C(1+|\widetilde s_p(z)|)}{nv^3}+\frac C{v^2}\mathbb{E}|\varepsilon_k^{(4)}|^2. \end{equation} From the inequalities (\ref{ineq1}) and (\ref{ineq7}) it follows that \begin{equation} \mathbb{E}|\varepsilon_k^{(4)}|^2\le\frac{C(1+|\widetilde s_p(z)|)}{n^2v^3}+\frac C{nv^2}\mathbb{E}|\varepsilon_k^{(4)}|^2. \end{equation} For $v\ge cn^{-\frac12}$ with some sufficiently small positive absolute constant $c$, we get \begin{equation} \mathbb{E}|\varepsilon_k^{(4)}|^2\le\frac{C(1+|\widetilde s_p(z)|)}{n^2v^3}. \end{equation} Thus the inequality (\ref{ineq10}) is proved. To prove inequality (\ref{ineq11}) we use the Burkholder inequality for martingales (see Hall and Heyde \cite{hall}, p.24). We get \begin{equation}\label{ineq12} \mathbb{E}|\varepsilon_k^{(4)}|^4\le \frac n{p^4}\sum_{l=1}^n\mathbb{E}|\gamma_l|^4. \end{equation} Using that $|\gamma_l|\le \frac2v$, we get \begin{equation}\label{ineq13} \mathbb{E}|\gamma_l|^4\le \frac4{v^2}\mathbb{E}|\gamma_l|^2\le \frac{CM_4(1+|\widetilde s_p(z)|)}{nv^5}. \end{equation} Inequalities (\ref{ineq12}) and (\ref{ineq13}) together imply that \begin{equation} \mathbb{E}|\varepsilon_k^{(4)}|^4\le \frac{CM_4(1+|\widetilde s_p(z)|)}{n^3v^5}. \end{equation} The bound (\ref{110}) now follows from (\ref{ineq10}) and (\ref{ineq11}) by the Cauchy--Schwarz inequality. Thus the Lemma is proved. \end{proof} \begin{lem}\label{Lemma 5.4} There exist some positive constants $c$ and $C$ such that, for any $1\ge v\ge cn^{-\frac12}$, the following inequality holds \begin{equation} \frac1n\sum_{k=1}^n\mathbb{E}|R_{k,k}|^2\le C. \end{equation} \end{lem} \begin{proof} To prove this Lemma we repeat the proof of Lemma 5.4 in \cite{GT:03}. Let \begin{equation} U^2=\frac1n\sum_{k=1}^{n+p}\mathbb{E}|R_{k,k}|^2. \end{equation} By equality (\ref{ineq3.1}), we have \begin{equation}\label{ineq:011} U^2\le C(1+\frac1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}|^2|R_{j,j}|^2). \end{equation} Applying Lemmas \ref{lem:4.1}--\ref{lem:4.2}, we obtain \begin{equation}\label{ineq:012} \frac1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}^{(1)}|^2|R_{j,j}|^2\le \frac{CM_4}{nv^2}\left(\frac1n\sum_{j=1}^n\mathbb{E}|R_{j,j}|^2\right)^{\frac12}. \end{equation} Furthermore, \begin{equation}\label{ineq:013} \frac1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}^{(3)}|^2|R_{j,j}|^2\le\frac{C}{n^2v^4}. \end{equation} To bound $\frac1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}^{(4)}|^2|R_{j,j}|^2$ we use that $\varepsilon_j^{(4)}$ does not depend on $j$. We write \begin{align}\label{ineq15} \frac 1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}^{(4)}|^2|R_{j,j}|^2&= \mathbb{E}\Big(|\varepsilon_{1}^{(4)}|^2\frac 1n\sum_{j=1}^n |R_{j,j}|^2\Big)\notag\\ &\le \frac Cv\mathbb{E}\Big(|\varepsilon_{1}^{(4)}|^2\frac1n\mathrm{Im}\;\!\mathrm{Tr}\;\!\bold R(z)\Big)\notag\\ &\le \frac {C|\widetilde s_p(z)|}{v} \mathbb{E}|\varepsilon_{1}^{(4)}|^2+\frac Cv\mathbb{E}\Big(|\varepsilon_{1}^{(4)}|^2\frac 1n\big|\mathrm{Tr}\;\! \bold R(z)- \mathbb{E}\mathrm{Tr}\;\!\bold R(z)\big|\Big)\notag\\&\le\frac {C(1+|\widetilde s_p(z)|)}{v} \mathbb{E}|\varepsilon_{1}^{(4)}|^2+ \frac Cv\mathbb{E}|\varepsilon_{1}^{(4)}|^3 \end{align} Inequalities (\ref{ineq10}), (\ref{110}), and (\ref{ineq15}) together imply \begin{equation}\label{ineq16} \frac1n\sum_{j=1}^n\mathbb{E}|\varepsilon_{j}^{(4)}|^2|R_{j,j}|^2\le\frac {CM_4(1+|\widetilde s_p(z)|)}{n^2v^4} +\frac {CM_4(1+|\widetilde s_p(z)|)}{\sqrt{n^5v^{10}}}. \end{equation} Let \begin{equation} T:=\frac1n\sum_{j=1}^{n+p}\mathbb{E}|\varepsilon_{j}^{(2)}|^2|R_{j,j}|^2. 
\end{equation} From inequalities (\ref{ineq:011}), (\ref{ineq:012}), (\ref{ineq:013}), and (\ref{ineq16}) it follows that, for $v\ge cn^{-\frac12}$, \begin{equation} U^2\le C+\delta U+T,\qquad\text{where}\quad \delta:=\frac{CM_4}{nv^2} \end{equation} is the coefficient from inequality (\ref{ineq:012}). Solving this quadratic inequality with respect to $U$, we get \begin{equation}\label{ineq:5.58} U^2\le C+T. \end{equation} To bound $T$ we start from the obvious inequality \begin{equation}\label{4.41} T\le \frac1{v^2}\frac1n\sum_{j=1}^{n+p}\mathbb{E}|\varepsilon_{j}^{(2)}|^2\le \frac C{nv^2}\frac1n\sum_{j=1}^{n+p}\left(\frac1n{\sum}^{(j)}\mathbb{E}|R^{(j)}_{k,k}|^2\right), \end{equation} where ${\sum}^{(j)}$ denotes the sum over all $k=1,\ldots, n+p$ except $k=j$. Introduce now some integer number $m=m(n)$ depending on $n$ such that $mn^{-1}\le a_1/4$. Without loss of generality we may assume that $m\le n/2$. Since $|\widetilde s_{p-l}(z)-\widetilde s_{p-l-1}(z)|\le \frac{1}{n-l}$ we get $$ a_1/2\le \min_{1\le l\le m}|y\widetilde s_{p-l}(z)+z+\frac{y-1}{z}|\le \max_{1\le l\le m} |y\widetilde s_{p-l}(z)+z+\frac{y-1}{z}|\le \frac32 a_2. $$ Let $\bold j^{(r)}=(j_1,\ldots,j_r)$ with $1\le j_1\ne j_2\ldots\ne j_r\le n$, $r=1,\ldots ,m$. Denote by $\bold H^{(\bold j^{(r)})}$ the matrix which is obtained from $\bold H$ by deleting the $j_1$th, $\ldots$, $j_r$th rows and columns, and let $$ \bold R^{(\bold j^{(r)})}= \left(\frac1{\sqrt{n-r}}\bold H^{(\bold j^{(r)})}-z\bold I_{n+p-r}\right)^{-1}. $$ Arguing similarly as in inequality (\ref{4.41}) we get that uniformly for $r=1,\ldots,m-1$, and for $v\ge C_1(a_1,a_2)n^{-\frac12}M^{\frac12}$ \begin{align}\label{5.45} \frac1{n}\sum_{k=1,\, k\notin \bold j^{(r)}}^n\mathbb{E}|R^{(\bold j^{(r)})}_{k,k}|^2 &\le \frac{C_0(a_1,a_2)M}{nv^2} \Big(\frac1n\sum_{k=1,\,k\notin \bold j^{(r)}}^n \Big(\frac1n\sum_{j=1,j\notin \bold j^{(r+1)}}^n \mathbb{E}|R^{(\bold j^{(r+1)})}_{j,j}|^2\Big)\Big)\notag \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+C_0(a_1,a_2). \end{align} Note that the constants $C_0(a_1,a_2)$ and $C_1(a_1,a_2)$ do not depend on $l=1,\ldots,m$. Applying inequality (\ref{5.45}) recursively we get for $1\ge v\ge C_1(a_1,a_2)n^{-1/2}M^{\frac12}$, \begin{align}\label{5.46} \frac1n\sum_{k=1}^n\mathbb{E}|R_{k,k}|^2&\le C_0(a_1,a_2)\sum_{r=0}^{m-1} \Big(\frac{C_0(a_1,a_2)M}{nv^2}\Big)^r\notag\\&\qquad+ \Big(\frac{C_0(a_1,a_2)M}{nv^2}\Big)^m\Big(\frac1n\sum_{k=1,\, k\notin \bold j^{(m-1)}}^n \Big(\frac1n\sum_{j=1,\,j\notin \bold j^{(m)}}^n \mathbb{E}|R^{(\bold j^{(m)})}_{j,j}|^2\Big)\Big) \end{align} Without loss of generality we may assume that $$ \frac{C_0(a_1,a_2)M}{nv^2}\le \frac12. $$ Similar to inequality (\ref{ineq4.5}) we get that \begin{equation}\label{5.47} \frac1n\sum_{j=1,\, j\notin \bold j^{(m)}}^n \mathbb{E}|R^{(\bold j^{(m)})}_{j,j}|^2\le \frac1n\mathbb{E}\mathrm{Tr}\;\!|\bold R^{(\bold j^{(m)})}|^2 \le\frac {C_0(a_1,a_2)}v. \end{equation} The inequalities (\ref{5.46}) and (\ref{5.47}) together imply that \begin{equation}\label{5.48} \frac1n\sum_{k=1}^n\mathbb{E}|R_{k,k}|^2\le 2C_0(a_1,a_2)+ \frac1{2^m}\frac Cv. \end{equation} Choosing $m=[C\log n]$ such that $2^{-m}\le Cv$ concludes the proof. \end{proof} \begin{lem}\label{Lemma 5.5}Assume that condition $(\ref{con5.2})$ holds. Then there exist positive constants $C_3(a_1,a_2)$ and $C_4(a_1,a_2)$ such that for $v\ge C_3(a_1,a_2) n^{-1/2}M^{1/2}$ the following inequality holds $$ |\delta_p(z)|\le \frac{C_4(a_1,a_2)M}{nv}. 
$$ \end{lem} \begin{proof}The equalities (4.5) and (4.6) imply that \begin{equation}\label{5.49} |\delta_p(z)|\le \frac{C}{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}\Bigl(\frac1p\sum_{k=1}^{n+p}|\mathbb{E}\varepsilon_k|+ \frac1p\sum_{k=1}^{n+p}\mathbb{E}|\varepsilon_k|^2|R_{k,k}|\Bigr). \end{equation} According to Lemma \ref{lem4.1} and inequality (\ref{con5.2}) we get \begin{equation}\label{5.50} \frac{C}{|z+y\widetilde s_p(z)+\frac{y-1}z|^2}\Bigl(\frac1n\sum_{k=1}^n|\mathbb{E}\varepsilon_k|\Bigr)\le \frac{C}{nva_1^{2}}\le \frac{C(a_1,a_2)}{nv}. \end{equation} Using the representation (\ref{rep5.1}), we obtain \begin{equation}\label{5.51} \frac{C}{|z+y\widetilde s_p(z)+\frac{y-1}z|^2} \Bigl(\frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k|^2|R_{k,k}|\Bigr)\le C(a_1,a_2)\sum_{\nu=1}^4 \Bigl(\frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(\nu)}|^2|R_{k,k}|\Bigr). \end{equation} Similar to inequality (\ref{5.46}) and by Lemma \ref{Lemma 3.4} we arrive at \begin{align}\label{5.52} \frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(1)}|^2|R_{k,k}|&\le \Big(\frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(1)}|^4\Big)^{1/2} \Big(\frac1n\sum_{k=1}^n\mathbb{E}|R_{k,k}|^2\Big)^{1/2}\\&\le \frac {C(a_1,a_2)M^{\frac12}}{nv}. \end{align} By Lemma \ref{lem4.1}, $|\varepsilon_k^{(3)}|\le (nv)^{-1}$, and we have \begin{equation}\label{5.53} \frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(3)}|^2|R_{k,k}|\le \frac{1}{n^2v^3} \le \frac{C(a_1,a_2)}{nv}. \end{equation} Finally, note that $$ \frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(2)}|^2|R_{k,k}|\le \frac1{nv}\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(2)}|^2\le \frac{C(a_1,a_2)M}{nv}\Big(\frac1n\sum_{j=1,j\ne k}\mathbb{E}|R^{(k)}_{j,j}|^2\Big). $$ Applying Lemma \ref{Lemma 5.4} to the matrix $\bold H^{(k)}$ we get \begin{equation}\label{5.55} \frac1n\sum_{k=1}^n\mathbb{E}|\varepsilon_k^{(2)}|^2|R_{k,k}|\le\frac{C(a_1,a_2)M}{nv}. \end{equation} The inequalities (\ref{5.49})--(\ref{5.55}) together imply that for $1\ge v\ge C_1(a_1,a_2)n^{-1/2}M^{\frac12}$ $$ |\delta_p(z)|\le \frac{C(a_1,a_2)M}{nv}, $$ which proves Lemma \ref{Lemma 5.5}. \end{proof} \begin{lem}\label{lem5.1} Assuming the conditions of Theorem \ref{thm1.1}, there exists an absolute positive constant $C$ such that for any $1\ge v\ge CM^{1/2}n^{-1/2}$ and $u\in [a,b]$, the following inequality holds \begin{equation}\label{ineq5.0} \mathrm{Im}\;\!\Bigl\{z+y\delta_p(z)+\frac{y-1}z\Bigr\}>0,\quad z=u+iv. \end{equation} \end{lem} \begin{Proof of} Lemma \ref{lem5.1}. Suppose, to the contrary, that for $r_n(z):=z+y\delta_p(z)+\frac{y-1}z$ the following equality holds \begin{equation}\label{5.56} \mathrm{Im}\;\!\bigl\{r_n(z)\bigr\}=0. \end{equation} Set $t(z):=y\widetilde s_p(z)+\frac{y-1}z+z$. Since $$ t(z)=-\frac y{t(z)}+r_n(z) $$ this immediately implies that $$ \mathrm{Im}\;\! t(z)=-\mathrm{Im}\;\!\Bigl\{\frac y{t(z)}\Bigr\}. $$ Since $\mathrm{Im}\;\!\{t(z)\}\ge\mathrm{Im}\;\! z=v>0$ this implies that $$ |t(z)|=\sqrt y. $$ Hence condition (\ref{con5.2}) holds with $a_1=a_2=\sqrt y$ and we have $$ |\delta_p(z)|\le \frac{CM}{nv}. $$ Then for any $v\ge 2n^{-\frac12}\sqrt{CM}$, $$ |\delta_p(z)|\le \frac14 v<v $$ holds. But condition (\ref{5.56}) implies that $$ |\delta_p(z)|\ge v, $$ which is a contradiction. Hence we conclude that $\mathrm{Im}\;\!\{z+y\delta_p(z)+\frac{y-1}z\}\ne0$ in the region $v\ge 2n^{-\frac12}\sqrt{CM}$. From Lemma \ref{Lem3.0} it follows for example that, for $v=1$, $\mathrm{Im}\;\!\{r_n(z)\}>0$. 
Since the function $\mathrm{Im}\;\!\{r_n(z)\}$ is continuous in the region $v\ge C_1n^{-\frac12}\sqrt{M}$ we get that $\mathrm{Im}\;\!\{r_n(z)\}>0$ for $v\ge C_1n^{-\frac12}\sqrt{M}$. This proves Lemma \ref{lem5.1}. \end{Proof of} \begin{Proof of} Theorem \ref{thm1.1}. Recall that $1\ge y\ge \theta>0$. Let $v_0=\max\{\gamma_0\Delta_p,2n^{-\frac12}C_1M^{\frac12}\}$ with a $\gamma_0$ such that $1>\gamma_0>0$ to be chosen later. By Lemma \ref{lem5.1} for any $1\ge v\ge v_0$ we have $$ \mathrm{Im}\;\!\{z+y\delta_p(z)+\frac{y-1}z\}>0. $$ Note that the constant $C_1$ does not depend on $\gamma_0$. In addition we have \begin{align} |\widetilde s_p(z)-\widetilde s_y(z)|&= \Bigl|\int_{-\infty}^{\infty}\frac1{x-z}d\Bigl(\mathbb{E}\,\widetilde F_p(x)-\widetilde F_y(x)\Bigr)\Bigr| \\&=\Bigl|\int_{-\infty}^{\infty}\frac{\mathbb{E}\,\widetilde F_p(x)-\widetilde F_y(x)}{(x-z)^2}dx\Bigr|\le \frac{\Delta_p}v\le \frac 1\gamma_0. \end{align} This implies that for $z=u+iv$ such that $|u|\in[a,b]$, $1\ge v\ge v_0$, we have \begin{equation}\label{5.57} |y\widetilde s_p(z)+z+\frac{y-1}z|\le \frac 1\gamma_0+5. \end{equation} From equality (\ref{qq}) it follows that \begin{equation}\label{q0} s_p(z)=-\frac1{2y}\Bigl(z+\frac{y-1}z-y\delta_p(z)-\sqrt{(z+\frac{y-1}z+y\delta_p(z))^2-4y}\Bigr). \end{equation} Introduce the function \begin{equation}\label{q1} q(z):=-\frac1{2y}(z-\sqrt{z^2-4y}). \end{equation} Equalities (\ref{q0}) and (\ref{q1}) together imply that for $v\ge v_0$ \begin{equation} z+y\widetilde s_p(z)+\frac{y-1}z=q(\omega+y\delta_p(z))\label{5.58} \end{equation} where $\omega:=z+\frac{y-1}z$. Let $s(z)$ denote the Stieltjes transform of the semicircular law. Then $q(z)=\frac1{\sqrt y}s(z/\sqrt y)$. This implies in particular that $|q(z)|\le 1/\sqrt y$. Since $\mathrm{Im}\;\!\{y\delta_p(z)+\omega\}>0$ the equality (\ref{5.58}) immediately implies that \begin{equation} |z+y\widetilde s_p(z)+\frac{y-1}z|\ge 1/\sqrt y,\qquad \text{for}\qquad v\ge v_0\label{5.59} \end{equation} From the inequalities (\ref{5.58}) and (\ref{5.59}) it follows that condition (\ref{con5.2}) holds with $a_1=1$, and $a_2=\frac1{\gamma_0}+5$. The relation (\ref{5.58}) implies that \begin{equation} |\widetilde s_p(z)-\widetilde s_y(z)|\le \frac1{\sqrt y}\left|{q(\omega)}-{q(\omega+y\delta_p(z))}\right|.\label{5.60} \end{equation} After a simple calculation we get \begin{equation} |\widetilde s_p(z)-\widetilde s_y(z)|\le \frac{y|\delta_p(z)|}{\left|\sqrt{(\omega+y\delta_p(z))^2-4y}+\sqrt{\omega^2-4y}\right|}.\label{5.61} \end{equation} By Lemma \ref{Lemma 5.5} we obtain for $1\ge v\ge v_0$, \begin{equation} |\delta_p(z)|\le \frac14v,\label{5.62} \end{equation} and for $z=u+iv$ such that $u\in I$ we get \begin{equation} \min\{\sqrt{|\omega^2-4y|},\sqrt{|(\omega+y\delta_p(z))^2-4y|}\}\ge C\sqrt{v}.\label{5.63} \end{equation} Inequalities (\ref{5.61})--(\ref{5.63}) imply that for $z=u+iv$ such that $u\in I$ and $1\ge v\ge v_0$ \begin{equation} |\widetilde s_p(z)-\widetilde s_y(z)|\le \frac {C|\delta_p(z)|}{\sqrt v}.\label{5.64} \end{equation} By Lemma \ref{Lemma 5.5} we have \begin{equation} |\delta_p(z)|\le \frac{C(\gamma_0)M}{nv}.\label{5.65} \end{equation} From (\ref{5.64}) and (\ref{5.65}) it follows that $$ |\widetilde s_p(z)-\widetilde s_y(z)|\le \frac{C(\gamma_0)M}{nv^{\frac32}}. $$ Choosing in Corollary 2.3 $V=1$ and using the inequality (4.29) we get after integrating in $u$ and $v$ $$ \Delta_p \le C_1Mn^{-1}+C_2v_0+C_3(\gamma_0)Mn^{-1}v_0^{-1}. 
$$ Since $v_0\ge 2n^{-\frac12}C_1M^{\frac12}$ we get $$ \Delta_p\le C(\gamma_0)M^{\frac12}n^{-\frac12}+C_2v_0. $$ Recall that $C_2$ does not depend on $\gamma_0$. If $v_0=2n^{-\frac12}C_1M^{\frac12}$ then $$ \Delta_p\le C(\gamma_0)M^{\frac12}n^{-\frac12}. $$ We choose $\gamma_0=\frac1{2C_2}$. If $v_0=\gamma_0\Delta_p$ then $$ \Delta_p\le C(\gamma_0)M^{\frac12}(1-C_2\gamma_0)^{-1}n^{-\frac12}\le 2C(\gamma_0)M^{\frac12}n^{-\frac12}. $$ This completes the proof of Theorem \ref{thm1.1}. \end{Proof of} {\bf Acknowledgment.} The authors would like to thank Dmitry Timushev for careful reading of the manuscript.
\section{Introduction} Twistor-string theory provides a dramatic reformulation of perturbative $N=4$ super-Yang-Mills and conformal super-gravity scattering amplitudes in terms of integrals over moduli spaces of algebraic curves in super-twistor space (a supersymmetric version $\mathbb{CP}^{3|4}$ of complex projective three space, $\mathbb{CP}^3$), Witten (2004). Whilst it is widely believed that the twistor-string formulation is correct at tree level, no systematic proof is known. The purpose of this article is to provide a derivation of these formulae from first principles. It starts with the space-time action, and proceeds via a twistor space action associated to a corresponding twistor construction for fields that are not necessarily self-dual. The price we pay for this extra generality over and above the standard twistor correspondences for self-dual fields is that the twistor almost complex structures are no longer generally integrable. This limits the applicability of the constructions to problems in classical geometry and reflects the lack of integrability of the classical equations. Nevertheless, the constructions are sufficient to provide a derivation of twistor-string theory. In particular we give a formal proof that the twistor-string generating functionals for perturbative scattering amplitudes are correct in the classical limit, i.e., for tree diagrams; more work is required to extend the approach rigorously to loop diagrams, although it provides a platform from which one can investigate the problem. This approach also disentangles Yang-Mills from conformal gravity and the supersymmetric theories from their bosonic constituents, thus making the study of loops much more straightforward; from the twistor-string point of view, a higher genus contribution automatically includes conformal supergravity modes and supersymmetric partners in the loops. The proof is formal to the extent that it relies on an expansion of the classical limit of the path integral and infrared divergences are not addressed. In twistor-string theory, scattering amplitudes for gluons in helicity eigenstates are given by integrals over the moduli space of algebraic curves in twistor space of degree $d$, where $d=q-1+l$, $l$ is the number of loops and $q$ is the number of external gluons of helicity $+1$. The genus of the curves is also bounded by the number of loops. Most of the investigations have been confined to tree diagrams and hence are concerned with moduli spaces of rational curves (genus 0). There have been, roughly speaking, two approaches to twistor-string theory. Cachazo, Svrcek and Witten (2004) consider integrals over the moduli space of maximally disconnected curves, i.e., $d$ lines, whereas Roiban, Spradlin and Volovich (2004) consider integrals over moduli spaces of connected rational curves. In the former approach, the lines must be connected into a tree by holomorphic Chern-Simons propagators, although these are absent in the latter approach for tree diagrams. Gukov, Motl and Nietzke (2004) argue that the two approaches are equivalent. Perhaps the most elegant formula in the subject is that for the on-shell generating functional for tree-level scattering amplitudes $\mathcal{A}[a,g]$ in the Roiban, Spradlin \& Volovich approach (here $(a,g)$ are the on-shell twistor fields, both being $(0,1)$-forms on a region in $\mathbb{CP}^3$ with values in the endomorphisms of some given smooth bundle $E\to\mathbb{CP}^{3}$, but with $g$ having homogeneity degree $-4$ and $a$ homogeneity degree 0). 
In this case, the twistor on-shell fields $(a,g)$ define a $\bar\partial$-operator $\bar\partial^s$ on a bundle $E$ over $\mathbb{CP}^{3|4}$. The generating functional for processes with $d+1$ external fields of helicity $+1$ is then: \begin{equation}\label{deteq} \mathcal{A}^d[a,g] = \int_{\mathscr{M}^d} \det (\bar\partial^s) \;\d \mu \end{equation} where $\d \mu$ is a natural measure on the moduli space $\mathscr{M}^d$ of connected rational curves in $\mathbb{CP}^{3|4}$ of degree $d$.\footnote{see the transparencies from Witten's lectures posted at www.maths.ox.ac.uk/$\sim$lmason/Tws.} For the CSW version, extra terms associated with the holomorphic Chern-Simons theory need to be incorporated also as described in \S\ref{twistorstring} and this is the version that is proved here. We appeal to Gukov, Motl and Nietzke (2004) for the proof that this implies the connected formulation given above. Whilst the extensions to the $N=4$ supersymmetric versions of Yang-Mills and conformal gravity are likely to be straightforward, they are nevertheless complicated and omitted here. We work in Euclidean signature throughout and ignore infrared divergences. \medskip \noindent A summary of the rest of the article follows. In \S\ref{action}, the Chalmers \& Siegel Lagrangian for the anti-self-dual sector of Yang-Mills on space-time and its generalisation to full Yang-Mills is set out. Witten's twistor space reformulation of the anti-self-dual sector as a holomorphic Chern-Simons theory is then reviewed. We then give a twistorial formulation of the extra term, $I$, required in the action on twistor space to generalise to full Yang-Mills. This term is a two-point integral on twistor space. It is then shown that the full action correctly reproduces full Yang-Mills theory on space-time by virtue of a generalisation of the Ward construction for anti-self-dual gauge fields to gauge fields that are not anti-self-dual. The action is a functional of a $\bar\partial$ operator on a bundle $E\to \mathbb{PT}$ where $\mathbb{PT}$ is a region in $\mathbb{CP}^3$ and a homogeneous $(0,1)$-form $g$ with values in $\mathrm{End}(E)$. The construction relies on the fact that the restriction of a $\bar\partial$-operator to a Riemann sphere, $\mathbb{CP}^1$, is automatically integrable. Not only does the twistor action reproduce the correct equations of motion, but it also takes the same value as the space-time action when evaluated on a solution to the field equations. In \S\ref{construction} the twistor space action is expressed more explicitly and the field equations derived and solved in terms of an arbitrary solution to the Yang-Mills equations on space-time; the general solution is gauge equivalent to such a solution. \S\ref{twistorstring} contains the main derivation of the twistor-string on-shell generating functionals for tree-level scattering amplitudes. In \S\ref{susyaction} it is shown that the extra term $I$ has simple alternative expressions when written in terms of integrals over super twistor space $\mathbb{CP}^{3|4}$. Then in \S\ref{generatingfnls} the general definition of on-shell generating functionals is reviewed and expressed as the classical limit of a path integral. In \S\ref{twistorstringgenfnls} the twistor-string generating functionals are reviewed and that appropriate to the approach of Cachazo, Svrcek and Witten is presented. This is then expanded and resummed to show equivalence with the appropriate formulae from the twistor Lagrangians derived in the previous section. 
Since the classical approximation uses only the value of the action and this takes on the same value as the space-time action, this shows that the twistor-string formulae provide the correct generating function for Yang-Mills scattering theory at tree level. In \S\ref{confgrav} the same process is worked through for conformal gravity. First we review the analogues of the Chalmers \& Siegel Lagrangians appropriate to conformal gravity and their twistor space reformulations as given by Berkovits and Witten (2004). Then we move on to finding the extra (non-local) term required in the twistor action to extend to the full theory. Finally in analogy with the Yang-Mills case, we expand and resum the path integral to obtain the relevant twistor-string formulae for the generating function for perturbative scattering amplitudes. \subsection*{Acknowledgements} I would like to thank the Department of Mathematics at the University of Edinburgh for hospitality while this work was completed. I would also like to thank Roger Penrose, Michael Singer and David Skinner for a number of helpful remarks, and Philip Candelas, Xenia de la Ossa and the speakers at the twistor-string workshop in Oxford, January 2005,\footnote{See www.maths.ox.ac.uk/$\sim$lmason/Tws for the collected slides from the talks.} for educating me about some of the many facets of string theory, perturbative gauge theory and twistor-string theory. \section{The Yang-Mills actions}\label{action} For the purposes of this paper, we will Wick rotate to euclidean signature and use the appropriate euclidean signature conventions. It is not clear that euclidean signature is essential for all of what follows, but it helps avoid a number of technical difficulties. Thus $\mathbb{M}$ will denote $\mathbb{R}^4$ but with the standard flat euclidean metric $\eta$. We will take coordinates $x^a$, $a=0,\ldots,3$ on $\mathbb{M}$ and will use the metric $\eta_{ab}$ to raise and lower indices as usual. We will denote self-dual spinors with a primed upper case roman index, e.g., $\pi_{A'}$, $A'=0', 1'$. Anti-self-dual spinors will be denoted by $\omega^A$, $A=0,1$. In Euclidean signature, the reality structure is quaternionic $$ \omega^A\rightarrow \hat\omega^A=(\bar \omega^1, -\bar\omega^0) $$ so that $\hat{\hat\omega}^A=-\omega^A$. We can represent a vector index as a pair of spinor indices, so the coordinates $x^a$ on $\mathbb{M}$ can be represented as $x^{AA'}$ and we define $\partial_{AA'}=\partial/\partial x^{AA'}$. \subsection{The action on space-time} The basic variable for Yang-Mills theory is a 1-form $A$ on Minkowski space $\mathbb{M}$ with values in the Lie algebra of some gauge group. Let $F=\d A + [A,A]$ be the associated curvature; it is a Lie-algebra valued 2-form. Then the standard action for the Yang-Mills equations is $$ S[A]=\int_\mathbb{M} \, \mathrm{tr}(F\wedge F^*) $$ where $F^*$ is the Hodge dual of $F$ (in indices, $F^*_{ab}=\frac{\scriptstyle 1}{\scriptstyle 2}\varepsilon _{abcd}F^{cd}$) and $\, \mathrm{tr}$ is an ad-invariant inner product on the Lie algebra. Since $\int \, \mathrm{tr} (F^2)$ is a topological invariant, for perturbative purposes, one can add any multiple of this into the action without changing the perturbative theory and this allows one to rewrite the action as $$ S[A]=\int_\mathbb{M}\, \mathrm{tr} (F^+\wedge F^+) $$ where $F^+=\frac{\scriptstyle 1}{\scriptstyle 2}(F+F^*)$ is the self-dual part of $F$ satisfying $F^{+*}=+F^+$. (Here we have used the fact that $F=F^+ + F^-$ and $F^+\wedge F^-=0$ automatically.) 
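To spell out the last step (with the overall factor of 2 absorbed into the normalisation of the action): writing $F=F^++F^-$ and $F^*=F^+-F^-$, and using $\, \mathrm{tr}(F^+\wedge F^-)=0$, we have $$ \, \mathrm{tr}(F\wedge F^*)=\, \mathrm{tr}(F^+\wedge F^+)-\, \mathrm{tr}(F^-\wedge F^-)\, ,\qquad \, \mathrm{tr}(F\wedge F)=\, \mathrm{tr}(F^+\wedge F^+)+\, \mathrm{tr}(F^-\wedge F^-)\, , $$ so that $$ \int_\mathbb{M} \, \mathrm{tr}(F\wedge F^*)=2\int_\mathbb{M} \, \mathrm{tr}(F^+\wedge F^+)-\int_\mathbb{M} \, \mathrm{tr}(F\wedge F)\, , $$ the last term being the topological invariant referred to above. 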
The anti-self-dual sector of the theory is the case when $F^+=0$, but in perturbing away from this to first order, one would want to introduce $G$, a Lie algebra valued self-dual 2-form, so that $\epsilon G$ represents the infinitesimal value of $F^+$. Chalmers \& Siegel have proposed the following action for the anti-self-dual sector of the Yang-Mills equations on Minkowski space $\mathbb{M}$: $$ S_{\mathrm{asd}}[A,G]=\int_\mathbb{M} \, \mathrm{tr}(G\wedge F) $$ The Euler-Lagrange equations imply that $F$ is anti-self-dual, and $G$ is covariantly closed. To obtain the full Yang-Mills equations we add the term $-\frac \epsilon 2 I[G]$ where $$ I[G]=\int_\mathbb{M} \, \mathrm{tr} (G\wedge G) \, . $$ The action for full Yang-Mills is then $$ S_{\mathrm{YM}}=S_{\mathrm{asd}}-\frac\epsilon 2I[G]\, . $$ When $\epsilon\neq 0$, the Euler-Lagrange equations imply that $F^+$, the self-dual part of $F$, is $\epsilon G$, which is in turn covariantly closed, so that the full Yang-Mills equations are satisfied. Forming a perturbation series in $\epsilon$ around $\epsilon=0$ therefore gives a way of perturbing full Yang-Mills theory around its anti-self-dual sector. We also note that the value of this last action on a solution to the field equations is, up to an overall multiplicative factor, the same as for the standard Yang-Mills action, at least perturbatively, so that the topological term does not contribute. \subsection{The actions on twistor space} We first review twistor space geometry and notational conventions that we will use. Twistor space $\mathbb{PT}$ will be taken to be some neighbourhood of a line in complex projective 3-space, $\mathbb{CP}^3$. We will work in a space-time of euclidean signature. In this case, we can choose our neighbourhood so that $\mathbb{PT}$ fibres over an open set $U\subset \mathbb{M}$, $p:\mathbb{PT} \to U$, with fibre the Riemann sphere, $\mathbb{CP}^1$. This fibre is best thought of as the projectivisation of the space $\mathbb{C}^2$ of the self-dual spinors, $\pi_{A'}$, $A'=0', 1'$ at $x\in U$. Thus $(x^{AA'},\pi_{A'})$ are coordinates on non-projective twistor space $\mathbb{T}$, and the projective space is obtained by modding out the scale of $\pi_{A'}$. Homogeneous coordinates on twistor space are provided by $Z^\alpha=(\omega^A,\pi_{A'})$ where $\omega^A=x^{AA'}\pi_{A'}$. We note that the complex conjugation on spinors induces a similar conjugation $Z^\alpha\to\hat Z^\alpha$ with $\hat{\hat{Z}}{}^\alpha=-Z^\alpha$. This conjugation restricts to give the antipodal map on each $\mathbb{CP}^1$ fibre of $\mathbb{PT}\to\mathbb{M}$. The coordinates $(\omega^A,\pi_{A'})$ are holomorphic coordinates for the standard complex structure on $\mathbb{PT}$. In terms of these coordinates, the projection $p$ is given by $$ p(\omega^A,\pi_{A'})=\{x^{AA'}=\frac1{\hat\pi^{C'}\pi_{C'}}\left( \omega^A \hat\pi^{A'}-\hat\omega^A\pi^{A'}\right) \}\, . $$ The complex structure can also be represented in terms of the distribution of $(0,1)$ vectors $\mathrm{D}= \{ \partial/\partial\hat\pi_{A'},\pi^{A'}\partial_{AA'}\}$, where $\partial_{AA'}=\partial/\partial x^{AA'}$. It can also be represented by the $\bar\partial$-operator (written here on the non-projective space) $$ \bar\partial=\frac1{\pi^{A'}\hat\pi_{A'}}\d x^{AA'}\hat\pi_{A'}\pi^{B'}\partial_{AB'} + \d\hat\pi_{A'}\frac{\partial}{\partial\hat\pi_{A'}} $$ The connection 1-form, $A$, is a connection on a bundle $E$ which, with an abuse of notation, can be pulled back to give a smooth bundle $E\to\mathbb{PT}$. 
The connection then allows one to define a d-bar operator $\bar\partial_a=\bar\partial +a$ on $E\to \mathbb{PT}$ where $a$ is a $(0,1)$-form with values in $\mathrm{End}(E)$ and is the $(0,1)$-part of the pullback of $A$ to $\mathbb{PT}$. We will see that $G$ corresponds to a $(0,1)$-form $g$ with values in $\mathrm{End}(E)\otimes\O(-4)$ where $\O(-1)$ is the tautological bundle over $\mathbb{CP}^3$. Witten (2004) shows that $S_{\mathrm{asd}}$ has a direct analogue on twistor space in the form of the spin-1 part \begin{equation}\label{twistasdaction} S_{\mathrm{asd}} [a,g]=\int_\mathbb{PT} \, \mathrm{tr}(g\wedge f)\wedge \Omega \end{equation} of a super Chern-Simons Lagrangian where $f:=\bar\partial a+a\wedge a=\bar\partial_a^2$ is the $(0,2)$-part of the curvature of a connection with $(0,1)$-part $a$ and $\Omega = \varepsilon_{\alpha\beta\gamma\delta} Z^\alpha\d Z^\beta \d Z^\gamma \d Z^\delta\in \Gamma(\mathbb{PT}, \Omega^{(3,0)}(4))$ is the (weighted) holomorphic volume form (here as usual $\varepsilon_{\alpha\beta\gamma\delta} = \varepsilon_{[\alpha\beta\gamma\delta]}$, $\varepsilon_{0123}=1$). The correspondence is precise for classical fields modulo their appropriate gauge freedoms: the Euler-Lagrange equations from this action imply that $f=0$ and $[\bar\partial+a,g]=0$. The Lagrangian is invariant under the usual group of gauge transformations (automorphisms) of $E$ together with $g\rightarrow g+\bar\partial_a\chi$ for smooth sections $\chi$ of $\mathrm{End}(E)(-4)$. Thus, modulo gauge freedoms, the first equation implies that $\bar\partial+a$ defines a holomorphic structure on $E$ up to gauge transformations, and $g$ defines a cohomology class in $H^1(U,\mathrm{End}(E)(-4))$. Holomorphic vector bundles $E$ correspond to anti-self-dual Yang-Mills gauge connections $A$ by the Ward transform, and elements $g\in H^1(U,\mathrm{End}(E)(-4))$ correspond to covariantly closed self-dual 2-forms with values in the Lie algebra of the gauge group by a standard generalisation of the Penrose transform as follows. In the abelian case, the Penrose transform $g\to G$ is implemented by \begin{equation}\label{pentrg}G=p_* (g\wedge \Omega)\end{equation} that is, one integrates over the fibres of $p$ to obtain a 2-form on $\mathbb{M}$. This necessarily provides a self-dual 2-form, since in $(x^{AA'},\pi_{A'})$ coordinates we can write $$ \Omega= \mathrm{D}\pi\wedge \pi_{B'}\pi_{C'}\varepsilon_{BC} \d x^{BB'}\wedge\d x^{CC'} \, , \qquad \mbox{ where }\quad \mathrm{D}\pi=\pi^{A'} \d\pi_{A'} $$ and so if we set $$ G_{A'B'}(x)=\int_{L(x)}\pi_{A'}\pi_{B'}g\wedge\mathrm{D}\pi $$ then the above formula gives $$ G=G_{A'B'}\varepsilon_{AB}\d x^{AA'}\wedge\d x^{BB'}\, , $$ which is necessarily self-dual, since in the spinor decomposition of 2-forms the primed-symmetric part proportional to $\varepsilon_{AB}$ is precisely the self-dual part. It is easily seen that $G$ must be closed since $g\wedge\Omega$ is. In order to formulate full Yang-Mills on twistor space, we need to find the appropriate twistor version of the $I=\int_{\mathbb{M}} \, \mathrm{tr} (G\wedge G)$ term. 
It follows from the above that in order to express $I[G]$ in terms of $g$ in the abelian case, we can consider the integral \begin{equation}\label{Idef} I[g]= \int_{\mathbb{PT}\times_\mathbb{M}\mathbb{PT}} \, \mathrm{tr}(g(Z_1)\wedge g(Z_2)) \wedge \Omega(Z_1)\wedge\Omega(Z_2)\, , \end{equation} where $\mathbb{PT}\times_\mathbb{M}\mathbb{PT}=\{(Z_1,Z_2)\in\mathbb{PT}\times\mathbb{PT} | p(Z_1)=p(Z_2)\in\mathbb{M}\}$ is the fibrewise product of $\mathbb{PT}$ with itself over $\mathbb{M}$ with fibre $\mathbb{CP}^1\times\mathbb{CP}^1$.\footnote{This is the first point at which we needed to have specified a real slice $\mathbb{M}$ of complex Minkowski space $\mathbb{CM}$ (as indeed one must for the ordinary action principle). } To make sense of (\ref{Idef}) in the nonabelian case, we must find some way of comparing the fibre of $\mathrm{End}(E)$ at $Z_1$ with that at $Z_2$. In the integral we have already restricted to $\mathbb{PT}\times_\mathbb{M}\mathbb{PT}$ and so $Z_1$ and $Z_2$ both lie on the line $L(x)$ where $x=p(Z_1)=p(Z_2)$. Although the $\bar\partial_a$ operator is not a priori integrable, it is nevertheless necessarily integrable on restriction to lines. We make the assumption that $(E,\bar\partial_a)$ is holomorphically trivial along such lines $L(x)$ for $x\in\mathbb{M}$; this will be the case for small $a$ and hence perturbatively. We therefore define $\, \mathrm{tr}_a(g(Z_1)\wedge g(Z_2))$ to be the trace taken in such a frame that is globally holomorphic along the line from $Z_1$ to $Z_2$. We now generalize equation (\ref{Idef}) to the non-abelian case as \begin{equation}\label{Iadef} I[g,a]= \int_{\mathbb{PT}\times_\mathbb{M}\mathbb{PT}} \, \mathrm{tr}_a(g(Z_1)\wedge g(Z_2)) \wedge \Omega(Z_1)\wedge\Omega(Z_2)\, . \end{equation} This defines the appropriate additional term, but is now a functional of $a$ also, $I:=I[g,a]$. This is not the most helpful form of $I[g,a]$ and we rewrite it as follows. First note that $\mathbb{PT}\times_\mathbb{M}\mathbb{PT}=\mathbb{M}\times\mathbb{CP}^1\times\mathbb{CP}^1$ and we coordinatize it by $(x,\pi_1,\pi_2)$ by setting $$ (Z_1,Z_2)=\left( (x^{AA'}\pi_{1A'},\pi_{1A'}), (x^{BB'}\pi_{2B'},\pi_{2B'})\right)\, .$$ In these coordinates \begin{equation}\label{Omegaeqs} \Omega(Z_1)\wedge \Omega(Z_2)= (\pi_{1}\cdot\pi_{2})^2 \mathrm{D} \pi_1\wedge \mathrm{D}\pi_2 \, \d^4x \end{equation} where $$ \mathrm{D}\pi= \pi^{B'}\d\pi_{B'}\, , \quad \mbox{ and } \quad \pi_1\cdot\pi_2= \pi_1^{A'}\pi_{2A'}\, . $$ Thus $$ I[g,a]=\int_{\mathbb{PT}\times_\mathbb{M} \mathbb{PT}} \, \mathrm{tr}_a(g(Z_1)\wedge g(Z_2))(\pi_1\cdot\pi_2)^2 \mathrm{D}\pi_1\wedge\mathrm{D}\pi_2\wedge \d^4x\, . $$ In the non-abelian case, the integral formula for $G$ in terms of $g$ is: \begin{equation}\label{Gintformula} G_{A'B'}(x)=\int_{p^{-1}(x)} \pi_{A'}\pi_{B'}\, g\wedge \mathrm{D}\pi \end{equation} as before, but the integral must be performed in a holomorphic trivialisation of $E$ over the Riemann sphere $p^{-1}(x)$. The 2-form is then $$ G=G_{A'B'}\varepsilon_{AB}\d x^{AA'}\wedge\d x^{BB'}=\int_{p^{-1}(x)} g\wedge \Omega $$ with the same proviso concerning the frame for $E$, and so we can see that $I[g,a]=I[G]$. We will therefore consider the twistor action $$ S_T=S_{\mathrm{asd}}[a,g]-\frac\epsilon2 I[g,a]. $$ The gauge symmetry of this action is the group of gauge transformations of the bundle $E\rightarrow \mathbb{PT}$ together with $g\rightarrow g+\bar\partial_a\chi$. It is easily seen that the action is invariant under the group of gauge transformations of $E$. 
To see invariance under $g\rightarrow g+\bar\partial_a\chi$ for $I[g,a]$, note that in the frame that is holomorphic up the fibres of $p$ in which the trace is taken, $\bar\partial_a=\bar\partial$ on restriction to the fibres of $p$ and so the integral over one of the $\mathbb{CP}^1$ factors of a fibre will give zero if we replace the corresponding $g$ by $\bar\partial\chi$. The invariance of $S_{\mathrm{asd}}$ is elementary. \begin{propn} The action $S_T=S_{\mathrm{asd}}[a,g]- \frac \epsilon 2 I[a,g]$ is equivalent at the classical level to $S_{\mathrm{YM}}$. This is true both in the sense that gauge equivalence classes of solutions to the Euler Lagrange equations on twistor space are in $1:1$ correspondence with gauge equivalence classes of solutions to the Yang-Mills equations on space-time, and in the sense that the twistor action takes the same values on $(a,g)$ as the space-time action does on the corresponding $(A,G)$. \end{propn} \noindent {\bf Proof:} In this subsection we give a quick but inexplicit proof of this proposition. In the next we will develop more notation so as to be more explicit. Given $(E\to\mathbb{PT}, a,g)$ satisfying the variational equations of the action $S_T$, we wish to construct $(E\to\mathbb{M}, A,G) $ satisfying the Yang-Mills equations. We first define the bundle $E\rightarrow\mathbb{M}$ to be the bundle whose fibre at $x$ is the space of global $\bar\partial_a$-holomorphic sections of $E\to p^{-1}(x)\subset\mathbb{PT}$ (recalling that the bundle is assumed to be trivial on such lines). We then note that we can define $G\in \Omega^{2+}\otimes \mathrm{End} (E)$ at each $x$ to be the 2-form obtained by pushing down $g\wedge \Omega$ to $\mathbb{M}$ in the associated global holomorphic frame of $\mathrm{End}(E)$ over $p^{-1}(x)$. This necessarily provides a self-dual 2-form as before and we see that $$ I[g,a]=I[G]\, . $$ The classical equations of motion obtained by varying $g$ are \begin{equation}\label{fieldeq} \bar\partial a + a\wedge a= \epsilon \int_{Z'\in p^{-1}(p(Z)) } g(Z')\Omega(Z')\, , \end{equation} where the left hand side is evaluated at $Z$ and as usual the integration is in a global holomorphic trivialisation of $E$ over $p^{-1}(p(Z))$. The integral therefore yields the projection $G_{(0,2)}$ of $p^*G(p(Z))$ onto the $(0,2)$-forms at $Z$. Thus $$\bar\partial_a^2=G_{(0,2)}\, . $$ It follows that $f=\bar\partial_a^2$ has no component up the fibres of $p$. This allows us to define a connection $A$ on $E\to\mathbb{M}$ as follows. Pull back a section $s$ of $E\to \mathbb{M}$ to $E\to \mathbb{PT}$; then $\bar\partial_a s$ is holomorphic up the fibre of $p:\mathbb{PT}\to\mathbb{M}$. More concretely on $\mathbb{PT}$, $\pi^{A'}\partial_{AA'}\hook \bar\partial_a s$ is therefore holomorphic in $\pi_{A'}$. It is also global with homogeneity degree 1 over the Riemann sphere with homogeneous coordinates $\pi_{A'}$, and depends linearly on $s$. We can deduce from a generalization of Liouville's theorem that $\pi^{A'}\partial_{AA'}\hook\bar\partial_a s=\pi^{A'}(\partial_{AA'} +A_{AA'})s$ for some connection 1-form $A=A_{AA'}\d x^{AA'}$ on $\mathbb{M}$. This is in effect the standard Ward argument for constructing a connection $A$ on $E\to\mathbb{M}$ from $a$. The $\bar\partial_a$ operator can therefore be represented in a frame pulled back from $E\to\mathbb{M}$ as $$ \bar\partial_a=\frac{1}{\pi^{C'}\hat\pi_{C'}} \d x^{AB'}\hat\pi_{B'}\pi^{A'}(\partial_{AA'}+A_{AA'}) +\d\hat\pi_{C'}\frac{\partial}{\partial\hat\pi_{C'}}\, . 
$$ Now we claim that $S_{\mathrm{asd}}[a,g]=S_{\mathrm{asd}}[A,G]$. This follows by using the gauge invariance on twistor space to use a gauge pulled back from $E\to\mathbb{M}$. In this gauge, $a$ is the projection onto $(0,1)$-forms of $A_{AA'}\d x^{AA'}$ and $f$ is the projection onto $(0,2)$-forms of $F^+$. Thus $S_{\mathrm{asd}}[a,g]=\int \, \mathrm{tr} (F^+\wedge g)\wedge\Omega$. Integrating over the fibres of $p$ then gives directly that $S_{\mathrm{asd}}[a,g]=S_{\mathrm{asd}}[A,G]$. We have now reduced the description to that of the Chalmers \& Siegel Lagrangian and so we have obtained the appropriate field equations as claimed. $\Box$ \medskip We have not in fact proved everything here: we have only provided a map from solutions to the field equations associated to $S_T$ to those of $S_{\mathrm{YM}}$. To see that it is $1:1$ and onto gauge equivalence classes we need to work more explicitly, which we do in the next subsection. \subsection{The twistor action, field equations and solutions} \label{construction} We can therefore take the full twistor-space Lagrangian to be $$ S_T[a,g]=S_{\mathrm{asd}} [a,g]-\frac\epsilon 2 I[g,a]\, . $$ The equation of motion obtained by varying $g$ is given in equation (\ref{fieldeq}) but that obtained by varying $a$ is more complicated, and we now make the $I[g,a]$ term more explicit in order to calculate the equations of motion. We also show in this subsection how every solution to the Yang-Mills equations on space-time gives rise to a solution to the Euler-Lagrange equations of the twistor action and that every solution to the Euler-Lagrange equations of the twistor action is gauge equivalent to a space-time solution arising in this way. In the following our expressions will be functions of two or more twistors, $Z_1$, $Z_2$, $\ldots$ or $\pi$ spinors, $\pi_1$, $\pi_2$, \ldots. We will adopt the convention that $g_1$ will denote a function of $Z_1$ and so on. We first introduce global holomorphic frames $F(x,\pi)$ over the line $L_x$ corresponding to $x\in\mathbb{M}$ by $F(x,\pi):E_Z\rightarrow \mathbb{C}^r$, ($r$ is the rank of the bundle $E$) where $\bar\partial_a F|_{L_x}=0$ and $F(x,\pi)$ is unique up to $F(x,\pi)\rightarrow F(x,\pi)\gamma(x)$ where $\gamma$ is a function on $\mathbb{M}$ with values in the gauge group. We can then write $I[g,a]$ as $$ I[g,a]=\int_{\mathbb{M}\times\mathbb{CP}^1\times\mathbb{CP}^1} \, \mathrm{tr}( F_1^{-1}g_1F_1\wedge F_2^{-1} g_2F_2)\wedge \Omega_1\wedge\Omega_2\, , $$ where $F_1$ and $g_1$ are evaluated at $\pi_1$ and $Z_1$ with $Z_1=(x^{AA'}\pi_{1A'},\pi_{1A'})$, etc.. To reformulate this further, we note that the Green's function $K_{12}:=K(x,\pi_1,\pi_2)$ for the d-bar operator $\bar\partial_a|_{L_x}$ on sections of $E\otimes\O(-1)|_{L_x}$ is, for $Z_1, Z_2\in L_x $, $$ K_{12}=\frac 1{2\pi i}\frac{F_1F_2^{-1}}{\pi_1\cdot\pi_2} $$ thus using equation (\ref{Omegaeqs}) and ignoring certain multiples of $2\pi i$ (which can be absorbed into the definition of $\epsilon$) we can put \begin{equation}\label{newI} I[g,a]=\int_{\mathbb{M}\times \mathbb{CP}^1\times\mathbb{CP}^1} \, \mathrm{tr}(K_{21} g_1 K_{12}g_2) (\pi_1\cdot\pi_2)^4\mathrm{D}\pi_1\mathrm{D}\pi_2\;\d^4 x \end{equation} where $\mathrm{D}\pi=\pi^{A'}\d\pi_{A'}$ and we use the fact that $\Omega_1\wedge\Omega_2=(\pi_1\cdot\pi_2)^2\d^4x\wedge \mathrm{D}\pi_1\wedge\mathrm{D}\pi_2$ as above. The variation of $K$ with respect to $a$ is given by $$ \delta K_{12}=\int K_{13}\delta a_3 K_{32}\; \mathrm{D}\pi_3 \, . 
$$ We can use this to calculate the variation of $I[g,a]$ with respect to $a$ and hence the Euler-Lagrange equation obtained by varying $a$ in the action. This yields, after some manipulation, \begin{equation}\label{eqmotion2} \bar\partial_{a_3} g_3=\epsilon \int_{\mathbb{CP}^1\times\mathbb{CP}^1} [K_{31} g_1 K_{13}, K_{32} g_2 K_{23}](\pi_1\cdot\pi_2)^3\pi_{1(A'}\pi_{2B')}\; \mathrm{D}\pi_1\mathrm{D}\pi_2 \; \d^2x^{A'B'}_{(0,2)}\, . \end{equation} In this notation, the equation of motion from varying $g$ (\ref{fieldeq}) is \begin{equation}\label{eqmotion1} \bar\partial a_1+a_1\wedge a_1=\epsilon \int_{\mathbb{CP}^1} K_{12}g_2 K_{21} (\pi_1\cdot\pi_2)^2\pi_{2A'}\pi_{2B'}\mathrm{D}\pi_2\; \d^2x^{A'B'}_{(0,2)}\, , \end{equation} where in both the above two equations $\d^2x^{A'B'}_{(0,2)}$ denotes the $(0,2)$-part of $\d^2x^{A'B'}:=\varepsilon_{AB}\d x^{AA'}\wedge\d x^{BB'}$ which is $\d^2x^{A'B'}_{(0,2)} = \d^2x^{C'D'} \hat\pi_{C'}\hat\pi_{D'}\pi^{A'}\pi^{B'}/(\pi\cdot\hat\pi)^2$. As a check on these equations, it is helpful to see how they can be solved in terms of the standard space-time data for a solution to the full Yang-Mills equations. Thus, let $A$ be a connection 1-form on $\mathbb{M}$ for a solution to the full Yang-Mills equations and let $\epsilon G_{A'B'}$ be the self-dual part of its curvature. The space-time field equations are $$ \partial^A_{(A'}A_{B')A}+ A_{(A'}^AA_{B')A}=\epsilon G_{A'B'}\, , \qquad \nabla^{A'}_AG_{A'B'} =0\, , $$ where $\nabla_{AA'}=\partial_{AA'}+A_{AA'}$ is the gauge covariant derivative and is understood to act in the standard way on the adjoint representation. Using the standard Euclidean fibration $p: \mathbb{PT}\rightarrow\mathbb{M}$, we define $a$ to be the $(0,1)$ part of the pull-back of $A$ to $\mathbb{PT}$ and using Woodhouse (1985), we define \begin{eqnarray}\label{harmgauge} a&=&\frac1{\pi\cdot\hat\pi}A_{AA'}\pi^{A'}\hat\pi_{B'}\d x^{AB'}\, , \nonumber \\ g&=&\frac1{(\pi\cdot\hat\pi)^4} \left( 3G_{A'B'}\hat\pi^{A'} \hat\pi^{B'} \mathrm{D}\hat\pi + \nabla_{AA'}G_{B'C'}\hat\pi^{A'} \hat\pi^{B'} \hat\pi^{C'}\hat\pi_{D'} \d x^{AD'}\right) \, . \end{eqnarray} It can now be checked that if the Yang-Mills equations hold, we have, in this gauge, \begin{eqnarray*} \bar\partial a+ a\wedge a&=& \epsilon G_{A'B'}\pi^{A'}\pi^{B'}\d^2x_{(0,2)} \\ \bar\partial_a g&=&\frac\epsilon{(\pi\cdot\hat\pi)^2} \left( \hat\pi^{A'}\hat\pi^{B'}[ G_{A'}^{E'},G_{B'E'}] \d^2 x_{(0,2)}\right) \, , \end{eqnarray*} where $\d^2x_{(0,2)}$ is the $(0,2)$-form with values in $\O(-2)$ $$ \d^2x_{(0,2)}:=\frac1{(\pi\cdot\hat\pi)^2}\varepsilon_{AB} \hat\pi_{A'}\hat\pi_{B'}\d x^{AA'}\wedge\d x^{BB'} $$ In this gauge, the matrix $F$ can be taken to be the identity, $K_{12}=\frac1{2\pi i\,(\pi_1\cdot\pi_2)}$, and equations (\ref{eqmotion1}) and (\ref{eqmotion2}) can be verified using equation (\ref{Gintformula}). 
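As an immediate consistency check in this gauge (ignoring, as before, overall multiples of $2\pi i$), substituting $K_{12}=\frac1{2\pi i\,(\pi_1\cdot\pi_2)}$ into the right hand side of equation (\ref{eqmotion1}) and using equation (\ref{Gintformula}) gives $$ \epsilon \int_{\mathbb{CP}^1} K_{12}g_2 K_{21} (\pi_1\cdot\pi_2)^2\pi_{2A'}\pi_{2B'}\mathrm{D}\pi_2\; \d^2x^{A'B'}_{(0,2)} \sim \epsilon\, G_{A'B'}(x)\, \d^2x^{A'B'}_{(0,2)} =\epsilon\, G_{A'B'}\pi^{A'}\pi^{B'}\d^2x_{(0,2)}\, , $$ which is precisely the first of the two equations displayed above. 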
We finally note that both the action and the field equations are invariant under the full group of gauge transformations on twistor space $$ (\bar\partial_a,g)\rightarrow (\bar\partial_{a'},g')=(H^{-1}\bar\partial_a H, H^{-1}(g+\bar\partial_a \chi) H)$$ where $H$ is an arbitrary smooth complex gauge transformation of the bundle $E \rightarrow\mathbb{PT}$ and $\chi$ an arbitrary smooth section of $\mathrm{End}(E)\otimes\O(-4)$ over $\mathbb{PT}$. Thus, given an arbitrary solution to equations (\ref{eqmotion1},\ref{eqmotion2}), we can find a gauge transformation to a frame that is holomorphic on the fibres of $p:\mathbb{PT}\rightarrow \mathbb{M}$ (so that $a$ vanishes on restriction to the fibres of $p$) and so that $g$ is a harmonic representative on each of the fibres of $p$, see Woodhouse (1985). If, furthermore, $(a,g)$ are solutions to (\ref{eqmotion1},\ref{eqmotion2}) then we know from the previous subsection that they correspond to a solution of the Yang-Mills equations and that $a$ and the vertical part of $g$ have the form given above. It is then straightforward to see that the solution is precisely as given above up to a space-time gauge transformation. We therefore see that the solutions to the Euler-Lagrange equations of the twistor action are in $1:1$ correspondence with gauge equivalence classes of solutions to the space-time Yang-Mills equations. \section{Twistor-string Yang-Mills generating functionals from the twistor action}\label{twistorstring} The twistor-string formulae refer to a holomorphic Chern-Simons theory on super twistor space $\mathbb{PT}_s$ which is an appropriate subset of $\mathbb{CP}^{3|4}$. This space is obtained by introducing odd homogeneous twistor coordinates $\psi_i$, $i=1,\ldots,4$ in addition to the standard bosonic homogeneous coordinates $Z^\alpha$ so that $\mathbb{CP}^{3|4}$ is the space of non-zero $(Z^\alpha,\psi_i)\in\mathbb{C}^{4|4}$ modulo the equivalence relation $(Z^\alpha,\psi_i)\sim (\lambda Z^\alpha,\lambda\psi_i), \lambda\in\mathbb{C}^*$. Chiral Super-Minkowski space $\mathbb{M}_s$ is then $\mathbb{R}^{4|8}$ with coordinates $(x^{AA'},\theta^{iA'})$ and there is the incidence relation with supertwistor space given by \begin{equation}\label{superincidence} (\omega^A,\pi_{A'}, \psi_i)=(x^{AA'}\pi_{A'}, \pi_{A'}, \theta^{A'}_i\pi_{A'})\, . \end{equation} Since we have stripped out all the superpartners except those of helicity $\pm 1$, we set $\Psi=\psi_1\psi_2\psi_3\psi_4$ and this will be the supersymmetric quantity that we will use most. In the following our expressions will be functions of two or more twistors, $Z_1$, $Z_2$, $\ldots$ or $\pi$ spinors, $\pi_1$, $\pi_2$, \ldots. We will again adopt the convention that $g_1$ will denote a function of $Z_1$ and so on. \subsection{Supersymmetric D-instanton reformulation of twistor action}\label{susyaction} To make closer contact with twistor-string formulae, we can consider the $(\pi_1\cdot\pi_2)^4$ term in equation (\ref{newI}) to arise from a superspace integral using the identity $$ \int \Psi_1\Psi_2\; \d^8\theta=(\pi_1\cdot\pi_2)^4\, , $$ where $\Psi_1=\prod_{i=1}^4 \theta_i^{A'}\pi_{1A'}$ and similarly for $\Psi_2$. Therefore we can write: $$ I[g,a]= \int_{\mathbb{M}_s}\int_{L_{(x,\theta)} \times L_{(x,\theta)}} \, \mathrm{tr}(K_{21} \Psi_1g_1 K_{12}\Psi_2g_2) \; \mathrm{D}\pi_1\mathrm{D}\pi_2\, \d^4 x\, \d^8\theta $$ We now wish to reformulate the Green's functions $K_{12}$ in terms of vacuum expectation values of fermion currents. 
We use a device introduced in Mason, Singer \& Woodhouse (2002) in the context of the Ward construction for integrable systems. Introduce fermion spinor fields $\alpha$ and $\beta$ (i.e., fields of homogeneity $-1$) on each $L_x$ taking values in $E$ and $E^*$ respectively with action $$S[\alpha,\beta]=\int_{L_{(x,\theta)}}\beta\bar\partial_a\alpha\wedge\mathrm{D}\pi $$ where $L(x,\theta)$ is the line in super twistor space given by holding $(x,\theta)$ fixed in equation (\ref{superincidence}). These will be the D-instantons of twistor-string theory. Then $$ K_{12}=\langle\alpha_1\beta_2\rangle $$ where $\langle \O\rangle$ denotes the vacuum expectation of the operator $\O$ associated to the quantum field theory of the fermions $\alpha $ and $\beta$ on $\mathbb{CP}^1$. In the above we are taking $\alpha_1$ and $\beta_2$ to be associated to the same line, $L(x,\theta)$; if they are taken to be associated to $L(x,\theta)$ and $L(x',\theta')$ respectively, we would have instead $K_{12}$ if $(x,\theta)=(x',\theta')$ or zero otherwise---there are no singular terms in $x$. With this, we can express $I[g,a]$ as follows: \begin{eqnarray} I[g,a]&=&\int_{\mathbb{M}_s} \left\langle\int_{L_{(x,\theta)}\times L_{(x,\theta)}} \, \mathrm{tr}(J_{a1} \Psi_1 g_1)\, \mathrm{tr}(J_{a2} \Psi_2 g_2) \right\rangle \d^4x\d^8\theta \nonumber \\ &=&\int_{\mathbb{M}_s} \d^4x\d^8\theta \left\langle \left(\int_{L_{(x,\theta)}} \, \mathrm{tr}(J_{a} \Psi g)\right)^2 \right\rangle \label{YMfin} \end{eqnarray} where $J_a=:\alpha\beta:\mathrm{D}\pi$ is the current associated to $\alpha$ and $\beta$ (the $::$ denoting Wick ordering and the subscript $a$ denoting dependence on $a$) and the subscript 1 or 2 denotes evaluation at a point of the first or second factor of $L(x,\theta)\times L(x,\theta)$. \subsection{A digression concerning generating functions for scattering amplitudes}\label{generatingfnls} We now wish to use this twistor form of the Yang-Mills action to show that the conjectured formula for the generating functional $\mathcal{A}_{\mathrm{TS}}[a,g]$ for tree-level QCD amplitudes from twistor-string theory is the same as the generating functional $\mathcal{A}_{\mathrm{YM}}$ obtained from the standard Yang-Mills action. We first need to establish some basic facts about these generating functions. We are concerned here with on-shell generating functionals rather than the more common off-shell generating functional, usually denoted $Z[J]$, which is a functional of a source usually denoted $J$ which is an arbitrary function on space-time. Instead, both the twistor-string and the standard Yang-Mills generating functionals $\mathcal{A}[a,g]$ are functionals of linearised free gluon fields, which we will represent by their on-shell twistor data $(a,g)$ where on shell means that they satisfy the linearised equations at $\epsilon=0$: $a$ and $g$ are therefore both linear Dolbeault cohomology classes of homogeneity $0$ and $-4$ respectively. The $\mathcal{A}[a,g]$ are generating functionals in the sense that $n$-point scattering amplitudes are obtained as functionals of positive frequency linear fields by taking the $n$th functional derivative of $\mathcal{A}_{\mathrm{TS}}$ with respect to $(a,g)$ in the directions of the given positive frequency linear fields and evaluating at $(a,g)=(0,0)$. The fact that we are considering a functional only of positive frequency fields means that the generating functional generates diagrams with no incoming fields, just outgoing fields. 
This is sufficient as crossing symmetry then allows one to construct all other processes. For a generic quantum field theory of a field $\phi$ with action, say $S[\phi]=\int\d^4x\left(\frac{\scriptstyle 1}{\scriptstyle 2}(\partial_a\phi\partial^a\phi - m^2\phi^2)-\lambda V(\phi)\right)$, such a generating functional would have the path-integral expression $$ \mathcal{A}[\phi]=\int \mathrm{D}\tilde \phi \;\exp \frac i\hbar S[\tilde \phi] $$ where the functional integration is understood to be over fields $\tilde \phi$ such that, as $t\rightarrow +\infty$, the negative frequency part of $\tilde\phi-\phi$ tends to zero, and as $t\rightarrow -\infty$, the positive frequency part of $\tilde\phi-\phi$ tends to zero, Faddeev \& Slavnov (1991). Thus $\mathcal{A}[\phi]$ is the `wave functional' of the theory. Given the complications associated with defining even the perturbation series for a functional integral, it is difficult to deduce rigorously that the quantum theory will be correctly reproduced after a manipulation of the exponential of the action in the path integral. However, more can be said about the classical limit. The classical limit $\mathcal{A}^{\mathrm{cl}}[\phi]$ generates all the tree diagrams. It can be obtained by first constructing the classical solution $\tilde \phi$ that is appropriately asymptotic to $\phi$ as above by iterating the integral form of the field equations to produce a sequence of fields $\phi_n$ such that $$ \phi_{n+1}(x)= \phi(x)+ \lambda \int \Delta_F(x,x')V'(\phi_n(x'))\d^4x' $$ where $\Delta_F$ is the Feynman propagator that inverts $\partial_a\partial^a +m^2$ and $\phi_0=\phi$. We can then define $\tilde\phi=\lim_{n\rightarrow\infty}\phi_n$ as a power series in $\lambda$. Then we have $$\mathcal{A}^{\mathrm{cl}}=\exp \frac i\hbar S[\tilde\phi].$$ In such perturbative studies, the free field $\phi$ is taken to be a plane wave. Such fields are entire on complex Minkowski space (but are singular at infinity in the conformal compactification). It follows from the analyticity properties of the Feynman propagator that the corresponding solution $\tilde\phi$ can be analytically continued to the Euclidean section from the Minkowski signature section. We can therefore assume that all integrals are over the Euclidean section $\mathbb{M}$. Following the above, the generating functional for tree level Yang-Mills amplitudes $\mathcal{A}_{\mathrm{YM}}[a,g]$ is given by \begin{equation}\label{YMgen1} \mathcal{A}_{\mathrm{YM}}[a,g]=\exp \frac i\hbar \left(S_{\mathrm{asd}}[\tilde a,\tilde g]+ \frac{\epsilon^2\hbar}{2i}I[\tilde g,\tilde a]\right)\, , \end{equation} where it is understood that $(\tilde a,\tilde g)=\lim_{n\rightarrow\infty}(a_n,g_n)$ where $$ (a_{n+1},g_{n+1})=(a,g) + \bar\partial^{-1}\left( -a_n\wedge a_n + \epsilon \ldots, -a_n\wedge g_n + \epsilon\ldots\right) $$ where the $\ldots$ denotes the terms on the right hand sides of equations (\ref{eqmotion1}) and (\ref{eqmotion2}). We deduce from this that any formal manipulation of the exponential of the action that preserves the value it takes on solutions to the equations will give rise to the correct tree diagrams in perturbation theory. This will nevertheless be somewhat formal as, even at tree level, these series exhibit infrared divergences. Infrared divergences are a standard problem in quantum field theory with a number of standard resolutions and we will not consider them further. 
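To illustrate how this iteration generates tree diagrams in the scalar example above, take the hypothetical cubic interaction $V(\phi)=\phi^3/3!$ (a choice made purely for illustration). The first step gives $$ \tilde\phi(x)=\phi(x)+\lambda\int\Delta_F(x,x')\,\frac{\phi(x')^2}2\,\d^4x'+O(\lambda^2)\, , $$ and, since $\phi$ satisfies the free field equations, the $O(\lambda)$ part of $S[\tilde\phi]$ collapses to $-\lambda\int V(\phi)\d^4x$, the 3-point tree vertex evaluated on the free fields; the higher order terms join such vertices together by Feynman propagators into trees. 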
\subsection{The twistor string generating functionals}\label{twistorstringgenfnls} The conjectures for the twistor-string form of the generating functional are only made with confidence in the classical approximation corresponding to tree diagrams in perturbation theory and rational (genus 0) curves in twistor space; we restrict attention to this classical limit here and omit the `cl' superscript on $\mathcal{A}$ in the following. There is some evidence that the conjecture might be valid for the full quantum field theory (which would correspond to loops in perturbation theory and curves of higher genus in twistor space in twistor-string theory) but we will not be able to say anything definitive here. In its simplest form, the generating functional is given by: \begin{equation}\label{tsgenerate} \mathcal{A}_{\mathrm{TS}}[a,g] = \sum_d \int_{\mathscr{M}^d_s} {\mathrm{Det}}(\bar\partial_{a+\epsilon\Psi g}|_C)\;\d\mu \end{equation} Here $C\in \mathscr{M}^d_s$ where $\mathscr{M}^d_s$ is a totally real submanifold (or contour) in the space of connected degree-$d$ rational curves in super-twistor space, $\epsilon$ is a small parameter used to expand about the self-dual sector of the theory and $\d\mu$ is a naturally defined measure on $\mathscr{M}^d_s$. This determinant has a standard functional integral representation $$ \mathcal{A}_\mathrm{TS} [a,g]=\sum_d \int_{\mathscr{M}^d_s} \int\mathrm{D}\alpha\mathrm{D}\beta \exp \left(\int_C \beta\bar\partial_{a+\epsilon\Psi g}\alpha \right)\, , $$ where the functional integral is over the space of $\alpha$s and $\beta$s which are fermionic spinors on each $C$ with values in $E$ and $E^*$ respectively. These conjectured forms are only confidently expected to be valid for tree diagrams when we consider connected rational curves and a special normal form of the Dolbeault representatives for $(a,g)$. Here we will only make contact with the MHV diagram formulation of twistor-string theory, Cachazo, Svrcek and Witten (2004), in which instead of connected rational curves, we consider maximally disconnected rational curves so that each $C$ is the union of $d$ lines (degree 1 curves) in super-twistor space. In this approach, the $d$ lines need to be connected into a tree by Chern-Simons propagators. It has been argued by Gukov, Motl and Nietzke (2004) that this disconnected formulation is equivalent to the connected formulation. To obtain a generating functional in the MHV diagram formulation, we write $C=\cup_{r=1}^d L(x_r,\theta_r)$ where $(x_r,\theta_r)$ are $d$ points in super Minkowski space $\mathbb{M}_s$. The moduli space of such disconnected curves is therefore the $d$-fold product of super-Minkowski space $\mathbb{M}^d_s$. 
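Although we will not need the explicit amplitudes below, it is worth recalling schematically where the familiar MHV formulae arise in this picture: on a single line, Wick's theorem reduces correlators of the currents to cyclic products of the two-point functions $K_{rs}$, so that (in a holomorphically trivial frame, and ignoring factors of $2\pi i$ as before) an $n$-fold current correlator produces factors of the form $$ \left\langle \, \mathrm{tr}(Jg_1)\cdots\, \mathrm{tr}(Jg_n)\right\rangle \sim \sum_{\text{cyclic orders}} \frac{\, \mathrm{tr}(g_1g_2\cdots g_n)}{(\pi_1\cdot\pi_2)(\pi_2\cdot\pi_3)\cdots(\pi_n\cdot\pi_1)}\, , $$ which is the origin of the characteristic denominators of the MHV amplitudes. 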
The generating function for this version of the theory needs to include the Chern-Simons action to give
$$ \mathcal{A}_\mathrm{TS}[a,g]= \mathrm{e}^{\frac i\hbar S_{\mathrm{asd}}[\tilde a,\tilde g]}\sum_d \int_{\mathbb{M}^d_s}\d\mu_d\; \int\mathrm{D}\alpha\mathrm{D}\beta\; \exp \left(\sum_{r=1}^d\int_{L(x_r,\theta_r)} \beta\bar\partial_{\tilde a+\epsilon\Psi \tilde g}\alpha \right)\, , $$
where $\d\mu_d= \Pi_{r=1}^d\d^4 x_r\d^8\theta_r$ and here, as in the previous subsection, $(\tilde a, \tilde g)$ are understood to be the solutions to the classical field equations obtained by iterating the appropriate integral versions of the field equations with inhomogeneous terms given by $(a,g)$.\footnote{In the main applications of twistor string theory $(a,g)$ are taken to be the Penrose transform of plane waves which are entire on complex Minkowski space, but singular at infinity. Hence, $(a,g)$ can be defined smoothly over the complement of the line in $\mathbb{PT}$ corresponding to the point at infinity in space-time. Wick rotation, using the analyticity properties of the Feynman propagator and its counterpart on twistor space, can then be invoked to analytically continue the integrals over the Euclidean section. We will therefore assume that all integrals are over the Euclidean section $\mathbb{M}$.}

Expanding this in $\epsilon$, it is straightforward to see that the supersymmetric integrals over $\theta_r$ only give nontrivial contributions from terms in the expansion in which there are precisely two $\Psi$s integrated over each set of $\theta_r$. Thus $\mathcal{A}_\mathrm{TS}=\sum_d \epsilon^{2d}\mathcal{A}_\mathrm{TS}^d$ where
\begin{eqnarray} \mathcal{A}_\mathrm{TS}^d&=& \mathrm{e}^{\frac i\hbar S_{\mathrm{asd}}[\tilde a,\tilde g]} \int_{\mathbb{M}^d_s} \d\mu_d \int\mathrm{D}\alpha\mathrm{D}\beta \; \mathrm{e}^{ \left(\sum_r\int_{L_{(x_r,\theta_r)}} \beta\bar\partial_{\tilde a}\alpha\right)} \frac{(2d)!}{2^d d!}\frac{\Pi_s\left(\int_{L_{(x_s,\theta_s)}}\beta \Psi \tilde g\alpha \right)^2}{(2d)!} \nonumber\\ &=& \mathrm{e}^{\frac i\hbar S_{\mathrm{asd}}[\tilde a,\tilde g]} \int_{\mathbb{M}^d_s} \Pi_{r=1}^d\d^4 x_r\d^8\theta_r \frac{\left\langle\left(\int_{L_{(x_r,\theta_r)}}\beta \Psi \tilde g\alpha \right)^2\right\rangle}{2^dd!} \nonumber\\ &=&\mathrm{e}^{\frac i\hbar S_{\mathrm{asd}}[\tilde a,\tilde g]} \frac{I[\tilde g,\tilde a]^d}{2^d d!} \, , \label{maincalc} \end{eqnarray}
where the combinatorial factor in the first line counts the ways of distributing the $2d$ factors in the $2d$th term of the expansion of the exponential into pairs, one pair of integrals for each of the $d$ copies of $\mathbb{M}_s$, and in the second line we have used the formula for $I[g,a]$ in equation (\ref{YMfin}). We can now resum over $d$ to obtain
\begin{equation}\label{YMgen} \mathcal{A}_\mathrm{TS}=\exp \frac i\hbar \left(S_{\mathrm{asd}}[\tilde a,\tilde g]+ \frac{\epsilon^2\hbar}{2i}I[\tilde g,\tilde a]\right)\, . \end{equation}
As can be seen, up to a redefinition of the expansion parameter, this gives rise to the classical Yang-Mills action generating functional, as desired.
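The combinatorial bookkeeping in the first line of (\ref{maincalc}) can be verified mechanically. The following sketch (an illustrative aside using sympy, not part of the derivation) checks the multinomial count and the resummation used above:
\begin{verbatim}
import sympy as sp

# Check the combinatorics: the terms of (x_1+...+x_d)^(2d) contributing two
# factors to each of the d copies carry the multinomial coefficient (2d)!/2^d.
d = 3
xs = sp.symbols(f'x0:{d}')
poly = sp.expand(sum(xs) ** (2 * d))
coeff = poly
for x in xs:
    coeff = coeff.coeff(x, 2)        # pick out the x0^2 x1^2 ... term
assert coeff == sp.factorial(2 * d) / 2 ** d

# Dividing by (2d)! from the exponential and by d! for the unordered lines
# leaves I^d/(2^d d!), which resums to exp(I/2):
n, t = sp.symbols('n t')
res = sp.summation((t / 2) ** n / sp.factorial(n), (n, 0, sp.oo))
assert sp.simplify(res - sp.exp(t / 2)) == 0
\end{verbatim}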
\subsection{Extension to the full quantum field theory}

The full quantum field theoretic generating functionals for Yang-Mills are expressed formally in terms of the path integral
$$ \mathcal{A}[a,g]=\int \mathrm{D} \tilde a\mathrm{D}\tilde g \exp\frac i\hbar \left( S_{\mathrm{asd}}[\tilde a,\tilde g] + \epsilon I[\tilde g,\tilde a]\right) $$
It is clear that formally the expansion and resummation in (\ref{maincalc}) and (\ref{YMgen}) will be possible in the full quantum field theoretic path integral, as in the generating functionals for tree diagrams discussed above, to yield
$$ \mathcal{A}[a,g]=\int \mathrm{D} \tilde a\mathrm{D}\tilde g \mathrm{D}\alpha\mathrm{D}\beta\;\mathrm{e}^{\frac i\hbar S_{\mathrm{asd}}[\tilde a,\tilde g]} \sum_d\int_{\mathbb{M}_s^d}\d\mu_d\; \mathrm{e}^{\sum_{r=1}^d\int_{L(x_r,\theta_r)}\beta\bar\partial_{\tilde a+ \epsilon \Psi\tilde g}\alpha} $$
However, in the full quantum field theory we would need to consider gauge fixing. There is a useful Poincar\'e invariant twistor space gauge in which the $(0,1)$-forms $(a,g)$ on twistor space are orthogonal to the fibres of the projection $\mathbb{PT}\to \mathbb{CP}^1$. This is particularly useful because it linearizes the Chern-Simons theory, as $a\wedge a$ and $a\wedge g$ then vanish identically. This is the gauge in which most of the twistor-string calculations have so far taken place. There is also a gauge that is adapted to the space-time description, in which the cohomology classes are required to be harmonic up the fibres of the fibration $p:\mathbb{PT}\to\mathbb{M}$; this reduces $(a,g)$ to the forms given in equations (\ref{harmgauge}). In this latter gauge the quantum field theory will be equivalent to the standard space-time formulation because the Faddeev-Popov determinants will be independent of $A$ and $G$. The task then is to use BRST to see that the quantum theory is the same in these two different gauges. In particular, we would like to see that all Yang-Mills loops, and only Yang-Mills loops, are obtained by some suitable interpretation or generalisation of equation (\ref{tsgenerate}) in which general algebraic curves of higher genus contribute to $\mathscr{M}^d_s$. We will discuss this problem in a subsequent paper.

\section{Conformal gravity}\label{confgrav}

Berkovits and Witten (2004) have analyzed the twistor-string formulation of conformal (super-)gravity. In this section we give the twistor construction, action and twistor-string reformulation for conformal gravity. This proceeds very much analogously to the corresponding ideas for Yang-Mills and so we will sketch the ideas relatively briefly.

We take twistor space $\mathscr{PT}$ now to be a manifold diffeomorphic to $\mathbb{R}^4\times S^2$ endowed with an almost complex structure $\mathcal{J}$, i.e., $\mathcal{J}$ is an endomorphism of the real tangent bundle satisfying $\mathcal{J}^2=-1$. For the case of anti-self-dual conformal gravity, Berkovits \& Witten provide an analogue of the truncation of the Chern-Simons action on super twistor space which is a functional of the almost complex structure tensor $\mathcal{J}$ and a second tensor $k$. As usual, one can use $\mathcal{J}$ to define subbundles $T^{(0,1)}$ and $T^{(1,0)}$ of the complexified tangent bundle $T_\mathbb{C}$ as the $-i$ and $+i$ eigenspaces of $\mathcal{J}:T_\mathbb{C}\to T_\mathbb{C}$ respectively, and then define correspondingly subbundles $\Omega^{(p,q)}$ of the bundles $\Omega^{p+q}$ of complex differential forms.
Similarly, $\partial$- and $\bar\partial$-operators can be defined as the projections of the exterior derivative $\d$ acting on sections of $\Omega^{(p,q)}$ onto $\Omega^{(p+1,q)}$ and $\Omega^{(p,q+1)}$ respectively. However, in general, we will have that $N:=\bar\partial^2\in\Omega^{(0,2)}\otimes T^{(1,0)}$ does not vanish. In order for the ingredients to make contact with ordinary twistor theory, we must require that $\mathcal{J}$ is chosen so that the canonical bundle $\Omega^{(3,0)}$ has Chern class $-4$ on the $S^2$ factors. The tensor $k$ is a section of $\Omega^{(1,1)}\otimes \Omega^{(3,0)}$.

The twistor space Lagrangian for the anti-self-dual field is
$$ S[\mathcal{J},k]=\int_{\mathscr{PT}}(\bar\partial^2, k) $$
where the pairing $(,)$ denotes both the contraction of the holomorphic $T^{(1,0)}$ index of $N=\bar\partial^2$ with the $\Omega^{(1,0)}$ index of $k$, and the wedge product of the antiholomorphic form indices with each other and with the $\Omega^{(3,0)}$ indices. It is easily seen that the action is diffeomorphism invariant. Furthermore we have

\begin{lemma} The action $S[\mathcal{J},k]$ is invariant under $k\rightarrow k+ \bar\partial c$, where $c$ is a compactly supported section of $\Omega^{(1,0)}\otimes\Omega^{(3,0)}$. \end{lemma}

\Proof This follows from an identity obeyed by $N$ that arises as follows. In general the exterior derivative maps
$$ \d:\Omega^{(p,q)}\rightarrow\Omega^{(p+2,q-1)} \oplus\Omega^{(p+1,q)}\oplus\Omega^{(p,q+1)}\oplus\Omega^{(p-1,q+2)} \, . $$
The map $\d:\Omega^{(p,q)}\rightarrow\Omega^{(p+2,q-1)}$ is given by contraction with the vector index of $-N$ and wedge product over the form indices, which we can write as $\alpha\rightarrow -N\hook\wedge \alpha$. The map $\d:\Omega^{(p,q)}\rightarrow\Omega^{(p-1,q+2)}$ is similarly determined by $\bar N$. It is a consequence of $\d^2=0$ that for $\alpha\in\Omega^{(1,0)}$, $\bar\partial(N\hook\alpha)-N\hook\wedge\bar\partial\alpha=0$. We can therefore see that if $k=\bar\partial (c\otimes \nu)$, where $\nu$ is a section of $\Omega^{(3,0)}$ and $c$ a section of $\Omega^{(1,0)}$, then we have
$$ (N,\bar\partial (c\otimes \nu))= (N\hook\wedge\bar\partial c)\wedge \nu + (N\hook c)\wedge\bar\partial\nu=\d(N\hook c\wedge\nu) $$
and this implies the appropriate gauge invariance.$\Box$
\medskip

The field equations from this action are that $\bar\partial^2=0$, i.e., that $\mathcal{J}$ should be integrable, and that $\bar\partial k=0$. Given the gauge invariance, $k$ defines an element of $H^1(\mathscr{PT}, \Omega^{(1,0)}\otimes\Omega^{(3,0)})$. The standard nonlinear-graviton construction, Penrose (1976), applied to $\mathscr{PT}$ constructs a complex 4-manifold $\mathscr{M}$ with holomorphic conformal structure $[g]$ that has anti-self-dual Weyl curvature. $\mathscr{M}$ is the space of rational curves in $\mathscr{PT}$ with normal bundle $\O(1)\oplus\O(1)$ (this requires either the existence of one rational curve in $\mathscr{PT}$ with normal bundle $\O(1)\oplus\O(1)$ or that $\mathcal{J}$ be close to the standard complex structure on a neighbourhood of a line in $\mathbb{CP}^3$). The Penrose transform for $k$ leads to a self-dual spinor $K_{A'B'C'D'}$ obeying the equation
\begin{equation}\label{linsdweyl} (\nabla^{A'}_A\nabla^{B'}_B + \Phi^{A'B'}_{AB})K_{A'B'C'D'}=0\, , \end{equation}
which are the linearised self-dual conformal gravity equations, with $K_{A'B'C'D'}$ playing the role of an infinitesimal self-dual Weyl spinor on the anti-self-dual background.
We will see how this can be done explicitly in a somewhat more general context later. We note, following Atiyah, Hitchin and Singer (1978), that $\mathscr{M}$ admits a real slice $M$ on which the conformal structure has Euclidean signature iff $\mathscr{PT}$ admits a conjugation $\hat{\;}:\mathscr{PT}\to\mathscr{PT}$ that reverses the sign of $\mathcal{J}$ and has no fixed points. The real space-time $M$ is then the space of rational curves that are left invariant by the conjugation.

Following Berkovits \& Witten, we note that the above action corresponds to the space-time action
$$ S_{\mathrm{asd}}[g, K]=\int_M \psi^{A'B'C'D'}K_{A'B'C'D'}\d^4x $$
for anti-self-dual conformal gravity on space-time, where $g$ is a conformal structure and $\psi_{A'B'C'D'}$ is its self-dual Weyl spinor. The field equations imply the vanishing of $\psi_{A'B'C'D'}$ and equation (\ref{linsdweyl}) for $K_{A'B'C'D'}$.

\subsection{The extension to non anti-self-dual fields}

This ASD space-time action can be extended to full conformal gravity if we include the term
$$ I[g,K]=\frac\epsilon2\int_M K^{A'B'C'D'}K_{A'B'C'D'}\d^4x\, . $$
We will reformulate this on twistor space in terms of $k$ to obtain $I[k, \mathcal{J}]$. We will take our integral to be that of a product of $k_1$ and $k_2$ over an 8-dimensional contour in $\mathscr{PT}\times\mathscr{PT}$. We must first develop the Penrose non-linear graviton construction in the case that the complex structure $\mathcal{J}$ is not integrable, in order to define the ingredients that we will need.

We first introduce a conjugation $\hat{\;}:\mathscr{PT}\rightarrow \mathscr{PT}$, $\hat{\;}^2=1$, that reverses $\mathcal{J}$, i.e., $\hat{\;}^*\mathcal{J}=-\mathcal{J}$. There are two types of such conjugations normally employed in twistor theory, depending on whether the conjugation has fixed points in twistor space or not. The latter case leads to Euclidean signature on space-time and we will assume that to be the case hereon. We now consider the moduli space $\mathscr{M}$ of pseudo-holomorphic rational curves in $\mathscr{PT}$, i.e., the space of embedded $S^2$s in $\mathscr{PT}$ in the same topological class as the $S^2$ factors in $\mathscr{PT}=\mathbb{R}^4\times S^2$, such that $\mathcal{J}$ leaves the tangent space invariant, inducing a complex structure thereon. Theorems in McDuff and Salamon (2004) imply that $\mathscr{M}$ exists and is 8-dimensional if $\mathcal{J}$ is close to the standard complex structure on a neighbourhood of a line in $\mathbb{CP}^3$ (and we will assume this to be the case hereon). This follows from the ellipticity of the equations defining such a $\mathcal{J}$-holomorphic curve and the index theorem applied to its linearization. The conjugation $\hat{\;}$ induces a conjugation $\hat{\;}:\mathscr{M}\rightarrow \mathscr{M}$, $\hat{\;}^2=1$, and we define $M$ to be the (4-dimensional) fixed point set of $\hat{\;}$ on $\mathscr{M}$. We take $M$ to be our candidate space-time, and we will have a projection $p:\mathscr{PT}\rightarrow M$ as a consequence of the fact that, with our assumptions, there will be a unique rational curve in $\mathscr{PT}$ through $Z$ and $\hat Z$. The fibres of $p$ are, by construction, Riemann spheres, $\mathbb{CP}^1$.

We define $\mathscr{T}$ to be the total space of the line bundle $\left(\Omega^{(3,0)}\right)^{1/4}$ (this 4th root exists as a consequence of our assumptions on the topology of $\mathscr{PT}$ and $\mathcal{J}$, in particular that $\Omega^{(3,0)}$ has Chern class $-4$).
Since $\Omega^{(3,0)}$ is an almost complex line bundle, its total space and its powers are almost complex, so that $\mathscr{T}$ has an almost complex structure. We denote the complex line bundles $\left(\Omega^{(3,0)}\right)^{-n/4}$ by $\O(n)$. On restriction to each $\mathbb{CP}^1$ fibre, $\mathscr{T}$ will be a line bundle of degree $-1$ and is hence the tautological bundle on each $\mathbb{CP}^1$ fibre of $p$. Let $\tilde p:\mathscr{T}\to M$ denote the projection induced by $p$. The fibres of $\tilde p$ minus the zero section are canonically identifiable with the complement of the zero-section in a rank two vector bundle with structure group $\mathrm{SU}(2)$ over $M$ and, with an abuse of notation, we will think of $\mathscr{T}$ as being this complex rank two vector bundle.

Introduce linear coordinates $\pi_{A'}$, $A'=0',1'$, on the fibres of $\tilde p$. Define the Euler homogeneity operator $\Upsilon=\pi_{A'}\partial/\partial\pi_{A'}$. We choose a frame for $\Omega^{(1,0)}(\mathscr{T})$ as follows. Choose 1-forms $\mathrm{D}\pi_{A'}$ in $\Omega^{(1,0)}(\mathscr{T})$ of homogeneity degree 1 in $\pi_{A'}$, ${\mathcal{L}}_\Upsilon \mathrm{D}\pi_{A'}=\mathrm{D}\pi_{A'}$, and so that on restriction to the fibres of $\tilde p$, $\mathrm{D}\pi_{A'}=\d\pi_{A'}$. In order to achieve this, in general $\mathrm{D}\pi_{A'}$ will have to have non-holomorphic dependence on $\pi_{A'}$. The 1-form $\mathrm{D}\pi:=\pi^{A'}\mathrm{D}\pi_{A'}$ descends to $\mathscr{PT}$ to give a 1-form with values in $\O(2)$. At each point we can find a pair of complex 1-forms $\theta^A$, $A=0,1$, homogeneous of degree 1 in $\pi_{A'}$, such that $\theta^A$ are orthogonal to the fibres of $p$ and to $T^{(0,1)}$. The condition $\Omega^{(3,0)}=\O(-4)$ means by definition that we have a canonical section $\Omega$ of $\Omega^{(3,0)}\otimes\O(4)$. Thus, since $\theta^A$ are sections of $\O(1)\otimes\Omega^{(1,0)}$, we can also require $\Omega=\theta^0\wedge\theta^1\wedge\pi_{A'}\d\pi^{A'}$. Such $\theta^A$ can be chosen to be global and non-vanishing on $\mathscr{PT}$. This gives our basis $\theta^\alpha=(\theta^A,\mathrm{D}\pi_{A'})$ of $\Omega^{(1,0)}$.

We can now study the Penrose transform of $k$ by setting
$$ k=(k^{A'}\wedge\mathrm{D}\pi_{A'}+k_A\wedge\theta^A)\otimes\Omega $$
where $k^{A'}$ and $k_A$ are $(0,1)$-forms of homogeneity degree $-5$. Note that $\Upsilon\hook k=0$, so $k^{A'}\pi_{A'}=0$ and hence $k^{A'}=\pi^{A'}\varkappa$ for some $(0,1)$-form $\varkappa$ with values in $\O(-6)$. We can now finally define the indexed 2-form $K^{A'}_{B'}$ on $M$ by
$$ K^{A'}_{B'}(x) =\int_{L_x} \pi_{B'} k^{A'}\wedge\Omega =\int_{L_x} \pi_{B'} \pi^{A'}\varkappa \wedge\Omega\, . $$
Clearly $K^{A'}_{A'}=0$, since $\pi_{A'}\pi^{A'}=0$. We then define
$$ I[k,\mathcal{J}]=\int_M K^{A'}_{B'}\wedge K^{B'}_{A'}\, . $$
This can be expressed directly in terms of $k$ and $\varkappa$ as follows. Let $\mathscr{PT}\times_M\mathscr{PT}$ be the 8-dimensional space which fibres over $M$ with fibre $\mathbb{CP}^1\times\mathbb{CP}^1$, the cartesian product of two copies of the fibre of $p:\mathscr{PT}\to M$. This has two projections $p_1$ and $p_2$ onto $\mathscr{PT}$, one on each factor. Let $k_1=p_1^*k$ and $k_2=p_2^*k$, and similarly $\varkappa_1=p_1^*\varkappa$ etc. Our integral, then, is
$$ I[k,\mathcal{J}]=\int_{\mathscr{PT}\times_M\mathscr{PT}} (\pi_1\cdot\pi_2)^2\varkappa_1\wedge\varkappa_2 \wedge\Omega_1\wedge\Omega_2\, , $$
where $(\pi_{1A'},\pi_{2B'})$ are homogeneous coordinates on the $\mathbb{CP}^1\times \mathbb{CP}^1$ fibres of $\mathscr{PT}\times_M\mathscr{PT}\to M$.
This integral is invariant under $k\rightarrow k+\bar\partial l$, since this induces a variation of the integrand in $I[k,\mathcal{J}]$ that is exact on the fibres of $p_1$ and $p_2$.

\begin{propn} Solutions to the classical field equations up to diffeomorphism arising from the action $S_T[\mathcal{J},k]=S_{\mathrm{asd}}[\mathcal{J},k]-\frac\epsilon 2 I[k,\mathcal{J}]$ on twistor space are in one to one correspondence with solutions to the conformal gravity equations up to diffeomorphism. \end{propn}

\Proof We proceed as before for Yang-Mills and focus on the field equation that arises from varying $k$. Varying first with respect to $k_A$ we find
$$\bar\partial^2\hook\theta^A=0.$$
Varying $k^{A'}$ (or equivalently $\varkappa$) we obtain:
\begin{equation}\label{confgrav1} (\bar\partial^2\hook \mathrm{D}\pi )|_Z=\int_{p^{-1}(p(Z))} \pi^{B'}\pi_{A'}\pi^1_{B'}k_1^{A'}\wedge\Omega_1= \pi_{A'}\pi^{B'}K_{B'}^{A'(0,2)} \end{equation}
where the $(0,2)$ label on a 2-form denotes projection onto its $(0,2)$ part.

In particular, the $(0,2)$-form part of $\bar\partial^2$ annihilates vertical vectors. The normal bundle to the fibre $L_x$ of $p$ over $x$ is therefore a holomorphic vector bundle on $L_x$. On each $L_x$, $\theta^A$ can therefore be chosen uniquely, up to a fibrewise global $\mathrm{GL}(2,\mathbb{C})$ action on the $A$ index, to be holomorphic. The topological assumption that the canonical bundle of twistor space is $\O(-4)$ means that the normal bundle should have first Chern class 2, so that it has generic splitting type $\O(1)\oplus\O(1)$, and must be isomorphic to this with our assumption that $\mathcal{J}$ is close to the standard one; $\theta^A$ can be defined to be this isomorphism. Since $\theta^A$ is global and holomorphic in $\pi_{A'}$ with homogeneity degree 1, there exist 1-forms $\theta^{AA'}$ on $M$ such that $\theta^A=\theta^{AA'}\pi_{A'}$. This yields a conformal structure
$$ \d s^2=\varepsilon_{AB}\varepsilon_{A'B'}\theta^{AA'}\theta^{BB'}\, . $$
Similarly, $\mathrm{D}\pi_{A'}$ can be chosen to be holomorphic up the fibres and chosen globally up to a freedom $\mathrm{D}\pi_{A'}\rightarrow \mathrm{D}\pi_{A'}+\gamma_{AA'}\theta^A$ for some $\gamma_{AA'}$ that depends only on $x$. Equation (\ref{confgrav1}) implies the vanishing of the conformally invariant part of the torsion of the connection determined by the horizontal subspaces defined by $\mathrm{D}\pi_{A'}$. We can now see that $M$ is a manifold with Riemannian conformal structure and that $\mathscr{PT}\to M$ is its projective spin bundle with the standard twistorial almost complex structure, as in Atiyah, Hitchin and Singer (1978). For this almost complex structure, it is standard that
$$ \bar\partial^2= \pi_{A'}\psi^{A'}_{B'C'D'}\varepsilon_{CD}\theta^{CC'}\wedge\theta^{DD'} \frac{\partial}{\partial\pi_{B'}} $$
where $\psi_{A'B'C'D'}$ is the self-dual Weyl spinor. Thus the field equation obtained by varying $k$ implies that
$$ \psi_{A'B'C'D'}=\epsilon K_{A'B'C'D'}\, . $$
We can now see that the action reduces to the space-time action $S_{\mathrm{asd}}[g,K] + I[g,K]$, and so this action is in fact equivalent to that for conformal gravity.$\Box$

We note that in the above, the diffeomorphism freedom appropriate to full twistor space is broken to the diffeomorphism freedom on space-time, together with the automorphisms of the bundle of self-dual spinors. In order to similarly reduce the gauge freedom for $k$, we can require that, on each fibre of $p$, it is a harmonic representative of the restriction of the cohomology class.
As before, an explicit expression for $k$ can be given in this gauge that satisfies the field equations arising from the twistor action when the conformal structure has vanishing Bach tensor.

\subsection{Reformulation on supertwistor space}\label{susyconfgrav}

As in the Yang-Mills case, we can rewrite the $I[g,K]$ term in terms of an integral over $M$ and the two copies of the fibre $L_x$ of $\mathscr{PT}\to M$:
\begin{equation}\label{confgravIasD} I[k,\mathcal{J}]= \int_{\mathscr{PT}\times_M\mathscr{PT}} (\pi_1\cdot\pi_2)^4 \varkappa_1\wedge\varkappa_2 \wedge\mathrm{D}\pi_1\wedge\mathrm{D}\pi_2\wedge\d^4x \end{equation}
As before, we can introduce $N=4$ super-twistor space $\mathscr{PT}_s$ and its correspondence with super space-time $M_s$ as follows. Let $\psi_i$, $i=1,\ldots, 4$, be anticommuting variables on the supersymmetric twistor space $\mathscr{PT}_s$ with values in $\O(1)$, and let $\theta_i^{A'}$ be the corresponding anti-commuting coordinates on the super space-time $M_s$, with incidence relation $\psi_i=\pi_{A'}\theta^{A'}_i$. As before, set $\Psi=\psi_1\psi_2\psi_3\psi_4$ and $\Omega^s=\Omega\wedge \d\psi_1\wedge\ldots\wedge\d\psi_4$. Using again the relation $\int \d^8\theta \;\Psi_1\Psi_2=(\pi_1\cdot\pi_2)^4$ we can write
\begin{eqnarray} I[k,\mathcal{J}]&=&\int_{\mathscr{PT}_s\times_M\mathscr{PT}_s} (\pi_1\cdot\pi_2)^{-2}(\Psi_1 \varkappa_1 )\wedge(\Psi_2\varkappa_2 )\wedge\Omega^s_1\wedge\Omega^s_2 \nonumber\\ &=& \int_{M_s\times L(x,\theta)\times L(x,\theta)} (\Psi_1\varkappa_1\wedge\mathrm{D}\pi_1) \wedge (\Psi_2\varkappa_2 \wedge \mathrm{D}\pi_2)\wedge \d^4x\wedge\d^8\theta\nonumber \\ &=& \int_{M_s\times L(x,\theta)\times L(x,\theta)} (\Psi_1 k)\wedge(\Psi_2 k)\wedge \d^4x\wedge\d^8\theta\nonumber\\ &=&\int_{M_s} \d^4x\d^8\theta \left(\int_{L(x,\theta)}\Psi k\right)^2 \end{eqnarray}
where the second-to-last identity follows simply from the fact that $k|_{L(x,\theta)}=\varkappa\wedge\mathrm{D}\pi$, and we now think of $k\in\Omega^{(1,0)}(-4)$ rather than as a 1-form with values in $\Omega^{(3,0)}$.

\subsection{Twistor-string theory for conformal gravity}\label{tsconfgrav}

For simplicity we work with formal path integral formulae. Following the logic of \S\ref{twistorstring} backwards now, we start with the twistor version of the path integral for conformal gravity and work towards a formulation along the lines of equation (\ref{tsgenerate}). We have that the generating functional for conformal gravity scattering in terms of the twistor Lagrangians is
$$ \mathcal{A}[\mathcal{J},k]=\int \mathrm{D}\tilde{\mathcal{J}}\, \mathrm{D}\tilde k\; e^{S_{\mathrm{asd}}[\tilde{\mathcal{J}},\tilde k]-\frac{\epsilon^2}{2} I[\tilde k,\tilde{\mathcal{J}}]} $$
where again the path integral is over fields $\tilde{\mathcal{J}}$ and $\tilde k$ that are suitably asymptotic to $\mathcal{J}$ and $k$.
We can manipulate this as before to obtain:
\begin{eqnarray} \mathcal{A}[\mathcal{J},k]&=&\int \mathrm{D}\tilde{\mathcal{J}}\, \mathrm{D}\tilde k\; e^{S_{\mathrm{asd}}[\tilde{\mathcal{J}},\tilde k]} \sum_{d=0}^\infty\frac{\epsilon^{2d}I[\tilde k,\tilde{\mathcal{J}}]^d}{2^dd!} \nonumber \\ &=& \int \mathrm{D}\tilde{\mathcal{J}}\, \mathrm{D}\tilde k\; e^{S_{\mathrm{asd}}[\tilde{\mathcal{J}},\tilde k]} \sum_{d=0}^\infty \frac{\epsilon^{2d}}{2^d d!} \int_{M_s^d}\Pi_{r=1}^d \d^4x_r\d^8\theta_r \left(\int_{L(x_r,\theta_r)}\Psi_r\tilde k_r\right)^2\nonumber \\ &=& \sum_{d=0}^\infty \int_{M_s^d}\Pi_{r=1}^d \d^4x_r\d^8\theta_r \int \mathrm{D}\tilde{\mathcal{J}}\, \mathrm{D}\tilde k\; e^{S_{\mathrm{asd}}[\tilde{\mathcal{J}},\tilde k] + \epsilon\sum_{r=1}^d \int_{L(x_r,\theta_r)} \Psi \tilde k} \end{eqnarray}
This yields the coupling of the D1 instantons, $L(x_r,\theta_r)$, to the 1-form $\Psi k$, precisely as proposed in Berkovits and Witten (2004).

\section*{References}

Atiyah, M.F., Hitchin, N.J.\ and Singer, I.M.\ (1978) Self-duality in four dimensional Riemannian geometry, {\it Proc. Roy. Soc. Lond.}, {\bf A 362}, 425-61.
\smallskip \noindent Berkovits, N., and Witten, E. (2004) Conformal supergravity in twistor-string theory, arXiv:hep-th/0406051.
\smallskip \noindent Cachazo, F., and Svrcek, P. (2005) Lectures on twistor strings and perturbative Yang-Mills theory, arXiv:hep-th/0504194.
\smallskip \noindent Cachazo, F., Svrcek, P. and Witten, E. (2004) MHV vertices and tree amplitudes in gauge theory, JHEP, 0409:006, arXiv:hep-th/0403047.
\smallskip \noindent Chalmers, G., and Siegel, W. (1996) The self-dual sector of QCD amplitudes, Phys. Rev. D54, 7628-33, arXiv:hep-th/9606061.
\smallskip \noindent Faddeev, L., and Slavnov, A. (1991) Gauge fields: an introduction to quantum theory, Frontiers in Physics, Perseus.
\smallskip \noindent Gukov, S., Motl, L., and Neitzke, A. (2004) Equivalence of twistor prescriptions for super Yang-Mills, arXiv:hep-th/0404085.
\smallskip \noindent Mason, L.J., Singer, M.A., and Woodhouse, N.M.J.\ (2002) Tau functions, twistor theory and quantum field theory, Comm.\ Math.\ Phys.\, {\bf 230}, no.\ 3, 389-420, arXiv:math-ph/0105038.
\smallskip \noindent McDuff, D., and Salamon, D. (2004) J-holomorphic curves and symplectic topology, Colloquium publications {\bf 52}, AMS.
\smallskip \noindent Penrose, R. (1976) Nonlinear gravitons and curved twistor theory, Gen. Rel. Grav., {\bf 7}, 31-52.
\smallskip \noindent Roiban, R., Spradlin, M., and Volovich, A. (2004) On the tree level S-matrix for Yang-Mills theory, Phys. Rev. D70, 026009, arXiv:hep-th/0403190. See also: Roiban, R., Spradlin, M., and Volovich, A. (2004) A googly amplitude from the B model on twistor space, JHEP 0404, 012, arXiv:hep-th/0402016, and Roiban, R., and Volovich, A. (2004) All googly amplitudes from the B model in twistor space, Phys.\ Rev.\ Lett.\ {\bf 93}, 131602, arXiv:hep-th/0402121.
\smallskip \noindent Witten, E. (2004) Perturbative gauge theory as a string theory in twistor space, {\it Comm. Math. Phys.}, {\bf 252}, p189, arXiv:hep-th/0312171.
\smallskip \noindent Woodhouse, N.M.J.\ (1985) Real methods in twistor theory, Class.\ Quant.\ Grav.\, {\bf 2}, 257-91.
\end{document}
\section{Introduction}

The most numerous extragalactic very high energy (VHE, E$>$100\,GeV) $\gamma$-ray sources are blazars, which are active galactic nuclei (AGN) with a relativistic jet pointing close to our line of sight. Within the blazar group the most numerous VHE $\gamma$-ray emitters are X-ray bright BL Lacertae objects (BL Lacs), while only three blazars of the flat spectrum radio quasar (FSRQ) type have been detected to emit VHE $\gamma$ rays. Blazars typically show variable emission in all wavebands from radio to $\gamma$ rays.

FSRQs are more luminous than BL Lacs at $\gamma$ rays and so they could, in principle, be observed at greater distances at very high energies. The spectral energy distributions (SEDs) of both types of sources show two peaks: the first is generally attributed to synchrotron emission and the second one to inverse Compton (IC) scattering, though hadronic mechanisms have also been proposed for producing the second peak \citep[see e.g.][]{bottcher}. In FSRQs the first peak is usually in the infrared regime, while for BL~Lacs it is between the infrared and hard X-rays. The optical spectra of FSRQs show broad emission lines, indicative of high velocity gas in the so-called broad line region (BLR) close (0.1 to 1 parsec) to the central engine \citep[e.g.][]{kaspi}, while BL Lacs show very weak or no emission lines in their spectra.

Because of these properties FSRQs were not thought to be good candidates to emit VHE $\gamma$ rays: the low synchrotron peak frequency may imply efficient synchrotron cooling, which makes it difficult to produce VHE $\gamma$-ray emission. Additionally, if the $\gamma$ rays are produced close to the central engine, the BLR clouds absorb the $\gamma$-ray emission via pair production. The high redshifts of FSRQs also imply strong absorption of VHE $\gamma$ rays by the extragalactic background light \citep[EBL;][]{Ste92,Hau01}. Despite these difficulties, MAGIC detected VHE $\gamma$ rays from the FSRQ 3C~279 (z=0.536) in 2006 \citep{Science}. This discovery was followed by a second detection in 2007 \citep{3C279} and by the detection of two other FSRQs: \object{PKS~1510$-$089} (z=0.36), detected by H.E.S.S. in 2009 \citep{hess}, and PKS~1222+216 (z=0.432), detected in 2010 \citep{1222}. In this paper we report the detection of VHE $\gamma$ rays from \object{PKS~1510$-$089} in 2012 February-April \citep{atel} by the MAGIC telescopes.

The standard picture for FSRQs is that the $\gamma$ rays are emitted close to the central black hole (the so-called ``near-dissipation zone''), where the external photons from the BLR can serve as seed photons for IC scattering \citep[e.g.][]{hartman}. This picture was already challenged in the EGRET era by the observations of a connection between radio outbursts and $\gamma$-ray flares \citep[e.g.][]{jorstad01,lahteenmaki,lindfors06}. The observations of VHE $\gamma$ rays from FSRQs have further challenged the ``near-dissipation zone'' emission scenario \citep[see e.g.][]{3C279,1222}, because in order to produce the observed VHE $\gamma$-ray flux, the MeV $\gamma$-ray flux would have to be much higher than observed. Moreover, the combined high energy (HE) to VHE $\gamma$-ray spectrum does not show a break at a few tens of GeV, as would be expected if the $\gamma$ rays originated inside the BLR \citep[e.g.][]{FM}.
In addition, at least in some cases (3C~279 in 2007 and PKS~1222+216 in 2010), the VHE $\gamma$-ray detections were coincident with zero-separation epochs of new knots emerging from the 43\,GHz Very Long Baseline Array (VLBA) core \citep{larionov,jorstad11,marscher12}, suggesting that the VHE $\gamma$ rays could be emitted in these knots, tens of parsecs away from the central engine. Arguments for and against the ``near-dissipation zone'' are systematically discussed in e.g. \citet{sikora09}. In general, the main argument against emission originating far away from the central engine has been the fast variability observed in $\gamma$ rays. However, the recent model by \citet{marscher14}, in which relativistic turbulent plasma crosses a standing shock, could potentially explain both the observed radio-$\gamma$-ray connection and the fast variability of the $\gamma$ rays.

\object{PKS~1510$-$089} is a $\gamma$-ray bright quasar whose jet exhibits one of the fastest apparent motions (up to 45$c$) amongst all blazars \citep{jorstad05}. It was discovered in HE $\gamma$ rays by EGRET, but no variability was detected \citep{hartman99}, while in the AGILE and {\it Fermi} era it has shown several flaring epochs. A variability study of this source with AGILE data in the period 2007 July -- 2009 October was presented in \citet{verrecchia13}. The source showed bright flares at radio, optical, X-ray and HE $\gamma$-ray energies at the beginning of 2009 \citep{marscher,abdo10,dammando11}. The discovery of VHE $\gamma$ rays from \object{PKS~1510$-$089} also took place in this period; the source displayed a rather low flux (F\,($>$150 GeV)=($1.0\pm0.2_{\mathrm{stat}}\pm0.2_{\mathrm{sys}})\cdot10^{-11}$ ph cm$^{-2}$ s$^{-1}$, $\sim$3\% of the Crab Nebula flux) and a very soft spectrum, with photon index $\Gamma = 5.4\pm 0.7_{\rm stat}\pm 0.3_{\rm sys}$ \citep{hess}.

In HE $\gamma$ rays this outburst consisted of several flares. In X-rays the flaring was moderate and not correlated with the $\gamma$-ray flaring, but the last $\gamma$-ray flare was accompanied by a large optical outburst (reaching a peak flux of 18\,mJy in the R-band, while the quiescent-level flux is typically $\sim2$\,mJy) and a large radio outburst (reaching a maximum of 4\,Jy at 37\,GHz, 1-2\,Jy being the normal quiescent-state flux). During the $\gamma$-ray flares the optical electric vector position angle (EVPA) rotated by $>720^\circ$, and during the major optical flare the optical polarisation degree increased to $>30\%$. In the 43\,GHz VLBA maps a superluminal knot emerged from the VLBA core with a zero-separation epoch essentially simultaneous with this sharp optical flare. \citet{marscher} concluded that the $\gamma$-ray flaring activity was taking place in a knot seen in the VLBA images at later times, placing the emission region far from the central engine. This, together with the variable synchrotron to $\gamma$-ray ratio, requires that there are local sources of seed photons for IC scattering within or just outside the jet (e.g. a slow sheath of the jet). In contrast, based on the ratio between the optical and $\gamma$-ray variability, \citet{abdo10} concluded that the $\gamma$-ray emission favors an external Compton model in which the seed photons are provided by the BLR clouds.

In the second half of 2011 the source again showed activity in several bands. First, in 2011 July, there were optical and HE $\gamma$-ray flares, accompanied by a rotation of the EVPA by $>380^\circ$ \citep{orienti}.
Also in the second half of 2011, \object{PKS~1510$-$089} underwent the brightest radio flare ever observed from the source, with associated high activity in the HE $\gamma$-ray band. The flare was accompanied by the appearance of a new component in the VLBA jet at 15\,GHz \citep{orienti} and by extremely fast $\gamma$-ray variability, with time scales down to 20 minutes \citep[e.g.][]{saito,2013arXiv1304.2878F}. Unfortunately, during this period the source was not observable for ground-based optical and $\gamma$-ray instruments.

In 2012 February \object{PKS~1510$-$089} again showed high activity in HE $\gamma$ rays \citep{lucarelli}. This triggered observations of the source with the MAGIC telescopes, which resulted in a significant detection of VHE $\gamma$ rays \citep{atel, decaneva}. The results from the MAGIC observations (Section 2) are presented together with HE $\gamma$-ray data from AGILE and {\it Fermi} (Section 3), X-ray data from {\it Swift} (Section 4), near infrared, optical and ultraviolet data (Section 5), and radio observations (Section 6) from several instruments. A subset of the data presented here has been previously presented in \citet{fermisymp}, while in this paper we present the full analysis of the multi-frequency behaviour of the source during 2012 February-April and compare it with the previous flaring epochs of \object{PKS~1510$-$089}.

\section{MAGIC VHE $\gamma$-ray observations, data analysis, and results}

\subsection{Observations and data analysis}

MAGIC is a system of two 17\,m diameter Imaging Air Cherenkov Telescopes (IACTs) located at the Roque de los Muchachos Observatory on La Palma, one of the Canary Islands (28$^\circ$46$^\prime$ N, 17$^\circ$53.4$^\prime$ W, at 2231\,m a.s.l.). The large collection area of the telescopes and the advanced observational techniques enable us to reach a low energy threshold of 50\,GeV (in the normal stereo trigger mode) at low zenith angles. In late 2011 the telescope readout system was upgraded \citep{2013arXiv1305.1007S}.

The MAGIC target of opportunity (ToO) observations of \object{PKS~1510$-$089} were carried out from 2012 February 3 to April 3 (MJD~55960-56020). During 28 nights, $\sim$25\,hours of data were taken with the stereo trigger, of which 21.4\,hours passed the quality selection. The data were collected at zenith angles between 37$^\circ$ and 49$^\circ$. The telescopes were operated with the false source tracking method \citep{Fomin94}, the so-called wobble mode, in which the pointing direction alternates every 20 minutes between four sky positions at 0.4$^\circ$ offset with respect to the source position. Four wobble positions improve the background statistics, since three OFF positions can be sampled, which reduces the impact of inhomogeneities in the camera acceptance.

We analysed the data in the MARS analysis framework \citep{Moralejo09}. The images were processed using a cleaning algorithm that accounts for timing information \citep{Aliu09}. The criteria for {\it core} and {\it boundary} pixels are eight and four photo-electrons, respectively. These are different from those used for the standard analyses done before the upgrade of the readout \citep[the details are described in][]{Performance}, mainly due to the different noise level of the new readout system. The random forest (RF) method was used for the gamma-hadron separation \citep{Albert08_RF}, using both mono and stereoscopic parameters.
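Purely as an illustration of this kind of classification step (the analysis itself uses the RF implementation of the MARS framework; the feature set and all numbers below are invented stand-ins for the image parameters), a minimal sketch could look as follows:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Schematic gamma/hadron separation with a random forest (illustration only).
rng = np.random.default_rng(0)

def fake_events(n, is_gamma):
    # Stand-ins for Hillas-type image parameters (width, length, size, ...).
    shift = 0.0 if is_gamma else 0.4
    return rng.normal(loc=shift, scale=1.0, size=(n, 5))

X = np.vstack([fake_events(5000, True), fake_events(5000, False)])
y = np.array([1] * 5000 + [0] * 5000)     # 1 = gamma-like MC, 0 = hadron-like

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# "Hadronness"-style score: probability of the background class per event.
hadronness = clf.predict_proba(fake_events(10, True))[:, 0]
print(hadronness)
\end{verbatim}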
The reconstructed shower arrival direction of each telescope was calculated with the RF DISP method \citep{Aleksic10}, and the weighted mean of the closest pair amongst the reconstructed DISP positions was taken as the final reconstructed position.

\subsection{Results}

The distribution of squared angular distances between the reconstructed source position and the nominal source position in the camera, the so-called $\theta^2$ plot, is shown in Fig.~\ref{MAGICfig1}. The number of background events was estimated from three OFF regions placed symmetrically relative to the pointing position. Above the normalised background, an excess of 539 $\gamma$ rays was found. The significance of the signal detection was evaluated following Equation (17) of \citet{Li83}; we found a significance of 6\,$\sigma$ from the 21.4\,hours of observational data. The observations at high zenith angles resulted in a somewhat higher energy threshold of 120\,GeV, determined from the Monte Carlo rate with an assumed photon index of 4.0.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{pks1510_theta2_round.eps}} \caption{Distribution of the square of the reconstructed shower direction ($\theta^2$) with respect to the position of \object{PKS~1510$-$089} for the ON (black points) and OFF (grey shaded area) events in camera coordinates. The events inside the vertical dashed line, corresponding to the a priori-defined signal region, are used to compute the detection significance.} \label{MAGICfig1} \end{figure}
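For reference, the significance formula used above (Equation (17) of Li \& Ma 1983) is straightforward to evaluate. In the following sketch the ON and OFF counts are illustrative placeholders (chosen so that the excess matches the 539 events quoted above), with $\alpha=1/3$ for the three OFF regions:
\begin{verbatim}
import numpy as np

# Li & Ma (1983), Eq. (17).  n_on, n_off below are made-up example counts.
def li_ma_significance(n_on, n_off, alpha):
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

n_on, n_off, alpha = 6539, 18000, 1.0 / 3.0
excess = n_on - alpha * n_off                 # = 539 events
print(excess, li_ma_significance(n_on, n_off, alpha))   # ~5.9 sigma
\end{verbatim}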
To derive the energy spectrum of \object{PKS~1510$-$089}, an unfolding procedure \citep{Albert07} was performed to correct for the distortions introduced by the detector, which has finite resolution and biases. Moreover, the absorption by e$^+$e$^-$ pair creation due to the interaction with EBL photons was also corrected for through the same unfolding process, using a state-of-the-art EBL model \citep{dominguez}. We found that different unfolding methods gave consistent results, and the energy spectrum before the EBL correction can be well reproduced by a power law
\begin{equation} \frac{dF}{dE} = F_0\left(\frac{E}{200\,\mathrm{GeV}}\right)^{-\Gamma}, \end{equation}
where $F_0 =(4.8\pm0.9_{\rm stat}\pm 1.3_{\rm sys})\, \times 10^{-11}\,\mathrm{cm^{-2}\,s^{-1}\,TeV^{-1}}$ is the normalization at 200\,GeV and $\Gamma = 3.8 \pm 0.4_{\rm stat} \pm 0.3_{\rm sys}$ the photon index. As \object{PKS~1510$-$089} is a very weak, steep-spectrum VHE source, the systematic errors are larger than the ones evaluated in \citet{Performance}. The systematic error in the energy scale is 17\,\%, as in \citet{Performance}.

Fig.~\ref{MAGICfig2} shows the differential energy spectra of \object{PKS~1510$-$089} measured by MAGIC in 2012. The fitted function and its one-sigma error range, displayed as the shaded regions, were obtained through the forward unfolding, and the spectral points were derived using the Bertero unfolding method \citep{Bertero}. The spectrum extends up to $\sim$400\,GeV. The integral flux above 120\,GeV was estimated to be 4\,\% of the Crab Nebula flux. After the correction for the EBL attenuation, the spectrum is still well fitted by a power law, with an intrinsic photon index of $\Gamma_{\rm int} = 2.5\pm 0.6_{\rm stat}$. The flux and spectrum are in agreement with those observed by H.E.S.S. in March-April 2009 \citep{hess}.

The $\gamma$-ray flux variability above 200\,GeV was studied on both daily and weekly time scales. The mean flux above 200\,GeV of \object{PKS~1510$-$089} in this period was F\,($>$200 GeV)=(3.6$\pm$0.9)$\times10^{-12}$ ph cm$^{-2}$ s$^{-1}$. The reduced $\chi^2$ of a fit with a constant flux is $\chi^2/n_{dof}=40.5/24$ (2.3\,$\sigma$) for the daily and $\chi^2/n_{dof}=7.7/4$ (1.6\,$\sigma$) for the weekly light curve, consistent with no statistically significant variability. Following the method used in \citet{2344}, we also estimated how much variability could be hidden in the data. We derived a 3\,$\sigma$ confidence level upper limit for individual nights$/$weeks and compared it to the observed mean flux, adopting a night-to-night systematic error of 12\% \citep{Performance}. We found that variability of a factor of eight on nightly time scales and of a factor of 2.5 on weekly time scales could have been missed. The weekly light curve is displayed and discussed with the multi-frequency data in Section 7. The observed VHE $\gamma$-ray emission, showing only marginal variability over several weeks, displays a different behaviour from other FSRQs \citep{Science,1222,3C279}, but is in agreement with previous observations of \object{PKS~1510$-$089} by H.E.S.S. in 2009 March-April \citep{hess}.

\begin{figure} \resizebox{\hsize}{!}{\includegraphics{PKS1510_2012SED_MAGICHESS_Bertero2.eps}} \caption{VHE differential energy spectra of \object{PKS~1510$-$089} measured by MAGIC in the period between 2012 February 3 and April 3. The blue open circles and the blue shaded region show the observed spectrum and its statistical uncertainty; the red dots and the red shaded region show the de-absorbed spectrum (see text). The grey open squares show the source spectrum observed in March-April 2009 by H.E.S.S. \citep{hess}.} \label{MAGICfig2} \end{figure}
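As a small consistency check (an illustration using only the fitted values quoted above, not the analysis chain), integrating the fitted power law analytically above a threshold energy, $F(>E) = F_0 E_0 (E/E_0)^{1-\Gamma}/(\Gamma-1)$, reproduces the quoted mean integral flux above 200\,GeV:
\begin{verbatim}
# Analytic integral of the fitted power law (units: TeV, cm^-2 s^-1 TeV^-1).
F0, E0, Gamma = 4.8e-11, 0.2, 3.8        # best-fit values quoted above

def integral_flux(E_thr):
    return F0 * E0 / (Gamma - 1.0) * (E_thr / E0) ** (1.0 - Gamma)

print(integral_flux(0.2))   # ~3.4e-12 cm^-2 s^-1, consistent with the
                            # mean F(>200 GeV) = (3.6 +/- 0.9)e-12 above
\end{verbatim}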
\section{HE $\gamma$-ray observations, data analysis, and results}

We investigate the emission in the HE $\gamma$-ray range making use of two instruments: \textrm{AGILE}{}-GRID and {\it Fermi}--LAT. The comparison and combination of the HE and VHE $\gamma$-ray results are presented in Section 3.3.

\begin{table*}
\centering
\caption{Integral photon fluxes $>$100\,MeV detected by AGILE-GRID}
\begin{tabular}{ccccc}
\hline \hline
Epoch & Integration period & Energy bin & Flux & $\Gamma$ \\
 & [MJD] & [MeV] & [$\mathrm{ph~cm^{-2}~s^{-1}}$] & \\
\hline
Flare-I (7 days) & 55952.5 - 55959.5 & $>$ 100 & $(2.0\pm0.5)\times 10^{-6}$ & $2.17\pm0.24$\\
Flare-II (14 days) & 55977.5 - 55991.5 & $>$ 100 & $(4.4\pm0.5)\times 10^{-6}$ & $2.21\pm0.11$\\
Postflare (14 days) & 55998.5 - 56012.5 & $>$ 100 & $(1.8\pm0.5)\times 10^{-6}$ & $2.39\pm0.36$\\
Low$/$intermediate state & 55746.5 - 55803.5 & $>$ 100 & $(9.1\pm1.5)\times 10^{-7}$ & $2.44\pm0.17$\\
\hline
\end{tabular}
\label{tab1:AGILEresults}
\end{table*}

\begin{table*}
\centering
\caption{Differential flux values ($\nu F(\nu)$) detected by AGILE-GRID}
\begin{tabular}{ccccc}
\hline \hline
Epoch & Integration period & Energy bin & $\nu$ & $\nu F(\nu)$ \\
 & [MJD] & [MeV] & [Hz] & [$\mathrm{erg~cm^{-2}~s^{-1}}$] \\
\hline
Flare-II (14 days) & 55977.5 - 55991.5 & 100 - 200 & 3.42 $\times 10^{22}$ & $(7.0\pm1.1)\times 10^{-10}$ \\
 & & 200 - 400 & 6.85 $\times 10^{22}$ & $(7.2\pm1.3)\times 10^{-10}$ \\
 & & 400 - 10000 & 4.84 $\times 10^{23}$ & $(5.2\pm1.1)\times 10^{-10}$ \\
\hline
Low$/$intermediate state & 55746.5 - 55803.5 & 100 - 200 & 3.42 $\times 10^{22}$ & $(1.7\pm0.4)\times 10^{-10}$ \\
 & & 200 - 400 & 6.85 $\times 10^{22}$ & $(1.7\pm0.4)\times 10^{-10}$ \\
 & & 400 - 10000 & 4.84 $\times 10^{23}$ & $(6.4\pm0.3)\times 10^{-11}$ \\
\hline
\end{tabular}
\label{tab2:AGILEresults}
\end{table*}

\subsection{AGILE}

AGILE \citep[Astrorivelatore Gamma ad Immagini LEggero,][]{Tavani2009:AGILE} is a scientific mission devoted to the observation of astrophysical sources of HE $\gamma$ rays in the 30\,MeV -- 30\,GeV energy range, with simultaneous X-ray imaging capability in the 18 -- 60\,keV band. The \textrm{AGILE}{} payload combines for the first time two coaxial detectors: the gamma-ray imaging detector (GRID, composed of a 12-plane silicon-tungsten tracker, a cesium-iodide mini-calorimeter and an anti-coincidence shield) and the hard X-ray detector Super-AGILE. The $\gamma$-ray GRID imager provides good performance in a relatively small and compact instrument thanks to the use of silicon technology: an effective area of the order of 500 cm$^2$ at several hundred MeV, an angular resolution of around 3.5$^\circ$ at 100\,MeV, decreasing to below 1$^\circ$ above 1\,GeV, a very large field of view ($\sim$2.5 sr), as well as accurate timing, positional and attitude information.

During the first $\sim 2.5$ years (2007 July - 2009 October), AGILE was operated in ``pointing observing mode'', characterised by long observations called observation blocks (OBs), typically of two to four weeks duration. Since 2009 November 4, following a malfunction of the rotation wheel, AGILE has been operating in ``spinning observing mode'', surveying a large fraction (about $70\%$) of the sky each day.
Thanks to its sky monitoring capability and its fast ground segment alert system, distributed amongst the AGILE Data Centre (ADC) and the AGILE team institutes, AGILE is very effective in detecting bright $\gamma$-ray flares from blazars.

Data were analysed by applying the AGILE maximum likelihood (ML) analysis at the \object{PKS~1510$-$089} sky position, using the standard level-3 AGILE-GRID archive at the ADC. This archive is composed of counts, exposure and diffuse $\gamma$-ray background \citep{Giuliani2004:diff_model} maps generated on several time scales (one day, one week, 28 days) from the official level-2 data archives, publicly available at the ADC site\footnote{ADC pointing (sw=5\_19\_18\_17) and spinning (sw=5\_21\_18\_19) archives, from \texttt{http://agile.asdc.asi.it}}. Maps were generated for E $>$ 100~MeV including all events collected up to $60^{\circ}$ off-axis, excluding South Atlantic Anomaly data and regions within $10^{\circ}$ of the Earth limb to reduce albedo contamination. The data have been processed with the latest available software and calibrations\footnote{AGILE\_SW\_5.0\_SourceCode from the ADC website, with \texttt{I0023} calibrations.}. For a general description of the \textrm{AGILE}{} data reduction and of the standard analysis pipeline see \citet{Pittori2009:Catalogue,vercellone}. Systematic errors of the AGILE ML analysis have been estimated to be $\sim$10\% of the flux values \citep{bulgarelli}.

At the beginning of 2012, AGILE detected \object{PKS~1510$-$089} in a high state in two distinct periods: one at the end of January-beginning of February, and the other at the end of February-beginning of March. The AGILE-GRID (E$>$100\,MeV) light curves covering the MAGIC observations of \object{PKS~1510$-$089} from January to March (MJD~55960-56000), with two-day time binning, are shown together with the multi-frequency light curves in Section 7. In comparing the AGILE and {\it Fermi} light curves it should be taken into account that, over short time intervals, AGILE might not spectrally resolve the blazar due to low statistics; in such cases a ``standard'' fixed spectral photon index value of 2.1 is adopted for the ML analysis. This effect may result in an additional systematic error on the flux (not shown in the figure). Using, for example, a fixed spectral index value of 2.4 instead, the AGILE flux values would change on average by +15\%.

The first high state (Flare-I) triggered the AGILE alert system, and four-day quick-look results were reported in ATel \#3907~\citep{ATel3907}. Performing a refined ML analysis, optimizing the background estimates on the AGILE-GRID data covering the seven-day period from January 26 to February 2 (MJD~55952.5 to 55959.5), yields a detection at a significance level of about 7\,$\sigma$. The Flare-I spectral analysis gives a photon index $\Gamma=2.17 \pm 0.24$ and a flux F\,(E $>$ 100 MeV)=$(2.0 \pm 0.5)\cdot10^{-6}$~ph~cm$^{-2}$~s$^{-1}$.

The second flare (Flare-II), with a higher $\gamma$-ray flux, was announced with ATel \#3934~\citep{lucarelli}. The source maintained its high state, above 4.0$\cdot$10$^{-6}$~ph~cm$^{-2}$~s$^{-1}$, for almost two weeks. We performed the AGILE ML analysis on this two-week period (from 2012 February 20 to 2012 March 5, MJD~55977.5 to 55991.5), obtaining a detection at a $\sim$16\,$\sigma$ significance level.
The corresponding spectral analysis provides a photon index $\Gamma$=2.21$\pm$0.11, consistent with that of Flare-I, but a higher flux F\,(E$>$100~MeV)=(4.4$\pm$0.5)$\cdot$10$^{-6}$~ph~cm$^{-2}$~s$^{-1}$. After 2012 March 9 (MJD~55995) the source went back to a low-flux state, with the source sky position approaching the border of the AGILE field of view, and after 2012 March 14 (MJD~56000) the AGILE daily effective exposure gradually decreased. The ML analysis over the 14-day period starting on 2012 March 12 (MJD~55998.5) gives a detection of the source at a significance level of around 6\,$\sigma$, with a photon index $\Gamma$=2.4$\pm$0.4 and an average flux F\,(E$>$100~MeV)=(1.8$\pm$0.5)$\cdot$10$^{-6}$~ph~cm$^{-2}$~s$^{-1}$. For comparison, we have identified one of the typical low$/$intermediate states of the source, with $\gamma$-ray flux below 10$^{-6}$~ph~cm$^{-2}$~s$^{-1}$, from 2011 July 4 to 2011 August 30 (MJD~55746.5 to 55803.5), and performed the AGILE ML analysis, obtaining a photon index $\Gamma$=2.44$\pm$0.17 and a flux F\,(E$>$100~MeV)=(0.91$\pm$0.15)$\cdot$10$^{-6}$~ph~cm$^{-2}$~s$^{-1}$. The AGILE results during the MAGIC observation period in 2012 and for this low$/$intermediate state are summarised in Tables~\ref{tab1:AGILEresults} and \ref{tab2:AGILEresults}.

\subsection{Fermi-LAT}

\begin{table*}[t]
\centering
\caption{Comparison of the different spectral models for the Fermi-LAT data on \object{PKS~1510$-$089}}
\scalebox{0.85}{
\hspace{-1.5cm}
\begin{tabular}{cccccccccccc@{}c@{}}
\hline\hline
\begin{tabular}{c} \\ \large \bf Epoch \\ \\ \end{tabular} & \multicolumn{5}{c}{\large \bf Power-law} & \multicolumn{6}{c}{\large \bf Log parabola} & \multicolumn{1}{l}{\large \bf $\sigma$\tablefootmark{b}} \\
\cline{2-5} \cline{7-11}
 & Flux\tablefootmark{a} & Index & TS & Loglike && Flux\tablefootmark{a} & Alpha & Beta & TS & Loglike & \\
\begin{tabular}{c} MAGIC \\ observation \end{tabular} & 3.97 $\pm$ 0.08 & 2.39 $\pm$ 0.02 & 12241 & 107077 && 3.82 $\pm$ 0.08 & 2.24 $\pm$ 0.03 & 0.09 $\pm$ 0.02 & 12243 & 107056 & 6 \\
Mean state & 2.67 $\pm$ 0.04 & 2.40 $\pm$ 0.01 & 19943 & 269334 && 2.56 $\pm$ 0.04 & 2.26 $\pm$ 0.02 & 0.09 $\pm$ 0.01 & 19942 & 269299 & 8 \\
Low state & 0.79 $\pm$ 0.04 & 2.52 $\pm$ 0.04 & 1417 & 99964 && 0.75 $\pm$ 0.04 & 2.35 $\pm$ 0.07 & 0.12 $\pm$ 0.04 & 1422 & 99959 & 3 \\
High state & 6.50 $\pm$ 0.14 & 2.29 $\pm$ 0.02 & 7389 & 41207 && 6.24 $\pm$ 0.17 & 2.12 $\pm$ 0.04 & 0.10 $\pm$ 0.02 & 7421 & 41191 & 4.5 \\
\hline\hline
\end{tabular}}
\tablefoot{
\tablefoottext{a}{Flux (100\,MeV -- 300\,GeV) in units of [$10^{-6}$ ph cm$^{-2}$s$^{-1}$].}
\tablefoottext{b}{Significance with which the log-parabola model is preferred over the simple power-law model ($\sigma$ calculated as $[2(\mathrm{Loglike}_{\mathrm{Pwl}}-\mathrm{Loglike}_{\mathrm{LogP}})]^{1/2}$).}
}
\label{1510fit}
\end{table*}

\textit{Fermi}--LAT (Large Area Telescope) is a pair conversion telescope designed to cover the energy band from 20\,MeV to greater than 300\,GeV \citep{atwood09}. In its primary observation strategy, survey mode, the LAT scans the entire sky every three hours and can therefore provide observations of \object{PKS~1510$-$089} simultaneous with those of MAGIC. PKS~1510$-$089 has been continuously monitored by \textit{Fermi}, and the data used for this analysis were collected from 2012 January 1 to April 7 (MJD~55927-56024). They were analysed with the standard analysis tool {\it gtlike}, part of the \textit{Fermi} ScienceTools software package (version 09-27-01).
Only good quality events within $10^{\circ}$ of PKS~1510$-$089 were selected for the analysis. Moreover, to reduce the contamination from Earth-limb $\gamma$ rays produced by cosmic-ray interactions with the upper atmosphere, the data were restricted to a maximum zenith angle of $100^{\circ}$, and time periods when the spacecraft rocking angle exceeded $52^{\circ}$ were excluded. To extract the spectral information we used the standard background models provided by the publicly available files gal\_2yearp7v6\_v0\_trim.fits and iso\_p7v6source.txt\footnote{\texttt{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/\\BackgroundModels.html}}. The background templates take into account the diffuse $\gamma$-ray emission from our Galaxy and an isotropic diffuse component; their normalizations were left free during the fitting process.

To derive the source spectral information, an unbinned maximum likelihood technique was applied to events in the energy range from 100\,MeV to 300\,GeV \citep{mattox96}, in combination with the post-launch instrument response functions P7SOURCE\_V6. Sources from the 2FGL catalogue \citep{nolan} located within $15^{\circ}$ of PKS~1510$-$089 were incorporated in the model of the region, with their spectral models and initial parameters set to the catalogue values. In particular, the source of interest was modelled with a log-parabola spectrum\footnote{\texttt{http://fermi.gsfc.nasa.gov/ssc/data/analysis/\\scitools/source\_models.html}}:
\begin{equation} \frac{\mathrm{d} N}{\mathrm{d} E}\,=\,N_0 \left( \frac{E}{E_b} \right) ^{- \left( \alpha+\beta \log (E/E_b) \right)} \end{equation}
where $N_0$ is the normalization, $E_b$ the break energy, and $\alpha$ and $\beta$ the parameters of the log-parabola fit. In the fitting procedure the parameters of sources located within a $10^{\circ}$ radius centred on the source of interest were left free to vary, while the parameters of sources located within a $10^{\circ}$-$15^{\circ}$ annulus were fixed. When performing the fits for the light curve and SED bins, the photon indices of the sources were frozen to the best-fit values obtained from a long-term analysis. Systematic uncertainties in the LAT results due to uncertainties in the effective area are discussed in \citet{ackermann12}; they are smaller than the statistical uncertainties of the points in the light curves and have been neglected.

The \textit{Fermi}-LAT one-day bin light curve is shown together with the multi-frequency light curves in Section 7. Since the source is not always significantly detected, flux upper limits at the 95\% confidence level were calculated for each time bin where the test statistic{\footnote{a maximum likelihood test statistic TS = 2$\Delta$log(likelihood) between models with and without a point source at the position of PKS 1510--089 \citep{mattox96}}} (TS) value for the source was TS$<$25. The light curve shows that the flaring activity had a duration of about 55 days in $\gamma$ rays and consisted of several distinct flares.

As \object{PKS~1510$-$089} is known to show variability on time scales of less than a day \citep{saito,brown}, we also searched for shorter-time-scale variability within the brightest flaring epoch, 2012 February 17 to March 8 (MJD~55974-55994), and produced light curves in bins of 1.5 hours and 3 hours (the latter is shown in Fig.~\ref{combined_lightcurve}). We systematically examined the light curves and calculated the doubling times ($t_d$) between significant (TS$>$25) adjacent bins following $t_d = \Delta t\,\ln 2 / \ln(F_{\mathrm{max}}/F_{\mathrm{min}})$. Excluding flux variations that were within 1$\sigma$ and doubling times with errors larger than 50\%, the shortest value that we derive for this period is $t_d = 1.5\pm0.6$\,hours.
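The doubling-time estimate just described can be sketched in a few lines; the flux and TS values below are invented placeholders for a 3-hour-binned light curve (the actual analysis uses the 1.5-hour and 3-hour binned \emph{Fermi}-LAT light curves):
\begin{verbatim}
import numpy as np

# Doubling times between adjacent significant bins,
#   t_d = dt * ln(2) / ln(F_max / F_min).
dt = 3.0                                           # bin width in hours
flux = np.array([2.1e-6, 4.5e-6, 9.8e-6, 7.2e-6])  # ph cm^-2 s^-1 (made up)
ts = np.array([40, 90, 260, 180])                  # test statistic per bin

t_d = []
for f1, f2, s1, s2 in zip(flux[:-1], flux[1:], ts[:-1], ts[1:]):
    if min(s1, s2) > 25 and f1 != f2:              # both bins significant
        fmax, fmin = max(f1, f2), min(f1, f2)
        t_d.append(dt * np.log(2) / np.log(fmax / fmin))

print(t_d)   # the shortest value estimates the variability time scale
\end{verbatim}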
We considered the \emph{Fermi}-LAT data of the individual light-curve bins, fitting them with a power-law model in order to investigate spectral evolution in the HE range. In this analysis we do not find evidence for such evolution, although we note that the source spectrum is better represented by the log-parabola shape in several time intervals, so the power-law fit may not adequately reproduce the source spectral shape. Additionally, it is apparent that during the high state the spectral index is significantly harder than in the low or mean states (see below).

The SED was obtained by combining all events of the time intervals coincident with the two VHE detection periods, i.e. from February 19 to March 5 (MJD~55976-55991) and from March 15 to April 3 (MJD~56001-56020). For comparison we analysed the mean state in 2012 January-April (MJD~55927-56025), a low-state SED, which consists of the data taken in 2012 January and April (MJD~55927-55954 and 56007-56025), and a high-state SED, which consists of all time periods when the {\it Fermi} flux was $>6\cdot10^{-6}$~ph~cm$^{-2}$ s$^{-1}$. The log-parabola model is significantly preferred over the power law in all the time intervals considered for this SED analysis (with 6$\sigma$ significance in the MAGIC observing epoch and 3$\sigma$ in the low state). The detailed results are shown in Table~\ref{1510fit}.

\subsection{Gamma-ray results}

We compared the results of the observations in HE and VHE $\gamma$ rays. As discussed in the previous sections, the HE $\gamma$-ray flux is variable on time scales shorter than a day. It therefore appears that fast variability can explain the small mismatches between the daily {\it Fermi}-LAT fluxes and the two-day AGILE-GRID fluxes. These light curves are shown together with the multi-frequency light curves in Section 7.

The variability amplitude of the HE $\gamma$-ray flux is rather large (more than one order of magnitude in flux) in the first MAGIC observing period (MJD~55976 to 55991). Still, within this period, MAGIC observed no statistically significant variability from the source. In Fig.~\ref{combined_lightcurve} the {\it Fermi}-LAT light curve in three-hour bins is shown. The vertical lines show the MAGIC observation times, revealing that the MAGIC observations missed all the periods of fast HE $\gamma$-ray variability; it was therefore to be expected that no fast variability would be detected in the MAGIC observations. Apparently the MAGIC observations also missed the highest peaks of the HE $\gamma$-ray light curve. The maximum flux measured simultaneously with the MAGIC observations is F\,($>$100MeV)$\sim8\cdot10^{-6}$ ph~cm$^{-2}$ s$^{-1}$ and the average of the strictly simultaneous bins is F\,($>$100MeV)$\sim$4.4$\cdot10^{-6}$ ph~cm$^{-2}$ s$^{-1}$.
For the second MAGIC observation window in March-April (from MJD~56001 to 56020), fast variability could not be investigated because of the lower HE $\gamma$-ray state of the source. After March 23 (MJD~56009), the source was no longer detected on daily scales in HE $\gamma$ rays, the daily upper limits being below $1.0\cdot10^{-6}$ ph~cm$^{-2}$ s$^{-1}$. Therefore, in total, the HE $\gamma$-ray flux variability amplitude within the windows strictly simultaneous with the MAGIC observing windows was a factor of $\sim$eight on nightly scales, which could go undetected in the MAGIC light curve given the overall low flux, as discussed in Section 2.2. It is therefore not possible to conclude whether the lack of significant variability in the VHE $\gamma$-ray band has a real physical origin or whether it is simply an observational bias (either due to unfortunate sampling or due to low photon statistics). \begin{figure} \includegraphics[width=0.18\textwidth, angle=-90]{fermi_magic.eps} \caption{{\it Fermi}-LAT $>$100\,MeV light curve in three-hour bins for the first MAGIC observing period. The vertical lines represent the MAGIC observing times (all shorter than three hours in duration), showing that the MAGIC observation windows missed the times of the fastest HE $\gamma$-ray variability.} \label{combined_lightcurve} \end{figure} The SED of \object{PKS~1510$-$089} from $\sim$100\,MeV to $\sim$400\,GeV is presented in Fig.~\ref{SED_Fermi_MAGIC}. The HE $\gamma$-ray data from AGILE-GRID and {\it Fermi}-LAT cover slightly different periods (AGILE from MJD~55977.5 to 55991.5 and {\it Fermi}-LAT from MJD~55976 to 55991 and from 56001 to 56020). The AGILE-GRID data consist of flaring-state data only, while the {\it Fermi}-LAT spectrum combines all events of the time intervals coincident with the MAGIC observation windows. As suggested by AGILE and confirmed by {\it Fermi}-LAT, the brighter states are characterised by a hardening of the HE spectrum, and therefore the higher flux observed by AGILE at 2\,GeV is expected. The peak of the SED is at $\sim$100\,MeV. The log-parabola fit and the errors of the {\it Fermi}-LAT spectra have been extrapolated to the MAGIC energy range. We also show the extrapolation taking into account the EBL absorption using the model of \citet{dominguez}. The VHE $\gamma$-ray spectrum observed by MAGIC connects smoothly with this extrapolation, suggesting that the emission originates in the same region. \begin{figure} \includegraphics[width=0.48\textwidth]{PKS1510_2012SED_LATMAGIC_logP.eps} \caption{$\gamma$-ray SED constructed from AGILE, {\it Fermi}-LAT and MAGIC data. The AGILE-GRID data (grey filled squares) correspond to the data of Flare-II (from MJD~55977.5 to 55991.5). The {\it Fermi}-LAT spectrum (black open circles) combines all events of the time intervals coincident with the MAGIC observation windows (MJD~55976 to 55991 and from 56001 to 56020), with the blue lines showing the log-parabola fit to the data and its statistical uncertainty (the thinner lines). The fit and the errors of the {\it Fermi}-LAT spectra have been extrapolated to the MAGIC energy range. The dashed blue lines show the extrapolation with the EBL absorption effects included. The MAGIC data points are shown with black filled squares (observed) and red filled circles (de-absorbed). The corresponding shaded region indicates the statistical uncertainty of the spectral fitting (same as in Fig.~2).
The grey data show, for comparison, the low-intermediate state spectrum of the source as measured by AGILE-GRID (triangles) and {\it Fermi}-LAT (open triangles) and the high-state SED as measured by {\it Fermi}-LAT (open squares).} \label{SED_Fermi_MAGIC} \end{figure} \section{Swift X-ray observations, data analysis and results} The {\it Swift} satellite \citep{gehrels04} performed 23 ToO observations of PKS~1510$-$089 between 2012 February 2 and April 5 (MJD~55959-56022), triggered by the strong activity of the source detected first by AGILE \citep{lucarelli} and {\it Fermi}-LAT at HE $\gamma$-ray energies, and then by MAGIC at TeV energies \citep{atel}. The observations were performed with all three onboard instruments: the X-ray Telescope (XRT; \citet{burrows05}, 0.2--10.0 keV), the Ultraviolet Optical Telescope (UVOT; \citet{UVOT}, 170--600 nm), and the Burst Alert Telescope (BAT; \citet{barthelmy05}, 15--150 keV). For the {\it Swift}-XRT data analysis we considered the 20 observations with exposure times longer than 500 seconds. In addition we summed the data of the three observations performed on February 19 in order to have higher statistics. The XRT data were processed with standard procedures (\texttt{xrtpipeline v0.12.6}) and with standard filtering and screening criteria, using the \texttt{Heasoft} package (v6.11). The source count rate was low during the entire campaign ($<$ 0.5 counts s$^{-1}$), so we only considered photon counting data and further selected XRT event grades 0--12. Pile-up correction was not required. Source events were extracted from a circular region with a radius of 20 pixels (one pixel $\sim$ 2.36$"$), while background events were extracted from a 50 pixel radius circular region not containing any contaminating sources and lying away from the source region. The spectral redistribution matrices v013 in the Calibration database maintained by HEASARC were used. The adopted energy range for spectral fitting is 0.3--10\,keV. When the number of counts was less than 200 the Cash statistic \citep{cash79} on ungrouped data was used. All the other spectra were rebinned with a minimum of 20 counts per energy bin to allow $\chi{^2}$ fitting within {\sc XSPEC} \citep[v12.6.0;][]{arnaud96}. We fitted the individual spectra with a simple absorbed power law, with the neutral hydrogen column density fixed to its Galactic value \citep[6.89 $\cdot$ 10$^{20}$ cm$ ^{-2}$;][]{kalberla05}. The fit results are reported in Table~\ref{XRT_1510}. During the observations {\it Swift}-XRT detected the source with a flux, F\,(0.3--10 keV), in the range (0.7-1.2)$\cdot$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ (see Table~\ref{XRT_1510}), comparable to the flux observed in 2009 March, during a period of high HE $\gamma$-ray activity \citep{dammando11, abdo10}, but lower with respect to the high flux level observed in 2006 August \citep{kataoka08}. The light curve is shown in Section 7, together with the multi-frequency data. \begin{figure} \includegraphics[width=0.55\textwidth]{1510_XRT.ps} \caption{Flux (0.3--10 keV) versus photon index for {\it Swift}-XRT. Although there was only marginal X-ray variability during the observations, the plot shows a hint of a harder-when-brighter trend.} \label{Swift_hardness} \end{figure} The flux versus photon index plot is shown in Fig.~\ref{Swift_hardness}. At higher fluxes the photon index seems to become harder. This behaviour is consistent with the harder-when-brighter trend reported in \citet{kataoka08} and \citet{dammando11}.
As discussed in these papers, such a trend indicates that in bright states the X-ray emission is completely dominated by external Compton emission, while in lower states there is also a contribution from a soft-excess component, which could be, e.g., blurred reflection, Comptonization of the thermal disc emission, or a mixture of synchrotron, external Compton and SSC emission. We also investigated the {\it Swift}-BAT data using the {\it Swift}-BAT Hard X-ray Transient Monitor \citep{krimm}. In the BAT data for 2012 January-April there is only a hint of a signal (2.5$\sigma$) on 2012 February 9 (MJD~55966), with a rate of ($0.0033\pm0.0013$) counts s$^{-1}$ cm$^{-2}$, corresponding to 15\,mCrab in the 15-50\,keV energy band. As a comparison, in 2009 March the high flux observed by BAT in hard X-rays was 40\,mCrab \citep{dammando11}. \begin{table*}[th!] \centering \caption{Log and fitting results of the {\it Swift}-XRT observations} \begin{tabular}{ccccc} \hline \hline \noalign{\smallskip} \multicolumn{1}{c}{Date} & \multicolumn{1}{c}{Net Exp. Time} & \multicolumn{1}{c}{Photon Index} & \multicolumn{1}{c}{Flux 0.3--10.0 keV\tablefootmark{a}} & \multicolumn{1}{c}{$ \chi^{2}_{\rm red}$ (d.o.f.) / Cash} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{ (sec) } & \multicolumn{1}{c}{$\Gamma$}& \multicolumn{1}{c}{($\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$)} & \multicolumn{1}{c}{} \\ \hline \noalign{\smallskip} $2012-02-02$ & 2470 & $1.35 \pm 0.17$ & $7.8 \pm 0.7$ & Cash \\ $2012-02-04$ & 2450 & $1.42 \pm 0.16$ & $10.7 \pm 1.2$ & 0.85 (19) \\ $2012-02-05$ & 2655 & $1.27 \pm 0.16$ & $10.1 \pm 1.1$ & 1.00 (18) \\ $2012-02-07$ & 2140 & $1.56 \pm 0.16$ & $8.0 \pm 1.2$ & 1.00 (14) \\ $2012-02-17$ & 789 & $1.65 \pm 0.21$ & $8.7 \pm 1.7$ & Cash \\ $2012-02-19$ & 5781 & $1.63 \pm 0.09$ & $7.9 \pm 0.6$ & 0.95 (39) \\ $2012-02-21$ & 1286 & $1.60 \pm 0.23$ & $8.6 \pm 1.5$ & 0.76 (8) \\ $2012-02-22$ & 2700 & $1.51 \pm 0.14$ & $9.0 \pm 1.0$ & 1.05 (19) \\ $2012-02-23$ & 2989 & $1.43 \pm 0.13$ & $9.6 \pm 1.1$ & 0.85 (22) \\ $2012-03-01$ & 1024 & $1.37 \pm 0.18$ & $10.8 \pm 1.7$ & Cash \\ $2012-03-18$ & 3224 & $1.36 \pm 0.13$ & $11.6 \pm 1.3$ & 0.77 (20) \\ $2012-03-20$ & 1351 & $1.45 \pm 0.17$ & $8.9 \pm 1.5$ & Cash \\ $2012-03-22$ & 2477 & $1.28 \pm 0.21$ & $8.6 \pm 1.6$ & 1.06 (9) \\ $2012-03-24$ & 1219 & $1.31 \pm 0.17$ & $12.5 \pm 1.9$ & Cash \\ $2012-03-30$ & 2695 & $1.58 \pm 0.13$ & $7.9 \pm 0.9$ & 1.01 (17) \\ $2012-04-01$ & 2620 & $1.59 \pm 0.14$ & $8.6 \pm 0.9$ & 0.71 (17) \\ $2012-04-03$ & 1596 & $1.40 \pm 0.15$ & $9.3 \pm 1.2$ & Cash \\ $2012-04-05$ & 1196 & $1.61 \pm 0.20$ & $7.1 \pm 1.2$ & Cash \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} \end{tabular} \\ \tablefoot{ \tablefoottext{a}{Observed flux} } \label{XRT_1510} \end{table*} \section{Ultraviolet, optical, near infrared observations, data analysis, and results} \object{PKS~1510$-$089} is included in many ongoing optical blazar monitoring programmes, which provide good coverage from the ultraviolet (UV) to the infrared (IR) bands (Fig.~\ref{LC_uv_opt_ir}). Polarimetric observations of the source were also performed. The participating observatories are described in Sections 5.1-5.6 and the results of the optical observations are discussed in Section 5.7. \subsection{Ultraviolet and optical photometry from UVOT} The UVOT covers the 180--600\,nm wavelength range using six filters: $UVW2$, $UVM2$, $UVW1$, $U$, $B$ and $V$ \citep{poole}.
We reduced the {\it Swift}/UVOT data with the \texttt{Heasoft} package version 6.12 and the 20111031 release of the {\it Swift}/UVOTA CALDB. Multiple exposures in the same filter at the same epoch were summed with {\tt uvotimsum}, and aperture photometry was then performed with the task {\tt uvotsource}. Source counts were extracted from a circular region with a 5 arcsec radius centred on the source. Background counts were estimated in a surrounding annulus with inner and outer radii of 15 and 25 arcsec, respectively. The background region was selected such that it does not contain any contaminating sources. We also compiled SEDs for all 19 epochs for which observations in all six UVOT filters were available. The $\lambda_{\rm eff}$ and count-rate-to-flux conversion factors were derived by convolving the source spectrum with the effective areas of the UV filters. In the same way we calculated the Galactic extinction in the various bands, using the \cite{cardelli} law and setting $R_V=3.1$ and $A_B=0.416$ after \cite{schlegel}. The results were used to obtain de-reddened flux densities. For the sake of clarity, only four of the 19 SEDs were combined with the optical and IR data; they are shown in Fig.~\ref{SED_uv_opt_ir}. These epochs correspond to the pre-outburst (2012 February 7, MJD~55964), two local maxima (2012 February 24, MJD~55981 and 2012 March 1, MJD~55987) and post-outburst (2012 March 26, MJD~56012) phases of the light curves. \subsection{Optical $R$-band photometry from KVA} \object{PKS~1510$-$089} was observed as a part of the Tuorla blazar monitoring programme{\footnote{\texttt{http://users.utu.fi/kani/1m}}}, which provides optical support observations for the MAGIC telescopes and participates in the GASP-WEBT collaboration, with the KVA 35\,cm telescope at the Observatorio del Roque de los Muchachos, La Palma. The observations started on 2012 January 14 (MJD 55940) and after 2012 February 2 (MJD 55959) the source was observed every night, weather and moon conditions allowing. The data were reduced using the standard data analysis pipeline (Nilsson et al. in preparation) and the fluxes were measured with differential photometry, using the comparison stars from \citet{villata97}. \subsection{Optical photometry and polarisation from Steward and Perkins Observatories} Optical (4000-7550\,\AA\,) spectropolarimetry and differential spectrophotometry were performed at the Steward Observatory 2.3\,m Bok Telescope using the SPOL CCD Imaging/Spectropolarimeter. These observations were obtained as part of an ongoing monitoring programme of $\gamma$-ray bright blazars in support of the {\it Fermi} mission\footnote{\texttt{http://james.as.arizona.edu/$\sim$psmith/Fermi}}. The observations took place on 2012 January 22-29, 2012 February 13-21 and 2012 March 21-28 (MJD~55948-55955, 55970-55978, 56007-56014). The data analysis pipeline is described in \citet{smith}. Polarimetric and photometric $R$-band observations were also provided by the 1.8\,m Perkins telescope of Lowell Observatory, equipped with PRISM (Perkins Reimaging System), in 2012 March. The data analysis was done following the standard procedures as in \citet[][]{chatterjee08}. Because the EVPA has a $\pm180^\circ\times n$ (where $n$ = 1, 2, \dots) ambiguity, we selected the values such that the differences between adjacent points were minimised.
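This rule of minimising point-to-point differences can be implemented as in the following minimal sketch (our actual pipelines may differ in detail, and one manual exception is described next):

\begin{verbatim}
import numpy as np

def unwrap_evpa(evpa_deg):
    # shift each EVPA value by a multiple of 180 deg so that the
    # jump to the previous (already adjusted) point is minimised
    out = [float(evpa_deg[0])]
    for chi in evpa_deg[1:]:
        n = round((out[-1] - chi) / 180.0)
        out.append(chi + 180.0 * n)
    return np.array(out)

# e.g. a raw sequence 170, 10, 30 deg becomes 170, 190, 210 deg
print(unwrap_evpa([170.0, 10.0, 30.0]))
\end{verbatim}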
There was one data point (Fig.~\ref{LC_uv_opt_ir}) which differed by $\sim90^\circ$ from the previous observation; for this point we selected the EVPA value which does not cause a change in the direction of rotation between the two points. \begin{figure*} \includegraphics[width=0.85\textwidth, angle=270]{uv_opt_ir.eps} \caption{Light curves of \object{PKS~1510$-$089} in the UV, optical and near IR bands. The optical polarisation degree and angle are shown in the two top panels. The next panels show the UV ({\it Swift}/UVOT, middle), optical (KVA/GASP/UVOT, second from bottom) and near IR (REM and GASP, bottom) light curves of the source. The numbers in the second panel from the top refer to the rotations of the EVPA discussed in the text. The vertical line indicates the time when the PA changes by $\sim90^{\circ}$ between the highlighted point and the previous point (see text). The fluxes are given in mJy and are not corrected for Galactic absorption.} \label{LC_uv_opt_ir} \end{figure*} \subsection{Optical and near infrared observations from GASP-WEBT} Additional $R$-band monitoring data were collected by the GLAST-AGILE support programme (GASP) of the Whole Earth Blazar Telescope{\footnote{\texttt{http://www.oato.inaf.it/blazars/webt}}} (WEBT). These GASP observations of \object{PKS~1510$-$089} were performed by the following observatories: Abastumani, Calar Alto, Crimean, Lulin, Rozhen, St. Petersburg and Teide. The source magnitude is calculated with respect to reference stars 2 to 6 calibrated by \cite{raiteri}. The GASP near IR data were acquired in the $J$, $H$, and $K_s$ bands with the IAC80 and Carlos Sanchez telescopes at Teide Observatory. Their calibration was performed using field stars with the most reliable photometry (signal-to-noise ratio $\rm S/N \ga 10$ and uncertainty $\sigma < 0.11$) in the Two Micron All Sky Survey\footnote{\texttt{http://www.ipac.caltech.edu/2mass/}} (2MASS) catalogue. \subsection{Near infrared observations from REM} REM (Rapid Eye Mount) is a 60\,cm diameter fast-reacting telescope located at La Silla, Chile. The telescope has two instruments: REMIR, an infrared imaging camera, and ROSS, a visible imager and slitless spectrograph \citep{Zer01,Chi03,Cov04a,Cov04b}. \object{PKS~1510$-$089} was observed by REM starting on 2012 January 25 (MJD~55951) during 28 nights. Typical exposure durations were 30\,s in the $J$, $H$, and $K_s$ filters. The data were analysed in a standard way using tools provided by the ESO-Eclipse package \citep{Dev97}. Standard aperture photometry was derived and the results were calibrated using a suitable number of well-exposed 2MASS objects in the field. \subsection{Optical polarimetry observations from Liverpool Telescope} RINGO2 is a fast-readout imaging polarimeter mounted on the fully robotic 2\,m Liverpool Telescope at the Observatorio del Roque de los Muchachos, La Palma. RINGO2 uses a hybrid $V+R$ filter, consisting of a 3\,mm Schott GG475 filter cemented to a 2\,mm KG3 filter. \object{PKS~1510$-$089} was observed as part of a monitoring programme; the observations started on 2012 January 19 (MJD~55945) with rather sparse sampling, but after 2012 February 21 (MJD~55978) the source was observed every night, weather and moon conditions allowing. The data were reduced as described in \citet{aleksic13} using a data reduction pipeline written for the monitoring programme.
Inspection of the data revealed that, due to the combination of bright moon, partial cloud coverage and the low average polarisation of \object{PKS~1510$-$089}, the $S/N$ was very low during many nights and no significant polarisation was detected. In order to improve the $S/N$ we averaged observations over five-day bins by first averaging Q/I and U/I and then computing the unbiased degree of polarisation $p_0$ and its error as in \citet{aleksic13}, with the difference that the error of the EVPA was computed using the confidence intervals in \citet{Naghizadeh-Khouei}, which are better suited for low-S/N data than the $\sigma(\rm EVPA) = 28.65^{\circ}\cdot\sigma_p/p$ formula used in \citet{aleksic13}. \subsection{Results} The optical-UV and polarisation light curves from 2012 January to April (MJD~55952-56025) are shown in Fig.~\ref{LC_uv_opt_ir}. The light curves show an increasing flux in the near IR to UV bands, peaking on 2012 February 25 (MJD~55982), when the optical flux more than doubled and reached a maximum of 2.23$\pm$0.39 mJy in the $R$ band. After that the general trend of the light curves is decreasing. On top of this flare, the $R$-band light curve, which is the best sampled one, shows several smaller amplitude ($<$0.5\,mJy) local minima and maxima. In particular there is a dip in the light curve on 2012 February 19 (MJD~55976.5) and three local maxima after the major peak (2012 March 1, March 5 and March 13; MJD~55987, 55990 and 55999). The flux densities varied by 5\,mJy ($K_s$), 1.5\,mJy ($R$) and 0.2\,mJy ($UVW1$). Hence, the source variability amplitude decreases as the frequency increases, as is usually found in FSRQs. This can be explained by the accretion disc emission diluting the UV emission from the jet \citep[e.g.][]{raiteri08, raiteri12}; the emission originating from the disc therefore needs to be taken into account in the SED modelling (see Section 8). The optical polarisation degree was generally low ($<10\%$) during 2012 January-April compared to previous observations \citep[e.g.][]{marscher}. Therefore the error bars of the measurements are rather large. The EVPA showed three rotations of $>180^\circ$. The first one started in the beginning of the campaign and ended around 2012 February 20 (MJD~55977, Fig.~\ref{LC_uv_opt_ir}). The rotation was $\sim$380$^\circ$, with a rotation rate of $\sim$10$^\circ$$/$day in the counter-clockwise direction. The visual appearance of the rotation curve is rather smooth, but it is poorly sampled between January 29 and February 13 (MJD~55955 and 55970). The second rotation started on February 20 (MJD~55977) and ended on February 25 (MJD~55982), lasting only five days. The rotation is $\sim250^\circ$ and its direction is opposite to that of the first rotation (i.e. clockwise). After these two rotations the EVPA was stable at $\sim$0$^\circ$ until March 7 (MJD~55993), when the third rotation started in a counter-clockwise direction; it ended around March 14 (MJD~56000) at $\sim$150$^\circ$. On March 22 (MJD~56008) it dropped to $\sim$80$^\circ$ and remained stable until the end of the campaign. The comparison of these rotations with the photometric light curve and the polarisation degree behaviour shows that the first rotation takes place during an increase in the optical flux. The second rotation starts when there is a small dip in the optical $R$-band light curve and a local minimum in the polarisation degree. The rotation stops when the optical flare peaks.
The third rotation starts with a small optical outburst and stops when the decay phase of the optical flare has reached a plateau. We constructed SEDs from IR to UV for four distinct epochs: 2012 February 7 (MJD~55964, before the outburst), February 24 (MJD~55981, peak of the outburst), March 1 (MJD~55987, second local maximum in the $R$-band light curve) and March 26 (MJD~56012, quiescent state after the outbursts), shown in Fig.~\ref{SED_uv_opt_ir}. A softening of the SED from the pre-outburst epoch to the epochs of the outburst maxima is clearly visible. In the first and last SEDs, taken before and after the outburst, the thermal contribution from the accretion disc is again clearly visible as a strongly rising trend in the optical and UV bands. This behaviour was also seen for the 2009 outburst reported in \citet{dammando11}. \begin{figure} \includegraphics[width=0.35\textwidth, angle=270]{opt_sed.eps} \caption{Evolution of the infrared to ultraviolet SED from the pre-outburst (MJD~55964) to two local maxima (MJD~55981 and 55987) and to the post-outburst (MJD~56012) phases of the light curves. The data are corrected for Galactic absorption using \cite{schlegel}.} \label{SED_uv_opt_ir} \end{figure} \section{Radio observations, data analysis, and results} \object{PKS~1510$-$089} is included in numerous blazar radio monitoring programmes, extending from 2.6\,GHz to 230\,GHz, carried out by F-GAMMA, Medicina, UMRAO, OVRO, Mets\"ahovi, the VLBA and the Submillimeter Array. The observations collected for this paper are presented in Sections 6.1-6.7 and the results are discussed in Section 6.8. \begin{figure*} \includegraphics[width=0.41\textwidth, angle=270]{radioLC_OVRO.eps} \caption{15\,GHz, 37\,GHz and 43\,GHz VLBA core long-term light curves from MJD~55750 (2011 July 8) to MJD~56030 (2012 April 13). The flux of the VLBA core at 43\,GHz traces the shape of the 37\,GHz light curve, indicating that the major part of the total flux originates there. Moreover, the new components found at 43\,GHz coincide with flux increases in the 37\,GHz band. The symbols at the bottom of the plot show the zero-separation epochs, with error bars, of the components K11 and K12 from the 43\,GHz VLBA core (see text).} \label{radio_long} \end{figure*} \subsection{Submillimeter Array} The 230\,GHz (1.3\,mm) light curve was obtained at the Submillimeter Array (SMA) on Mauna Kea (Hawaii). The SMA is an 8-element interferometer, consisting of 6\,m dishes that may be arranged into configurations with baselines as long as 509\,m, producing a synthesised beam of sub-arcsecond width. \object{PKS~1510$-$089} is included in an ongoing monitoring programme at the SMA to determine the fluxes of compact extragalactic radio sources that can be used as calibrators at mm wavelengths \citep{gurwell07}. Available potential calibrators are usually observed for three to five minutes, and the measured source signal strength is calibrated against known standards, typically solar system objects (Titan, Uranus, Neptune, or Callisto). Data from this programme are updated regularly and are available at the SMA website\footnote{\texttt{http://sma1.sma.hawaii.edu/callist/callist.html}}. \subsection{Mets\"ahovi Radio Telescope} The 37\,GHz observations were made with the 13.7\,m diameter Mets\"ahovi radio telescope, a radome-enclosed paraboloid antenna situated in Finland. The measurements were made with a 1\,GHz-band dual-beam receiver centred at 36.8\,GHz. The beamwidth is 2.4 arcmin.
The high electron mobility pseudomorphic transistor front end operates at room temperature. The observations were performed in an ON--ON configuration, alternating the source between the feed horns, with the second horn observing the sky. The flux density scale was set by observations of the calibrator DR~21. The sources NGC~7027, 3C~274 and 3C~84 were used as secondary calibrators. A detailed description of the data reduction and analysis is given in \citet{Metsaehovi}. The error estimate in the flux density includes contributions from the measurement rms and the uncertainty of the absolute calibration. The \object{PKS~1510$-$089} observations were made as part of the regular monitoring programme and the GASP-WEBT campaign. \subsection{Owens Valley Radio Observatory} Regular 15\,GHz observations of \object{PKS~1510$-$089} were carried out as part of a high-cadence $\gamma$-ray blazar monitoring programme using the Owens Valley Radio Observatory (OVRO) 40\,m telescope in Owens Valley, California \citep{Richards11}. This programme, which commenced in late 2007, now includes about 1800 sources, each observed with a nominal twice-per-week cadence. The OVRO 40\,m uses off-axis dual-beam optics and a cryogenic high electron mobility transistor low-noise amplifier with a 15.0\,GHz centre frequency and 3\,GHz bandwidth. The telescope and receiver combination produces a pair of approximately Gaussian beams (full width at half maximum, FWHM, of 157 arcsec), separated in azimuth by 12.95 arcmin. The total system noise temperature is about 52\,K, including receiver, atmosphere, ground, and CMB contributions. The two sky beams were Dicke-switched using the OFF-source beam as a reference, and the source was alternated between the two beams in an ON--ON fashion to remove atmospheric and ground contamination. A noise level of approximately 3--4\,mJy in quadrature with about 2\% additional uncertainty, mostly due to pointing errors, is achieved in a 70\,s integration period. Calibrations were performed using a temperature-stable diode noise source to remove receiver gain drifts, and the flux density scale was derived from observations of 3C~286, assuming the \citet{Baars77} value of 3.44\,Jy at 15.0\,GHz. The systematic uncertainty of about 5\% in the flux density scale is not included in the error bars. Complete details of the reduction and calibration procedure are found in \citet{Richards11}. \begin{figure*} \includegraphics[width=0.75\textwidth, angle=270]{radioLC_short.eps} \caption{High frequency (top), medium frequency (middle) and low frequency (bottom) light curves from the SMA, Mets\"ahovi, OVRO, UMRAO, Medicina and F-GAMMA programmes for the campaign period.} \label{radio_short} \end{figure*} \subsection{F-GAMMA programme} The cm/mm radio light curves of PKS\,1510--089 have been obtained within the framework of a {\sl Fermi}-GST related monitoring programme of $\gamma$-ray blazars \citep[F-GAMMA programme,][]{2007AIPC..921..249F,2008arXiv0809.3912A}. The overall frequency range spans from 2.64\,GHz to 142\,GHz, using the 100\,m radio telescope located in Effelsberg, Germany, and the IRAM 30\,m telescope located on Pico Veleta in the Spanish Sierra Nevada. The Effelsberg measurements were conducted with the secondary focus heterodyne receivers at 2.64, 4.85, 8.35, 10.45, 14.60, 23.05, 32.00 and 43.00\,GHz.
The observations were performed quasi-simultaneously with cross-scans, slewing in azimuth and elevation across the source position with an adaptive number of sub-scans until the desired sensitivity was reached \citep[for details, see][]{2008A&A...490.1019F, 2008arXiv0809.3912A}. Subsequently, pointing offset, gain, atmospheric opacity and sensitivity corrections were applied to the data. The IRAM 30\,m observations were carried out with calibrated cross-scans using the new EMIR horizontal and vertical polarisation receivers operating at 86.2 and 142.3\,GHz. The opacity-corrected intensities were converted into the standard temperature scale and finally corrected for small remaining pointing offsets and systematic gain-elevation effects. The conversion to the standard flux density scale was done using the instantaneous conversion factors derived from frequently observed primary (Mars, Uranus) and secondary (W3(OH), K3-50A, NGC\,7027) calibrators. \subsection{UMRAO} Centimetre-band total flux density observations were obtained with the University of Michigan Radio Observatory (UMRAO) 26\,m paraboloid located in Dexter, Michigan, USA. The instrument is equipped with transistor-based radiometers operating at frequencies centred at 4.8, 8.0, and 14.5\,GHz with bandwidths of 0.68, 0.79, and 1.68\,GHz, respectively. Dual-horn feed systems are used at 8.0 and 14.5\,GHz, while at 4.8\,GHz a single-horn, mode-switching receiver is employed. Each observation consisted of a series of 8 to 16 individual measurements over an approximately 25 to 45 minute time period, utilizing an ON-OFF observing technique at 4.8\,GHz, and an ON-ON technique (switching the target source between the two feed horns, which are closely spaced on the sky) at 8.0 and 14.5\,GHz. As part of the observing procedure, drift scans were made across strong sources to verify the telescope pointing correction curves, and observations of nearby calibrators were obtained every one to two hours to correct for temporal changes in the antenna aperture efficiency. The \object{PKS~1510$-$089} observations were made as part of the regular monitoring programme and the GASP-WEBT campaign. \subsection{Medicina} The Medicina telescope is a 32\,m parabolic antenna located 30\,km from Bologna, Italy, performing observations at both 5 and 8.4\,GHz\footnote{\texttt{http://www.med.ira.inaf.it/parabola\_page\_EN.htm}}. The FWHM beamwidth is 38.7 arcmin divided by the frequency in GHz. We used the new enhanced single-dish control system (ESCS) for data acquisition, which provides enhanced sensitivity and supports observations with the cross-scan technique. All observations were performed at both 5 and 8.4\,GHz; the typical on-source time was 1.5 minutes and the flux density was calibrated with respect to 3C~286. PKS~1510$-$089 was observed during 2012 January-April as part of the regular monitoring programme and the GASP-WEBT campaign. \begin{figure} \includegraphics[width=0.35\textwidth, angle=270]{radio_spectra_color.eps} \caption{Evolution of the radio spectra over the campaign period, from 2012 January 28 to April 17 (MJD~55954, 55975, 55990, 56009, 56018 and 56034). The first radio outburst, ``11'' in Fig.~\ref{radio_long}, dominates the spectra in the first epoch, while in the second epoch the new outburst ``12a'' of Fig.~\ref{radio_long} is apparent at the high frequencies.
In the third epoch ``12b'' becomes visible at the highest frequencies, with the peak moving to lower frequencies in the fourth and fifth epochs.} \label{radio_sed} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{1510_vlbaK11n.eps} \hspace{10 mm} \includegraphics[width=0.49\textwidth]{1510_vlbaK12n.eps} \caption{43\,GHz total (contours) and polarised (colour scale) intensity images of PKS~1510$-$089 from 2011 September to 2012 April (top) and 2012 April to October (bottom), with $S_{\rm peak}$=2.58~Jy/beam, $S_{\rm peak}^{\rm pol}$=46~mJy/beam, and a Gaussian restoring beam of 0.14$\times$0.39~mas$^2$ at $PA$=$-10^\circ$ (in the bottom right corner); contours represent 0.25, 0.5,...,64, 90\% of the peak intensity; line segments within the colour scale show the direction of linear polarisation; solid lines indicate the positions of the components across the epochs: the core A0, knot K11, and knot K12.} \label{k12} \end{figure} \subsection{Very Long Baseline Array} The VLBA is a system of ten radio-telescope antennas, each with a dish 25\,m in diameter, located at stations ranging from Mauna Kea in Hawaii to St. Croix in the U.S. Virgin Islands. VLBA observations were performed as a part of the Boston University $\gamma$-ray blazar monitoring programme at 43\,GHz. The observations were carried out with the VLBA recording system using eight 8\,MHz wide channels, each in right and left circular polarisation, with 15$-$20 scans of three to five minutes duration. All ten antennas were used, except at epochs affected by weather or receiver failure. The observations are performed about once per month. The data were reduced and modelled in the same manner as described in \citet{jorstad05,jorstad07}. In short: the initial correlation was carried out at the National Radio Astronomy Observatory (NRAO) Array Operations Center in Socorro, New Mexico, and subsequent calibration was performed with the astronomical image processing system (AIPS) software supplied by NRAO, while images were made with the Caltech software DIFMAP. These calibrations included application of the nominal antenna-based gain curves and system temperatures, as well as correction for sky opacity, followed by iterative imaging plus phase and amplitude self-calibration. The flux-density correction factors from \citet{jorstad05} were used for the final adjustment of the flux-density scale in the images. In addition to the kinematics of the jet, the total polarisation data and the polarisation of the VLBA core were also analysed. These analyses also followed the methods of \citet{jorstad05}. \subsection{Results} In the second half of 2011 \object{PKS~1510$-$089} showed an extremely high cm- and mm-band radio flux \citep{fgamma-atel, medicina-atel,brazil-atel}. During the outburst the flux increased from 2\,Jy to 7\,Jy. The outburst peaked first at the higher frequencies: the peak at 37\,GHz was reached around 2011 October 21 (MJD~55855) and at 15\,GHz on $\sim$2011 December 15 (MJD~55910, see Fig.~\ref{radio_long}, outburst ``11''). After the maximum was reached the two radio light curves showed decreasing flux. However, there are several smaller amplitude outbursts (amplitudes 1-2\,Jy) visible in both light curves, peaking on 2012 January 20 and February 25 (MJD~55946 and 55982) at 15\,GHz. The last outburst at 15\,GHz appears to be a sum of two outbursts seen at 37\,GHz, peaking on 2012 February 8 and February 25 (MJD~55965 and 55982, outbursts ``12a'' and ``12b'' in Fig.~\ref{radio_long}).
Figure~\ref{radio_short} shows the radio light curves at all frequencies from the observing campaign period. At the lowest frequencies (2-8\,GHz) there is very little variability, while variability is clearly present at all higher frequencies; however, the rather sparse sampling does not allow us to identify individual outbursts at frequencies other than 15\,GHz and 37\,GHz. The radio spectral evolution from 2012 January 28 to April 17 (MJD~55954 to 56034) is shown in Fig.~\ref{radio_sed}. In the first four spectra the dominating component at low frequencies is the decaying major outburst. At higher frequencies the new outburst 12a is first visible on 2012 February 18 (MJD~55975). The outburst 12b is first visible on 2012 March 4 (MJD 55990) and its peak then moves to lower frequencies. In the last two spectra this outburst is visible as a flattening of the spectra above 15\,GHz and an increased flux. Both outbursts follow the typical spectral evolution of radio outbursts. In the initial (growth) stage, the synchrotron self-absorption turnover frequency decreases and the turnover flux density increases. In the second (plateau) stage, the turnover frequency decreases while the turnover flux density remains roughly constant. During the third (decay) stage both the turnover frequency and the flux density decrease. This behaviour is in agreement with the three-stage evolution of the shock-in-jet model of \citet{MG85}: in the first stage the inverse Compton losses dominate, in the second the synchrotron losses and in the third the adiabatic losses. The VLBA 43\,GHz images reveal a new component (named K11), corresponding to the major radio outburst of the second half of 2011, appearing in 2011 December, as already reported in \citet{orienti} using the MOJAVE 15\,GHz data (see Fig.~\ref{k12}). The apparent speed of the component, $(19.34\pm1.85)$c, and the zero-separation epoch, 2011 October 29 (MJD~55864$\pm$12), agree with the ones derived by \citet{orienti}. In 2012 April a second new component (named K12) appeared in the images. It had an apparent speed of $(16.26\pm2.43)$c and a time of ejection of 2012 February 3 (MJD~55961$\pm$15). The zero-separation epochs of these components agree very well with the local maxima in the 37\,GHz light curve, in accordance with the general trend found in \citet{savolainen}. The VLBA polarisation data showed in general a rather low polarisation of the core (1-3\%) compared to the historical values from \citet{jorstad07} (0.9-9.7\%). The observed EVPA of the core between 2012 January and April was between $-10^\circ$ and $25^\circ$. The sparse sampling does not allow us to trace possible rotations of the EVPA, but as shown in Fig.~\ref{pol}, the EVPA values of the VLBA core seem to closely trace those of the optical emission. \begin{figure} \includegraphics[width=0.24\textwidth,angle=270]{LC_pol.eps} \caption{Radio and optical polarisation behaviour of PKS~1510$-$089 in 2012 February-April.} \label{pol} \end{figure} \section{Multi-frequency light curves} \begin{figure*} \includegraphics[width=1.05\textwidth, angle=270]{LC_overall.eps} \caption{Multi-frequency light curve of PKS~1510$-$089 from VHE $\gamma$ rays to radio in 2012 February-April. The symbol marked K12 in the bottom panel shows the zero-separation epoch of the VLBA component K12 (see Section 6.8) from the 43\,GHz VLBA core.
The numbering of the HE $\gamma$-ray and 37\,GHz radio flares is described in the text.} \label{MWL_lc} \end{figure*} Figure~\ref{MWL_lc} shows the MAGIC, {\it Fermi}-LAT, AGILE-GRID, {\it Swift}, optical polarisation, $R$-band photometry and 37\,GHz light curves of \object{PKS~1510$-$089} in 2012 February-April. The {\it Fermi}-LAT light curve showed three distinct flares with a flux increase of more than a factor of five compared to the quiescent-state flux: flare I (2012 January 29 to February 13, MJD$\sim$55955-55970), flare II (2012 February 23 to March 9, MJD$\sim$55980-55995) and flare III (2012 March 14 to March 19, MJD$\sim$56000-56005). Additionally there was a smaller amplitude ($\sim$factor of four) flare between flares I and II. The first two flares also triggered the AGILE alert system, and are evident in the two-day AGILE-GRID light curve, while during flare III the source gradually exited the AGILE field of view. As discussed in Section 3, the AGILE and {\it Fermi}-LAT data hint at a marginal harder-when-brighter trend. During these flares the VHE $\gamma$-ray flux remained rather unchanged. The flares were all characterised by different multi-frequency behaviour at lower energies. The first $\gamma$-ray outburst coincided with an X-ray peak. The first and second $\gamma$-ray flares were accompanied by quasi-simultaneous flares at 37\,GHz (flare I in $\gamma$ rays is simultaneous with flare 12a in radio and flare II in $\gamma$ rays is simultaneous with flare 12b in radio, see Fig.~\ref{MWL_lc}). During the first outburst there was also a rotation of the EVPA of $>180^\circ$. This outburst also coincided with the zero-separation epoch of a new knot from the 43\,GHz VLBA core (see Section 6.8). During the second $\gamma$-ray flare there was an optical outburst and, in its very beginning, a second rotation of the EVPA of $>180^\circ$, but this rotation had a very short duration and was in the opposite direction to the first one. During this rotation there was also a local minimum of the polarisation degree, and the rotation looks very similar to the one observed in 3C~279 during the $\gamma$-ray event seen by {\it Fermi}-LAT in 2009 April \citep{nature}. However, while the optical flux started to decrease, the $\gamma$-ray flare continued and the optical polarisation degree started to increase. After these events the EVPA stayed constant until the third rotation started, apparently simultaneously with the third outburst in the $\gamma$-ray light curve. During this outburst the degree of polarisation stayed constant. There was a gap in the 37\,GHz light curve; however, the emission level was similar before and after the gap. The overall outbursting event had several similarities to the $\gamma$-ray flaring event in 2009 discussed in \cite{marscher,abdo10,dammando11}: the ejection of a knot from the VLBA core, accompanying activity at millimetre wavelengths and a rotation of the optical polarisation angle. However, there are also some differences: there was no preceding $\gamma$-ray flare, but the activity in radio and $\gamma$ rays started simultaneously. Also, the observed rotation of the optical polarisation angle was shorter in duration ($\sim$30 days) and the rotation was only $\sim$380$^\circ$ instead of the $>720^\circ$ seen in 2009. \cite{marscher} interpreted the 2009 outburst in terms of the phenomenological model presented for BL Lacertae in \cite{Marscher_nature}.
In this model the rotation of the polarisation angle is caused by a moving emission feature following a spiral path as it propagates through the toroidal magnetic field of the acceleration and collimation zones. The emission feature passes the 43\,GHz VLBA core, interpreted as a standing conical shock \citep{Marscher_nature}, which compresses the knot. The synchrotron flares occur when the energisation of the electrons increases suddenly, while $\gamma$-ray flares with very weak optical counterparts are produced by an increase of the local seed photon field at optical and IR wavelengths. The same scheme can be adapted to the multi-frequency light curve discussed here: flare I takes place as the emission feature passes the core, while flare II is caused by the sudden energisation of the electrons of the emission feature and flare III by the sudden increase in the local seed photon field. Unlike the millimetre, optical and HE $\gamma$-ray bands, the X-rays did not show strong variability. The X-ray light curve showed a general shape similar to that of the HE $\gamma$-ray light curve. However, the sparse sampling and the small amplitude of variability in the X-ray light curve do not allow us to draw a strong conclusion on the connection. The X-ray spectrum was hard, as in previous observations \citep{dammando11,kataoka08}, which is a signature of a hard electron population with slope 1.6-2.0. The observed properties are in agreement with the conclusion of \cite{kataoka08} that the X-ray spectrum is a result of Comptonization of IR radiation produced by hot dust located in the surrounding molecular torus. Direct mid-IR (3.6--160\,$\mu$m) observations give an upper limit on the luminosity of the thermal emission from such dust of $2.3\cdot10^{45}$ erg s$^{-1}$ \citep{malmrose}. As discussed in Section 5.7, the behaviour of the optical polarisation EVPA during the observing campaign was particularly interesting, showing three distinct rotations of $>180^\circ$. In addition to \object{PKS~1510$-$089} \citep{marscher} and BL~Lac \citep{Marscher_nature}, such rotations have been reported for 3C~279 in coincidence with $\gamma$-ray flaring events \citep{larionov,nature,3C279,aleksic13}. In these papers the rotations have been interpreted as a signature of the geometry, in particular as caused by a bent trajectory that the moving emission feature is following. For 3C~279 the rotations have changed direction between epochs, which was interpreted as a signature of an actual bend in the jet \citep{nature,nalewajko10}. However, here the second rotation was very fast, the rotations took place very close in time, and the multi-frequency data suggest that a major part of the emission originates in one single emission region. Therefore, a bend does not appear to be a likely explanation for the change of the direction of the rotations. As discussed in \citet[][and references therein]{marscher}, the rotations can also be explained by a turbulent magnetic field within the emission region, where cells with random magnetic field orientations enter and exit the emission region, causing a random walk of the resultant polarisation vector and an apparent rotation. For rotations caused by a turbulent magnetic field both directions should be equally likely and the rotations should occur at random times. Additionally, the appearance of rotations caused by turbulence is not expected to be very smooth.
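The random-walk character of turbulence-induced rotations can be illustrated with a minimal Monte Carlo sketch (purely illustrative, with equally bright and equally polarised cells; this is not the detailed model of \citet{marscher14}):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_steps = 20, 200
# one random magnetic-field orientation (EVPA) per cell
chi = rng.uniform(0.0, np.pi, n_cells)

evpa = []
for _ in range(n_steps):
    # one cell exits the emission region, a new one enters
    chi[rng.integers(n_cells)] = rng.uniform(0.0, np.pi)
    # sum Stokes Q and U of equally polarised cells
    Q, U = np.cos(2.0 * chi).sum(), np.sin(2.0 * chi).sum()
    evpa.append(0.5 * np.degrees(np.arctan2(U, Q)))
# the resulting EVPA sequence (after unwrapping, as sketched in
# Section 5.3) random-walks and occasionally shows apparent
# rotations in either direction, at random times, and not smoothly
\end{verbatim}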
Turbulence as a possible cause of the rotations is favoured by the fact that the rotations with different directions took place very close in time. According to \citet{marscher14}, such rotations are expected when turbulent plasma flows at relativistic speeds down the jet and crosses a standing shock. In summary, we conclude that the multi-frequency light curves show compelling evidence that the emission during this flaring epoch is dominated by a moving emission feature located close to the VLBA 43\,GHz core. As described before, such evidence has been found for \object{PKS~1510$-$089} as well as for several other sources \citep[e.g.][and references therein]{jorstad13}. The complicated flaring pattern, showing variable synchrotron-to-Compton ratios and different time scales in different wavebands, suggests that both the emission region and the underlying jet might additionally have some substructure. \section{Spectral energy distribution} We construct the SED for \object{PKS~1510$-$089} combining the radio data from F-GAMMA and Mets\"ahovi with infrared, optical and UV data from REM, GASP-WEBT and UVOT, X-ray data from {\it Swift}-XRT and $\gamma$-ray data from {\it Fermi}-LAT and MAGIC. The radio to X-ray data are quasi-simultaneous, taken from 2012 March 1 to March 4 (MJD~55987-55990), while the {\it Fermi}-LAT data cover the main MAGIC observation periods (2012 February 19 to March 5 and March 15 to April 3, MJD~55976-55991 and MJD~56001-56020) and the AGILE data the period from 2012 February 20 to March 5 (MJD~55977.5-55991.5). \begin{figure} \includegraphics[width=0.45\textwidth]{u.ps} \caption{Energy density of the photon field as a function of the distance from the central engine. The blue lines refer to the BLR and the red lines to the infrared torus. The green line indicates $\tau_{37\,\rm GHz}$ (calculated using the magnetic field derived for the case {\it a} model, see text) and the yellow zone indicates the region in which the jet is transparent at 37\,GHz. The dashed lines indicate the assumed size of the BLR (blue) and of the dust torus (orange). The thick red and cyan vertical lines indicate the regions which we selected for our SED modelling.} \label{u} \end{figure} The SEDs of FSRQs are conventionally modelled with a small emission region (typical size $\sim$10$^{16}$ cm) close to the central engine, in regions where the dense radiation field generated by the direct and reprocessed accretion disc emission is thought to provide the ideal environment for efficient IC emission \citep{dermer, sikora94}. There is, however, growing evidence that, at least in some objects and/or at some epochs, the emission could occur far downstream in the jet \citep{sikora08,Marscher_nature,1222}. For \object{PKS~1510$-$089} at the epoch analysed here, the multi-frequency light curves and the ejection of a new component from the 43\,GHz VLBA core point to the co-spatiality of the $\gamma$-ray and millimetre flaring activity. Since the inner regions of the jet are highly opaque to low frequency photons through synchrotron self-absorption \citep[as indeed observed for the great majority of FSRQs, e.g.][]{giommi2012}, the $\gamma$-ray and millimetre emission region has to be located farther out in the jet, at distances at which the jet is transparent at radio frequencies.
Another compelling indication that the $\gamma$ rays are not produced very close to the nucleus is that, in this case, one would expect a strong depression of the emission above $\sim20$ GeV due to absorption through interactions with the UV-optical emission of the BLR clouds \citep[e.g.][]{donea, sitarek, FM, PoutanenStern}. Instead, the combined {\it Fermi}-LAT and MAGIC $\gamma$-ray spectrum does not show signatures of strong absorption, but a smooth log-parabola shape. This is similar to what was observed for PKS~1222+216 \citep{1222} and for a few other FSRQs whose LAT spectrum extends well above 10-20\,GeV, supporting the idea of emission occurring beyond the BLR radius \citep{pacciani,tavecchio13}. The simultaneous millimetre and $\gamma$-ray light curves show similar variability patterns on weekly time scales, and are therefore consistent with a large dominating emission region, $R\sim ct_{\rm var} \delta = 2\times 10^{17} (\delta/10)$ cm, where $\delta$ is the relativistic Doppler factor. The low compactness implied by such large dimensions makes the synchrotron self-Compton process, in which the seed photons are produced in the jet via synchrotron radiation \citep[e.g.][]{maraschi92}, highly inefficient. It is thus unable to produce the observed $\gamma$-ray emission \citep[see e.g.][for the case of 3C 279]{lindfors05}, and therefore the seed photons for IC scattering must be provided by some external field. The radiative environment of the jet in \object{PKS~1510$-$089} is schematically described in Fig.~\ref{u}, which reports the energy density of the external radiation in the jet co-moving frame as a function of the distance from the central engine. Two components are considered, namely the emission of the BLR clouds in the innermost regions (blue), and the contribution provided by the thermal emission of dust organised in the molecular torus at larger scales (red). The external energy density is assumed to be constant within the corresponding radius of the emitting structure, $r_{\rm BLR}$ and $r_{\rm IR}$ for the BLR and the IR torus respectively, and to decline rapidly beyond it. The detailed geometry and extension of the BLR and of the IR torus are still under debate, but values typically adopted for the extensions are of order $0.1-1$ parsec and $1-5$ parsecs, respectively. \cite{nalewajko12} estimated that for \object{PKS~1510$-$089} these values are $r_{\rm BLR}=0.07$ pc and $r_{\rm Torus}=3.2$ pc. The curves in Fig.~\ref{u} have been calculated following \citet{ghisellini09}, who provide simple scaling laws for the dimensions of the BLR and of the IR torus, depending only on the accretion disc luminosity, $L_{\rm disc}$. In the literature there are several estimates of the disc luminosity \citep{celotti,nalewajko12}, all in the range $3-7\times 10^{45}$ erg s$^{-1}$. In the following we assume $L_{\rm disc}=6.7\times 10^{45}$ erg s$^{-1}$, as inferred from the observed ``blue bump'' traced by UVOT. With the adopted $L_{\rm disc}$ the estimates of \citet{ghisellini09} give $r_{\rm BLR}=0.086$ pc and $r_{\rm Torus}=2.15$ pc. The resulting $L_{\rm Torus}$ is compatible with the upper limits from mid-IR observations \citep{malmrose}.
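The numbers quoted above follow from simple order-of-magnitude relations, as in the short sketch below. The scaling-law prefactors, $r_{\rm BLR}\simeq10^{17}L_{\rm d,45}^{1/2}$ cm and $r_{\rm Torus}\simeq2.5\times10^{18}L_{\rm d,45}^{1/2}$ cm, are the commonly quoted values of the \citet{ghisellini09} prescription and should be taken as indicative:

\begin{verbatim}
import numpy as np

C, PC, DAY = 3e10, 3.086e18, 86400.0   # cm/s, cm, s

# emission-region size from ~weekly variability, R ~ c * t_var * delta
t_var, delta = 7 * DAY, 10.0
print(f"R ~ {C * t_var * delta:.1e} cm")   # ~2e17 cm, as in the text

# BLR and torus radii from the disc luminosity
L45 = 6.7e45 / 1e45                        # L_disc in 10^45 erg/s
r_blr = 1e17 * np.sqrt(L45) / PC           # pc
r_torus = 2.5e18 * np.sqrt(L45) / PC       # pc
print(f"r_BLR ~ {r_blr:.3f} pc, r_Torus ~ {r_torus:.2f} pc")
# ~0.08 pc and ~2.1 pc, close to the values adopted in the text
\end{verbatim}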
We reproduced the observed SED by assuming that the emission region (blob) is filled with electrons following a smoothed broken power-law energy distribution with normalization $K$ between $\gamma _{\rm min}$ and $\gamma _{\rm max}$, with slopes $n_1$ and $n_2$ below and above the break at $\gamma _{\rm b}$, as in \citet{tavecchio98} and \citet{maraschi03}. We assume a conical geometry for the jet, characterised by a semi-aperture angle $\theta_{\rm j}=0.1^\circ$ \citep{jorstad05}. Electrons emit through the synchrotron and IC mechanisms. The relative luminosities of the IC components arising from the different target photon populations are proportional to the levels of the corresponding energy densities (measured in the jet frame) when the scattering takes place in the Thomson regime. The energy density, in turn, depends on the distance along the jet, as reported in Fig.~\ref{u}. \begin{figure*} \includegraphics[width=0.45\textwidth]{sed2012torus.ps} \hspace{10 mm} \includegraphics[width=0.45\textwidth]{sed2012.ps} \caption{SED of PKS~1510$-$089 in 2012 February-April as observed by F-GAMMA and Mets\"ahovi (magenta triangles), GASP-WEBT (blue filled circles), {\it Swift}-UVOT and XRT (red filled circles), {\it Fermi}-LAT (black filled circles), AGILE-GRID (Flare-II, green triangles) and MAGIC (cyan, observed; red, EBL corrected). {\bf Left:} The solid black curve shows the overall modelled emission, where the low-energy bump is dominated by the synchrotron emission (blue dashed line) and the high-energy bump is dominated by the external Compton mechanism, using the infrared torus photons (long dashed line) as seed photons (case {\it a}). The short dashed line is the thermal component from the accretion disc. {\bf Right:} The black curve shows the model assuming that the emission region is located at the radio core (case {\it b}). The orange dashed line shows the additional external photon field representing the slow sheath of the jet. The red dotted line indicates the synchrotron self-Compton emission from this region.} \label{SED} \end{figure*} \begin{table*}[th] \centering \caption{Model parameters for the two SED models} \begin{tabular}{lccccccccc} \hline \hline model&$\gamma _{\rm min}$ & $\gamma _{\rm b}$ & $\gamma _{\rm max}$ & $n_1$ & $n_2$ &$B$ & $K$ &$R$ & $\Gamma$ \\ & & & & & &[G] & [cm$^{-3}$] & [$10^{16}$ cm] & \\ \hline IR torus\tablefootmark{a}&3&$9\times10^{2}$&$6.5\times10^{4}$&1.9&3.85&0.12&20&30&20\\ \hline Sheath\tablefootmark{b}& 800& $7\times10^{3}$& $5\times10^{4}$& 2& 3.4& $1.3\times10^{-2}$& 18& 600& 2.2\\ Spine& 800& $2.6\times10^{3}$& $8\times10^{4}$& 2& 3.7& $6.5\times10^{-3}$& 2.5& 510& 20\\ \hline \hline \end{tabular} \vskip 0.4 true cm \tablefoot{The following quantities are reported: the minimum, break, and maximum Lorentz factors and the low- and high-energy slopes of the electron energy distribution, the magnetic field intensity, the electron density, the radius of the emitting region and the bulk Lorentz factor. \tablefoottext{a}{IR torus (external photons for IC scattering provided by the IR torus)}\tablefoottext{b}{Sheath-spine (sheath providing the seed photons for the scattering)}}. \label{param} \end{table*} The observational evidence discussed above (the co-spatiality of the $\gamma$-ray and millimetre emission and the transparency to $>100$\,GeV photons) allows us to locate the emission region outside the BLR, but does not provide a clear upper limit for its distance. We first tried (case {\it a}) to reproduce the SED by finding a solution which minimises the distance from the central engine.
The SED is successfully reproduced (Fig.~\ref{SED}) assuming that the emission occurs at a distance of $r\sim1$ pc, i.e. within the torus, at the position indicated by the thick red vertical line in Fig.~\ref{u}. As a consistency check, we infer the run of the optical depth at 37\,GHz ($\tau_{37\,\rm GHz}$, green line in Fig.~\ref{u}) with distance (for a conical geometry the scaling laws $B\propto r^{-1}$ and $K\propto r^{-1}$ can be assumed, adopting the $B$, $K$ and $R$ of the case {\it a} model), confirming that the emission region is indeed characterised by $\tau_{37\,\rm GHz}<1$, as required by the correlation observed in the light curves. The grey and yellow areas in Fig.~\ref{u} indicate the opaque and transparent millimetre regions, respectively. For \object{PKS~1510$-$089} there are measurements of the distance of the VLBA core from the central engine. \citet{pushkarev} used core shift measurements to locate the 15\,GHz VLBA core at $\sim$17.7 parsecs from the central engine. Using the speed and core shift measurement from that paper gives a distance of $\sim$6.5 parsecs (indicated with a thick cyan line in Fig.~\ref{u}) for the 43\,GHz core. At this distance, the infrared torus no longer provides a strong enough source of seed photons for the IC process. \citet{marscher} suggest that this dilemma could be solved if the jet is surrounded by a slow sheath providing seed photons for IC scattering. We test this scenario (case {\it b}) by assuming that the emission blob is surrounded by a sheath with $\Gamma$=2.2, so that the radiation field is amplified in the emitting region by a Doppler factor $\delta=3.7$ (assuming a viewing angle of 2.8$^\circ$ and $\Gamma$=20 for the blob). The fit to the SED is presented in Fig.~\ref{SED}: the orange dashed line represents the observed emission from the modelled sheath, which would be negligible compared to the jet emission and therefore not directly observed. We therefore conclude that, from the point of view of the radiative properties, this scenario is also feasible. We find that both models provide an acceptable fit to the data; the resulting model parameters are given in Table~\ref{param}. This implies that the presented scenarios are feasible and in agreement with the obtained data; however, the parameters used for the modelling are not unique. We also note that this SED represents an average emission state, since the data have been collected over a few days, and does not account for the rapid ($\sim$one hour) variability of the $\gamma$-ray emission as measured by {\it Fermi}-LAT \citep{saito, 2013arXiv1304.2878F}. As proposed by \citet[][]{marscher} \citep[see also][]{marscher14,narayan}, such rapid flickering could indicate the presence of relativistic turbulent motion within the flow. In this framework, radiation from single, small, turbulent cells is occasionally observed, while the long-term emission is the result of the integrated emission over all of the active jet volume. More detailed simulations along the lines of \citet{marscher14} are required to investigate this scenario in detail. \section{Summary and discussion} In this paper we report the detection of VHE $\gamma$ rays from \object{PKS~1510$-$089} by the MAGIC telescopes in 2012 February-April. The VHE $\gamma$-ray flux and spectrum are comparable to those observed from the source in 2009 March-April by the H.E.S.S. telescopes \citep{hess}.
During the MAGIC observations the source was in a high state in the HE $\gamma$-ray band, showing significant variability, but the VHE $\gamma$-ray light curve does not reveal significant variability. This is in agreement with the result of \citet{hess}. We performed a detailed multi-frequency study of the source during 2012 January-April, extending for the first time from radio to VHE $\gamma$ rays. In summary, we find that: \begin{enumerate} \item The HE and VHE $\gamma$-ray spectra connect smoothly; therefore, we conclude that the VHE $\gamma$-ray emission and the HE $\gamma$-ray emission originate in a single emission region located outside the BLR. \item The VHE $\gamma$-ray observations by MAGIC missed the times of the hour-scale variability observed in the HE $\gamma$-ray band, and the MAGIC light curve does not show significant variability on daily or weekly time scales. However, the HE $\gamma$-ray variability indicates that within the larger emission region there must exist more compact emission regions producing the fast variability. The model of \citet{marscher14}, in which turbulent plasma flows at a relativistic speed down the jet and crosses a standing shock, would naturally lead to such behaviour. We note that the fast variability could also extend to the VHE $\gamma$-ray band, even if the observations presented here did not detect it. \item The common variability patterns seen in the HE $\gamma$-ray and 37\,GHz light curves as well as the concurrent ejection of a new component from the 43\,GHz VLBA core support this emission scenario. We also identify several $\sim$180$^\circ$ rotations of the optical polarisation angle, which have been suggested to be related to such events \citep{Marscher_nature}. \item The SED can be modelled with a one-zone external Compton model for both studied cases, namely: the seed photons originating from the infrared torus and the seed photons originating from a slow sheath of the jet. The latter model is favoured if the VLBA core is as distant from the central engine as suggested by \citet{marscher,pushkarev}. \end{enumerate} However, there are other alternatives for the source of seed photons and for the fast variability: \begin{itemize} \item \citet{LeonTavares13} suggested that the relativistic jet could drag the broad line region clouds to greater distances from the central engine and that the VLBA radio core could be surrounded by such clouds. This would manifest itself as a brightening of the broad emission lines in optical spectroscopic monitoring. We have no such data for our campaign, but we note that this additional seed photon population is not required to reproduce our data. \item It was proposed in \citet{giannios} that such rapid flickering can be explained by the jet-in-jet model even if the emission region is far out in the jet. In this scheme, long-term flares are the result of the ``envelope'' emission of magnetic reconnection events in the jet, while short-term flares flag the random formation of ``monster'' blobs during the reconnection process. While the discussion in \citet{giannios} was suited for the case of PKS~1222+216, characterised by shorter time scales (both for the long-term modulation and the rapid flares), we expect that it might also be possible to reproduce the behaviour of \object{PKS~1510$-$089} by generalising the model. \end{itemize} Since \object{PKS~1510$-$089} has been active in $\gamma$ rays in the AGILE and {\it Fermi}-LAT era, there have been several other multi-frequency studies.
\citet{brown} analysed the {\it Fermi}-LAT data, concluding from the presence of $>20$\,GeV photons that multiple simultaneously active $\gamma$-ray emission regions are required. In contrast, we find that the {\it Fermi}-LAT and MAGIC spectra connect smoothly, suggesting a single emission region. The very fast spikes seen in the HE $\gamma$-ray light curve probably originate in a separate emission region, possibly embedded in the larger region producing the slower modulations of the radio and $\gamma$-ray light curves. Several emission sites were also suggested by \citet{nalewajko12}, who concluded that the high energy cut-off (in the low state) of the main synchrotron component implies a two-zone model, since otherwise the required external photon density would be too high. \citet{barnacka} reached a similar conclusion and favoured a two-zone model for reproducing the VHE $\gamma$-ray emission observed by the H.E.S.S. telescopes. However, in our modelling, one emission region is sufficient to reproduce the average SED during the high state. In addition to \object{PKS~1510$-$089}, only two other FSRQs have been detected in VHE $\gamma$ rays (3C~279 and PKS~1222+216). All detections have been made during a high state in the lower energy regimes, and even during high activity in the HE $\gamma$-ray band, 3C~279 \citep{aleksic13} and PKS~1222+216 \citep{ackermann14} were detected in VHE $\gamma$ rays only during individual nights. The upper limits derived for these two sources from the non-detections are also below the detected flux. In this sense \object{PKS~1510$-$089} is clearly different from the other two, and follow-up observations in lower HE $\gamma$-ray states should be performed in order to study whether the source is a constant VHE $\gamma$-ray emitter like some of the high-peaked BL Lac objects detected in VHE $\gamma$ rays \citep[e.g.][]{0414Ver}. However, the VHE $\gamma$-ray detections of FSRQs imply that in all cases the emission takes place outside the BLR \citep{1222, 3C279}. Further observations are needed to study why in some cases we see extremely bright, fast flares of VHE $\gamma$ rays, while in other cases (as for \object{PKS~1510$-$089}) the VHE $\gamma$-ray emission appears more stable. \begin{acknowledgements} We would like to thank the Instituto de Astrof\'{\i}sica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN, the Swiss National Fund SNF, and the Spanish MICINN is gratefully acknowledged. This work was also supported by the CPAN CSD2007-00042 and MultiDark CSD2009-00064 projects of the Spanish Consolider-Ingenio 2010 programme, by grant DO02-353 of the Bulgarian NSF, by grant 127740 of the Academy of Finland, by the DFG Cluster of Excellence ``Origin and Structure of the Universe'', by the DFG Collaborative Research Centres SFB823/C4 and SFB876/C3, by the Polish MNiSzW grant 745/N-HESS-MAGIC/2010/0 and by JSPS KAKENHI Grant numbers 24000004 and 25800105. The \textit{Fermi}-LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. Astrorivelatore Gamma ad Immagini LEggero (AGILE) is a scientific mission of the Italian Space Agency (ASI) with INFN, INAF and CIFS participation. AGILE research is partially supported through the ASI grants I/089/06/2, I/042/10/0 and I/028/12/0. Data from the Steward Observatory spectropolarimetric monitoring project were used. This programme is supported by Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, and NNX12AO93G. This article is partly based on observations made with the telescopes IAC80 and TCS operated by the Instituto de Astrofisica de Canarias in the Spanish Observatorio del Teide on the island of Tenerife. Most of the observations were taken under the routine observation programme. The IAC team acknowledges the support from the group of support astronomers and telescope operators of the Observatorio del Teide. The Abastumani team acknowledges financial support of the project FR/638/6-320/12 by the Shota Rustaveli National Science Foundation under contract 31/77. The OVRO 40-m monitoring programme is supported in part by NASA grants NNX08AW31G and NNX11A043G, and NSF grants AST-0808050 and AST-1109911. The VLBA is operated by the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The research at Boston University (BU) was funded in part by NASA Fermi Guest Investigator grants NNX11AQ03G, NNX11AO37G, and NNX12AO90G. The PRISM camera at Lowell Observatory was developed by K.\ Janes et al. at BU and Lowell Observatory, with funding from the NSF, BU, and Lowell Observatory. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. The Mets\"ahovi team acknowledges the support from the Academy of Finland to our observing projects (numbers 212656, 210338, 121148, and others). This research is partly based on observations with the 100-m telescope of the MPIfR (Max-Planck-Institut f\"ur Radioastronomie) at Effelsberg and with the IRAM 30-m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). I. Nestoras is funded by the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. \end{acknowledgements}
\section{Introduction} \label{intro} Image semantic segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image. The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task~\cite{chen2017deeplab, zhao2017pyramid, huang2019ccnet} in recent years. However, training such a fully-supervised semantic segmentation model requires a large number of pixel-wise annotations. Preparing such a segmentation dataset demands considerable human labor and resources. Recently, researchers have studied weakly-supervised semantic segmentation (WSSS) methods to alleviate the heavy dependence on accurate pixel-level human annotations by training semantic segmentation models under cheap supervision. Weak supervision takes the form of image-level labels~\cite{wei2018revisiting, ahn2018learning, wang2018weakly, wei2017object, huang2018weakly}, scribbles~\cite{lin2016scribblesup, tang2018normalized} or bounding boxes~\cite{dai2015boxsup, krahenbuhl2011efficient, lee2021bbam, song2019box}. In this paper, we focus on WSSS based on image-level labels only, because they are the cheapest and most popular form of weak supervision, providing information only on the existence of the target object categories. Most WSSS methods utilize class labels~\cite{wei2018revisiting, ahn2018learning, wang2018weakly, wei2017object, huang2018weakly} to generate pseudo ground-truths for training a segmentation model, where the pseudo ground-truths are obtained from a trained classification network with the CAM~\cite{zhou2016learning} or Grad-CAM~\cite{selvaraju2017grad} method. However, image-level labels cannot provide specific object position and boundary information for supervising the network training; as a result, these localization maps identify only the local regions of a target object that are the most discriminative for the classification prediction. Therefore, with such incomplete and inaccurate pseudo ground-truths, training a fully-supervised semantic segmentation network to reach a decent segmentation performance is challenging. Existing WSSS methods usually attempt to gradually seek out less discriminative object regions, starting from the very small and local discriminative regions~\cite{chang2020weakly, ahn2018learning, ahn2019weakly}. Differing from the existing works, in this paper we attack the partial localization issue of the CAM method with a novel deformable transformation operation. We empirically observe that classification models can re-discover object regions beyond the most discriminative ones when we fix a trained classifier and equip the network with more ``sampling'' freedom to attend to other less discriminative features. This inspires us to explore a proper way to improve the quality of the initial localization maps via a new training pipeline, \textbf{Expansion and Shrinkage}, shown in Figure~\ref{fig:fig_pipeline_comparison}. \begin{wrapfigure}{R}{0.45\textwidth} \centering \captionsetup{font={small}} \includegraphics[width=0.45\textwidth]{figures/pipeline_comparison_01.pdf} \caption{The pipeline comparison for training WSSS.
Our contribution is to improve the classification model's initial localization maps with a new ``expanding first then shrinking'' scheme.} \label{fig:fig_pipeline_comparison} \vspace{-4.5mm} \end{wrapfigure} The \textbf{Expansion} stage aims to recover the entire object as much as possible, by sampling the exterior object regions beyond the most discriminative ones, to improve the \textbf{recall} of the located object regions. We introduce a deformable convolution layer after the image-level classification backbone, whose offset learning branch serves as a sampler that seeks increasingly less discriminative object regions, driven by an inverse image-level supervision signal. We call this newly embedded deformable convolution layer the ``expansion sampler'' (ES). During the inverse optimization process, the backbone is frozen to provide the fixed pixel-wise features obtained in the image-level classification training to be sampled by the offset learning branch in the ES. In this way, the inverse supervision target solely enforces the offset learning in the ES branch to optimize its sampling strategy to gradually attend to the less discriminative regions, given that the pixel-wise features cannot be changed. Hence, the image-level loss maximization allows the network to pay more attention to the less discriminative regions, which are easily ignored in the normal image-level classification task, via the deformation transformation achieved by the ES in the inverse optimization. Having obtained the high-\textbf{recall} object region after the \textbf{Expansion} stage, we propose a \textbf{Shrinkage} stage to exclude the false positive regions and thus further enhance the \textbf{precision} of the located object regions. The Shrinkage stage retains the same network architecture as the Expansion stage, except that an extra deformable convolution layer, referred to as the ``shrinkage sampler'' (SS), is introduced to narrow down the object region from the high-\textbf{recall} one. However, we observe a feature activation bias issue, \emph{i.e.}, the initially most discriminative parts are more highlighted in the feature map after the ES in the Expansion stage, while the newly attended regions have much weaker feature activation. Such activation bias serves as prior knowledge which encourages the later shrinkage to converge to the same discriminative parts as the initially highlighted regions in the original CAM. To alleviate this issue, we propose a feature clipping strategy after the ES in the Expansion stage of training to normalize pixel-wise feature values, allowing each pixel to have a relatively fair chance to be selected by the SS in the Shrinkage stage. Similarly, in the training of the Shrinkage stage, all the layers before the SS are fixed to provide stable pixel-wise features, and only the offset learning branch in the SS is updated to sample the true positive pixels, optimized by the standard image-level classification supervision. The main contributions of this study are summarized as follows. First, this paper proposes an Expansion and Shrinkage scheme to sequentially improve the \textbf{recall} and \textbf{precision} of the located object in the two respective stages, leading to high-quality CAMs which can be used as strong pseudo ground-truth masks for WSSS. Second, both the Expansion and Shrinkage stages are realized by carefully applying deformable convolution combined with two contrary training signals.
To avoid repeated convergence to the initial discriminative parts, a feature clipping method is applied to alleviate the activation bias of these regions. Third, our approach significantly improves the quality of the initial localization maps, exhibiting a superior performance on the PASCAL VOC 2012 and MS COCO 2014 datasets for WSSS. \vspace{-5.0mm} \section{Related Work} \label{related_work} \vspace{-2.0mm} \subsection{Weakly-Supervised Semantic Segmentation} The weakly-supervised semantic segmentation pipeline~\cite{kolesnikov2016seed,huang2018weakly} with image-level labels only typically consists of two steps: pseudo ground-truth generation and segmentation model training~\cite{ahn2018learning, ahn2019weakly, chen2020weakly,wei2018revisiting,wang2020self,li2022weakly,jiang2019integral,wei2017object}. Erasure methods~\cite{wei2017object, hou2018self, singh2017hide} applied various iterative erasing strategies to prevent the classification network from focusing only on the most discriminative parts of objects by feeding the erased images or feature maps to the model. MDC~\cite{wei2018revisiting}, LayerCAM~\cite{jiang2021layercam} and FickleNet~\cite{lee2019ficklenet} aggregated different contexts of a target object by considering multiple attribution maps from different dilated convolutions or from different layers of the DCNNs. Some works utilized diverse image contexts to explore cross-image semantic similarities and differences~\cite{fan2020cian, sun2020mining}. CONTA~\cite{zhang2020causal} analyzed the causalities among images, contexts and class labels and used intervention to remove the confounding bias in the classification network. Recently, Anti-Adv~\cite{lee2021anti} utilized an anti-adversarial manipulation method to expand the most discriminative regions in the initial CAMs to other non-discriminative regions. RIB~\cite{lee2021reducing} used the information bottleneck principle to interpret the partial localization issue of a trained classifier and removed the last double-sided saturating activation layer to alleviate this phenomenon. However, since the localization maps obtained by the classifier cannot reveal the entire object areas with accurate boundaries, the initial CAM seeds obtained using the methods above are further refined by a subsequent refinement network~\cite{ahn2018learning, ahn2019weakly, chen2020weakly}. In this paper, we also follow this pipeline, and our contribution is to propose a new training pipeline to generate high-quality localization maps. Different from MDC~\cite{wei2018revisiting}, which uses multi-dilated convolutions to combine multiple contexts for better feature mining, our method uses the deformation transformation to discover less discriminative features. \subsection{Deformation Modeling} We refer to deformation modeling as learning geometric transformations in 2D image space without regard to 3D. One popular way to attack deformation modeling is to craft certain geometric invariances into networks. However, achieving this usually requires specific designs for certain kinds of deformation, such as offset shifts, rotation, reflection and scaling~\cite{sifre2013rotation,bruna2013invariant,kanazawa2014locally,cohen2016group,worrall2017harmonic,esteves2017polar}. Another line of work on deformation modeling learns to recompose data by either semi-parameterized or completely free-form sampling in image space. STN~\cite{jaderberg2015spatial} learnt 2D affine transformations to construct feature alignment.
Deformable Convolutions~\cite{dai2017deformable,zhu2019deformable} applied learnable offset shifts for better feature learning in free-form transformations. In the WSSS community, applying deformation modeling is still underexplored. In this paper, we utilize the deformation transformation to act as a feature ``sampler'' to re-discover other non-discriminative regions, rather than for better feature representation learning. \section{Proposed Method} \label{method} Weakly-supervised semantic segmentation methods use given class labels to produce a pixel-level localization map from a classification model using CAM~\cite{zhou2016learning} or Grad-CAM~\cite{selvaraju2017grad}. We first give a brief introduction to localization map generation with CAM~\cite{zhou2016learning} in Section~\ref{cam_gen}. Then, we present the whole framework of our method, \textbf{Expansion and Shrinkage with Offset Learning (ESOL)}, to obtain high-quality localization maps covering more complete and accurate target object parts in Section~\ref{expanding} and Section~\ref{shrinkage}, respectively. We then explain how we train the final semantic segmentation model with the generated localization maps in Section~\ref{wsss_training}. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{figures/pipeline_01.pdf} \vspace{-7mm} \end{center} \captionsetup{font={small}} \caption{Our proposed Expansion and Shrinkage training pipeline. The \textbf{Expansion} scheme consists of a feature extractor $f(:, {\theta}_{f})$, a deformable convolution layer, a hand-crafted feature clipping operation and a classifier layer. Loss maximization training is implemented to enable the offset learning in the ES to attend to other less discriminative regions. For the \textbf{Shrinkage} scheme, an extra deformable convolution layer is introduced to exclude the false positive regions via loss minimization, using a classification loss and an area loss. Expansion and Shrinkage apply contrary training signals to sequentially improve the recall and precision of the initial localization maps. } \label{fig:fig_overall_pipeline} \vspace{-0.6cm} \end{figure*} \subsection{Prerequisites} \label{cam_gen} We first present the way to generate localization maps via CAM~\cite{zhou2016learning}. A class activation map highlights the regions of an image that a trained classification network relies on for a specific category prediction. The CAM is based on a DCNN with a global average pooling (GAP) layer before its final classification layer, which is trained by a sigmoid cross-entropy loss function, formulated as follows: \vspace{-5.0mm} \begin{equation} \mathcal{L}(\hat{y}, y) = -\frac{1}{C} \sum_{i=1}^{C} \left[ y[i]\,\log\frac{1}{1 + e^{-\hat{y}[i]}} + (1-y[i])\,\log\frac{e^{-\hat{y}[i]}}{1+e^{-\hat{y}[i]}} \right], \label{eq_1} \end{equation} \vspace{-4.0mm} where $C$ is the total number of training classes, $i$ indexes the classes, $y[i] \in \{0, 1\}$ is the ground-truth label of the $i^{th}$ class, and $\hat{y}[i]$ is the model prediction. The localization map is obtained by considering the class-specific contribution of each channel of the last feature map, before the GAP layer, to the final classification prediction. Consider a trained classifier network parameterized by $\theta = \{{\theta}_{f}, \rm w \}$, where $f(:, {\theta}_{f})$ is the feature extractor and $\rm w$ denotes the weights of the final classification layer.
For some class $c$, the localization map is then computed from an input image $\rm x$ as follows: \vspace{-2.0mm} \begin{equation} {\rm CAM(x; w)} = \frac{\textbf{w}^{T}_{c} f({\rm x}; {\theta}_{f})}{{\rm max} \, {\textbf{w}^{T}_{c} f({\rm x}; {\theta}_{f})}}, \end{equation} \vspace{-3.0mm} where $\rm max(\cdot)$ denotes the maximization over the spatial locations, used for normalization. The above method can only locate the most discriminative regions and fails to locate other less discriminative regions that are semantically meaningful as well. In the following sections, we elaborate on our method by presenting a new training pipeline to capture high-quality object localization maps. \vspace{-3.0mm} \subsection{Expansion} \label{expanding} \vspace{-2.0mm} As mentioned above, the localization maps generated by a commonly trained classifier usually struggle with the partial localization issue of the target objects, since the image-level labels cannot provide detailed position or boundary information. To alleviate this phenomenon, we devise a new training scheme, Expansion, to first recover the entire object regions as much as possible so as to improve the recall of the located object regions. Then, we further introduce another training scheme, Shrinkage, to exclude false positive regions, \textit{e.g.,} background regions, to enhance the precision of the located regions. A commonly trained classifier usually considers only the local regions that make the most contributions to the final classification prediction. Differing from other works, we first enforce the network to seek out the entire target object regions via our proposed Expansion scheme with an offset learning branch in a deformable convolution layer, as shown at the top of Figure~\ref{fig:fig_overall_pipeline}. A trained classifier is first utilized to prepare our Expansion scheme, providing the regular convolutional weights of the deformable convolution layer (\textit{e.g.,} $conv\_1$ or $conv\_2$), which capture the most discriminative feature activations. Given an input image $\rm x$, the classification network $\theta = \{{\theta}_{f}, {\rm w} \}$ locates the coarse object regions. For a specific convolution layer, $conv\_1$ with weights $\rm w_1$, the output feature map $F_1$ is computed for each location $p_0$ as follows: \vspace{-5.0mm} \begin{equation} \begin{split} F_1(p_0) = \rm \sum_{p_n \in R} w_1(p_n) \cdot x(p_0 + p_n), \end{split} \end{equation} where $\rm R$ defines a specific kernel size with dilation 1 and $\rm p_n$ enumerates the locations in $\rm R$. For the Expansion scheme, an embedded deformable convolution layer is introduced after the feature extractor $f(:, {\theta}_{f})$. The new feature map $F^{'}_1$ is then computed as: \vspace{-2.0mm} \begin{equation} \begin{split} F^{'}_1(p_0) = \rm \sum_{p_n \in R} w_1(p_n) \cdot x(p_0 + p_n + {\Delta}p_{1n}), \end{split} \end{equation} \vspace{-4.0mm} where the regular grid $\rm R$ is augmented with offset fields $\{ {\Delta}p_{1n} | n=1,...,N \}$, and $N$ denotes the number of offset points, \textit{e.g.,} $1 \times 1$ or $3 \times 3$. The learned ${\Delta}p_{1n}$ obtained from the offset learning branch in the deformable convolution layer acts as the ``expansion sampler'', gradually sampling the exterior object regions beyond the most discriminative ones.
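A minimal PyTorch sketch of this expansion sampler and the training signal described next is given below. It is an illustration under our own naming: the use of \texttt{torchvision}'s \texttt{DeformConv2d}, the helper names (\texttt{backbone}, \texttt{es}, \texttt{classifier}), and the initialization details are assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class ExpansionSampler(nn.Module):
    """Deformable layer in which only the offset branch is trainable."""
    def __init__(self, channels, k=3):
        super().__init__()
        # Offset branch: predicts 2 (y, x) shifts per kernel location.
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        nn.init.zeros_(self.offset.weight)   # start from regular sampling
        nn.init.zeros_(self.offset.bias)
        # Regular weights: taken from the trained classifier in the
        # actual pipeline (random here for brevity) and kept frozen.
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)
        for p in self.deform.parameters():
            p.requires_grad_(False)

    def forward(self, feat):
        return self.deform(feat, self.offset(feat))

def expansion_step(backbone, es, classifier, images, labels, alpha):
    """One step: maximize the classification loss w.r.t. the offsets."""
    with torch.no_grad():              # frozen backbone: fixed features
        feat = backbone(images)
    logits = classifier(es(feat))      # classifier: GAP + fixed FC layer
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss = -alpha * cls_loss           # sign flip implements maximization
    loss.backward()                    # gradients reach only the offsets
    return loss
\end{verbatim}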
This training scheme utilizes an image-level loss maximization to force the network to seek increasingly less discriminative object regions over unchanged pixel-wise features, which is implemented by detaching the loss back-propagation from the backbone. The training loss function $\mathcal{L}_{\rm expansion}$ then becomes: \vspace{-5.0mm} \begin{equation} \begin{split} \mathcal{L}_{\rm expansion} = -\,\alpha\, \mathcal{L}(\hat{y}, y), \end{split} \end{equation} \vspace{-5.0mm} where $\alpha$ controls the loss weight for updating and $\mathcal{L}(\hat{y}, y)$ is the multi-label classification loss function given in Eq.~\ref{eq_1}. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{figures/feature_vis_comparison.pdf} \end{center} \captionsetup{font={small}} \caption{Feature visualizations. (a) Examples of input images. (b) Examples of feature visualization of $F_1$ from the trained classifier. (c) Examples of feature visualization of $F^{'}_{clip}$ from our Expansion stage. (d) Recall of the pre-trained classifier vs. our Expansion stage, demonstrating the high-recall results. Red boxes point out the difference between the trained classifier and Expansion, while yellow boxes point out the false positive regions (\textit{e.g.,} background).} \label{fig:feature_vis_comp} \end{figure*} \subsection{Shrinkage} \label{shrinkage} Although most of the possible target object regions are sampled after the Expansion stage, background regions are also included. This inevitably yields imprecise localization maps, which results in poor-quality pseudo ground-truths and hampers the final segmentation performance. To enhance the precision of such high-recall regions, the Shrinkage scheme is proposed to exclude the false positive regions of the localization maps. As shown at the bottom of Figure~\ref{fig:fig_overall_pipeline}, the Shrinkage stage retains the same network architecture as the Expansion stage, except that an extra deformable convolution layer is implemented to narrow down the high-recall regions. Specifically, the model weights obtained from the Expansion stage are used to initialize the Shrinkage model, and a loss minimization is adopted to train the network, including a multi-label classification loss and an area loss. The area regularization is adopted to constrain the size of the localization maps to ensure that irrelevant backgrounds are excluded from the localization map $\mathcal{P}_{k}$: \vspace{-4.0mm} \begin{equation} \begin{split} \mathcal{L}_{\rm shrinkage} = \gamma \mathcal{L}(\hat{y}, y) + \mu \mathcal{L}_{\rm area}, \qquad \mathcal{L}_{\rm area} = \frac{1}{C} \sum^{C}_{c=1} \mathcal{S}_{c}, \end{split} \end{equation} \vspace{-4.0mm} where $\mathcal{S}_{c} = \frac{1}{HW} \sum^{H}_{h=1} \sum^{W}_{w=1} \mathcal{P}_{k}(h, w)$, $C$ is the total number of classes in the dataset, and $H$ and $W$ denote the height and width of the localization maps, respectively. We empirically observe a clear feature activation bias: the initially most discriminative parts are more highlighted after the ES in the Expansion stage, while the newly attended regions have weaker feature activation. This activation bias would encourage the later shrinkage to converge to the same discriminative parts as the initially activated ones, or even more local ones. To address this issue, a feature clipping strategy is proposed after the ES in the Expansion stage to normalize pixel-wise features, providing relatively fair chances for the pixels to be selected by the SS in the Shrinkage stage. The feature clipping strategy is formulated as follows: \vspace{-4.0mm} \begin{equation} \begin{split} F^{'}_{clip}({\rm x}_i) = \rm \begin{cases} \rm \beta max(x), & \rm x_i \ge \beta max(x) \\ \rm x_i, & \rm otherwise \end{cases}, \end{split} \end{equation} \vspace{-3.0mm} where ${\rm x}_i$ denotes the input feature values over the spatial dimension and $\rm max(\cdot)$ obtains the maximal value along the spatial dimension.
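A short PyTorch sketch of the clipping operation and the Shrinkage objective is given below; it is a minimal illustration with our own function names, and the per-image, per-channel treatment of the spatial maximum is an assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def clip_features(feat, beta=0.15):
    # Feature clipping: x_i -> min(x_i, beta * max(x)), with the spatial
    # maximum taken per image and per channel; feat: (B, C, H, W).
    m = feat.flatten(2).max(dim=2).values[..., None, None]
    return torch.minimum(feat, beta * m)

def shrinkage_loss(logits, labels, cams, gamma=1.0, mu=1.0):
    # Multi-label classification loss plus the area regularizer,
    # i.e. (1/C) sum_c S_c, averaged over the batch; cams: (B, C, H, W).
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    return gamma * cls_loss + mu * cams.mean()
\end{verbatim}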
We present examples from PASCAL VOC 2012~\cite{everingham2010pascal} training images in Figure~\ref{fig:feature_vis_comp} for demonstration. During the Expansion stage, the image-level loss maximization forces the offset learning branch to attend to other less discriminative regions, indicating that the ES indeed serves as a sampler locating the entire object regions, thanks to the deformation transformation in the deformable convolution layer. For example, the body and legs of the cat and cow shown in Figure~\ref{fig:feature_vis_comp} are successfully activated by our Expansion scheme, and the overall recall of the foreground is significantly improved across diverse hard-threshold settings. \subsection{Pseudo Ground-truth Generation} \label{wsss_training} \begin{wrapfigure}{R}{0.45\textwidth} \centering \includegraphics[width=0.45\textwidth]{figures/wsss_segmentation.pdf} \vspace{-5mm} \caption{Examples of the final segmentation predictions on the PASCAL VOC 2012 validation set. The top, middle and bottom rows show the input images, the corresponding ground truth and our final segmentation model predictions, respectively.} \label{fig:wsss_seg_vis} \vspace{-5mm} \end{wrapfigure} We obtain the final localization map $M$ from the trained Shrinkage model via the CAM~\cite{zhou2016learning} method. Since a CAM is computed from down-sampled intermediate feature maps produced by a classifier, it has to be up-sampled to match the size of the original image. Thus, it tends to localize the target object coarsely and cannot cover the entire regions with exact boundaries. Many WSSS methods~\cite{chang2020mixup,SubeljBajec2012,lee2019ficklenet,wang2020self,zhang2020causal} produce pseudo ground-truths by extending their initial CAM seeds using seed refinement methods~\cite{ahn2019weakly,ahn2018learning,chen2020weakly}. Similarly, we obtain our final pseudo ground-truths using IRN~\cite{ahn2019weakly}, a state-of-the-art refinement method, to refine the coarse map $M$ and generate better supervision for the segmentation model. Figure~\ref{fig:wsss_seg_vis} illustrates the final segmentation results on the PASCAL VOC 2012 validation set.
\begin{table}[t] \centering \setlength{\tabcolsep}{8pt} \begin{tabular}{l|c|c@{\hskip 0.2in}c@{\hskip 0.2in}c|c@{\hskip 0.2in}c} \Xhline{1pt}&&&&&&\\[-0.95em] \multirow{2}[0]{*}{Method} & Refinement & \multicolumn{3}{c}{PASCAL VOC} & \multicolumn{2}{@{\hskip -0.005in}|c}{MS COCO} \\&&&&&&\\[-0.98em]\cline{3-7}&&&&&&\\[-0.9em] & Method & Seed & CRF & Mask & Seed & Mask \\ \hline\hline &&&&&&\\[-0.9em] $\text{PSA}_{\text{~~CVPR '18}}$~\cite{ahn2018learning} & \multirow{6}{*}{PSA~\cite{ahn2018learning}} & 48.0 & - & 61.0 & - & - \\ $\text{Mixup-CAM}_{\text{~~BMVC '20}}$~\cite{chang2020mixup} & & 50.1 & - & 61.9 & - & - \\ $\text{CDA}_{\text{~~ICCV '21}}$~\cite{su2021context}& & 48.9 & 57.5 & 63.3 & - & - \\ $\text{SC-CAM}_{\text{~~CVPR '20}}$~\cite{chang2020weakly}& & 50.9 & 55.3 & 63.4 & - & - \\ ESOL (Ours) & & \textbf{53.6} & \textbf{61.4} & \textbf{66.4} & - & - \\ \hline $\text{IRN}_{\text{~~CVPR '19}}$~\cite{ahn2019weakly} & \multirow{6}{*}{IRN~\cite{ahn2019weakly}} & 48.8 & 53.7 & 66.3 & 33.5$^{\ddagger}$ & 42.9$^{\ddagger}$ \\ $\text{MBMNet}_{\text{~~ACMMM '20}}$~\cite{liu2020weakly} & & 50.2 & - & 66.8 & - & -\\ $\text{BES}_{\text{~~ECCV '20}}$~\cite{chen2020weakly} & & 50.4 & - & 67.2 & - & -\\ $\text{CONTA}_{\text{~~NeurIPS '20}}$~\cite{zhang2020causal} & & 48.8 & - & 67.9& 28.7$^{\dagger}$ & 35.2$^{\dagger}$ \\ $\text{CDA}_{\text{~~ICCV '21}}$~\cite{su2021context}& & 50.8 & 58.4 & 67.7 & - & - \\ ESOL (Ours) & & \textbf{53.6} & \textbf{61.4} & \textbf{68.7} & \textbf{35.7}$^{\ddagger}$ & \textbf{44.6}$^{\ddagger}$ \\ \Xhline{1pt} \end{tabular}% \vspace{0.3em} \caption{Comparison of the initial localization maps (Seed), the seeds with CRF (CRF), and the pseudo ground-truth masks (Mask) on PASCAL VOC 2012 and MS COCO 2014 training images, in terms of mIoU (\%). $^{\dagger}$ denotes the results reported by CONTA~\cite{zhang2020causal}, and $^{\ddagger}$ denotes the results obtained by us.} \label{tab:table_seed}% \vspace{-9.5mm} \end{table} \vspace{-5.0mm} \section{Experiments} \label{exp} \vspace{-3.0mm} \subsection{Experimental Setup} \label{exp_setup} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{figures/cam_vis_comparison.pdf} \end{center} \vspace{-3.0mm} \captionsetup{font={small}} \caption{Examples of localization maps on PASCAL VOC 2012 training images. (a) Input images. (b) Original baseline CAMs. (c) DT denotes Expansion training without the feature clipping strategy. (d) Expansion results. (e) Shrinkage results. (f) Ground truth.} \label{fig:cam_vis_comp} \vspace{-0.6cm} \end{figure*} \vspace{-2.0mm} \textbf{Dataset and Evaluation metric:} Experiments are conducted on two publicly available datasets, PASCAL VOC 2012~\cite{everingham2010pascal} and MS COCO 2014~\cite{lin2014microsoft}. The PASCAL VOC 2012 dataset contains 20 foreground categories and the background. It has three sets, the training, validation, and test sets, containing 1464, 1449 and 1456 images, respectively. Following most previous works~\cite{chang2020mixup,SubeljBajec2012,lee2019ficklenet,wang2020self,zhang2020causal}, we also adopt the augmented training set~\cite{hariharan2011semantic}, yielding 10582 training images in total. The MS COCO 2014 dataset has 80 foreground categories, including approximately 82K training images and 4K validation images. We evaluate our method on 1449 validation images and 1456 test images from the PASCAL VOC 2012 dataset and on 40504 validation images from the MS COCO 2014 dataset.
The mean intersection-over-union (mIoU)~\cite{long2015fully} is used as the evaluation metric. \textbf{Implementation Details:} \label{imple} We implement CAM~\cite{zhou2016learning} following the procedure of Ahn \textit{et al.}~\cite{ahn2019weakly}, with the PyTorch framework~\cite{paszke2017automatic} on 12\,GB NVIDIA Xp GPUs. We adopt ResNet-50~\cite{he2016deep} as the backbone of the classification model. To prepare the regular convolutional weights for the deformable convolution layers, we follow the common settings to train a classifier for our method, except that we add two additional convolution layers ($3 \times 3$ or $1 \times 1$) that are later used to perform the deformable transformation. For the Expansion stage, we train the network for 6610 iterations on PASCAL VOC 2012 and 51730 iterations on MS COCO 2014. To train the model carefully with loss maximization, we set a relatively small learning rate, 0.01 for PASCAL VOC 2012 and 0.001 for MS COCO 2014, while the controlling parameter $\alpha$ is set to 0.001. Note that the loss gradients are not allowed to back-propagate into the backbone, as we want to provide unchanged pixel-wise feature maps for the ES and SS. For the Shrinkage stage, the network is initialized from the Expansion model weights. The learning rate is set to 0.1 and 0.02, and the number of training iterations is set to 6610 and 51730 for PASCAL VOC 2012 and MS COCO 2014, respectively. The threshold value $\beta$ of the hand-crafted feature clipping strategy is 0.15. Both $\gamma$ and $\mu$ are set to 1.0. To generate reliable initial localization maps, the scale ratios of the multi-scale CAM are $\{ 0.5, 1.0, 1.5, 2.0 \}$. During testing, DenseCRF~\cite{krahenbuhl2011efficient} is used as post-processing to refine the generated localization maps. For the final semantic segmentation, we use the PyTorch implementation of DeepLab-v2-ResNet101 provided by~\cite{deeplabres101}. \subsubsection{Quality of the initial localization maps and pseudo ground-truths} \vspace{-2.0mm} \textbf{PASCAL VOC 2012 dataset:} In Table~\ref{tab:table_seed}, we report the mIoU values of the initial localization maps (seeds) and pseudo ground-truth masks produced by our method and other recent WSSS techniques. Following common practice~\cite{wang2020self,ahn2018learning,ahn2019weakly,chang2020weakly}, we sweep a range of hard-threshold values to separate the foreground and background regions in the localization maps $M$ and determine the best initial seed result. Our initial seeds improve by 5.2\% mIoU over the original CAM seeds (48.4 to 53.6), our comparison baseline, and outperform those from the other methods. Note that our initial seeds are superior to those of CDA~\cite{su2021context} and SC-CAM~\cite{chang2020weakly}, which apply a complicated context decoupling augmentation to the network training or adopt sub-category exploration to enhance the feature representation via an iterative clustering method.
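For reference, the multi-scale CAM inference used to generate the initial seeds can be sketched as follows. The aggregation below (summation over scales followed by per-class max-normalization) is a common convention and an assumption on our part rather than the exact implementation, and \texttt{cam\_fn} is a placeholder for the model's raw CAM computation.
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_cam(cam_fn, image, scales=(0.5, 1.0, 1.5, 2.0)):
    # image: (1, 3, H, W); cam_fn returns raw (1, C, h, w) class maps.
    H, W = image.shape[-2:]
    out = 0.0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear',
                          align_corners=False)
        out = out + F.interpolate(cam_fn(x), size=(H, W),
                                  mode='bilinear', align_corners=False)
    peak = out.flatten(2).max(dim=2).values.clamp(min=1e-5)
    return out / peak[..., None, None]   # max-normalize each class map
\end{verbatim}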
\begin{wraptable}{r}{0.49\linewidth} \centering \vspace{-1em} \begin{tabular}{@{\hskip 0.03in}l@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.03in}} \Xhline{1pt}\\[-0.95em] Method & Backbone & mIoU \\ \hline\hline \\[-0.9em] $\text{ADL}_{\text{~~TPAMI '20}}$~\cite{choe2020attention} & VGG16 & 30.8 \\ $\text{CONTA}_{\text{~~NeurIPS '20}}$~\cite{zhang2020causal} & ResNet50 & 33.4 \\ $\text{SEAM}_{\text{~~CVPR '20}}$~\cite{wang2020self} & Wider-ResNet38 & 32.8 \\ $\text{IRN}_{\text{~~CVPR '19}}$~\cite{ahn2019weakly} & ResNet101 & 41.4 \\ ESOL (Ours) & ResNet101 & 42.6 \\ \Xhline{1pt} \end{tabular}% \caption{Comparison of semantic segmentation on MS COCO validation images.} \label{table_semantic_coco} \vspace{-1em} \end{wraptable} \vspace{-3.0mm} \subsection{Weakly-supervised Semantic Segmentation} \label{wsss_exps} After obtaining the initial seeds based on CAM~\cite{zhou2016learning}, we apply a post-processing method based on Conditional Random Fields (CRF)~\cite{krahenbuhl2011efficient} for pixel-level refinement to the results of SC-CAM~\cite{chang2020weakly}, CDA~\cite{su2021context}, IRN~\cite{ahn2019weakly}, and our method. We observe that applying CRF post-processing improves all the initial seeds by over 5\% mIoU. When the seeds produced by our method are refined with CRF, they improve by 7.8\% mIoU over those from the original CAM (53.6 to 61.4), and consequently outperform all the recent competitive methods by a large margin. We then compare the pseudo ground-truth masks generated after seed refinement with those of other methods. For a fair comparison, we compute our pseudo ground-truth masks using both seed refinement methods, PSA~\cite{ahn2018learning} and IRN~\cite{ahn2019weakly}. Table~\ref{tab:table_seed} shows that the masks from our method yield 66.4\% mIoU with PSA~\cite{ahn2018learning} and 68.7\% mIoU with IRN~\cite{ahn2019weakly}. Figure~\ref{fig:cam_vis_comp} visualizes initial localization maps under different components and training phases for the PASCAL VOC 2012 dataset. The visualizations demonstrate the effectiveness of the proposed Expansion and Shrinkage approach, which sequentially balances the recall and precision of the initial localization maps. More examples are shown in the Appendix. \vspace{-2.0mm} \subsubsection{Performance of the Weakly-supervised Semantic Segmentation} \label{performance_wsss_seg} \vspace{-2.0mm} \textbf{PASCAL VOC 2012 dataset:} Table~\ref{table_semantic} shows the segmentation performance (mIoU) on the PASCAL VOC 2012 validation and test sets. We report the results predicted by our method and other recently proposed WSSS methods, which use either bounding box or image-level labels. All the segmentation results in Table~\ref{table_semantic} were obtained with a ResNet-based backbone~\cite{he2016deep}. Our proposed method achieves 69.9\% and 69.3\% mIoU for the validation and test sets of the PASCAL VOC 2012 dataset, outperforming all the WSSS methods that utilize image-level class labels only.
\begin{table}[t] \centering \begin{minipage}{0.45\linewidth} \centering \scalebox{0.95}{ \begin{tabular}{l@{\hskip 0.4in}c@{\hskip 0.2in}c} \Xhline{1pt}\\[-0.95em] Method & \textit{val} & \textit{test} \\ \hline\hline \\[-0.9em] \multicolumn{3}{l}{Supervision: Bounding box labels} \\ $\text{BoxSup}_{\text{~~ICCV '15}}$~\cite{dai2015boxsup} & 62.0 & 64.6 \\ $\text{Song \textit{et al.}}_{\text{~~CVPR '19}}$~\cite{song2019box} & 70.2 & - \\ $\text{BBAM}_{\text{~~CVPR '21}}$~\cite{lee2021bbam} & 73.7 & 73.7 \\ \\[-0.9em] \hline \\[-0.9em] \multicolumn{3}{l}{Supervision: Image class labels}\\ $\text{IRN}_{\text{~~CVPR '19}}$~\cite{ahn2019weakly} & 63.5 & 64.8 \\ $\text{SEAM}_{\text{~~CVPR '20}}$~\cite{wang2020self} & 64.5 & 65.7 \\ $\text{BES}_{\text{~~ECCV '20}}$~\cite{chen2020weakly} & 65.7 & 66.6 \\ $\text{Chang \textit{et al.}}_{\text{~~CVPR '20}}$~\cite{chang2020weakly} & 66.1 & 65.9\\ $\text{RRM}_{\text{~~AAAI '20}}$~\cite{zhang2020reliability} & 66.3 & 66.5 \\ $\text{CONTA}_{\text{~~NeurIPS '20}}$~\cite{zhang2020causal} & 66.1 & 66.7 \\ ESOL (Ours) & \textbf{69.9} & \textbf{69.3} \\ \Xhline{1pt} \end{tabular}% } \vspace{0.5mm} \caption{Comparison of semantic segmentation performance on PASCAL VOC 2012 validation and test images.} \label{table_semantic} \end{minipage}\hfill \begin{minipage}{0.5\linewidth} \centering \vspace{-0.1em} \scalebox{0.95}{ \begin{tabular}{lc@{\hskip 0.2in}c@{\hskip 0.2in}c} \Xhline{1pt}\\[-0.95em] Method & Sup. &\textit{val} & \textit{test} \\ \hline\hline \\[-0.85em] $\text{SeeNet}_{\text{~~NeurIPS '18}}$~\cite{hou2018self} & $\mathcal{S}$ & 63.1 & 62.8 \\ $\text{FickleNet}_{\text{~~CVPR '19}}$~\cite{lee2019ficklenet} & $\mathcal{S}$ & 64.9 & 65.3\\ $\text{CIAN}_{\text{~~AAAI '20}}$~\cite{fan2020cian} & $\mathcal{S}$ & 64.3 & 65.3 \\ $\text{Zhang \textit{et al.}}_{\text{~~ECCV '20}}$~\cite{zhang2020splitting} & $\mathcal{S}$ & 66.6 & 66.7 \\ $\text{Fan \textit{et al.}}_{\text{~~ECCV '20}}$~\cite{fanemploying} & $\mathcal{S}$ & 67.2 & 66.7 \\ $\text{Sun \textit{et al.}}_{\text{~~ECCV '20}}$~\cite{sun2020mining} & $\mathcal{S}$ & 66.2 & 66.9 \\ $\text{LIID}_{\text{~~TPAMI '20}}$~\cite{liu2020leveraging} & $\mathcal{S}_I$ & 66.5 & 67.5 \\ $\text{Li \textit{et al.}}_{\text{~~AAAI '21}}$~\cite{li2020group} & $\mathcal{S}$ & 68.2 & 68.5 \\ $\text{Yao \textit{et al.}}_{\text{~~CVPR '21}}$~\cite{yao2021nonsalient} & $\mathcal{S}$ & 68.3 & 68.5 \\ ESOL (Ours) & $\mathcal{S}$ & \textbf{71.1} & \textbf{70.4}\\ \Xhline{1pt} \end{tabular}% } \vspace{1.5mm} \caption{Comparison of semantic segmentation performance on PASCAL VOC 2012 validation and test images using explicit localization cues. $\mathcal{S}$: salient object, $\mathcal{S}_I$: salient instance.} \label{table_semantic_sal} \end{minipage} \vspace{-3.0em} \end{table} In particular, our method surpasses CONTA~\cite{zhang2020causal}, a recent state-of-the-art WSSS competitor, which obtains 66.1\% mIoU. CONTA adopted SEAM~\cite{wang2020self}, which uses a WiderResNet-based~\cite{wu2019wider} backbone known to be more powerful than the ResNet-based backbone of IRN~\cite{ahn2019weakly}. When it was implemented with IRN~\cite{ahn2019weakly} for a fair comparison with our method, its segmentation performance reached only 65.3\%, which is inferior to ours by 3.9\% mIoU. Recently, saliency map cues have been introduced to supervise the network training for better localization performance, since the offline saliency maps provide detailed foreground boundary priors.
In Table~\ref{table_semantic_sal}, we also compare our method with other methods using additional salient object supervision. We combine our final pseudo ground-truth masks with saliency maps produced by Li \textit{et al.}~\cite{li2020group} or Yao \textit{et al.}~\cite{yao2021nonsalient}. For pixels where the two sources conflict, i.e., pixels considered foreground by one and background by the other, we simply set the label to 255 in these pseudo ground-truth maps, because the semantic segmentation training ignores such pixels in the cross-entropy loss. We can see that our method achieves 71.1\% and 70.4\% mIoU for the PASCAL VOC 2012 validation and test sets, respectively, consistently outperforming all other methods under salient object supervision. \textbf{MS COCO 2014 dataset:} Table~\ref{table_semantic_coco} reports the segmentation performance on MS COCO 2014 compared with other methods. Our method achieves 42.6\% mIoU on the validation set, surpassing IRN~\cite{ahn2019weakly}, regarded as our baseline, by 1.2\% and outperforming the other recent competitive methods~\cite{choe2020attention,zhang2020causal,wang2020self,ahn2019weakly} by a large margin. In particular, our reproduced result for IRN~\cite{ahn2019weakly} (41.4\% mIoU) differs from that reported by CONTA~\cite{zhang2020causal}. Hence, we compare relative improvements: CONTA reaches a 0.8\% mIoU improvement over IRN (32.6 to 33.4), while our method achieves a 1.2\% mIoU improvement (41.4 to 42.6). \vspace{-3.0mm} \subsection{Ablation Studies} \label{ablations} In this section, we conduct various ablation experiments on the PASCAL VOC 2012 dataset to validate the effectiveness of each component and training scheme of our method. \vspace{-3.0mm} \subsubsection{Expansion Training Analysis} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{figures/alations.pdf} \end{center} \vspace{-2.0mm} \captionsetup{font={small}} \caption{Ablation studies. (a) $\alpha$ sensitivity analysis. (b) $\beta$ sensitivity analysis. (c) Loss combinations. (d) $\gamma$ and $\mu$ balance analysis.} \label{fig:ablations} \vspace{-0.6cm} \end{figure*} \textbf{Loss Maximization Controller $\alpha$:} To analyze the sensitivity of the Expansion training to the loss maximization controller $\alpha$, we sweep $\alpha$ from 0.001 to 0.1, as shown in Figure~\ref{fig:ablations}(a). We find that too small or too large $\alpha$ values cause the model to degenerate towards attending to background regions, showing a lower foreground recall. We choose $\alpha = 0.01$, which balances the recall and precision of the localization maps. Some samples are visualized in the Appendix. \textbf{Feature Clipping Strategy Effect:} In the Expansion stage, we aim to provide high-recall foreground regions with a relatively fair chance of being sampled by the Shrinkage stage. The feature clipping strategy is introduced after the ES, and we study the impact of the hand-crafted threshold settings in Figure~\ref{fig:ablations}(b). \subsubsection{Shrinkage Training Analysis} \textbf{Impact of the Loss Functions:} Although Expansion training yields high-recall foreground regions, background regions cannot be ignored. A classification loss and an area loss are used to optimize the network to select true foreground pixels via loss minimization.
We provide the ablation study in Figure~\ref{fig:ablations}(c) to demonstrate the impact of each loss and find that both are useful for the network training, constraining the size of the localization maps to ensure that irrelevant backgrounds are excluded from the localization maps $\mathcal{P}_k$. \textbf{Loss Minimization Controllers $\gamma$ and $\mu$:} The sensitivity to these two hyper-parameters is shown in Figure~\ref{fig:ablations}(d). It is observed that the performance of our approach is stable under variation of $\gamma$ (from 0.1 to 5) and $\mu$ (from 0.01 to 1.5), \textit{i.e.,} our method is not sensitive to these two hyper-parameters. In our experiments, the default values of $\gamma$ and $\mu$ are simply set to 1.0. \section{Conclusion} \label{conclusion} In this paper, we explore a deformation transformation operation to address a major challenge in weakly-supervised semantic segmentation with image-level class labels. A novel training pipeline for the WSSS task, Expansion and Shrinkage, is proposed to first recover the entire target object regions as much as possible, with the network driven by an inverse image-level supervision signal. Then, we apply a feature clipping operation to provide evenly activated high-recall regions from which Shrinkage samples high-precision regions, while the network is optimized by two loss functions, a classification loss and an area loss. Our method significantly improves the quality of the initial localization maps, exhibiting a competitive performance on the PASCAL VOC 2012 and MS COCO 2014 datasets. \medskip { \small \bibliographystyle{plain}
\section{Introduction} False vacua are accessible in classical and quantum field theory, given enough time and energy. By a careful preparation of the initial state, an experimentalist could, in principle, create an arbitrarily big region of false vacuum in the laboratory. For instance, given that any finite region of the false vacuum will fully decay into asymptotic QFT states, a recipe would be to apply time-reversal to the decay products of a state that contains such a region to begin with. In gravity, the situation is qualitatively different. A big region of a false vacuum with positive cosmological constant (CC) can eternally inflate, and as a result change the asymptotic structure of the spacetime. In fact, in classical gravity the Penrose singularity theorem \cite{Penrose} forbids the formation of such an inflating bubble in a Minkowski vacuum with no singularity in the past \cite{Guth}. It is tantalizing to ask whether this can happen quantum mechanically. It would be quite remarkable if quantum gravity completely excised macroscopic domains of de Sitter vacua from Minkowski and Anti-de Sitter physics. But if it didn't, one could hope to use the false vacuum bubble to shed some light on the notoriously hard-to-understand quantum mechanics of de Sitter (dS) spacetime. We have a better handle on quantum gravity in asymptotically Minkowski spacetime via scattering amplitudes and in asymptotically Anti-de Sitter (AdS) spacetime via holography. If dS arises in the excitations of the Minkowski or AdS vacua, those frameworks can, at least in principle, be employed to also explore dS quantum gravity. On the other hand, dS can help formulate new problems in the asymptotically Minkowski or AdS setups. For instance, because of the negative interior pressure, dS bubbles are generically hidden behind black hole horizons. This provides an interesting interplay between black hole microstate counting and that of de Sitter.\footnote{It is actually known that in an FRW universe with inflation (a period of quasi-dS expansion) in its past there can be black holes connecting to inflating pocket universes \cite{Garriga}. Nevertheless, it would be desirable if we could study dS without starting from it.} But perhaps the best motivation comes from cosmological observations. They suggest that we live in a dS false vacuum with a tiny CC, and that our universe has gone through a period of inflation at very early times. One might wonder: Could it all have started inside the laboratory of a dedicated experimentalist? This question was raised 30 years ago in \cite{Farhi,Fischler,Fischler2}. To go around the singularity theorem, they invoke quantum tunneling to a spacetime that, if classically extended, would contain a singularity in the past. However, the solution of \cite{Farhi,Fischler,Fischler2} is degenerate, and to date there is no consensus on whether or not it describes a valid tunneling process. See \cite{Quevedo} and \cite{Fu} for two recent papers with contrasting views and for further references. To break the impasse, finding an alternative solution seems necessary. Here we propose an alternative, one that is a byproduct of our solution (in $2d$ gravity) to a similarly interesting question raised in \cite{Fu}: Is it possible to prepare, via a Euclidean path integral, an asymptotically AdS state that contains an inflating bubble? The two questions are connected if we take advantage of simplifications that arise in the near horizon geometry of near-extremal black holes.
Specifically, our focus is on the near extremal Reissner-Nordstr\"om (RN) black hole in the Minkowski vacuum and on the near extremal Schwarzschild-de Sitter (SdS) black hole in the false vacuum. Both geometries have a long throat with an approximately constant radius, and hence one can dimensionally reduce to obtain a $2d$ gravity model called Jackiw-Teitelboim (JT) gravity \cite{Jackiw,Teitelboim}. The $2d$ geometry is AdS$_2$ in the RN case and dS$_2$ in the SdS case. These connections have been the subject of extensive research in recent years. Two papers that have been particularly inspiring to us are \cite{Maldacena_AdS,Maldacena_dS}, though for completeness a brief review will be given in section \ref{sec:JT}. In section \ref{sec:sol}, we find a Euclidean solution in which a brane emanates from the AdS$_2$ boundary, decays into a dS$_2$ bubble (a piece of a 2-sphere in Euclidean signature), and bounces back to the AdS boundary (see figure \ref{fig:bounce}). Cutting this bounce solution at the moment of time-reflection symmetry and Wick rotating gives the desired asymptotically AdS$_2$ geometry with an inflating bubble in the middle (see figure \ref{fig:penrose}). Once embedded in four dimensions, the two AdS asymptotics match to the mouths of a near-extremal RN wormhole. The dS$_2$ region describes the near-extremal SdS geometry. And the domain walls carry opposite magnetic charges to break the magnetic field lines. \begin{figure}[t] \centering \includegraphics[scale =0.8]{free.pdf} \includegraphics[scale =0.8]{bounce.pdf} \caption{\small{{\em Schematic}. Left: a freely propagating brane in Euclidean AdS$_2$. Right: the bounce solution for the decay of the brane into a dS bubble. The dashed red lines on the left have to be identified with the solid red lines on the right. The diagrams can be cut at the moment of time-reflection symmetry (along the dotted lines) and Wick rotated to Lorentzian signature.}} \label{fig:bounce} \end{figure} \begin{figure}[t] \centering \includegraphics[scale =0.8]{adsbr.pdf} \includegraphics[scale =0.85]{schds.pdf} \caption{\small{Penrose diagrams obtained from the Wick rotation of the Euclidean solutions in figure \ref{fig:bounce} and analytic continuation to the past. Left: A two sided AdS$_2$ black hole with an elongated throat due to the presence of the massive brane. Right: An inflating bubble that is nucleated inside the AdS$_2$ throat. The domain walls (green) fall through the black hole horizons. In the $4d$ embedding, the inflating region has the spatial topology of $\mathbf{R}\times S^2$. At first, the $\mathbf{R}$ factor expands exponentially and the 2-sphere only slowly. Eventually, the expansion becomes isotropic and we obtain a locally dS$_4$ solution. As seen, the classically extended geometry is singular in the past, as dictated by the Penrose singularity theorem \cite{Penrose,Vilenkin}.}} \label{fig:penrose} \end{figure} In section \ref{sec:con}, we will speculate about the tunneling probability by comparing the bounce action to the Euclidean action for the free propagation of the brane in AdS$_2$. We will conclude with further remarks on the viability of this scenario. \section{JT gravity}\label{sec:JT} Our goal is to study the motion of codimension-1 domain walls and branes in the RN and SdS geometries.
Since we are mainly concerned with the Euclidean solution, it is enough to look at the static patch metric: \begin{equation} ds^2 = -f(r) dt^2 + \frac{dr^2}{f(r)} + r^2 d\Omega^2 \end{equation} where $d\Omega^2$ is the line element on the unit 2-sphere $S^2$, and \begin{equation}\label{fS} f(r) = 1-\frac{8\pi G}{3} \Lambda r^2 -\frac{2G M_i}{r}, \qquad \text{SdS} \end{equation} where $\Lambda$ is the vacuum energy of the false minimum, and \begin{equation}\label{fR} f(r)=1-\frac{2 GM_e}{r} + \frac{4\pi G Q^2}{r^2},\qquad\text{RN}. \end{equation} The horizons correspond to the zeros of $f(r)$. The extremal limit is when the two zeros coincide. In the SdS case this is at \begin{equation} r_0^2 = \frac{1}{8\pi G \Lambda}. \end{equation} Suppose we tune the masses $M_i$, $M_e$ and the magnetic charge $Q$ such that both geometries are near extremality, with approximately equal horizon areas. Then the near-horizon geometry can be studied by dimensionally reducing over $S^2$ and working with the $2d$ model. Ignoring the KK modes and setting $r_0=1$, we have the spherically symmetric metric ansatz \begin{equation} ds^2 = g_{\mu\nu} dx^\mu dx^\nu + (1+\phi) d\Omega^2,\qquad \phi \ll 1 \end{equation} where $\mu,\nu$ run over $0$ and $1$. Dropping topological terms, the $4d$ action reduces to $2d$ JT gravity coupled to $2d$ matter fields $\psi$: \begin{equation}\label{JT} S = C \left[\int d^2x \sqrt{-g} (\phi R - U(\phi))-2\phi_b\oint d\ell\ k\right] + 4\pi S_m[g_{\mu\nu},\psi] \end{equation} where $C =1/(4 G)$ in terms of the $4d$ Newton constant $G$. $\phi$ is called the dilaton field. At $\phi =\phi_b\sim 1$ the $2d$ theory has to be matched with the higher-dimensional one. $k$ is the geodesic curvature of this boundary, and $d\ell$ its line element. Having set the extremal radius to one, the dilaton potential is \begin{equation} U(\phi) = \left\{\begin{array}{cc} 2\phi,\qquad \text{SdS}\\[10pt] -2\phi,\qquad \text{RN}\end{array}\right. \end{equation} To leading order in $\phi$ the $2d$ metric in the false vacuum is the ${\rm dS}_2$ metric with unit radius of curvature.\footnote{Earlier studies of ${\rm dS}_2$ physics include \cite{Anninos,Galante,Cotler,Maldacena_dS}. However, in some cases the possibility of embedding in a higher-dimensional setup has not been a requirement.} Working in the static patch and Wick rotating, we get the sphere metric \begin{equation}\label{ds} ds^2 = d\theta^2 +\sin^2\theta \ d\varphi^2. \end{equation} This can be derived from the variation of \eqref{JT} with respect to $\phi$, or directly from \eqref{fS} by expanding in the near-horizon, near-extremal limit. The $\phi$ solution is given by \begin{equation}\label{phids} \phi = B \cos\theta,\qquad B = 2\sqrt{\frac{2}{3}-2 G M_i}. \end{equation} The points $\theta=0$ and $\theta=\pi$ correspond, respectively, to the cosmological and the black hole horizons of the SdS geometry. Beyond the cosmological horizon, $\phi$ grows and one eventually recovers isotropic $4d$ inflation. The form of the solution \eqref{phids} can also be derived by varying the effective $2d$ action \eqref{JT} with respect to $g_{\mu\nu}$, which gives \begin{equation}\label{phieq} (g_{\mu\nu} \nabla^2 - \nabla_\mu\nabla_\nu)\phi+\frac{1}{2} g_{\mu\nu}U(\phi) = \frac{2\pi}{C} T_{\mu\nu}, \end{equation} and setting $T_{\mu\nu} =0$. Matter perturbations back-react on $\phi$, and in order for the approximation $\phi\ll 1$ to hold true, we need (restoring factors of $r_0$) \begin{equation} G r_0^2 T_{\mu\nu}=\mathcal{O}(\phi) \ll 1. 
\end{equation} We neglect the small effect of these perturbations on the curvature of the $2d$ metric $g_{\mu\nu}$. In the RN case, the extremal radius of a charge $Q$ black hole is \begin{equation} r_e^2 =4\pi G Q^2. \end{equation} We take $r_e\simeq 1$. In the near-extremal limit, the $2d$ metric is approximately a unit-curvature AdS$_2$. In Euclidean signature \begin{equation}\label{ads} ds^2 = d\rho^2 + \sinh^2\rho \ d\varphi^2, \end{equation} as follows from the variation of \eqref{JT} with respect to $\phi$, or directly expanding \eqref{fR} and Wick rotating. Allowing for a small offset between the extremal radii $r_e$ and $r_0 =1$ corresponds to including a small $2d$ CC on the r.h.s. of \eqref{phieq}. The $\phi$ solution is then \begin{equation}\label{phiads} \phi = A \cosh \rho - B_0, \qquad B_0 = 1 - r_e^2, \end{equation} and in terms of the RN mass parameter $M_e$ \begin{equation}\label{A} A = 2 r_e \sqrt{2 r_e(GM_e-r_e)}. \end{equation} We neglect $\mathcal{O}(\phi)$ corrections to the $2d$ geometry. \section{Domain wall motion}\label{sec:sol} The motion of a domain wall follows from the junction condition between the two geometries it connects, and it is particularly simple in the spherically symmetric case \cite{Israel,Blau}. In our approximation, it follows from \eqref{phieq}: \begin{equation}\label{junc} \left.\xi^\mu \partial_\mu \phi\right|_L^R = \kappa, \end{equation} where L/R label the two sides, $\xi^\mu$ is the normal to the domain wall trajectory, pointing from right to left, and $\kappa$ is related to the brane tension $\sigma$ via \begin{equation} \kappa = 8\pi G \sigma. \end{equation} Our bounce solution consists of a brane that emanates from the AdS$_2$ boundary and bifurcates into the dS-AdS domain walls as in figure \ref{fig:bounce}.\footnote{Of course, in $2d$ these are just particles, but we continue calling them by their $4d$ names.} Below we will discuss each part separately. \subsection{Brane in AdS$_2$} In the absence of the brane the metric and dilaton are given by \eqref{ads} and \eqref{phiads} respectively. Next we add a brane that stretches to the AdS boundary and impose $\mathbf{Z}_2$ symmetry across it. When the two sides of the brane are identified, this is often called an end-of-the-world brane. It is considered a model for a one-sided black hole formed from the collapse of a pure state \cite{Kourkoulou}. See also \cite{Cooper,Antonini} for possible connections to cosmology. We do not make this identification. The normal vector to the brane is \begin{equation} \xi^\mu = (\sqrt{1-\dot \rho^2},\frac{\dot \rho}{\sinh \rho}) \end{equation} where an over-dot indicates a derivative with respect to proper length. From \eqref{junc}, we find \begin{equation}\label{dotr} \dot \rho = \sqrt{1- \frac{\sinh^2 \rho_m}{\sinh^2 \rho}},\qquad \sinh \rho_m = \frac{\kappa_0}{2A}, \end{equation} where $\kappa_0$ denotes the normalized brane tension. This is the equation of a geodesic in AdS$_2$. We could also arrive at this conclusion by using the fact that in the presence of a soft source (the brane) the $2d$ curvature has to remain finite. Denoting by $\eta$ the normal coordinate to the brane, the scalar curvature near the brane is related to its geodesic curvature via \begin{equation}\label{kLR} R = 2(k_L-k_R) \delta(\eta) + \rm{regular}. \end{equation} The $\mathbf{Z}_2$ symmetry implies $k_R=-k_L$, hence we must have $k_L=k_R=0$. Strictly speaking, the curve has a small $\mathcal{O}(\phi)$ curvature that is irrelevant for our discussion. 
The angular position of the brane at radial coordinate $\rho$, as measured from the closest approach to the origin $\rho= \rho_m$, can be calculated using \eqref{dotr}, \begin{equation}\label{cosphi} \cos \varphi = \frac{\tanh \rho_m}{\tanh \rho}. \end{equation} Finally, the brane meets the ${\rm AdS}_2$ boundary at $\rho=\rho_c$ at an angle \begin{equation}\label{alb} \alpha_b = \arccos\sqrt{1-\frac{\sinh^2\rho_m}{\sinh^2\rho_c}}, \end{equation} with respect to the normal. This contributes to the boundary term in the JT action \eqref{JT} because $k=2 \alpha_b \delta(u)+$regular, where $u$ is the proper length along $\partial{\rm AdS}_2$ measured from this point. Without any decay, this solution can be Wick rotated at $\rho = \rho_m$ to give a two-sided eternal black hole with a brane inside as in figure \ref{fig:penrose}-Left. \subsection{Brane decay}\label{sec:decay} Suppose that at $\rho=\rho_1$ the brane branches into two domain walls separating Euclidean ${\rm AdS}_2$ from a Euclidean ${\rm dS}_2$ (i.e. $S^2$) region. First, we discuss the geometric aspects of the decay. At the end of this subsection, we will comment on the microscopic aspects. Let us first show that the $2d$ geometry is smooth along the domain wall. This follows from taking the derivative of \eqref{junc} along the domain wall. On the right we get $u^\nu \partial_\nu \kappa$ and on the left \begin{equation} \left.u^\nu \nabla_\nu (\xi^\mu \partial_\mu \phi)\right|^R_L =(u^\nu \nabla_\nu \xi^\mu)\partial_\mu\phi|_L^R + u^\nu \xi^\mu \nabla_\nu\nabla_\mu\phi|_L^R . \end{equation} The first term can be written in terms of the geodesic curvature, and in the second, we can use \eqref{phieq} and the fact that $u\cdot \xi = 0$ to find \begin{equation}\label{lr} (k_R-k_L)u^\mu \partial_\mu\phi +8\pi G u_\nu \xi_\mu (T_L^{\mu\nu}-T_R^{\mu\nu})= u^\nu \partial_\nu \kappa. \end{equation} By energy-momentum conservation the second term on the left is the same as the term on the right. Therefore\footnote{In the JT framework, the same result could be obtained by introducing boundary terms on the two sides of the domain wall and imposing that $\phi$ and $g_{\mu\nu}$ are continuous.} \begin{equation}\label{klr} k_R = k_L \Rightarrow k_{AdS} = k_{dS}. \end{equation} This holds also at the branching point, where the brane can be smeared over some width and be thought of as a flux of Euclidean energy from the AdS side in \eqref{lr} that is absorbed by the domain wall. Equation \eqref{klr} implies that there is no conical singularity at that point and the sum of the three exterior angles is $2\pi$. Note that if the boundaries of the left AdS region, the right AdS region and the dS part were all smooth curves, the sum of the angles would be $3\pi$. Therefore, there has to be a break in the boundary trajectories (see figure \ref{fig:bifur}). \begin{figure}[t] \centering \includegraphics[scale =0.8]{bifur.pdf} \caption{\small{The brane bifurcation into the dS-AdS domain walls.}} \label{fig:bifur} \end{figure} Denoting the break of the right AdS boundary by $\alpha$ and the $\rho$ velocity after the break by $\dot \rho_+$, we have \begin{equation}\label{alpha} \sqrt{1-\frac{\sinh^2 \rho_m}{\sinh^2 \rho_1}} \dot \rho_+ + \frac{\sinh \rho_m}{\sinh \rho_1} \sqrt{1-\dot \rho_+^2}= \cos\alpha. 
\end{equation} On the dS side, using $\mathbf{Z}_2$ symmetry and imposing that the bubble includes the cosmological horizon at $\theta=0$, we find \begin{equation}\label{alpha1} \dot\theta = -\cos\alpha, \qquad \text{at bifurcation} \end{equation} where $\dot \theta$ is measured in the direction away from the branching point. In addition, \eqref{klr} forbids the formation of a dS bubble that is carved out of AdS without a deficit angle because otherwise \begin{equation} \oint d\ell \ k_{dS} <2\pi < \oint d\ell \ k_{AdS} \qquad \text{no angular deficit.} \end{equation} Indeed, if the AdS region in the bounce solution of figure \ref{fig:bounce} is continued beyond the bubble walls, it either encounters a piece of the AdS boundary, or the brane trajectories collide at an angle, implying a conical deficit. Finally, momentum conservation relates $\alpha$, $\kappa_0$ and the domain wall tension $\kappa$: \begin{equation}\label{mom} \kappa_0 = 2 \kappa \cos\alpha. \end{equation} Hence $\alpha < \pi/2$, and $\dot\theta <0$ at the branching point. So far, our treatment of the brane decay has been purely phenomenological. A possible microscopic realization is to consider the brane to be the fundamental particle (FP) describing scalar excitations around the true vacuum. There is a cubic coupling between FP and the domain walls (kink and anti-kink in the limit of degenerate vacua). At weak coupling, FP is much lighter than the domain walls and hence the decay process is kinematically forbidden. This corresponds to a decay process that is allowed in Euclidean signature, as in our phenomenological model (see \eqref{mom}). Our finding is that when gravity is included, the decay products can materialize in the Lorentzian geometry. This is somewhat analogous to the decay of a light axion into an $e^+e^-$ pair. If $m_a <2 m_e$, the decay is kinematically forbidden in vacuum but can happen in a strong electric field. \subsection{dS-AdS domain wall} The Euclidean dS$_2$ metric and dilaton solutions are given by \eqref{ds} and \eqref{phids}. The continuity of $\phi$ at the domain wall implies \begin{equation}\label{contin} A\cosh \rho = B \cos \theta + B_0. \end{equation} To simplify the equations, we assume the bubble forms at $\rho_1\gg 1$, and therefore there is a hierarchy \begin{equation} A\ll B,\qquad A\ll B_0. \end{equation} Then, we get from \eqref{contin} \begin{equation}\label{dotr1} \dot \rho = - \frac{B\sin\theta}{B\cos\theta + B_0} \dot\theta. \end{equation} The opposite signs of $\dot \rho$ and $\dot\theta$ arise because we are keeping the cosmological horizon at $\theta=0$, rather than the black hole horizon at $\theta =\pi$. Therefore, larger values of $\rho$, where the dilaton grows, correspond to smaller values of $\theta$. The fact that $\dot\theta <0$ at the branching point implies that $\dot\rho_+>0$. As a result, the bifurcation can happen only after the AdS brane has passed $\rho=\rho_m$. Using \eqref{dotr1}, the junction condition at the dS-AdS domain wall can be simplified to a one-dimensional motion in a potential \begin{equation}\label{energy} \dot\theta^2 + V(\theta) =0, \end{equation} where \begin{equation} V(\theta) = \left(\frac{(B\cos\theta+B_0)^2-\kappa^2 - B^2 \sin^2\theta}{2 \kappa B\sin\theta}\right)^2 -1. \end{equation} The Euclidean bounce solution will be in the region $-1\leq V(\theta)\leq 0$, which corresponds to \begin{equation} \sqrt{\kappa^2 + B^2\sin^2\theta}\leq B\cos\theta + B_0 \leq \kappa + B\sin\theta. 
\end{equation} To ensure that these conditions are satisfied for a finite range of $\theta$, we impose \begin{equation}\label{bounds} \sqrt{2(\kappa^2+B^2)}< B_0< \sqrt{2}B+ \kappa. \end{equation} See figure \ref{fig:V} for a sketch of the potential. \begin{figure}[t] \centering \includegraphics[scale =0.8]{v.pdf} \caption{\small{The Euclidean potential for the dS-AdS domain wall. The parameter choices are $\kappa=0.35 B$ and $B_0=1.6 B$.}} \label{fig:V} \end{figure} Wick rotation to Lorentzian signature has to be done at one of the turning points: \begin{equation}\label{th-} \theta_- = \arccos\left(\frac{\kappa - B_0}{\sqrt{2} B}\right) - \frac{\pi}{4}, \end{equation} \begin{equation} \theta_+ = \frac{7\pi}{4}- \arccos\left(\frac{\kappa - B_0}{\sqrt{2} B}\right). \end{equation} The potential for the Lorentzian motion is $-V(\theta)$. Hence, if the rotation is done at the larger root $\theta_+$, the walls are guaranteed to fall through the black hole horizon ($\theta = \pi$) and the cosmological region of the SdS geometry will inflate. The starting point of the bounce solution $\theta_1$ is obtained from \eqref{alpha} and \eqref{alpha1}. Using \eqref{dotr1} this can be written as \begin{equation} F(\theta_1) = \frac{\sinh \rho_m}{\sinh \rho_1} \sqrt{1-\dot\rho_1^2} +\left(1- \frac{B\sin\theta_1}{B\cos\theta_1+B_0}\sqrt{1-\frac{\sinh^2\rho_m}{\sinh^2 \rho_1}}\right)\dot\theta_1 =0, \end{equation} where $\dot\theta_1 = -\sqrt{-V(\theta_1)}$, and $\rho_1$ and $\dot \rho_1$ are implicitly functions of $\theta_1$ and they also depend on $A$ and $\kappa_0$. It is convenient to eliminate $A$ and $\kappa_0$, and instead consider $\rho_1$ and $\rho_m$ as independent parameters. It is then easy to see that this equation has a solution when $\rho_m\ll \rho_1$, and that this solution is close to $\theta_-$. In the limit $\rho_m\ll \rho_1$ \begin{equation} F(\theta)\simeq \frac{\sinh \rho_m}{\sinh \rho_1}\sqrt{1-\dot \rho_1^2} - \frac{B\cos\theta +B_0 - B\sin\theta}{B\cos\theta+B_0}\sqrt{-V(\theta)}. \end{equation} We have \begin{equation} F(\theta) \simeq\left\{\begin{array}{cc}\frac{\sinh \rho_m}{\sinh \rho_1} >0,&\qquad \theta\to \theta_-\\[10pt] - \frac{B\cos\theta +B_0 - B\sin\theta}{B\cos\theta+B_0}\sqrt{-V(\theta)}<0,&\qquad \theta-\theta_- \gg \frac{\sinh^2\rho_m}{\sinh^2\rho_1}\end{array}\right. \end{equation} so there is a solution. At this solution \begin{equation} \dot\theta_1 \simeq -\frac{\sinh \rho_m}{\sinh \rho_1}\frac{B\cos\theta +B_0}{\kappa}, \end{equation} and $\theta_1-\theta_- \simeq -\dot\theta_1^2/V'(\theta_-) = \mathcal{O}(\sinh \rho_m/\sinh \rho_1)^2\ll 1$. The solution we are interested in first bounces at $\theta_-$, right after the bifurcation point, then at $\theta_+$ and for a second time at $\theta_-$, right before meeting the second bifurcation point. The point of time-reflection symmetry is $\theta = \theta_+$. In order for the interior to be free of conical singularities, we have to make sure that the bounce is completed over $\Delta\varphi_{\rm dS} =\pi$. In the above approximation, \begin{equation} \Delta\varphi_{\rm dS} \simeq 2 \int_{\theta_-}^{\theta_+} d\theta \frac{\sqrt{1+V(\theta)}}{\sin\theta\sqrt{-V(\theta)}}=\pi. \end{equation} This imposes one constraint on $B,B_0,\kappa$, which we solve numerically and plot in figure \ref{fig:cons}. \begin{figure}[t] \centering \includegraphics[scale =0.8]{b0k.pdf} \caption{\small{Absence of conical singularity in the dS region imposes one constraint (blue line) on $B,B_0,\kappa$. 
The shaded region corresponds to the range \eqref{bounds}, where the potential admits a Euclidean bounce.}} \label{fig:cons} \end{figure} Finally, we should make sure that a piece of AdS boundary remains in the solution. This piece would then cross the time-reflection cut at two points and Wick rotate into the boundaries of the Lorentzian AdS$_2$ region as in figure \ref{fig:penrose}. In the full $4d$ solution they are connected to the Minkowski asymptotics. The angular size of this piece must satisfy \begin{equation}\label{vphiad} \varphi_{b} =4\pi- 4(\Delta_1+\Delta_2 +\Delta_3)>0 \end{equation} where using \eqref{cosphi} \begin{equation}\label{Dphi1,2} \Delta_1 =\arccos (\tanh \rho_m) ,\qquad \Delta_2 = \arccos \frac{\tanh \rho_m}{\tanh \rho_1} \end{equation} and \begin{equation}\label{Dphi3} \Delta_3 =\frac{B\cos\theta_- +B_0}{\sinh \rho_1} \int_{\theta_-}^{\theta_+}d\theta \frac{\sqrt{1+\left(\frac{B\sin\theta}{B\cos\theta+B_0}\right)^2 V(\theta)}}{(B\cos\theta+B_0)\sqrt{-V(\theta)}}. \end{equation} All $\Delta$'s can be made much less than $1$ by choosing $\rho_1\gg \rho_m \gg 1$. \section{Speculations}\label{sec:con} Relative probabilities are obtained by comparing the norms of the various branches of the wavefunction. Here the comparison is between the two-sided RN geometry containing a brane in the middle, and the same geometry but with the brane replaced by an expanding dS bubble. A common approach is to apply the saddle-point approximation to the Euclidean gravity path integral to calculate the squares of the norms (as in the standard example of Coleman-De Luccia (CDL) tunneling \cite{Coleman}): \begin{equation} \frac{p_{\rm dS}}{p_{\rm brane}} \sim e^{S_0-S_b} \end{equation} where $S_0$ is the Euclidean action for the freely propagating brane and $S_b$ is the bounce action. This estimate ignores the prefactor, which in particular includes the square of the coupling between the brane and the domain walls (see the end of section \ref{sec:decay}). It is not possible to calculate $S_0$ and $S_b$ separately without specifying the embedding of the $2d$ solution in a $4d$ Euclidean solution. There is a UV ambiguity in the on-shell JT action \eqref{JT}, which contains a piece proportional to the boundary length, and hence dependent on the cutoff $\rho_c$: \begin{equation} -2\phi_b\oint_{\partial {\rm AdS}_2} d\ell \ k \simeq 2\phi_b\ell_c +A \varphi_{b}-4\kappa_0. \end{equation} Here $\ell_c = \varphi_{b}\sinh \rho_c$, and $\varphi_{b}$ is the angular size given by \eqref{vphiad} if there is a dS bubble, and the same expression with $\Delta_2=\Delta_3=0$ if there is none. We also used \eqref{phiads}, \eqref{dotr}, \eqref{alb} and assumed $\phi_b\simeq A\cosh\rho_c\gg B_0$, and $\rho_m\ll \rho_c$ to get the last two terms. It makes sense to compare solutions that have the same $\ell_c$, which is expected to be determined by the UV. For instance, if the black holes are pair-produced in a magnetic field, as in \cite{Garfinkle}, the boundary length is fixed to $\sim M/Q B$ and the UV action is $\sim M \ell_c$. See \cite{Horowitz} for other formation scenarios that fix $\ell_c$. Given the change in $\varphi_{b}$ when a dS bubble nucleates, equal $\ell_c$ means different $A$ because $\ell_c \simeq \frac{\varphi_{b} \phi_b}{A}$. Equivalently, it means slightly different $M_e$ as follows from \eqref{A}. With this choice, we obtain a UV-insensitive difference \begin{equation} S_b-S_0 = \mathcal{O}(\phi r_0^2/G)\ll S_{BH}, \end{equation} where $S_{BH}$ is the Bekenstein-Hawking entropy of the black hole. 
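The quantities entering this comparison are straightforward to evaluate numerically. The following minimal Python sketch (an illustrative aid, not part of the original analysis; units $B=r_0=G=1$, with the parameter choices of figure \ref{fig:V}) computes the turning points \eqref{th-}, checks the deficit-angle condition $\Delta\varphi_{\rm dS}=\pi$, and evaluates the Euclidean proper length of the dS-AdS wall, which dominates the action difference discussed next. \begin{verbatim}
import numpy as np
from scipy.integrate import quad

B, kappa, B0 = 1.0, 0.35, 1.6   # parameter choices of figure fig:V, in units of B

def V(th):
    # Euclidean potential for the dS-AdS domain wall
    g = ((B*np.cos(th) + B0)**2 - kappa**2 - (B*np.sin(th))**2) \
        / (2.0*kappa*B*np.sin(th))
    return g*g - 1.0

# turning points where V = 0
c = (kappa - B0) / (np.sqrt(2.0)*B)
th_m = np.arccos(c) - np.pi/4.0        # ~ 1.87
th_p = 7.0*np.pi/4.0 - np.arccos(c)    # ~ 2.85

# absence of a conical singularity requires Delta phi_dS = pi
dphi = 2.0*quad(lambda t: np.sqrt(1.0 + V(t))/(np.sin(t)*np.sqrt(-V(t))),
                th_m, th_p, limit=200)[0]

# Euclidean proper length of the wall; 2*kappa*length gives the
# domain-wall contribution to S_b - S_0 (in units B r0^2/G)
length = quad(lambda t: 1.0/np.sqrt(-V(t)), th_m, th_p, limit=200)[0]
print(dphi/np.pi, 2.0*kappa*length)
\end{verbatim} The endpoint singularities of the integrands are integrable and are handled by the adaptive quadrature; the last printed number reproduces the value of $S_b-S_0$ quoted below.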
In particular, in the limit $\rho_1\gg \rho_m\gg 1$, the difference is dominated by the action of the dS-AdS domain wall: \begin{equation}\label{DS} S_b - S_0 \simeq \frac{2\kappa}{G} \int_{\theta_-}^{\theta_+} d\ell,\qquad \text{when}\quad \rho_1\gg \rho_m\gg 1. \end{equation} For the parameter choices of figure \ref{fig:V}, we get $S_b -S_0 \simeq 1.0\frac{B r_0^2}{G}$. In the limit $A,B\to 0$, even though our estimate gives $p_{\rm dS}/p_{\rm brane}\sim 1$, the probability of forming the original RN wormhole is expected to vanish since $\ell_c\to \infty$. To conclude, the uptunneling probability is exponentially suppressed, but it is much larger than $e^{-S_{BH}}$. These arguments suggest that the nucleation of a false vacuum bubble is possible, assuming all ingredients are carefully prepared. However, one of the ingredients, namely the smallness of the wall tension $\kappa=8\pi G\sigma r_0 \ll 1$, is completely beyond the experimentalist's control. Moreover, if the tension is too low the false vacuum region will entirely collapse (e.g. via percolating CDL bubbles). To prevent the latter, we need the critical size of the CDL bubble \begin{equation} r_{\rm CDL} \sim \frac{\sigma}{\Lambda} \end{equation} to be sufficiently big compared to the CC scale, and, to ensure the former, we need it to be small compared to $r_0$: \begin{equation} \Lambda^{-1/4}\ll r_{\rm CDL} \ll (G\Lambda)^{-1/2}. \end{equation} Hence, we need the CDL down-tunneling to be microscopic and unlikely. Even though this is a mild requirement, the full setup might seem exotic, and impractical for creating a Universe in the lab, since it requires constructing the topologically nontrivial RN wormhole with a brane in the throat. However, there are reasons to believe that it is not completely fictitious. It has been argued in \cite{Maldacena_traversable} that a traversable magnetic wormhole can be constructed in a QED-like model. (See \cite{Qi,Fu_wormhole,Maldacena_realtime} for related work and comments on the formation time.) This wormhole is kept open using the negative Casimir energy. Starting from there, it is a trivial task to get a non-traversable RN wormhole (which is all that we need here) by adding mass to the system.\footnote{In the first version of this article, we speculated about the possibility of a brief period of causal communication with the inflating region. However, this requires violating the achronal null energy condition, and cannot be achieved using the Casimir energy. We thank Douglas Stanford for pointing this out.} Lastly, our effective treatment of branes and domain walls guarantees that gravity by itself does not forbid the formation of the false vacuum bubble. If the Maxwell theory in the Minkowski vacuum emerges from Higgsing a non-abelian gauge theory inside the false vacuum, the discharge of the magnetic field lines is also automatic and electromagnetism does not forbid the process either. Given that in non-gravitational theories false vacua are probed in scattering processes, it is not unimaginable that our toy model can indeed be an approximation to a scattering process taking place inside a well-prepared wormhole. \section*{Acknowledgments} We thank Sergei Dubovsky, Andrei Gruzinov, Kyriakos Papadodimas, Veronica Pasquarella, Shahin Sheikh-Jabbari, Eva Silverstein, Douglas Stanford, Giovanni Villadoro, and Zhenbin Yang for stimulating discussions. This work was partially supported by the Simons Foundation Origins of the Universe program (Modern Inflationary Cosmology collaboration).
\section{Introduction, motivations, outline} \subsubsection{Projection onto positive semidefinite matrices} Consider the space $\Sy$ of symmetric $n$-by-$n$ matrices, equipped with the norm associated to the usual inner product \[ \prods{X}{Y}=\sum^n_{i,j=1}X_{ij}Y_{ij}=\trace(\trans{X}Y). \] The subset $\sdp$ made of positive semidefinite matrices forms a closed convex cone of $\Sy$. A general result for closed convex sets yields that we can project onto~$\sdp$: given $C\in \Sy$, there exists a unique element of $\sdp$ (called the projection of $C$ onto $\sdp$ and denoted by $\psdp(C)$) such that \[ \norm{\psdp(C)-C} = \min_{X\in \sdp}\norm{X-C}. \] It turns out that we also have an explicit expression of this projection, through the spectral decomposition of $C$. Consider indeed the decomposition \[ C = U \Diag(\la_1 ,\ldots, \la_n )\trans{U} \] where $\la_1 \geq \cdots \geq\la_n$ are the eigenvalues of $C$ and $U$ is a corresponding orthonormal matrix of eigenvectors of $C$; then the projection of $C$ onto $\sdp$ is \begin{equation}\label{projsdp} \psdp(C) = U \Diag\big(\max(0, \la_1 ),\ldots,\max(0, \la_n )\,\big)\trans{U}. \end{equation} This result was noticed early by statisticians \cite{stat-1979} (see also \cite{higham-1988}), and since then this projection has been widely used. We notice that this result generalizes nicely to ``spectral sets''; see \cite{lewis-malick-2007}. Note also that the numerical cost of computing this projection is essentially that of computing the spectral decomposition of $C$, the matrix to project. The developments of this chapter show that more sophisticated projections onto subsets of $\sdp$ are also computable using standard tools of numerical optimization. More specifically, the subsets that we consider are intersections of the cone $\sdp$ with a polyhedron (defined by affine equalities and inequalities). Though the projection onto those intersections is not explicit anymore, we still have efficient algorithms to compute them, even for large-scale problems. \subsubsection{Projection onto correlation matrices} The most famous example of such projections is the projection onto the set of correlation matrices (the real symmetric positive semidefinite matrices with ones on the diagonal). It is common to be faced with a matrix that is supposed to be a correlation matrix but for a variety of reasons is not. For example, estimating correlation matrices when data come from different time frequencies may lead to a non-positive semidefinite matrix. Another example is stress-testing in finance: a practitioner may wish to explore the effect on a portfolio of assigning certain correlations differently from the historical estimates, but this operation can destroy the semidefiniteness of the matrix. These important practical questions have led to much interest in the problem of computing the nearest correlation matrix to a given matrix $C$ (see e.g.\,\cite{higham-2002}, \cite{malick-2004}, \cite{qi-sun-2006} and \cite{borsdorf-higham-2008}). This problem is simply formulated as the projection of $C$ onto correlation matrices \begin{equation}\label{eq:corr} \accol{ \min \quad \frac{1}{2}\norm{X-C}^2\\ \quad X_{ii} = 1, \quad i=1,\ldots,n\\ \quad X\succeq 0. } \end{equation} The methods reviewed in this chapter apply to solving this problem in particular. The point is that this problem (and variants of it) can now be solved efficiently (for sizes up to $n=5000$; the only limitation on a standard computer is memory). 
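Before going further, note that the explicit formula \eqref{projsdp} is straightforward to implement. The following minimal Python/\texttt{numpy} sketch (an illustration, not code from the references) will serve as the basic building block of the algorithms discussed below; its cost is dominated by the eigenvalue decomposition, as just mentioned. \begin{verbatim}
import numpy as np

def proj_psd(C):
    # Projection onto the PSD cone via the spectral decomposition:
    # clip the negative eigenvalues at zero, as in the explicit formula.
    lam, U = np.linalg.eigh(C)
    return (U * np.maximum(lam, 0.0)) @ U.T
\end{verbatim}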
\subsubsection{Conic projection problem} The general problem that we first focus on in this chapter is the following. In the space $\RR^n$ equipped with the standard inner product, we want to compute the projection of a point $c\in \RR^n$ onto the intersection $\Kk\cap \Pp$ where \begin{itemize} \item $\Kk$ is a closed convex cone in $\RR^n$ (that we further assume to have full dimension in $\RR^n$; that is, $\Int\Kk \neq \emptyset$), \item $\Pp$ is a convex polyhedron defined by affine (in)equalities \[ \Pp:=\left\{x\in \RR^n\!:\,\trans{a_i\!}x= (\textor \leq)\;b_i, \quad i=1,\ldots,m\right\}. \] \end{itemize} We suppose moreover that the intersection $\Kk\cap\Pp$ is nonempty, so that the projection onto the closed convex set $\Kk\cap\Pp$ exists and is unique (see e.g.\;\cite{hull-1993}). The fact that $\Pp$ is defined by both equalities and inequalities does not really matter in our developments. To simplify presentation, one may take only equalities, so that $\Pp$ is an affine subspace. We prefer to keep the above loose notation with both equalities and inequalities, because it is closer to projection problems arising in practice, and because it does not impact the basics of projection algorithms. Adding positive slack variables for the inequalities allows us to reformulate the problem as a projection onto the intersection of an affine space with a cone of the form $\Kk\times (\RR_+)^\mI$. We note that in general one can project onto a polyhedron $\Pp$. For the case when there are only (independent) equalities in the definition (i.e.\;if $\Pp=\Aa$ is an affine subspace defined by the equation $Ax=b$ with a full-rank matrix $A$), we have the explicit expression of the projection of $x$ \begin{equation}\label{eq-projU} \proj_{\Aa}(x)=x-\trans{A}[A\trans{A}]^{-1}(A x-b). \end{equation} For a general polyhedron $\Pp$, we can still compute the projection $\proj_{\Pp}(x)$ efficiently using quadratic programming solvers (see e.g.\,\cite{nocedal-wright-1999} or \cite{bgls-2003}). In this chapter, we make the practical assumption that it is also easy to project onto $\Kk$. Recall from above that we have an easy-to-compute expression of the projection for $\Kk =\sdp$; this turns out also to be the case for the second-order cone (or Lorentz cone) \[ \Ll_n:=\left\{x\in \RR^n:\ \norm{(x_1,\ldots,x_{n-1})}\leq x_n\right\}. \] Though it is easy to project onto $\Pp$ and also onto $\Kk$ by assumption, the projection onto the intersection $\Pp\cap\Kk$ can still be challenging. The difficulty comes from the presence of both (affine and conic) constraints at the same time. We will see in \secref{sec:sdls} that many numerical methods to compute the projection onto the intersection use combinations of projections onto $\Pp$ and $\Kk$ separately. The geometrical projection problem has an obvious analytical formulation as a least-squares problem, namely minimizing the (squared) norm subject to the conic constraints and the affine constraints: \begin{equation}\label{eq:pb} \accol{\min \quad \frac{1}{2}\norm{x-c}^2\\ \quad x\in \Pp\cap \Kk.} \end{equation} An important subclass of such problems consists of semidefinite least-squares problems (i.e.\;when $\Kk=\sdp$): \begin{equation}\label{eq:sdls} \accol{\min \quad \frac{1}{2}\norm{X-C}^2\\ \quad \prods{A_i}{X}= (\textor \leq)\;b_i, \ \ i=1,\ldots,m\\ \quad X\succeq 0,} \end{equation} for $C,A_i\in \Sy$. In particular, the nearest correlation matrix problem \eqref{eq:corr} is an instance of this latter class. 
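For later use, here is a minimal \texttt{numpy} sketch of the affine projection \eqref{eq-projU} (an illustration under the same full-rank assumption as in the text; the normal equations are solved rather than forming the inverse explicitly). \begin{verbatim}
import numpy as np

def proj_affine(x, A, b):
    # Projection onto {x : A x = b} for a full row-rank A,
    # cf. the explicit formula x - A^T [A A^T]^{-1} (A x - b).
    y = np.linalg.solve(A @ A.T, A @ x - b)
    return x - A.T @ y
\end{verbatim}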
Notice finally that problems \eqref{eq:pb}~and~\eqref{eq:sdls} coincide formally with $x\in \RR^{n^2}$ collecting the rows of $X\in \Sy$. This also explains the slight abuse of notation when writing $\Kk=\sdp$. We use this ambiguity $x\leftrightarrow X$ in particular in \secref{sec:poly} to ease presentation of relaxations of polynomial optimization problems. \subsubsection{Linear conic optimization problem} The second development of this chapter is about a more standard topic: solving linear conic optimization problems. With the above notation, these problems can be expressed as \begin{equation}\label{eq:lin} \accol{\min \quad \trans{c}{x}\\ \quad x\in \Pp\cap \Kk.} \end{equation} As presented in this handbook as well as in the first handbook \cite{wolkowicz-saigal-vandenberghe-2000}, linear conic programming has been a very active field of research, spurred by many applications and by the development of efficient methods. In this chapter, we explain how conic projections can be used to develop a new family of algorithms for solving linear conic problems. In fact, the projection problem \eqref{eq:pb} can be written as a linear problem of the form \eqref{eq:lin} and then can be solved using standard conic programming solvers (we come back to this at the beginning of \secref{sec:sdls} and we explain why it is not a good idea to do so). However, we will also go the other way around: \secref{sec:prox} explains that one can also solve the linear conic problem \eqref{eq:lin} by solving projection problems~\eqref{eq:pb}, more precisely with a succession of (truncated) projection-like problems. So-called regularization methods are presented, discussed and illustrated on solving semidefinite relaxations of combinatorial optimization and polynomial optimization problems having many constraints. \subsubsection{Polynomial optimization} Over the last decade, semidefinite programming has been used in polynomial optimization, namely for deciding whether a multivariate real polynomial is nonnegative, or, more generally, to minimize a polynomial on a semialgebraic set (a set described by polynomial inequalities and equations). A hierarchy of embedded linear semidefinite relaxations (of the form \eqref{eq:lin}) can be constructed to generate a monotone sequence of bounds on the global minimum of a polynomial optimization problem. Asymptotic convergence of the sequence to the global minimum can be guaranteed under mild assumptions, and numerical linear algebra can be used to detect global optimality and extract global minimizers. The theory is surveyed in \cite{laurent-2009} and \cite{lasserre-2009}; the potential applications are numerous (see e.g.\;control theory \cite{henrion-garulli-2005} or signal processing \cite{dumitrescu-2007}). Section\;\ref{sec:poly} reports numerical experiments showing that regularization algorithms based on projections outperform classical primal-dual interior-point algorithms for solving semidefinite relaxations arising when deciding whether a polynomial is nonnegative, and for globally minimizing a polynomial. \subsubsection{Objectives and outline of this chapter} This chapter focuses on projection problems that have a simple geometric appeal as well as important applications in engineering. We give references to some of these applications, with emphasis on polynomial optimization. The first goal of this chapter is to sketch the different approaches to solve~\eqref{eq:pb}. 
\secref{sec:sdls} is devoted to this review, with an emphasis on dual methods (in \secref{sec:dual}). The bottom line is that as soon as we can easily project onto $\Kk$ (we have in mind $\Ll_n$ and $\sdp$ as well as direct products of these), we have efficient algorithms to project onto the intersection $\Kk\cap\Pp$. The second goal of this chapter is to explain how to use these conic projections to build a family of ``regularization'' methods for linear conic programming. The approach uses standard optimization techniques (proximal algorithms and augmented Lagrangian methods) and has been recently developed for the case $\Kk=\sdp$. \secref{sec:reg} presents it in a general framework and underlines the role of conic projections. The final section presents some numerical experiments with regularization methods on polynomial optimization problems, showing the interest of the approach in that context. This chapter is meant to be an elementary presentation of parts of the material of several papers; among those, our main references are \cite{malick-2004}, \cite{qi-sun-2006}, \cite{malick-povh-rendl-wiegele-2009}, \cite{henrion-malick-2009}, \cite{zhao-sun-toh-2010} and \cite{nie-2009}. We aim at clarifying the ideas, presenting them in a general framework, unifying notation, and most of all, pointing out what makes things work. To this purpose, we have to omit some technical points; in particular, we discuss algorithms, but we do not give convergence results. We try to give precise references throughout the text for these missing points. \section{Conic projections: algorithms and applications}\label{sec:sdls} This section reviews the methods for solving the conic projection problem \eqref{eq:pb}, presenting them in chronological order. We sketch the main ideas and give references; we do not get into much detail. Discussions about convergence issues and numerical comparisons are beyond the scope of this section. Besides interior-point methods, the basic idea of all the approaches is to somehow separate the two constraint sets $\Kk$ and $\Pp$ and to use the projections onto them successively: this is obvious for alternating projection and alternating direction methods; it is also the case for dual methods (we focus on the latter in \secref{sec:dual}). The point is that we can solve the conic projection problem \eqref{eq:pb} efficiently (by dual algorithms in particular). To simplify presentation, we stick here to the projection problem \eqref{eq:pb}, but the approach generalizes in two directions. First, we could replace the cone $\Kk$ by any closed convex set: in this more general case, the developments are similar, with slightly more complicated expressions of dual objects (a related work is \cite{micchellli-utretas-1988}). Second, we could consider problems with strongly convex quadratic objective functions, such as \begin{equation}\label{eq:gene} \accol{\min \quad \trans{(x-c)}Q(x-c) + \trans{d}{x}\\ \quad x\in\Kk\cap \Pp} \end{equation} with $Q$ positive definite. Such problems can be phrased as projection problems with respect to the norm $\norm{x}_Q = \sqrt{\trans{x}\!Q\,x}$ associated to~$Q$. The practical technical assumption is then that one can project onto $\Kk$ with respect to $\norm{\cdot}_Q$ (which is not easy in general). 
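Indeed, completing the square shows that \eqref{eq:gene} is a projection problem for the shifted point $\tilde c := c-\frac{1}{2}Q^{-1}d$: \[ \trans{(x-c)}Q(x-c) + \trans{d}{x} = \norm{x-\tilde c}_Q^2 + \trans{d}{c} - \tfrac{1}{4}\trans{d}Q^{-1}d, \] so that minimizing over $x\in\Kk\cap\Pp$ amounts to projecting $\tilde c$ onto $\Kk\cap\Pp$ with respect to $\norm{\cdot}_Q$.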
\subsection{Computing conic projections}\label{sec:algo} \subsubsection{Using linear conic programming} A tempting method to solve \eqref{eq:pb} is to cast this projection problem as a usual linear conic programming problem, so that we can use the powerful tools developed for this case. There are several ways to do so; a simple one consists in pushing down the objective function with an additional variable $t$: \eqref{eq:pb} is indeed equivalent to the linear conic program \[ \accol{\min \quad t \\ \quad x\in \Pp\\ \quad x-c = z\\ \quad (x,(z,t))\in \Kk\times \Ll_{n+1}} \] where the variable $z\in \RR^n$ is introduced to express the additional second-order cone constraint. This problem can be readily given to standard conic solvers, for example interior-point methods like \sedumi \cite{sturm-1999} or \texttt{SDPT3} \cite{tutuncu-toh-todd-2003} under Matlab. Unfortunately, adding $(z,t)$ makes the computational cost and memory space needed by a standard primal-dual interior-point method increase, and numerical testing confirms that the method is not viable in general (as mentioned e.g.\;in \cite{higham-2002},\cite{toh-2007}). We note furthermore that the projection problem \eqref{eq:pb} is a quadratic conic programming problem, hence a special case of nonlinear conic optimization problems. We could solve \eqref{eq:pb} by algorithms and software devoted to nonlinear conic optimization problems, such as the penalization method of \cite{kocvara-stingl-2003}. However, those methods would not use the special structure of \eqref{eq:pb}, and like the above approach by linear conic programming, they would be efficient only for small-size projection problems. The projection problems are important enough to design algorithms specifically for them, as presented in the sequel. Note that we are not aware of a tailored penalization algorithm for~\eqref{eq:pb}. \subsubsection{Alternating projections} The alternating projection method is an intuitive algorithmic scheme to find a point in the intersection of two sets: it consists in projecting the initial point onto the first set, then projecting the new point onto the second set, then projecting again the new point onto the first, and so on, projecting alternately. In other words, it consists in repeating: \begin{equation}\label{eq:alternating} \accol{x_{k+1} = \proj_{\Kk}(y_k)\\ y_{k+1} = \proj_{\Pp}(x_{k+1})\\ } \end{equation} If the two sets have a ``regular'' intersection, this algorithm converges linearly to a point in $\Pp\cap \Kk$ and we know the speed of convergence (for two convex sets, see e.g.\;\cite{deutsch-2001}; for the general case, see the local result of \cite{lewis-luke-malick-2008}). We can modify this simple alternating projection scheme by adding a correction step (called Dykstra's correction \cite{dykstra-1983}) at each iteration \eqref{eq:alternating} \begin{equation}\label{eq:dykstra} \accol{x_{k+1} = \proj_{\Kk}(z_k)\\ y_{k+1} = \proj_{\Pp}(x_{k+1})\\ z_{k+1} = z_k-(x_{k+1}-y_{k+1}). } \end{equation} This modification ensures the convergence of the sequence $(x_k)_k$ to the projection $\proj_{\Kk\cap\Pp}(c)$ -- and not only to a point in the intersection $\Kk\cap \Pp$. This approach was proposed by \cite{higham-2002} for the nearest correlation matrix problem~\eqref{eq:corr}. It generalizes to \eqref{eq:pb} since it is easy to project onto $\Pp$ and we assume that it is the same for $\Kk$. 
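To make this concrete, here is a minimal \texttt{numpy} sketch of scheme \eqref{eq:dykstra} for the nearest correlation matrix problem \eqref{eq:corr}, where the projection onto $\Pp$ simply resets the diagonal; this is an illustrative sketch (with $z_0=C$ and an illustrative stopping test), not the implementation of \cite{higham-2002}. \begin{verbatim}
import numpy as np

def proj_psd(C):
    lam, U = np.linalg.eigh(C)
    return (U * np.maximum(lam, 0.0)) @ U.T

def proj_unit_diag(X):
    Y = X.copy()
    np.fill_diagonal(Y, 1.0)   # projection onto P = {X : X_ii = 1}
    return Y

def nearest_correlation_dykstra(C, tol=1e-8, max_iter=5000):
    z = C.copy()                # z_0 = C
    for _ in range(max_iter):
        x = proj_psd(z)         # x_{k+1} = P_K(z_k)
        y = proj_unit_diag(x)   # y_{k+1} = P_P(x_{k+1})
        z = z - (x - y)         # Dykstra's correction
        if np.linalg.norm(x - y) <= tol:   # x almost in both sets
            break
    return x                    # (x_k) converges to the projection of C
\end{verbatim}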
We will see that dual methods and alternating direction methods can be interpreted as variants of this basic geometrical method. \subsubsection{Dual methods} The conic programming problem \eqref{eq:pb} looks more complicated than a usual conic programming problem, which has a linear objective instead of a norm. It turns out that the strong convexity of the objective function gives the dual problem nice properties, so that it can be solved efficiently. The dual approach was proposed for the conic least-squares problem \eqref{eq:pb} in \cite{malick-2004}, later revisited by \cite{boyd-xiao-2005} for the case of $\Kk=\sdp$, and then enhanced by \cite{qi-sun-2006} and \cite{borsdorf-higham-2008} for the projection onto correlation matrices. In the next section, we give more details and more references about this approach. \subsubsection{Interior points} As a convex optimization problem, \eqref{eq:pb} can be attacked with the interior-point machinery \cite{nemirovski-nesterov-1994}, assuming that both the cone $\Kk$ and its polar cone \[ \Kk^o:=\left\{s\in \RR^n: \ \trans{s}x \leq 0 \ \text{for all }x\in \Kk\right\} \] are equipped with so-called self-concordant barriers (as is the case for $\Ll_n, \sdp$). The approach consists in solving perturbed optimality conditions of \eqref{eq:pb}. As for any projection problem, the optimality condition is \[ \bar x \in \Pp\cap \Kk, \quad \trans{(c- \bar x)}{(x- \bar x)}\leq 0, \quad \text{for all }x\in \Pp\cap \Kk. \] To write down the optimality conditions more concretely, let us make explicit the affine constraints with the help of $\AE\in\RR^{n\times \mE}$ and $\AI\in\RR^{n\times \mI}$ as \begin{equation}\label{eq:pb2} \accol{\min \quad \frac{1}{2}\norm{x-c}^2\\ \quad \AE x=\bE, \ \AI x\leq \bI\\ \quad x\in \Kk.} \end{equation} Under a non-degeneracy assumption (e.g.\;the Slater condition, see the next section), the optimality conditions of \eqref{eq:pb2} give the complementarity system \[ \accol{x-c+u+\trans{\AE}y+\trans{\AI}z = 0\\[0.5ex] \AE x = \bE, \ y\in \RR^{\mE}\\[1ex] \AI x \leq \bI, \ z\in \RR_+^\mI, \ \trans{z}(\AI x - \bI)=0\\[1ex] x\in \Kk, \ u\in \Kk^o, \ \trans{u}{x}=0. } \] Roughly speaking, an interior-point approach consists in perturbing the complementarity equations above while keeping the other equations satisfied. (We will see that the forthcoming dual approach goes exactly the other way around.) A first interior-point method is proposed in \cite{takouda} for the nearest correlation matrix problem \eqref{eq:corr}. Interior-point methods for general quadratic SDP are introduced and tested on projection problems \eqref{eq:pb} in \cite{tutuncu-toh-todd-2006} and \cite{toh-2007}. \subsubsection{Alternating directions} The alternating direction method is a standard method in variational analysis (see e.g.\;\cite{gabay-1976}), going back to \cite{douglas-rachford-1956}. This method was proposed by \cite{adm-2009} for solving the semidefinite projection problem \eqref{eq:sdls} and by \cite{adm-2009-2} for more general quadratically constrained quadratic SDP. The idea of the method is to exploit the separable structure of the problem, as follows. 
Let us duplicate the variables to write the equivalent problem \begin{equation}\label{eq:pb3} \accol{\min \quad \frac{1}{2}\norm{x-c}^2+ \frac{1}{2}\norm{y-c}^2\\ \quad x=y \\ \quad x\in \Kk,\quad y\in \Pp.} \end{equation} The alternating direction method applied to \eqref{eq:pb3} gives the following scheme: consider the augmented Lagrangian function \[ L(x,y;z) = \frac{1}{2}\norm{x-c}^2+ \frac{1}{2}\norm{y-c}^2 -\prods{z}{x-y} + \frac{\beta}{2}\norm{x-y}^2; \] the minimization of $L$ with respect to the primal variables $(x,y)$ is decomposed into two steps, so that an augmented Lagrangian iteration is \[ \accol{x_{k+1} = \argmin_{x\in \Kk}L(x,y_k,z_k)\\ y_{k+1} = \argmin_{y\in \Pp}L(x_{k+1},y,z_k)\\ z_{k+1} = z_k-\beta(x_{k+1}-y_{k+1}). } \] It is not difficult to prove that the two above minimizations boil down to projections, more specifically \[ x_{k+1}=\pK\!\Big(\frac{\beta y_k+z_k+c}{1+\beta}\Big),\quad y_{k+1}=\pA\!\Big(\frac{\beta x_{k+1}-z_k+c}{1+\beta}\Big). \] The approach thus alternates projections onto $\Pp$ and $\Kk$ to compute the projection onto $\Kk\cap\Pp$; it can be seen as a modification of the simple alternating projection scheme \eqref{eq:alternating}, with the same flavour as the Dykstra modification~\eqref{eq:dykstra}. \subsection{More on dual approach}\label{sec:dual} \subsubsection{Apply standard machinery} Let us give more details about the dual approach for solving \eqref{eq:pb}. Following \cite{malick-2004}, we apply the standard mechanism of Lagrangian duality to this problem; we refer to \cite[Ch.\,XII]{hull-1993} and \cite[Ch.\,5]{boyd-vandenberghe-2004} for more on this mechanism in general. Let us consider the more explicit form \eqref{eq:pb2}, and denote also by $A:=[\AE;\AI]$ and $b:=[\bE;\bI]$ the concatenation of the affine constraints. We dualize the affine constraints only: introduce the Lagrangian, a function of the primal variable $x\in \Kk$ and the dual variable $(y,z)\in \RR^\mE\!\times\RR_+^\mI$ \begin{equation}\label{eq-def-lagr} L(x;y,z) := \frac{1}{2}\norm{c - x}^2 - y^{\!\top}(\AE x-\bE) - z^{\!\top}(\AI x-\bI), \end{equation} and the corresponding concave dual function \begin{equation}\label{eq-def-theta} \theta(y,z) := \min_{x \in \Kk} L(x;y,z), \end{equation} which is to be maximized. There are no affine constraints left in the above minimization, and it is easy to prove (\cite[Th.3.1]{malick-2004}) that the problem corresponds to a projection onto $\Kk$: there exists a unique point attaining the above minimum, namely \begin{equation}\label{eq-xy} x(y,z) :=\pK(c+ \trans{\AE}y + \trans{\AI}z), \end{equation} so we have \begin{equation}\label{eq-theta} \theta(y,z)=\trans{\bE}\!y + \trans{\bI}\!z +\frac{1}{2}(\norm{c}^2-\norm{x(y,z)}^2). \end{equation} It is also not difficult to show \cite[Th.3.2]{malick-2004} that the concave function $\theta$ is differentiable on $\RR^m$, and that its gradient \begin{equation}\label{eq-grad-theta} \nabla \theta(y,z)=-A x(y,z) + b \end{equation} is Lipschitz continuous. Like any function with Lipschitz gradient, $\theta$ is twice differentiable almost everywhere, but not everywhere (this basically relies on the differentiability properties of $\pK$; for the case $\Kk=\sdp$, see more in \cite{sun-sun-2002} and \cite{malick-sendov-2005} among others). 
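In view of \eqref{eq-xy}--\eqref{eq-grad-theta}, each evaluation of $\theta$ and of its gradient costs essentially one projection onto $\Kk$. For $\Kk=\sdp$ with equality constraints only, a minimal \texttt{numpy} sketch of these formulas (an illustration, not code from the references) is: \begin{verbatim}
import numpy as np

def proj_psd(C):
    lam, U = np.linalg.eigh(C)
    return (U * np.maximum(lam, 0.0)) @ U.T

def dual_value_and_gradient(y, C, A_list, b):
    # theta(y) and its gradient for min 0.5||X-C||^2, <A_i,X>=b_i, X psd
    Aty = sum(yi * Ai for yi, Ai in zip(y, A_list))  # A^T y, a symmetric matrix
    x = proj_psd(C + Aty)                            # x(y): one conic projection
    theta = b @ y + 0.5 * (np.sum(C * C) - np.sum(x * x))
    grad = b - np.array([np.sum(Ai * x) for Ai in A_list])  # -A x(y) + b
    return theta, grad
\end{verbatim}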
The dual problem is thus \begin{equation}\label{eq:dual} \accol{\max\quad \theta(y,z)\\ \quad (y,z)\in \RR^{\mE}\times\RR_+^{\mI}.} \end{equation} Strong duality (the optimal values of \eqref{eq:pb2} and \eqref{eq:dual} coincide) holds under a standard assumption in convex optimization. The so-called (weak) Slater assumption (see e.g.\,\cite{bertsekas-1995}, \cite{hull-1993}) reads in our context: \begin{equation}\label{eq:slater} \exists \ \bar x \in \Pp \cap \Int \Kk. \end{equation} In fact, this assumption moreover yields that there exist solutions to \eqref{eq:dual} (note that the assumption also has a natural geometrical appeal in the context of projection methods; see \cite[Sec.\,3]{henrion-malick-2009}). Finally, we obtain the projection directly from dual solutions: if $(y^*,z^*)$ is a (dual) solution of \eqref{eq:dual}, then the (primal) solution $x^*$ of \eqref{eq:pb} is the associated $x^*=x(y^*,z^*)$ (see \cite[Th.\,4.1]{malick-2004}). \subsubsection{Apply standard algorithms} To compute the projection of $c$ onto $\Pp\cap \Kk$, we just have to solve the dual problem \eqref{eq:dual}. Let us have a closer look at this problem: the constraints are simple positivity constraints on the variables corresponding to the dualization of the inequality constraints; the dual function is a differentiable concave function with Lipschitz gradient. This regularity has a huge impact in practice: it opens the way for using standard algorithms for nonlinear optimization. Hence we can use any of the following numerical methods to solve \eqref{eq:dual} (as soon as the software can deal with the constraints $z_i\geq 0$): \begin{enumerate} \item gradient methods: standard methods \cite{bertsekas-1995} or more evolved ones, such as e.g.\;Nesterov's method \cite{nesterov-2004}; \item Newton-like methods: quasi-Newton, limited memory quasi-Newton, inexact Newton, Newton-CG, see textbooks \cite{nocedal-wright-1999} and \cite{bgls-2003} -- with the restriction that $\theta$ is not twice differentiable everywhere, so that we have to use the so-called semismooth Newton methods, see \cite{qi-sun-1993}. \end{enumerate} For example, \cite{malick-2004} uses a quasi-Newton method for solving \eqref{eq:sdls}, and \cite{qi-sun-2006} uses a semismooth inexact Newton method for solving \eqref{eq:corr}. We come back to these two methods in the next section to give more practical details. We also mention here the so-called inexact smoothing method of \cite{gao-sun-2009}, which consists in writing the optimality conditions of the dual problem \eqref{eq:dual} as a nonsmooth fixed-point problem (and solving it by combining smoothing techniques and an inexact Newton method; see e.g.\;\cite{nocedal-wright-1999}). The dual problem \eqref{eq:dual} can thus be attacked with classical tools or more evolved techniques. In practice, the choice of the solving method depends on the structure of the problem and the target level of sophistication. We call dual projection methods any method using an optimization code for functions with Lipschitz gradient to maximize $\theta$ on $\RR^\mE\times\RR_+^\mI$. 
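The simplest member of this family is a projected gradient ascent on \eqref{eq:dual}, where the multipliers of the inequalities are kept nonnegative by clipping; a minimal sketch follows (the constant step-size, the stopping threshold and the interface are illustrative assumptions). \begin{verbatim}
import numpy as np

def dual_projected_gradient(grad_theta, m_eq, m_in,
                            step=1.0, iters=1000, tol=1e-6):
    # grad_theta(y, z) must return the two blocks of -A x(y,z) + b,
    # computed through one conic projection as above.
    y, z = np.zeros(m_eq), np.zeros(m_in)
    for _ in range(iters):
        gy, gz = grad_theta(y, z)
        y = y + step * gy
        z = np.maximum(z + step * gz, 0.0)  # keep z >= 0
        if np.linalg.norm(np.concatenate([gy, gz])) <= tol:
            break   # small gradient means small primal infeasibility
    return y, z
\end{verbatim}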
Specifically, a dual projection method generates a maximizing dual sequence $\{y_k,z_k\}_k$ together with the primal sequence $x_k=x(y_k,z_k)$ such that: \begin{eqnarray} \theta(y_k,z_k) &=& \trans{\bE\!}\!y_k+\trans{\bI\!}\!z_k + \frac{1}{2}(\norm{c}^2-\norm{x_k}^2)\label{eq:sim}\\ \nabla \theta (y_k,z_k) &=& -Ax_k + b.\label{eq:simgrad} \end{eqnarray} We notice that in our numerical experiments with dual methods, we have observed better behaviour and convergence when the (strong) Slater assumption holds (that is, when \eqref{eq:slater} holds and moreover $\AE$ has full rank). \subsubsection{More algorithmic details (for the case without inequalities)} We now detail some further algorithmic issues. To simplify, we focus on the case without inequalities ($\mI=0$, no dual variables $z$). Iterations of most algorithms for maximizing $\theta$ can be written as \begin{equation}\label{eq:iter} y_{k+1} = y_k + \tau_k W_k \nabla \theta (y_k). \end{equation} Note that the usual stopping test of these methods has an intrinsic meaning: a threshold condition on the gradient \begin{equation}\label{eq:stop-proj} \norm{\nabla \theta(y_k)} = \norm{Ax_k-b} \leq \eps \end{equation} controls in fact the primal infeasibility. Among these methods, let us discuss further the three following ones. \paragraph{Gradient descent with constant step-size.} We have a remarkable result: the gradient method in an adapted metric, namely \eqref{eq:iter} with \begin{equation}\label{eq:grad} W_k = [A\trans{A}]^{-1}\qqandqq \tau_k =1, \end{equation} corresponds exactly to the alternating projection method~\eqref{eq:dykstra} (see \cite{malick-2004} for a proof in the special case of correlation matrices, and \cite{henrion-malick-2009} for the proof in general). We thus have a (surprising) dual interpretation of the primal projection method. Using descent schemes more evolved than a simple gradient descent (see below) then leads to (dual) projection methods that can be seen as improvements of the basic alternating projection method. \paragraph{BFGS Quasi-Newton method.} The method is known to be very efficient in general, and has many industrial applications (one of the most striking is in weather forecasting \cite{gilbert-lemarechal-1989}). The method can be readily applied to the dual problem, since it requires no more information than \eqref{eq:sim}: $W_k$ is constructed from successive gradients via the BFGS formula, and $\tau_k$ is chosen by a Wolfe line-search (see e.g.\,\cite{bgls-2003}). The initial paper about dual methods \cite{malick-2004} proposes to use this method in general and reports very good numerical results on the nearest correlation matrix problem \eqref{eq:corr}. Since then, this dual method has been used successfully to solve real-life projection problems in numerical finance (among them: the problem of calibrating covariance matrices in robust portfolio selection \cite[5.4]{malick-2004}). A simple Matlab implementation has been made publicly available together with \cite{henrion-malick-2009} for pedagogical purposes and to ease dissemination. \paragraph{Generalized (or semismooth) Newton.} A pure Newton method would be to use $\tau_k=1$ and $W_k = [H_k]^{-1}$ with the Hessian $H_k$ of $\theta$ at the current iterate $y_k$. In practice, an inexact generalized Newton method is used for the following reasons. As mentioned earlier, $\theta$ is differentiable but not twice differentiable (though its gradient is Lipschitz continuous). 
We can still replace the usual Hessian by a matrix $H_k\in \partial^2_c\theta(y_k)$, the Clarke generalized Hessian of $\theta$ at $y_k$~\cite{clarke-1983}. Computing a matrix in $\partial^2_c\theta(y_k) \subset \sdp$ amounts to computing an element of the Clarke generalized Jacobian of the projection onto the cone $\partial_c\!\pK$ since we have (see \cite{hiriart-strodiot-nguyen-1984}) \[ \partial^2_c\theta(y_k) = A\,\partial_c\!\pK(c+\trans{A}\!y_k)\trans{A}. \] We can often compute an element of $\partial_c\!\pK$. For example, we even have an explicit expression of the whole $\partial_c\psdp$ \cite{malick-sendov-2005}. For overall efficiency of the method, the Newton direction $d_k$ is computed by solving the system $H_kd = \nabla\theta(y_k)$ approximately, usually by conjugate gradient (CG) type methods. More precisely, the idea of so-called Newton-CG (also called inexact Newton, going back to \cite{dembo-eisentat-steinhaug-1982}) is to stop the inner iteration of CG when \begin{equation}\label{eq-inexact} \norm{H_k d + \nabla \theta (y_k)} \leq \eta_k\norm{\nabla\theta(y_k)} \end{equation} with small $\eta_k$ (see e.g.\,\cite{nocedal-wright-1999}). Note that preconditioning the Newton system is then crucial for practical efficiency. The nice feature of this algorithm is that $H_k$ needs to be known only through products $H_kd$, so that large-scale problems can be handled. In our context, the main work on this method is \cite{qi-sun-2006} about the nearest correlation matrix problem; we come back to it in the next~section. We finish here with a couple of words about convergence of this Newton dual method. In general (see \cite{qi-sun-1993}), the two conditions to prove local superlinear convergence are that the minimum is strong (i.e.\;all elements of the generalized Hessian are positive definite), and that the function has some smoothness (namely, the so-called semismoothness). In our situation, the two ingredients implying those conditions are the following ones: \begin{itemize} \item The intersection has some ``nondegeneracy'', in the sense of \cite[4.172]{bonnans-shapiro-2000} and \cite[Def.\,5]{alizadeh-haeberly-overton-1997}. This allows us to prove $\partial^2_c\theta(y_k) \succ 0$ (see e.g.\,\cite{qi-sun-2006} for a special case). \item The convex cone $\Kk$ has some ``regularity''. An example of sufficient regularity is that $\Kk$ is a semialgebraic set (i.e. defined by a finite number of polynomial (in)equalities). Indeed, for semialgebraic convex sets, the projection $\pK$ and then $\theta$ are automatically semismooth \cite{bolte-daniilidis-lewis-2007} (which is the property needed to apply the convergence results of \cite{qi-sun-1993}). This is the case for direct products of the cones $\Ll_n$ and $\sdp$ (for which we even have strong semismoothness \cite{sun-sun-2002}, and so in fact quadratic convergence). \end{itemize} \subsubsection{Illustration on nearest correlation matrix problem} We give a rough idea of the efficiency of the dual approach on the projection problem \eqref{eq:corr}. The first numerical results of \cite[Sec.\,4]{malick-2004} show that the dual approach copes with large-scale problems, by reporting that projection problems of size around one thousand can be solved in a couple of minutes. By using the dual generalized Newton method (instead of quasi-Newton as in \cite{malick-2004}), the algorithm of \cite{qi-sun-2006}, improved later by \cite{borsdorf-higham-2008}, gives very nice results in both practice and theory. 
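For concreteness, note how the dual approach specializes to \eqref{eq:corr}: there $\trans{A}y=\Diag(y)$ and $b$ is the vector of all ones, so the dual problem can be handed to any off-the-shelf quasi-Newton code. The sketch below feeds $-\theta$ and $-\nabla\theta$ to \texttt{scipy}'s L-BFGS-B; it is only a simplified, illustrative stand-in for the actual methods of \cite{malick-2004}, \cite{qi-sun-2006} and \cite{borsdorf-higham-2008}. \begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def proj_psd(C):
    lam, U = np.linalg.eigh(C)
    return (U * np.maximum(lam, 0.0)) @ U.T

def nearest_correlation_dual(C, gtol=1e-7):
    n = C.shape[0]
    b = np.ones(n)
    def neg_theta(y):
        x = proj_psd(C + np.diag(y))       # x(y)
        theta = b @ y + 0.5 * (np.sum(C * C) - np.sum(x * x))
        return -theta, -(b - np.diag(x))   # -theta and -grad theta
    res = minimize(neg_theta, np.zeros(n), jac=True,
                   method='L-BFGS-B', options={'gtol': gtol})
    return proj_psd(C + np.diag(res.x))    # primal solution x(y*)
\end{verbatim}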
Nondegeneracy of the constraints and then of the generalized Hessian is proved in \cite[Prop.\,3.6]{qi-sun-2006}: as recalled above, this technical point leads to quadratic convergence of the method \cite[Prop.\,5.3]{qi-sun-2006}. Today's state of the art is that one can solve nearest correlation matrix problems of large size (say, up to 4000--5000) in a reasonable amount of computing time (say, less than 10 minutes on a standard personal computer). The only limitation seems to be the memory constraint to store and deal with dense large-scale data. To give a more precise idea, let us report a couple of results from \cite{borsdorf-higham-2008}. The implementation of their dual algorithm is in Matlab with some external Fortran subroutines (for eigenvalue decompositions in particular). The stopping criterion is set to \begin{equation}\label{eq:approx} \norm{\nabla \theta(y_k)} \leq 10^{-7}n. \end{equation} We consider the nearest correlation matrix problems for two (non-SDP) matrices with unit diagonal (of size $n_1=1399$ and $n_2= 3120$) provided by a fund management company. The dual method solves them in around 2 and 15 min., respectively, on a very standard machine (see more details in \cite{borsdorf-higham-2008}). We finish with a last remark about accuracy. The approximate correlation matrix $X$ that is computed by such a dual method is often just what is needed in practice. It might happen though that a special application requires a perfect correlation matrix -- that is, with exactly ones on the diagonal, whereas $X$ satisfies only (by \eqref{eq:approx}) \[ \Big(\sum_{i=1}^n(X_{ii}-1)^2\Big)^{1/2}\leq 10^{-7}n. \] A simple post-treatment corrects this. Setting the diagonal elements to ones may destroy positive semidefiniteness, so we apply the usual transformation that computes the associated correlation matrix $\bar X$ from a covariance matrix $X$, namely \[ \bar X = D^{-1/2}XD^{-1/2} \qqandqq D=\diag(X). \] This operation increases the distance from $C$, but the error is still under control (by $\eps/(1-\eps)$; see \cite[Prop.\;3.2]{borsdorf-higham-2008}). \subsection{Discussion: applications, generalizations} \subsubsection{Direct or indirect applications} Conic projection problems with the positive semidefinite cone (like $\Kk=\sdp$, $\Kk=\sdp\times (\RR^+)^p$ or $\Kk=\Ss^+_{n_1}\!\times \cdots \times \Ss^+_{n_p}$) are numerous in engineering. Constructing structured semidefinite matrices, for example, is naturally modeled this way. Such problems naturally appear in finance for constructing structured covariance matrices (as a calibration step before simulations); they also appear in many other fields, such as in control (e.g.\,\cite{ieee-2009}), in numerical algebra (e.g.\,\cite{hankel-2007}), or in optics (e.g.\,\cite{vandenberghe-2008}), to name a few of them. Conic projections also appear as inner subproblems within more involved optimization problems. Solving these inner problems efficiently is often the key to the numerical efficiency of the overall approach. Let us give some examples. \begin{itemize} \item {\em Linear conic programming.} So-called regularization methods for solving~\eqref{eq:lin} use the conic projection problem as an inner subproblem; these methods are studied in \secref{sec:reg}.
\smallskip \item {\em Weighted projections.} For given weights $H_{ij}\geq 0$, consider the semidefinite projection~\eqref{eq:sdls} with a different objective function \[ \accol{\min \quad \frac{1}{2}\sum^n_{i,j=1}H_{ij}(X_{ij}-C_{ij})^2\\ \quad \prods{A_i}{X}= (\textor \leq)\;b_i, \ \ i=1,\ldots,m\\ \quad X\succeq 0.} \] An augmented Lagrangian approach for this problem \cite{qi-sun-2010} produces a projection-like inner problem, which is solved by a semismooth Newton method (recall the discussion of the previous section). \smallskip \item {\em Low-rank projections.} Consider the semidefinite projection problem~\eqref{eq:sdls} with an additional rank constraint \begin{equation}\label{eq:low} \accol{\min \quad \frac{1}{2}\norm{X-C}^2\\ \quad \prods{A_i}{X}= (\textor \leq)\;b_i, \ \ i=1,\ldots,m\\ \quad X\succeq 0, \ \rank X =r.} \end{equation} This difficult non-convex calibration problem has several applications in the finance and insurance industries (e.g.\;pricing interest rate derivatives for some models; see e.g.\,\cite{brigo-mercurio-2006}). Two approaches (by augmented Lagrangian \cite{li-qi-2010} and by penalty techniques \cite{gao-sun-2010}) have recently been proposed to solve these types of problems; both approaches solve a sequence of projection-like subproblems. The numerical engine is a dual semismooth truncated Newton algorithm for computing projections. \end{itemize} For these applications of conic projections, the techniques and the arguments are often the same, but are redeveloped for each particular projection problem encountered. We hope that the unified view of Section 2 brings forth the common ground of these methods and helps to better understand how and why they work well. We finish this section by pointing out an easy geometrical application. \subsubsection{Application for solving conic feasibility problems} The conic feasibility problem consists simply in finding a point $x$ in the intersection $\Kk\cap\Pp$. Many engineering problems can be formulated as semidefinite or conic feasibility problems (for example in robust control \cite{boyd-1994}, where an element in the intersection is a certificate of stability of solutions of differential equations). \secref{sec:SOSfeas} focuses on semidefinite feasibility problems arising when testing positivity of polynomials. We refer to the introduction of \cite{henrion-malick-2009} for more examples and references. A simple and natural technique for solving conic feasibility problems is just to project a (well-chosen) point onto the intersection $\Kk\cap\Pp$ (by dual projection methods, for example). In \cite{henrion-malick-2009}, a comparative study of such a conic projection method with the usual approach using SeDuMi was carried out precisely on polynomial problems. It was shown there that an elementary Matlab implementation can be competitive with a sophisticated primal-dual interior-point implementation. The performance would be even better if an initial heuristic for finding a good point to project were available (the numerical experiments of \cite[Sec.\,6]{henrion-malick-2009} simply use $c=0$). An answer to this latter point is provided by the regularization methods of the next section. \section{Projections in regularization methods}\label{sec:reg} We focus in this section on standard linear conic programming. We show that, following classical convex optimization techniques, conic projections can be used to solve linear conic programming problems.
There exist many numerical methods for solving the linear conic problem \eqref{eq:lin} (see the first handbook \cite{wolkowicz-saigal-vandenberghe-2000}). On the other hand, there also exist large conic problems, especially large SDP problems, that make all the standard methods fail. Relaxations of combinatorial optimization problems and polynomial optimization problems indeed yield challenging problems. This motivates the development of new algorithmic schemes. The strategy that we present in this section exploits the efficiency of projection methods by developing proximal algorithms for linear conic programming. We generalize the developments of \cite{malick-povh-rendl-wiegele-2009}, and give, along the way, references to related works. As for numerical aspects, the target problems are semidefinite programs with a possibly very large number of constraints (more than $100,\!000$). \subsection{Proximal method for linear conic programming}\label{sec:prox} \subsubsection{Apply classical techniques of convex optimization} The proximal algorithm is a classical method of convex optimization and variational analysis: it goes back to the 1970s, with premises in \cite{bellman-kalaba-lockett-1966}, the first work \cite{martinet-1970} and the important reference \cite{rockafellar-1976}. The driving idea of the proximal algorithm is to add quadratic terms to the objective function to ``regularize'' the problem (ensuring existence, uniqueness, and stability of solutions). A (primal) proximal method for the linear conic problem \eqref{eq:lin} goes along the following lines. Consider the problem with respect to $(x,p)$ \[ \accol{\min \quad \trans{c}{x} + \frac{1}{2t}\norm{x-p}^2\\ \quad p\in \RR^n, \ x\in \Pp\cap \Kk.} \] By minimizing first with respect to $p$, we see that this problem is equivalent to the primal linear conic problem \eqref{eq:lin}. We have added to the objective function a quadratic ``regularizing'' term $\norm{x-p}^2$ with the so-called ``prox-parameter''~$t$. The idea now is to solve this problem in two steps: first with respect to $x$, and then with respect to $p$: \begin{equation}\label{eq:proxform} \begin{array}{rl} \accol{ \min\\{\quad p \in \RR^n}\quad} & \!\!\!\!\!\!\!\!\left(\begin{array}{l}\min\quad\trans{c}{x} + \frac{1}{2t} \norm{x-p}^2\\ \quad x\in \Pp\cap\Kk\end{array}\right). \end{array} \end{equation} The outer problem is thus the minimization with respect to $p$ of the function \begin{equation}\label{eq:F} F(p):=\accol{\min \quad \trans{c}{x} + \frac{1}{2t}\norm{x-p}^2\\ \quad x\in \Pp\cap \Kk} \end{equation} which is the result of the inner optimization problem parametrized by~$p$. Thus defined, $F$ is the so-called Moreau-Yosida regularization of the function $x\ra \trans{c}x + i_{\Pp\cap\Kk}(x)$, the linear objective function plus the indicator function of the intersection (see e.g.\,\cite[Ch.XV\!.4]{hull-1993}). The connection with the previous developments of this chapter is then obvious: the above inner problem is essentially a projection problem as studied in the previous section (see \eqref{eq:gene}). The solution of the inner problem (the ``projection'') is called the proximal point and denoted \[ \prox(p):=\left\{\begin{array}{l}\argmin\quad\trans{c}{x} + \frac{1}{2t} \norm{x-p}^2\\ \quad x\in \Pp\cap\Kk.\end{array}\right. \] Note that, for simplicity, the dependence of $F$ and $\prox$ on $t$ is dropped in the notation.
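To see these definitions at work on an elementary case (our own illustrative example, not taken from the references), take $\Pp=\RR^n$ and $\Kk=(\RR^+)^n$: the inner problem is then separable, and the proximal point has the closed form \[ \prox(p) = \max(p - tc,\, 0) \quad \text{(componentwise)}, \] obtained by projecting the unconstrained minimizer $p-tc$ onto the nonnegative orthant. For $\Kk=\sdp$ and an affine $\Pp$, computing $\prox(p)$ is precisely the conic projection problem studied in the previous section.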
\subsubsection{Primal proximal algorithm for conic programming} Applying basic convex analysis properties, it is easy to prove (see e.g.\,\cite[Ch.XV\!.4]{hull-1993}) that the Moreau-Yosida regularization $F$ is convex and differentiable with gradient $\nabla F(p) = (p-\prox(p))/t$. The optimality condition of the unconstrained minimization of $F$ is then simply \begin{equation}\label{eq:fix} \bar p=\prox(\bar p). \end{equation} Moreover, a fixed point $\bar p$ of the $\prox$ operator is also a solution of the initial linear conic problem~\eqref{eq:lin}: observe indeed that $\bar p$ is feasible and reaches the optimal value, since \begin{equation}\label{eq:sol} \val\eqref{eq:lin} = \min F(p) = F(\bar p) = \trans{c}\bar p. \end{equation} A (primal) proximal algorithm for solving \eqref{eq:lin} then consists of a fixed-point algorithm on \eqref{eq:fix} \begin{equation}\label{eq:proxiter} p_{k+1}=\prox(p_k). \end{equation} Since computing $\prox(p_k)$ corresponds to solving a projection problem, we can use any of the algorithmic schemes described in \secref{sec:algo} to implement~\eqref{eq:proxiter} inside of this proximal algorithm. We call the proximal method the outer algorithm, and the chosen projection algorithm the inner algorithm. We study below the family of proximal algorithms obtained when the dual projection algorithms of \secref{sec:dual} are used as inner algorithms. As we have an iterative optimization algorithm (the inner algorithm) inside of another iterative algorithm (the outer algorithm), the question of the stopping tests of the inner algorithm is obviously crucial. For practical efficiency, the inner stopping test should somehow depend on some outer information; we~come back later in detail to this important point. So in fact the iteration \eqref{eq:proxiter} is not carried out exactly, and is replaced instead by the looser implementable relation \begin{equation}\label{eq:proxiterimpl} \norm{p_{k+1}-\prox(p_k)}\leq \eps_k. \end{equation} Whatever the inner projection algorithm, we have the general global convergence of the method under the assumption that the sequence of errors $\eps_k$ goes rapidly to zero. \begin{proposition}[Global convergence]\label{prop:convergence} Assume that there exists a solution~to~\eqref{eq:lin}. If $(t_k)_k$ is bounded away from $0$ and if the primal proximal algorithm generates a sequence $(p_k)_k$ such that \begin{equation}\label{eq:sum} \sum_k\eps_k < + \infty \end{equation} then $(p_k)_k$ converges to a solution $\bar p$ of \eqref{eq:lin}. \end{proposition} {\bf Proof:} The result follows directly from the general convergence result of proximal algorithms. As a consequence of \eqref{eq:sum} and the existence of a solution to \eqref{eq:lin}, the sequence $(p_k)_k$ is bounded and we can apply \cite[Th.1]{rockafellar-1976}: $(p_k)_k$ converges to a fixed point of $\prox$, which is a solution of \eqref{eq:lin} by \eqref{eq:sol}. $\Box$ \subsubsection{Dual point of view: augmented Lagrangian} We give here some details about the dual interpretation of the above primal algorithmic approach. It is indeed known that a proximal method for a problem corresponds exactly to an augmented Lagrangian method on its dual; we~detail this for our case. To simplify writing duals, we abandon the general formulation \eqref{eq:lin}, and we suppose that there are no affine inequalities (or that they are incorporated into $\Kk$ with slack variables).
So we work from now on with the standard form of primal and dual linear conic problems \begin{equation}\label{eq:primaldual} \accol{\min \quad \trans{c}x\\ \quad Ax=b \\ \quad x\in \Kk} \qqandqq \accol{\max \quad \trans{b}y\\ \quad \trans{A}y-u-c =0\\ \quad u\in \Kk^o.} \end{equation} Augmented Lagrangian methods are important classical regularization techniques in convex optimization (see \cite{polyak-tret-1972}, \cite{rockafellar-1976b} for important earlier references, and \cite[Chap.XII]{hull-1993} for the connection with usual Lagrangian duality). In our situation, a dual augmented Lagrangian method goes along the following lines. Introduce the augmented Lagrangian function $L$ with parameter $t >0$ for the dual problem \eqref{eq:primaldual}: $$ L(y,u;p) := \trans{b}y - \trans{p}(\trans{A} y - u -c) - \frac{t}{2}\norm{\trans{A} y - u -c}^2. $$ Note that this is just the usual Lagrangian for the problem \begin{equation}\label{eq-dual-stab} \accol{ \max\quad \trans{b}y - \frac{t}{2}\norm{\trans{A} y - u -c}^2\\ \quad \trans{A}y - u -c = 0, ~ u \in \Kk^o, } \end{equation} that is, the dual problem with an additional redundant quadratic term in the objective. The convex (bi)dual function is then defined as \begin{equation}\label{eq-inner-rendl} \Theta(p):= \max_{y\in \RR^m,u\in \Kk^o} L(y,u;p). \end{equation} The bridge between the primal proximal method and the dual augmented Lagrangian method is established in the next proposition, formalizing a well-known result. \begin{proposition} With the notation above, we have $\Theta(p)=F(p)$ for $p\in \RR^n$. \end{proposition} {\bf Proof:} Just apply \cite[XII.5.2.3]{hull-1993}: the augmented Lagrangian function $\Theta(p)$ is the Moreau-Yosida regularization of the usual dual function, which is here \[ \trans{c}p + i_{\{Ax=b\}\cap\Kk}(p) = \max_{y,u\in \Kk^o}\trans{b}y - \trans{p}(\trans{A} y - u -c). \] This is exactly $F(p)$ defined by \eqref{eq:F} (in the case when $\Pp$ is just the affine subspace of equation $Ax=b$).$\Box$ The primal regularization by the proximal approach and the dual augmented Lagrangian regularization thus correspond exactly to the same quadratic regularization process, viewed either on the primal problem or on the dual~\eqref{eq:primaldual}. The developments of this section share similar properties with other augmented Lagrangian-type approaches for conic programming, among them: a primal augmented Lagrangian in \cite{burer-vandenbusshe-2006}, a primal-dual augmented Lagrangian in \cite{jarre-rendl-2007} and a penalized augmented Lagrangian in \cite{kocvara-stingl-2007}. \subsection{Regularization methods for linear conic programming} In this section we give more details on primal proximal algorithms (or dual augmented Lagrangian algorithms) that use dual projection methods as inner algorithms to carry out the proximal iteration \eqref{eq:proxiterimpl}. This family of algorithms was introduced for the case $\Kk=\sdp$ in \cite{malick-povh-rendl-wiegele-2009}. They are called regularization algorithms (rather than proximal algorithms, which would focus on the primal point of view only); we keep this terminology here. This section is more technical and could be skipped at a first reading. Regularization algorithms for conic programming are specified by three choices: \begin{enumerate} \item the dual projection algorithm to compute $\prox(p_k)$, \item the rule to stop this inner algorithm, \item the rule to update the prox-parameter $t_k$.
\end{enumerate} The third point is an inherent difficulty of any practical implementation of proximal methods (e.g.\;bundle methods, see \cite{correa-lemarechal-1993}). We are not aware of general techniques to tackle it. So we focus here on the first two points. \subsubsection{Dual projection methods as inner algorithms} We could use any dual projection algorithm of \secref{sec:dual} to solve \begin{equation}\label{eq:inner} \accol{\min \quad \trans{c}{x} + \frac{1}{2t}\norm{x-p}^2\\ \quad Ax= b, \ x\in \Kk.} \end{equation} Embedded in a proximal scheme, a dual projection algorithm leads to the following overall algorithm for solving the linear conic problems \eqref{eq:primaldual}. Note first that equations \eqref{eq-xy} and \eqref{eq-theta} for the projection-like problem \eqref{eq:inner} become respectively \begin{eqnarray}\label{eq:xy2} x(y) &=& \pK\big(p+t(\trans{A}y-c)\big)\\ \theta(y) &=& \trans{b}\!y+\frac{1}{2t}(\norm{p}^2-\norm{x(y)}^2). \end{eqnarray} We use the (slightly loose) formulation \eqref{eq:iter} of the iteration of dual projection methods to write a general regularization algorithm. We index the outer iterations by $k$ and the inner ones by $\ell$. \begin{algorithm}[Regularization methods]\hfill
Outer loop on $k$ stopped when $\norm{p_{k+1}- p_k}$ small:
\quad Inner loop on $\ell$ stopped when $\norm{Ax_\ell-b}$ small enough:
\quad \quad Compute $x_\ell = \pK(p_k + t_k(\trans{A} y_\ell-c))$ and $g_\ell = b - Ax_\ell$
\quad \quad Update $y_{\ell+1} = y_\ell + \tau_\ell\, W_\ell\, g_\ell$ with appropriate $\tau_\ell$ and $W_\ell$
\quad end (inner loop)
\quad Update $p_{k+1} = x_\ell$ (and $t_k$)
end (outer loop)
\end{algorithm} \noindent We discuss several points about the above conceptual algorithm. \begin{itemize} \item \emph{Memory.} An important feature of regularization methods is their rather low memory requirement. The intrinsic operations of the algorithm are basically the projection onto the cone and the multiplications by $A$ and $\trans{A}$. If the data has some structure, those multiplications can be performed efficiently (without constructing matrices). Moreover, for maximizing $\theta$ (that is, essentially, implementing $y_{\ell+1} = y_\ell + \tau_\ell\, W_\ell\, g_\ell$), we could use algorithms of smooth unconstrained optimization adapted to large-scale problems and thus requiring low memory (such as limited-memory BFGS or Newton-CG, see e.g.\;\cite{nocedal-wright-1999} and \cite{bgls-2003}). We come back to this point later when discussing numerical issues. Roughly speaking, the point is the following: the computer memory can be used for storing the problem data, and the computation does not require much extra memory. \smallskip \item \emph{Inner restarting.} At the outer iteration $k$, the inner projection algorithm can be initialized with the best $y_\ell$ of the previous iteration $k-1$. This warm start has an intuitive appeal; in practice, $\ell$ then keeps increasing over the outer iterations. (Note also that the historical information on gradients may be carried over from iteration $k-1$ to $k$ as well.) \smallskip \item \emph{Dual variable $u$.} It is known that for any $x\in \RR^n$, the projection onto the polar cone $\proj_{\Kk^o}(x)$ is given by $\pK(x)+\proj_{\Kk^o}(x)=x$ (together with $\trans{\pK(x)}\proj_{\Kk^o}(x)=0$, see \cite[III.3.2.5]{hull-1993}).
When computing $x_\ell$, we thus get automatically \[ u_{\ell} = \proj_{\Kk^o}(p_k + t_k(\trans{A} y_\ell-c))/t_k \] and it holds \begin{equation}\label{eq:ul} p_k + t_k(\trans{A} y_\ell-c) = t_ku_\ell + x_\ell. \end{equation} \smallskip \item \emph{Dual outer iterates.} At the end of outer iteration $k$, we set (with a slight abuse of notation) $y_{k+1}=y_\ell$ and $u_{k+1}=u_\ell$ for $\ell$ the final iteration of the inner algorithm. Thus we have a sequence of primal-dual outer iterates $(p_k,y_k,u_k)\in \Kk\times\RR^m\times\Kk^o$. Under some technical assumptions, we can prove a convergence result in the same vein as \propref{prop:convergence}: any accumulation point of the sequence $(p_k,y_k,u_k)$ is a primal-dual solution of \eqref{eq:primaldual} (see e.g.\;Theorem\,4.5 of \cite{malick-povh-rendl-wiegele-2009} for a proof when $\Kk=\sdp$). \smallskip \item \emph{Outer stopping test.} We have already noticed in \eqref{eq:stop-proj} that the natural stopping test of dual projection algorithms controls the primal infeasibility $\norm{Ax_\ell-b}$. Interpreted as a fixed-point iteration \eqref{eq:proxiter}, the natural stopping test of the proximal algorithm is on $\norm{p_{k+1}-p_k}$; it turns out that this can be interpreted as controlling dual infeasibility. Note indeed that \eqref{eq:ul} yields \[ p_k + t_k(\trans{A} y_{k+1}-c) = t_ku_{k+1} + p_{k+1} \] and then we have \[ \norm{p_{k+1}-p_k} = t_k\norm{\trans{A} y_{k+1}-u_{k+1}-c}. \] \smallskip \item \emph{Normal to interior-point methods.} By construction, conic feasibility $p\in \Kk$ (and $u\in \Kk^o$) and complementarity $\trans{x}u=0$ are ensured throughout the algorithm, while primal-dual feasibilities are obtained asymptotically. In~contrast, recall that basic interior-point methods maintain primal and dual feasibility as well as conic feasibility, and work to reach complementarity. Note also that regularization algorithms give solutions that are usually on the boundary of the cone $\Kk$, since the primal iterates are constructed by projections onto $\Kk$. In contrast again, basic interior-point methods give solutions as far inside the cone as possible. In a sense, regularization methods are then ``normal'' to interior-point methods. \smallskip \item \emph{Overall stopping test.} We have seen above that the natural outer and inner stopping rules of the regularization algorithm have a practical interpretation as dual and primal infeasibilities. Since complementarity and conic feasibility are ensured by construction, the natural stopping test of the overall algorithm is \begin{equation}\label{eq:residual} \max\left\{\norm{Ap_k-b}, \norm{\trans{A} y_k-u_k-c}\right\}. \end{equation} In practice, one should moreover divide the two infeasibilities by some constant quantities to get homogeneous ratios. \end{itemize} \subsubsection{Stopping inner iterations} For which inner iteration $\ell$ can we set $p_{k+1}=x_\ell$ to proceed with the outer iteration? Since we have a loop inside of another, the rule to terminate the inner projection algorithm is indeed an important technical point for regularization algorithms. We discuss three strategies to set up inner stopping~rules. \paragraph{Solving approximately the inner problem.} The usual inner stopping rule in proximal methods is to stop the inner iterations when the current inner iterate $x_\ell$ is close to the proximal point $\prox(p_k)$.
Doing this, the regularization algorithm approximates at best the conceptual proximal algorithm (which requires solving the inner problem exactly), so that we keep its convergence properties (as in \propref{prop:convergence} for instance). This is the strategy followed by the regularization method of \cite{zhao-sun-toh-2010} for $\Kk=\sdp$. This paper adopts the dual point of view (augmented Lagrangian) and uses a semismooth Newton method as the dual projection algorithm for the inner iterations (remember \secref{sec:dual}). The regularization method thus combines the usual stopping strategy and an efficient inner algorithm (supporting large-scale problems); it gives excellent numerical results on various semidefinite programming test-problems (see the last section of \cite{zhao-sun-toh-2010}). Under nondegeneracy assumptions, we moreover have proofs of global and local convergence of the inner algorithm as well as of the overall regularization method. \paragraph{Only one inner iteration; interpretation as saddle-point.} An opposite strategy is to do only one inner iteration per outer iteration. This cautious strategy is motivated by the following remark. Let us come back to the proximal formulation \eqref{eq:proxform} of the linear conic problem. Under the Slater assumption \eqref{eq:slater}, the inner projection problem can be replaced by its dual (recall \eqref{eq-theta}, \eqref{eq:dual} and \eqref{eq:xy2}), so that the primal and dual conic problems \eqref{eq:primaldual} have the saddle-point formulation \[ \begin{array}{rl} \accol{ \min\\{\quad p \in \RR^n}\quad} & \!\!\!\!\!\!\!\!\left(\begin{array}{l}\max \quad \trans{b}{y} - \frac{1}{2t}(\norm{p}^2 - \norm{\pK(p + t(\trans{A} y -c))}^2) \\ \quad y\in \RR^m \end{array}\right). \end{array} \] With this point of view on the process, the choice of inner stopping conditions appears indeed to be crucial, because the inner and outer loops are antagonistic: the outer loop minimizes while the inner one maximizes. The idea of the ``Arrow--Hurwicz'' approach (see, for instance, \cite{arrow-hurwicz-uzawa-1959}) is essentially to proceed with gradient-like iterations with respect to each variable successively. This is the strategy of the simple regularization method presented in \cite[Sec.\,5.1]{malick-povh-rendl-wiegele-2009}. Doing one inner iteration of \eqref{eq:iter} with $W_k=[A\trans{A}]^{-1}$ and $\tau_k=1/t_k$ allows us to simplify the regularization algorithm to just one~loop~with \begin{eqnarray} p_{k+1} &=& \pK(p_k + t_k(\trans{A} y_k-c)) \quad (\text{with $u_{k+1}$ as a by-product})\label{eq:pk}\\ y_{k+1} &=& y_k + [A\trans{A}]^{-1}(b - Ap_k)/t_k.\label{eq:yk} \end{eqnarray} We retrieve Algorithm 5.1 of \cite{malick-povh-rendl-wiegele-2009} by using the proof of Proposition 3.4 therein. This simple regularization algorithm also has an interpretation as an alternating direction method; see the chapter of this book devoted to these methods. In practice, it is important to note that $A\trans{A}$ and its Cholesky factorization can be computed only once at the beginning of the algorithm. Even if this is an expensive task in general for problems that have many unstructured constraints (so that $A\trans{A}$ is big and unstructured), there exist cases where $A\trans{A}$ is very simple, so that solving the system in \eqref{eq:yk} is cheap.
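To fix ideas, here is a minimal Matlab sketch of the one-loop scheme \eqref{eq:pk}--\eqref{eq:yk} for $\Kk=\sdp$ (a sketch only, under simplifying assumptions: the prox-parameter $t$ is kept fixed, the tolerance and iteration limit are placeholders, and $A$ is assumed to have full row rank so that $A\trans{A}$ can be factorized):
\begin{verbatim}
% Minimal sketch of the simple regularization scheme (eq:pk)-(eq:yk)
% for K = S^n_+. Data: A (m-by-n^2), b (m-by-1), C (n-by-n symmetric).
function [P, y] = simple_regularization_sketch(A, b, C, t, tol, maxit)
n = size(C, 1); m = length(b);
y = zeros(m, 1); P = zeros(n);     % dual and primal iterates
R = chol(A*A');                    % factorization computed only once
for k = 1:maxit
  M = P + t*(reshape(A'*y, n, n) - C);
  M = (M + M')/2;                  % symmetrize against rounding errors
  [V, D] = eig(M);
  P = V*max(D, 0)*V';              % projection step (eq:pk)
  r = b - A*P(:);                  % primal residual
  if norm(r) <= tol, break; end
  y = y + (R \ (R' \ r))/t;        % dual step (eq:yk) via Cholesky factor
end
end
\end{verbatim}
The only potentially expensive linear algebra here is the eigenvalue decomposition and the factorization of $A\trans{A}$; the latter is computed once, and solving the system in \eqref{eq:yk} is cheapest when $A\trans{A}$ is simple.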
This is the case in particular when $A\trans{A}$ is diagonal, as for SDP relaxations of the max-cut and $k$-max-cut problems \cite{goemans-williamson-1995}, frequency assignment problems (see \cite[(5)]{burer-monteiro-zhang-2003}), the max-stable problem (see more below), and polynomial minimization problems (see the forthcoming \secref{sec:poly}). \paragraph{Something in-between.} An attractive option is to find something in-between the previous two extreme strategies. Explicit rules for the management of $\eps_k$ in \eqref{eq:proxiterimpl} should be given, and for numerical efficiency they should be set online. Using off-line rules independent of the function is interesting in theory since it allows one to obtain proofs of linear convergence. Following the usual paradigms of numerical optimization, it should be possible to do better as soon as practical efficiency is concerned. An appropriate stopping rule still has to be found and studied; this is actually a general question and we are not aware of efficient techniques. Note finally that the practical implementation of the regularization method of \cite{zhao-sun-toh-2010} indeed does something in-between: the simple scheme with one inner gradient iteration (second strategy) is used as a preprocessing phase before switching to inner Newton iterations (first strategy). See more about this in the numerical illustrations of the method in \secref{sec:poly}. A stopping test along the above lines would further enhance the numerical performance of the implementation. \subsubsection{Illustration: Computing Lov\'asz theta number} We finish this section with a numerical illustration (borrowed from \cite{malick-povh-rendl-wiegele-2009}) of the performance of regularization methods on a classical combinatorial optimization problem. Lov\'asz \cite{lovasz-1979} proved the celebrated ``sandwich'' theorem in graph theory: the stable number $\alpha(G)$ of a graph $G$ and the chromatic number $\chi(\bar G)$ of its complementary graph $\bar G$ are separated \[ \al(G) \leq \vartheta(G) \leq \chi(\bar G) \] by the optimal value of an SDP problem \begin{equation}\label{eq:lovasz} \vartheta(G)= \accol{\max\quad \langle \mathbf{1}_{n\times n}, X \rangle \\ \quad X_{ij}=0,\quad \text{when $(i,j)$ is an edge of $G$}\\ \quad \trace\,X=1, \ \ X \succeq 0.} \end{equation} As expected, it can be shown that this SDP problem is a formulation of the SDP relaxation of the max-stable problem. The stable number $\alpha(G)$ and the chromatic number $\chi(\bar G)$ are both NP-hard to compute and even hard to approximate, so the tractable $\vartheta(G)$ gives interesting information about~$G$. Some graphs from the DIMACS collection~\cite{johnson-trick-1996} are very challenging instances for computing $\vartheta(G)$. Those graphs have many edges and also many edges on the complementary graph, which makes them the most difficult instances for standard methods (as noticed in \cite{dr:07}). On the other hand, the structure of problem \eqref{eq:lovasz} is very favorable for regularization methods, and in particular for the simple one of \cite[Sec.\,5.1]{malick-povh-rendl-wiegele-2009}, which is essentially \eqref{eq:pk}--\eqref{eq:yk}. Observe indeed that the affine constraints of \eqref{eq:lovasz} are given by ``orthogonal'' matrices (i.e.\;$\prods{A_j}{A_i}=0$ for $i\neq j$), so that the matrix $A\trans{A}$ is diagonal. For illustration, the next table reports some of the bounds that were computed for the first time in \cite{malick-povh-rendl-wiegele-2009}.
\begin{center} \begin{tabular}{l|c|c|c}
graph name & $n$ & $m$ & $\vartheta(G)$ \\ \hline
brock400-1 & 400 & 59723 & 10.388 \\
keller5 & 776 & 225990 & 31.000 \\
brock800-1 & 800 & 207505 & 19.233 \\
p-hat500-1 & 500 & 31569 & 58.036 \\
p-hat1000-3 &1000 & 371746 & 18.23 \\
\end{tabular} \end{center} For more examples, see \cite[Sec.\;5.4]{malick-povh-rendl-wiegele-2009} and \cite[Sec.\;6.3]{zhao-sun-toh-2010}. \section{Applications to polynomial optimization}\label{sec:poly} In this section, we illustrate the regularization methods for solving linear semidefinite optimization problems in the context of polynomial optimization. We collect numerical experiments showing that regularization algorithms can be considered as an alternative to standard methods for deciding whether a polynomial is non-negative (Section \ref{sec:SOSfeas}) and for globally minimizing a polynomial (Section \ref{sec:SOSmin}). The point is thus the same as in \cite{nie-2009}, which reports extensive numerical results using regularization methods for solving various large-scale polynomial optimization problems. We aim at giving here a methodology: our main focus is the generality of the approach and the reproducibility of experiments and results. We explain how to generate the test problems, and we use public-domain implementations of the algorithms with default parameter tunings and with no attempt to adapt them to each particular problem instance (contrary to \cite{nie-2009}). We do not carry out a comprehensive benchmarking with all methods and solvers; we just compare a widely used implementation of an interior-point algorithm \cite{sturm-1999} with two recent implementations of regularization methods (the basic one of \cite[Sec.\,5.1]{malick-povh-rendl-wiegele-2009}, and the more sophisticated one of \cite{zhao-sun-toh-2010}). \subsection{Sum-of-squares, SDP and software}\label{sec:SOSintro} We briefly introduce in this section the notions and notation of polynomial optimization that we need. We refer to the recent surveys \cite{laurent-2009} and \cite{lasserre-2009} and to the other chapters of this book for more on this topic. Consider a multivariate polynomial of total degree $2d$ \begin{equation}\label{eq:poly} v \in {\mathbb R}^N \longmapsto p(v) = \sum_{|\alpha|\leq 2d} p_{\alpha} v^{\alpha}. \end{equation} We use here the multi-index notation $v^{\alpha} = v^{\alpha_1}_1 \cdots v^{\alpha_N}_N$ where $\alpha \in {\mathbb N}^N$ runs over all nonnegative integer vectors of sum $|\alpha| = \alpha_1 + \cdots + \alpha_N \leq 2d$. We say that $p(v)$ is a sum-of-squares (SOS) of polynomials if one can find polynomials $q_k(v)$~such~that \begin{equation}\label{eq:SOSpoly} p(v) = \sum_k {q_k}^2(v). \end{equation} It can be shown that finding such polynomials $q_k(v)$ amounts to a semidefinite feasibility problem. More specifically, if $\pi(v)$ denotes a vector of basis polynomials of total degree less than or equal to $d$, finding an SOS decomposition (\ref{eq:SOSpoly}) amounts to finding a so-called Gram matrix $X \in {\mathbb R}^{n\times n}$ such that \begin{equation}\label{eq:SOSGram} p(v)=\trans{\pi(v)} X \pi(v) \qqandqq X\in \sdp. \end{equation} The set of SOS polynomials thus has an SDP representation of the form \begin{equation}\label{eq:Gramvector} Ax=b, \quad x \in {\mathcal K} \end{equation} where $\mathcal K =\sdp$, $A \in {\mathbb R}^{m\times n^2}$ is a linear operator depending only on the choice of basis $\pi(v)$, and $b \in {\mathbb R}^m$ is a vector depending on $p(v)$.
For example, if the vector of basis polynomials $\pi(v)=[v^{\alpha}]_{|\alpha|\leq d}$ contains the monomials $v^{\alpha}$ indexed by $\alpha \in {\mathbb N}^N$, then identifying powers of $v$ in relation (\ref{eq:SOSGram}) yields \[ \prods{A_{\alpha}}{X} = p_{\alpha}, \quad \text{ for all } \alpha, \] where matrix $A_{\alpha}$ selects the monomial $v^{\alpha}$ in the rank-one matrix $\pi(v)\trans{\pi(v)}$. More specifically, the entry in $A_{\alpha}$ with row index $\beta$ and column index $\gamma$ is equal to one if $\beta+\gamma=\alpha$ and zero otherwise. In problem (\ref{eq:Gramvector}), the row indexed by $\alpha$ in matrix $A$ collects the entries of matrix $A_{\alpha}$, and the row indexed by $\alpha$ in vector $b$ is equal to $p_{\alpha}$. Note that \[ n = \left(\begin{array}{c}N+d\\N\end{array}\right) \qqandqq m = \left(\begin{array}{c}N+2d\\N\end{array}\right), \] so that the sizes of the SDP problems grow quickly with the degree and the number of variables. The important remark is that this type of constraint is favorable to regularization methods: $A\trans{A}$ is indeed always diagonal. To see this, let $\alpha$, $\beta$ denote the row and column indices in matrix $A\trans{A}$. By construction, the entry $(\alpha,\beta)$ in $A\trans{A}$ is equal to $\prods{A_{\alpha}}{A_{\beta}}$: if $\alpha=\beta$, this is equal to the number of non-zero entries in matrix $A_{\alpha}$; otherwise, this is zero. Since it is important for numerical efficiency, we formalize the previous remark in a proposition. \begin{proposition}[Orthogonality of constraints] Let $A$ be the matrix in the SOS semidefinite problem (\ref{eq:Gramvector}). Then $A\trans{A}$ is diagonal with integer entries. \end{proposition} The polynomial optimization problems that we consider in the next two sections are difficult to tackle directly but admit standard SOS relaxations involving constraint sets of the form \eqref{eq:Gramvector}. In practice, an SOS relaxation approach boils down to solving linear semidefinite problems of the form \begin{equation}\label{eq:pbsdp} \accol{\min \quad \trans{c}{x}\\ \quad Ax=b, \ x\in \sdp} \end{equation} where $c\in \RR^{n^2}$, $b\in \RR^m$, $A\in\RR^{m \times n^2}$, and vector $x$ collects the entries of matrix $X \in {\mathbb R}^{n\times n}$. For solving problem \eqref{eq:pbsdp}, we use the following three public-domain Matlab implementations: \begin{enumerate} \item SeDuMi 1.3, implementing the primal-dual interior-point algorithm of \cite{sturm-1999}\\ (available on {\tt sedumi.ie.lehigh.edu}) \item MPRW, a version of the basic regularization method of \cite[Sec.\,5.1]{malick-povh-rendl-wiegele-2009}\\ (available on {\tt www.math.uni-klu.ac.at/or/Software/mprw2.m}) \item SDPNAL 0.1, the regularization method of \cite{zhao-sun-toh-2010}\\ (available on {\tt www.math.nus.edu.sg/$\sim$mattohkc/\sdpnal.html}) \end{enumerate} \noindent Our goal here is just to show that the regularization methods are interesting in this context. We simply use the default parameter tunings of the algorithms, with no attempt to adapt them to each particular problem instance (contrary to \cite{nie-2009}).
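As a small sanity check of the orthogonality proposition above, the following self-contained script (our own toy construction for $N=1$ and $d=2$, with basis $\pi(v)=(1,v,v^2)$) builds the matrix $A$ entry by entry from its definition and displays $A\trans{A}$:
\begin{verbatim}
% Toy check (N = 1, d = 2): the entry of A_alpha with indices
% (beta,gamma) is 1 iff beta + gamma = alpha; the rows of A collect
% the entries of the A_alpha, so A*A' must come out diagonal.
d = 2; n = d + 1; m = 2*d + 1;   % basis 1, v, v^2; powers v^0..v^4
A = zeros(m, n^2);
for beta = 0:d
  for gamma = 0:d
    alpha = beta + gamma;        % power of v selected by (beta,gamma)
    col = gamma*n + beta + 1;    % column-major index of (beta+1,gamma+1)
    A(alpha + 1, col) = 1;
  end
end
disp(A*A')                       % diag(1,2,3,2,1)
\end{verbatim}
The rows of $A$ have disjoint supports (each pair $(\beta,\gamma)$ contributes to exactly one $\alpha$), so $A\trans{A}$ is diagonal with integer entries, as stated.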
With {\tt K.s=n}, the calling sequences of the three Matlab functions {\tt \sedumi}, {\tt \mprw} and {\tt \sdpnal} for solving \eqref{eq:pbsdp} are thus as follows: \begin{verbatim}
pars = []; pars.tol = 1e-9;
[x,y] = sedumi(A,b,c,K,pars);
X = reshape(x,K.s,K.s);

tol = 1e-9; C = reshape(c,K.s,K.s);
[X,y] = mprw(A,b,C,1e6,1,tol);

opts = []; opts.tol = 1e-9;
[blk,At,C,B] = read_sedumi(A,b,c,K);
[obj,X,y] = sdpnal(blk,At,C,B,opts);
X = X{1};
\end{verbatim} Experiments are carried out with Matlab 7.7 running on a Linux PC with Intel Xeon CPU W3520 2.67Ghz using 64 bit arithmetic and 8GB RAM. Computation times are given in seconds, with two significant digits only (since our objective is not a comprehensive accurate benchmarking of codes). Similarly to \cite{nie-2009}, we will see that, due to lower memory requirements, regularization methods can solve larger polynomial optimization problems than classical interior-point methods in the above setting. A last note about tolerance. The tolerance parameters {\tt tol} for the three solvers are set to $10^{-9}$ for all the numerical experiments (except otherwise stated). Notice though that the meaning of the tolerance parameter is not the same for the two types of algorithms. With regularization methods, the relative accuracy measured in terms of primal and dual residuals (remember \eqref{eq:residual}) is easily controlled. We stress that lower requirements on the relative accuracy could result in a significant saving of computational time, and this could be useful when solving large-scale problems approximately with \mprw and \sdpnal (see some examples in \cite{nie-2009}). In contrast, we observe (as expected) that the iteration count of \sedumi does not depend significantly on the expected accuracy, measured in terms of duality gap. Most of the computational time is spent to generate an approximately feasible primal-dual pair with relatively small duality gap, and only a few more iterations are required to refine the accuracy below $10^{-9}$. \subsection{Testing positivity of polynomials}\label{sec:SOSfeas} We focus in this section on the very first problem of polynomial optimization: we would like to know whether a polynomial \eqref{eq:poly} is non-negative, \begin{equation}\label{eq:pospoly} p(v) \geq 0, \quad \text{for all } v \in {\mathbb R}^N. \end{equation} In general this is a difficult problem for which no polynomial-time algorithm is known. It can be relaxed to the easier problem of testing whether $p$ can be expressed as an SOS \eqref{eq:SOSpoly}. Whenever this holds, condition (\ref{eq:pospoly}) is obviously satisfied. The converse is not true in general if $N\geq 2$ and $d\geq3$, and there are explicit counter-examples; the simplest of them (the Motzkin polynomial) is studied below. \subsubsection{Random full-rank polynomial SOS problems}\label{sec:sos} We consider random polynomial SOS problems which are constructed so that there is a full-rank orthogonal Gram matrix $X$ (an interior point) solving problem (\ref{eq:SOSGram}). We use GloptiPoly 3 (see \cite{lasserre-2009}) to generate matrix $A$ and vector $b$ as follows: \begin{verbatim}
N = 5; d = 3;
mpol('v',N,1);
P = msdp(min((v'*v)^d));
[A,b,c,K] = msedumi(P);
A = [c';-A]; c = zeros(size(A,2),1);
X = orth(randn(K.s)); b = A*X(:);
\end{verbatim} In Table \ref{tab:sos} we report the execution times (in seconds) required by \sedumi, \mprw and \sdpnal to solve problem (\ref{eq:Gramvector}) for $d=3$ (degree six polynomials) and $N=5,\ldots,12$.
We also indicate the size $n$ of matrix $X$ and the number $m$ of constraints (the row dimension of matrix $A$). We observe that \sedumi is largely outperformed by \mprw and \sdpnal. We also observe that \mprw is about 4 times slower than \sdpnal, but this is not surprising as \mprw is a simple prototype (without comments and printing instructions it is about 50 lines of interpreted Matlab), whereas \sdpnal is a much more sophisticated package heavily relying on the efficient data handling and numerical linear algebra routines of the SDPT3 package. We also recall that \sdpnal makes several iterations of \mprw as preprocessing. \begin{table}[h] \centering \begin{tabular}{ccc|ccc}
$N$ & $n$ & $m$ & SeDuMi & \mprw & SDPNAL \\ \hline
5 & 56 & 462 & 0.29 & 0.03 & 0.05 \\
6 & 84 & 924 & 0.92 & 0.05 & 0.07 \\
7 & 120 & 1716 & 4.8 & 0.13 & 0.10 \\
8 & 165 & 3003 & 25 & 0.35 & 0.16 \\
9 & 220 & 5005 & 110 & 0.66 & 0.25 \\
10 & 286 & 8008 & 410 & 1.3 & 0.43 \\
11 & 364 & 12376 & 1500 & 3.0 & 0.73 \\
12 & 455 & 18564 & $>3600$ & 5.0 & 1.3
\end{tabular} \caption{Comparative execution times for SOS problems.\label{tab:sos}} \end{table} \subsubsection{Random low-rank polynomial SOS problems}\label{sec:sosrankone} We consider random polynomial SOS problems which are constructed so that there is a rank-one Gram matrix $X$ solving problem (\ref{eq:Gramvector}). For such problems, it is unlikely that there is an interior point $x$ solving problem (\ref{eq:Gramvector}), and indeed \sedumi does not find a full-rank solution. We use the same code as above, replacing the instruction {\tt X = orth(randn(K.s));} with the instructions \begin{verbatim}
X = orth(randn(K.s,1)); X = X*X';
\end{verbatim} We report the execution times (in seconds) in Table \ref{tab:sosrankone}. In comparison with the problems with interior points of Table\;\ref{tab:sos}, we observe that all the solvers experience convergence issues. However, there is still a considerable improvement in terms of efficiency brought by the regularization methods, even though Slater's constraint qualification cannot be invoked to guarantee convergence. \begin{table}[h] \centering \begin{tabular}{ccc|ccc}
$N$ & $n$ & $m$ & SeDuMi & MPRW & SDPNAL \\ \hline
5 & 56 & 462 & 0.83 & 0.20 & 0.21 \\
6 & 84 & 924 & 0.85 & 0.32 & 0.28 \\
7 & 120 & 1716 & 16 & 2.1 & 0.51 \\
8 & 165 & 3003 & 61 & 4.8 & 0.98 \\
9 & 220 & 5005 & 330 & 12 & 1.2 \\
10 & 286 & 8008 & 1300 & 24 & 2.5 \\
11 & 364 & 12376 & $>3600$ & 50 & 3.5 \\
12 & 455 & 18564 & $>3600$ & 110 & 6.6
\end{tabular} \caption{Comparative execution times for low-rank SOS problems.\label{tab:sosrankone}} \end{table} \subsubsection{Motzkin's polynomial}\label{sec:motzkin} We study a well-known bivariate ($N=2$) polynomial of sixth degree ($d=3$) which is non-negative but cannot be written as a polynomial SOS, namely Motzkin's polynomial \[ p_0(v) = 1+v^2_1v^2_2(v^2_1+v^2_2-3) \] see \cite{laurent-2009} or \cite{lasserre-2009}. This polynomial achieves its minimum zero at the four points $v_1=\pm 1$, $v_2=\pm 1$. In a basis of monomials of degree up to $3$ there is no Gram matrix $X$ solving problem (\ref{eq:Gramvector}). However, it was observed in \cite{henrion-lasserre-2005} and later shown theoretically in \cite{lasserre-2006} that the perturbed polynomial \[ p_0(v) + \varepsilon p_1(v) \] can be represented as a polynomial SOS (with a full-rank Gram matrix) provided the degree of the perturbation polynomial $p_1(v)$ is high enough, inversely proportional to the scalar $\varepsilon>0$.
In some sense, this can be interpreted as a regularization procedure as in \cite{henrion-malick-2009}. Practically speaking, since semidefinite programming solvers use inexact operations (floating point arithmetic), it is not necessary to perturb the data explicitly. It is enough to choose a basis $\pi(v)$ of sufficiently high degree $d>3$ in relation (\ref{eq:SOSGram}), and higher-order perturbations are automatically introduced by the algorithm. We use the following GloptiPoly 3 instructions to generate data $A$, $b$ for increasing values of $d$: \begin{verbatim}
d = 8;
mpol v1 v2
p = 1+v1^2*v2^2*(v1^2+v2^2-3);
P = msdp(min(p),d);
[A,b,c,K,b0] = msedumi(P);
A = [c';-A]; b = [-b0;-b];
c = zeros(size(A,2),1);
\end{verbatim} For this problem, we set {\tt tol=1e-6} for the three solvers. When $d=3,4,5,6$, \sedumi takes less than 0.1 seconds to detect that problem (\ref{eq:Gramvector}) is infeasible, and it provides a Farkas dual certificate vector $y \in -{\mathcal K}$ such that $\trans{b} y = 1$. When $d=7,8,9,10$, \sedumi takes less than 0.5 seconds to return a vector $x$ such that the primal residual $\|Ax-b\|_2/\|b\|_2$ is less than $10^{-9}$ and the dual objective function $\trans{b} y$ is less than $10^{-9}$ in absolute value. The behaviour of \sdpnal and \mprw is more erratic, and convergence issues occur, as shown by the execution times (in seconds) of Table \ref{tab:motzkin}. For $d=3,4,5$, \mprw stops after $10^6$ iterations, as there is no mechanism to detect infeasibility in this prototype software. \begin{table}[h] \centering \begin{tabular}{c|ccc}
$d$ & time & $\|Ax-b\|_2/\|b\|_2$ & $\trans{b}y$ \\ \hline
3 & - & - & - \\
4 & - & - & - \\
5 & - & - & - \\
6 & 15 & $6.20 \cdot 10^{-6}$ & $1.12 \cdot 10^{-6}$ \\
7 & 25 & $6.03 \cdot 10^{-7}$ & $6.81 \cdot 10^{-7}$ \\
8 & 26 & $5.80 \cdot 10^{-6}$ & $-4.08 \cdot 10^{-7}$ \\
9 & 34 & $1.01 \cdot 10^{-6}$ & $-1.45 \cdot 10^{-7}$ \\
10 & 75 & $5.42 \cdot 10^{-7}$ & $-1.58 \cdot 10^{-7}$
\end{tabular} \qquad\quad \begin{tabular}{c|ccc}
$d$ & time & $\|Ax-b\|_2/\|b\|_2$ & $\trans{b}y$ \\ \hline
3 & 5.1 & $4.28 \cdot 10^{-3}$ & 33.4 \\
4 & 9.2 & $1.56 \cdot 10^{-4}$ & 0.832 \\
5 & 3.5 & $4.59 \cdot 10^{-6}$ & $4.37 \cdot 10^{-5}$ \\
6 & 4.6 & $6.33 \cdot 10^{-6}$ & $1.05 \cdot 10^{-6}$ \\
7 & 5.7 & $8.95 \cdot 10^{-6}$ & $3.86 \cdot 10^{-7}$ \\
8 & 5.9 & $2.79 \cdot 10^{-6}$ & $-3.46 \cdot 10^{-7}$ \\
9 & 7.9 & $2.54 \cdot 10^{-6}$ & $-3.25 \cdot 10^{-7}$ \\
10 & 8.8 & $1.88 \cdot 10^{-6}$ & $-1.34 \cdot 10^{-7}$
\end{tabular} \caption{Behaviour of \mprw (left) and \sdpnal (right) for Motzkin's polynomial.\label{tab:motzkin}} \end{table} \subsubsection{Regularization vs projection} Though they solve linear semidefinite problems, regularization techniques somehow generalize and enhance the idea of \cite{henrion-malick-2009} of using projection methods directly for SOS feasibility problems. With that approach, a question is indeed to find a good point to project; systematically taking the zero matrix gives interesting results, but this choice could be greatly improved. Regularization methods provide a numerical solution to this: doing a sequence of truncated projections allows one to keep the projection idea while getting rid of the question of the initial point to project. The behaviour of SDPNAL is interesting in this respect: it first performs a preprocessing phase of several alternating direction iterations to get a meaningful point, and then follows with projection-like iterations. In practice, we usually observe very few such iterations, and often just one.
For example, to decide whether the (admittedly trivial) polynomial $p(v)=\sum_{i=1}^{10}v_i^{10}$ is SOS, the SDP problem dimensions are $n=3003$ and $m=184756$, and after 90 seconds and only one projection-like iteration, \sdpnal provides a vector $x$ satisfying $\|Ax-b\|_2/\|b\|_2 \approx 1.4\cdot 10^{-10}$. \subsection{Unconstrained polynomial minimization}\label{sec:SOSmin} In this section we study global minimization problems \begin{equation}\label{eq:minpoly} p^* = \min_{v \in {\mathbb R}^N} p(v) \end{equation} where $p(v)$ is a given polynomial. For this problem, a semidefinite relaxation readily follows from the observation that \[ \begin{array}{rcll} p^* & = & \max_{\underline{p}} & \underline{p} \\ & & \mathrm{s.t.} & p(v)-\underline{p} \geq 0, \quad \forall v \in {\mathbb R}^N \end{array} \] and by relaxing the above non-negativity constraint into the semidefinite programming constraint that the polynomial $p(v)-\underline{p}$ is SOS, see \cite{laurent-2009} and \cite{lasserre-2009}. \subsubsection{Random polynomial minimization problems} We generate well-behaved instances of unconstrained polynomial minimization problems (\ref{eq:minpoly}) with \[ p(v) = p_0(v) + \sum_{i=1}^N v_i^{2d} \] where $p_0(v)$ is a random polynomial of total degree strictly less than $2d$. The leading term $\sum_{i=1}^N v_i^{2d}$ ensures coercivity of $p(v)$ and hence the existence of a global minimum in (\ref{eq:minpoly}). We use the following GloptiPoly 3 script to generate our examples (here with $d=2$): \begin{verbatim}
N = 10; d = 2;
mpol('v',N,1);
b = mmon(v,0,2*d-1);
p0 = randn(1,length(b)); p0 = p0/norm(p0);
p = p0*b + sum(mmon(v,d).^2);
P = msdp(min(p));
[A,b,c,K] = msedumi(P);
\end{verbatim} In Table \ref{tab:minpol} we report comparative execution times (in seconds) for $d=2$ and various values of $N$, for solving the semidefinite relaxation. It turns out that for these generic problems, global optimality is always certified with a rank-one moment matrix \cite{henrion-lasserre-2005}. Both \mprw and \sdpnal largely outperform \sedumi on these examples. \begin{table}[h] \centering \begin{tabular}{ccc|ccc}
$N$ & $n$ & $m$ & SeDuMi & MPRW & SDPNAL \\ \hline
5 & 21 & 126 & 0.09 & 0.05 & 0.18 \\
6 & 28 & 209 & 0.11 & 0.07 & 0.18 \\
7 & 36 & 329 & 0.24 & 0.12 & 0.20 \\
8 & 45 & 494 & 0.36 & 0.19 & 0.22 \\
9 & 55 & 714 & 0.77 & 0.28 & 0.26 \\
10 & 66 & 1000 & 1.9 & 0.45 & 0.29 \\
11 & 78 & 1364 & 5.0 & 0.78 & 0.36 \\
12 & 91 & 1819 & 11 & 1.1 & 0.41 \\
13 & 105 & 2379 & 20 & 1.6 & 0.47 \\
14 & 120 & 3059 & 42 & 2.3 & 0.65 \\
15 & 136 & 3875 & 74 & 3.0 & 0.68
\end{tabular} \caption{Comparative execution times for semidefinite relaxations of random polynomial minimization problems.\label{tab:minpol}} \end{table} \subsubsection{A structured example}\label{sec:nie35} Consider the problem studied in \cite[Example 3.5]{nie-2009}, that is, (\ref{eq:minpoly}) with \[ p(v) = \sum_{i=1}^N \left(1-\sum_{j=1}^i (v_j+v_j^2)\right)^2+\left(1-\sum_{j=1}^N(v_j+v_j^3)\right)^2. \] We solve the semidefinite relaxation for increasing values of $N$. We collect comparative execution times in Table \ref{tab:nie35} for this example. For example, when $N=10$, \sdpnal (resp.\;\mprw) returns a point $x$ such that $\|Ax-b\|/\|b\|$ is equal to $1.4\cdot 10^{-9}$ (resp.\;$2.6\cdot 10^{-10}$) and the minimum eigenvalue of $X$ is equal to zero to machine precision. We observe again a considerable improvement in terms of performance brought by regularization methods in comparison with a classical interior-point method.
For larger instances, most of the computation time of \sedumi is spent on memory swapping when constructing and handling large matrices. We refer to the recent extensive numerical work of \cite{nie-2009} for various structured problems. \begin{table}[h] \centering \begin{tabular}{ccc|ccc}
$N$ & $n$ & $m$ & SeDuMi & MPRW & SDPNAL \\ \hline
5 & 56 & 461 & 0.50 & 0.54 & 0.60 \\
6 & 84 & 923 & 2.5 & 1.2 & 1.2 \\
7 & 120 & 1715 & 14 & 5.0 & 2.6 \\
8 & 165 & 3002 & 92 & 19 & 6.7 \\
9 & 220 & 5004 & 410 & 65 & 22 \\
10 & 286 & 8007 & 1800 & 200 & 71 \\
11 & 364 & 12375 & 7162 & 490 & 150 \\
12 & 455 & 18563 & $>7200$ & 1500 & 530 \\
13 & 560 & 27131 & $>7200$ & 3500 & 2300 \\
14 & 680 & 38760 & $>7200$ & $>7200$ & 9900 \\
\end{tabular} \caption{Comparative execution times for semidefinite relaxations of a larger polynomial minimization problem.\label{tab:nie35}} \end{table} \section*{Acknowledgements} The work of the first author was partly supported by Research Programme MSM6840770038 of the Czech Ministry of Education and by Project 103/10/0628 of the Grant Agency of the Czech Republic.
\section{Preliminaries}% \label{sec:preliminaries} Let \( X \) be a state space, \( \invstates \subseteq \states \) be the subset of invalid states, and \( \valstates \coloneqq \states \setminus \invstates \) be the subset of valid states. Let \( \path*{} \colon [0, l] \to \valstates \) be a path, i.e., a continuous function parameterized by its arc length, \( \arclength < \infty \), and let \( \paths \) denote the set of all valid paths. Let \( \clearance*{} \colon \valstates \to (0, \infty) \) be the clearance of a valid state and let \( \cost*{} \colon \paths \to [0, \infty) \) be the reciprocal clearance cost of a path, \begin{align*} \clearance{\state} &\coloneqq \min_{\state[][\prime] \in X_{\text{invalid}}} \norm{\state - \state[][\prime]} & \cost{\path*{}} &\coloneqq \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}} \diff{\patharg}.\stepcounter{equation}\tag{\theequation}\label{eq:reciprocal_obstacle_clearance_cost} \end{align*} It is assumed that no state on a path of interest has a clearance of exactly zero. \begin{lemma}[An upper bound on state clearance]\label{thm:state-clearance} Let \( \path*{} \in \paths \) be a path with arc length \( \arclength \). Let \( \path{\patharg[1]} \in \states \) be a state along this path, with \( 0 \leq \patharg[1] \leq \arclength \), and let \( \dist[1] \) be the known clearance of this state, \( \dist[1] \coloneqq \clearance{\path{\patharg[1]}} \). The clearance of any state on the same path, \( \clearance{\path{\patharg}} \), is then upper bounded by \begin{align*} \clearance{\path{\patharg}} \leq \dist[1] + \abs{\patharg[1] - \patharg}.\stepcounter{equation}\tag{\theequation}\label{eq:upper_bound_clearance} \end{align*} \end{lemma} \begin{proof} (Figure~\ref{fig:lemma-1}) Let \( \state[1][\prime] \in X_{\text{invalid}} \) be one of the invalid states closest to the state \( \state[1] \coloneqq \path{\patharg[1]} \), \begin{align*} \state[1][\prime] \coloneqq \mathop{\arg\min}_{\state[][\prime] \in X_{\text{invalid}}}\norm{\state[1] - \state[][\prime]} \,\implies\, \dist[1] = \norm{\state[1] - \state[1][\prime]}. \end{align*} Because \( \path{\patharg} \) is parametrized by arc length, any state on the path is at most \( \abs{\patharg[1] - \patharg} \) away from the state \( \state[1] \) and therefore at most \( \norm{\state[1] - \state[1][\prime]} + \abs{\patharg[1] - \patharg} \) away from the state \( \state[1][\prime] \). Since \( \state[1][\prime] \) is an invalid state, the clearance of any state is at most its distance to \( \state[1][\prime] \), and the distance \( \dist[1] + \abs{\patharg[1] - \patharg} \) therefore provides an upper bound on the true clearance of any state on the path. \end{proof} \input{figures/lemma-1} \section{Solution-cost heuristics}% \label{sec:solution-cost-heuristics} This section presents two lower bounds on the cost of an optimal path between two states. These bounds can be used as admissible cost-to-go heuristics in informed planners, such as \accite{BIT*}{gammell_ijrr2020}, \accite{ABIT*}{strub_icra2020a}, and \accite{AIT*}{strub_icra2020b}, if a lower bound on the arc length of the optimal path between two states, e.g., the Euclidean distance, is known. \begin{lemma}[An admissible solution-cost heuristic when the clearance of one end state is known]\label{thm:end-state} Let \( \path*{} \in \paths \) be a path whose arc length, \( \arclength \), is lower bounded by \( \lbarclength \leq \arclength \). Let the clearance of the start or goal state of this path be known and denoted by \( \dist[1] \coloneqq \clearance{\path{0 \text{ or } \arclength}} \).
The reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) is then lower bounded by \begin{align*} \cost{\path*{}} \geq \ln\left(\frac{\dist[1] + \lbarclength}{\dist[1]} \right). \end{align*} \end{lemma} \begin{proof} The bounds are computed by setting \( \patharg[1] = 0 \) or \( \patharg[1] = \arclength \) in Lemma~\ref{thm:state-clearance} and replacing the clearance function in the integrand of the reciprocal clearance cost~\eqref{eq:reciprocal_obstacle_clearance_cost} with the upper bound on the clearance~\eqref{eq:upper_bound_clearance}. First let \( \patharg[1] = 0 \) (Figure~\ref{fig:start-state}). Then by Lemma~\ref{thm:state-clearance} \begin{align*} \clearance{\path{\patharg}} \leq \dist[1] + \patharg, \end{align*} and the lower bound on the cost is \begin{align*} \cost{\path*{}} = \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \int_{0}^{\arclength} \frac{1}{\dist[1] + \patharg} \diff{\patharg} \\ &= {\left[ \ln\left( \dist[1] + \patharg \right) \right]}_{0}^{\arclength} \\ &= \ln\left( \dist[1] + \arclength \right) - \ln\left( \dist[1] \right) \\ &= \ln\left( \frac{\dist[1] + \arclength}{\dist[1]} \right) \\ &\geq \ln\left( \frac{\dist[1] + \lbarclength}{\dist[1]} \right). \end{align*} Similarly, let \( \patharg[1] = \arclength \) (Figure~\ref{fig:end-state}). Then by Lemma~\ref{thm:state-clearance} \begin{align*} \clearance{\path{\patharg}} \leq \dist[1] + \arclength - \patharg, \end{align*} and the lower bound on the cost is again \begin{align*} \cost{\path*{}} = \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \int_{0}^{\arclength} \frac{1}{\dist[1] + \arclength - \patharg} \diff{\patharg} \\ &= {\left[ - \ln\left( \dist[1] + \arclength - \patharg \right) \right]}_{0}^{\arclength} \\ &= -\ln\left( \dist[1] \right) - \left( - \ln\left( \dist[1] + \arclength \right) \right) \\ &= \ln\left( \frac{\dist[1] + \arclength}{\dist[1]} \right) \\ &\geq \ln\left( \frac{\dist[1] + \lbarclength}{\dist[1]} \right). \end{align*} \end{proof} \begin{theorem}[An admissible solution-cost heuristic when the clearance of both end states is known]\label{thm:end-states} Let \( \path*{} \) be a path whose arc length, \( \arclength \), is lower bounded by \( \lbarclength \leq \arclength \). Let \( \path{\patharg[1] = 0} \in X \) and \( \path{\patharg[2] = \arclength} \in X \) be its start and goal states with clearances \( \dist[1] \) and \( \dist[2] \), respectively. The reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) is then lower bounded by \begin{align*} \cost{\path*{}} \geq \ln\left( \frac{{\left( \dist[1] + \dist[2] + \lbarclength \right)}^{2}}{4\dist[1]\dist[2]} \right).
\end{align*} \end{theorem} \input{figures/solution_cost_heuristics} \begin{proof} (Figure~\ref{fig:end-states}) The clearance of any state \( \path{\patharg} \), \( 0 \leq \patharg \leq \arclength \), on the path is upper bounded by both clearances \( \dist[1] \) and \( \dist[2] \) according to Lemma~\ref{thm:state-clearance}, \begin{align*} \clearance{\path{\patharg}} &\leq \min\{\dist[1] + \abs{\patharg[1] - \patharg}, \dist[2] + \abs{\patharg[2] - \patharg} \}\\ &= \min\{ \dist[1] + \patharg, \dist[2] + \arclength - \patharg \}. \end{align*} A lower bound on the reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) can therefore be computed by \begin{align*} \cost{\path*{}} &= \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}} \diff{\patharg} \geq \int_{0}^{\arclength} \frac{1}{\min\left\{ \dist[1] + \patharg, \dist[2] + \arclength - \patharg \right\}} \diff{\patharg}.\stepcounter{equation}\tag{\theequation}\label{eq:integral_with_min} \end{align*} Since \( \dist[1] + \patharg \) is strictly monotonically increasing and \( \dist[2] + \arclength - \patharg \) is strictly monotonically decreasing, the two bounds must intersect at some point, \( \patharg[\text{e}] \), \begin{align*} \dist[1] + \patharg[\text{e}] = \dist[2] + \arclength - \patharg[\text{e}] \quad\implies\quad \patharg[\text{e}] = \frac{\dist[2] - \dist[1] + \arclength}{2}. \end{align*} This intersection must lie within the integration limits because by Lemma~\ref{thm:state-clearance} we have \begin{align*} \clearance{\path{\patharg}} \leq \dist[2] + \arclength - \patharg \,\implies\, \clearance{\path{0}} \leq \dist[2] + \arclength \,\implies\, \dist[1] \leq \dist[2] + \arclength \,\implies\, \patharg[\text{e}] \geq 0\phantom{.}\\ \clearance{\path{\patharg}} \leq \dist[1] + \patharg \,\implies\, \clearance{\path{\arclength}} \leq \dist[1] + \arclength \,\implies\, \dist[2] \leq \dist[1] + \arclength \,\implies\, \dist[2] - \dist[1] \leq \arclength \,\implies\, \patharg[\text{e}] \leq \arclength.
\end{align*} The minimum in~\eqref{eq:integral_with_min} can therefore be written as \begin{align*} \min\left\{\dist[1] + \patharg, \dist[2] + \arclength - \patharg \right\} = \begin{cases} \dist[1] + \patharg & \text{if } \patharg \leq \patharg[\text{e}] \\ \dist[2] + \arclength - \patharg & \text{otherwise}, \end{cases} \end{align*} and the definite integral~\eqref{eq:integral_with_min} can be evaluated to \begin{align*} \cost{\path*{}} = \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \int_{0}^{\arclength} \frac{1}{\min\left\{ \dist[1] + \patharg, \dist[2] + \arclength - \patharg \right\}} \diff{\patharg}\\ &= \int_{0}^{\patharg[\text{e}]} \frac{1}{\dist[1] + \patharg} \diff{\patharg} + \int_{\patharg[\text{e}]}^{\arclength} \frac{1}{\dist[2] + \arclength - \patharg} \diff{\patharg} \\ &= {\left[ \ln\left( \dist[1] + \patharg \right) \right]}_{0}^{\patharg[\text{e}]} + {\left[ -\ln\left( \dist[2] + \arclength - \patharg \right) \right]}_{\patharg[\text{e}]}^{\arclength} \\ &= \ln\left( \dist[1] + \frac{\dist[2] - \dist[1] + \arclength}{2} \right) - \ln\left( \dist[1] \right) + \left( -\ln\left( \dist[2] \right) - \left(- \ln\left( \dist[2] + \arclength - \frac{\dist[2] - \dist[1] + \arclength}{2} \right) \right) \right)\\ &= \ln\left( \frac{\dist[1] + \dist[2] + \arclength}{2\dist[1]} \right) + \ln\left( \frac{\dist[1] + \dist[2] + \arclength}{2\dist[2]} \right)\\ &= \ln\left( \frac{{\left(\dist[1] + \dist[2] + \arclength\right)}^{2}}{4\dist[1]\dist[2]} \right) \\ &\geq \ln\left( \frac{{\left(\dist[1] + \dist[2] + \lbarclength\right)}^{2}}{4\dist[1]\dist[2]} \right). \end{align*} \end{proof}
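\par Both solution-cost heuristics are cheap to evaluate in closed form. The following sketch is illustrative only (the straight-line scenario and all names are assumptions): it implements the bounds of Lemma~\ref{thm:end-state} and Theorem~\ref{thm:end-states} and checks admissibility against a numerically integrated true cost.
\begin{verbatim}
# Sketch of the solution-cost heuristics (illustrative, not planner code).
import numpy as np

def cost_lb_one_state(d1, lb_length):
    # Lower bound when the clearance of one end state is known.
    return np.log((d1 + lb_length) / d1)

def cost_lb_two_states(d1, d2, lb_length):
    # Lower bound when the clearance of both end states is known.
    return np.log((d1 + d2 + lb_length)**2 / (4.0 * d1 * d2))

# Admissibility check on a straight-line path past a point obstacle.
obstacle, l = np.array([0.0, 0.0]), 5.0
path = lambda t: np.array([-2.5 + t, 1.0])
ts = np.linspace(0.0, l, 100001)
clearances = np.array([np.linalg.norm(path(t) - obstacle) for t in ts])
true_cost = np.trapz(1.0 / clearances, ts)
d1, d2 = clearances[0], clearances[-1]
assert cost_lb_one_state(d1, l) <= true_cost     # tightest case: lb = l
assert cost_lb_two_states(d1, d2, l) <= true_cost
\end{verbatim}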
\section{Path-cost heuristics}%
\label{sec:path-cost-heuristics}
Informed sampling-based planning algorithms, such as \ac{BIT*}, \ac{ABIT*}, and \ac{AIT*}, also use path-cost heuristics, i.e., estimates of the unknown cost of known paths, for example when ordering their search queues. The solution-cost heuristics of Section~\ref{sec:solution-cost-heuristics} can be made more accurate for known paths by sampling additional states along the path and computing their clearance. This improves performance if the evaluation of the true edge cost is computationally expensive.
\begin{lemma}[An admissible path-cost heuristic when the clearance of any state on the path is known]\label{thm:single-known-state} Let \( \path*{} \in \paths \) be a path with arc length \( \arclength \). Let \( \path{\patharg[1]} \in \valstates \) be the state at arc length \( \patharg[1] \), \( 0 \leq \patharg[1] \leq \arclength \), along this path, and let \( \dist[1] \coloneqq \clearance{\path{\patharg[1]}} \) be the known clearance of this state. The reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) is then lower bounded by \begin{align*} \cost{\path*{}} \geq \ln\left( \frac{\dist[1] + \patharg[1]}{\dist[1]} \right) + \ln\left(\frac{\dist[1] + \arclength - \patharg[1]}{\dist[1]} \right).\stepcounter{equation}\tag{\theequation}\label{eq:lower-bound-single-state} \end{align*} \end{lemma} \begin{proof} (Figure~\ref{fig:single-state}) The lower bound is computed by replacing the clearance function in the integrand of the reciprocal clearance cost~\eqref{eq:reciprocal_obstacle_clearance_cost} with the upper bound on the clearance~\eqref{eq:upper_bound_clearance}, \begin{align*} \cost{\path*{}} = \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \int_{0}^{\arclength} \frac{1}{\dist[1] + \abs{\patharg[1] - \patharg}} \diff{\patharg} \\ &= \int_{0}^{\patharg[1]} \frac{1}{\dist[1] + \patharg[1] - \patharg} \diff{\patharg} + \int_{\patharg[1]}^{\arclength} \frac{1}{\dist[1] + \patharg - \patharg[1]} \diff{\patharg} \\ &= {\left[ -\ln\left( \dist[1] + \patharg[1] - \patharg \right) \right]}_{0}^{\patharg[1]} + {\left[ \ln\left( \dist[1] + \patharg - \patharg[1] \right) \right]}_{\patharg[1]}^{\arclength} \\ &= -\ln\left( \dist[1] \right) - \left(-\ln\left( \dist[1] + \patharg[1] \right) \right) +\ln\left( \dist[1] + \arclength - \patharg[1] \right) - \ln\left( \dist[1] \right) \\ &= \ln\left( \frac{\dist[1] + \patharg[1]}{\dist[1]} \right) + \ln\left( \frac{\dist[1] + \arclength - \patharg[1]}{\dist[1]} \right). \end{align*} Note that for \( \patharg[1] = 0 \) or \( \patharg[1] = \arclength \) this reduces to the result of Lemma~\ref{thm:end-state} with \( \lbarclength = \arclength \). \end{proof} \begin{theorem}[An admissible path-cost heuristic when the clearance of multiple states on the path is known]\label{thm:arbitrary-num-states} Let \( \path*{} \in \paths \) be a path with arc length \( \arclength \). Let \( 0 \leq \patharg[1] < \patharg[2] < \cdots < \patharg[n] \leq \arclength \) be a sequence of \( n \) numbers between \( 0 \) and \( \arclength \) whose associated states on the path have known clearance, \( \dist[i] \coloneqq \clearance{\path{\patharg[i]}} \) for \( i = 1, 2, 3, \hdots, n \). The reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) is then lower bounded by \begin{align*} \cost{\path*{}} \geq \ln\left( \frac{\dist[1] + \patharg[1]}{\dist[1]} \right) + \sum_{i = 1}^{n - 1} \ln\left( \frac{{\left( \dist[i] + \dist[i + 1] + \patharg[i + 1] - \patharg[i] \right)}^{2}}{4\dist[i]\dist[i + 1]} \right) +\ln\left( \frac{\dist[n] + \arclength - \patharg[n]}{\dist[n]} \right). \end{align*} \end{theorem} \begin{proof} (Figure~\ref{fig:many-states}) The proof follows from Lemma~\ref{thm:end-state} and Theorem~\ref{thm:end-states} by splitting the clearance cost into \( n + 1 \) segments between the states with known clearance, \begin{align*} \cost{\path*{}} &= \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} \\ &= \int_{0}^{\patharg[1]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} + \int_{\patharg[1]}^{\patharg[2]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} + \cdots + \int_{\patharg[n-1]}^{\patharg[n]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} + \int_{\patharg[n]}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg}.
\end{align*} Let the arc lengths of these segments be denoted by \begin{align*} \arclength[0] = \patharg[1], \quad \arclength[1] = \patharg[2] - \patharg[1], \quad \ldots, \quad \arclength[n - 1] = \patharg[n] - \patharg[n - 1], \quad \arclength[n] = \arclength - \patharg[n]. \end{align*} The first segment is a path of arc length \( \arclength[0] \) with known clearance of its end state and is therefore lower bounded by Lemma~\ref{thm:end-state}, \begin{align*} \int_{0}^{\patharg[1]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \ln\left( \frac{\dist[1] + \arclength[0]}{\dist[1]} \right) = \ln\left( \frac{\dist[1] + \patharg[1]}{\dist[1]} \right).\stepcounter{equation}\tag{\theequation}\label{eq:first-segment} \end{align*} Similarly, the last segment is a path of arc length \( \arclength[n] \) with known clearance of its start state and is therefore also lower bounded by Lemma~\ref{thm:end-state}, \begin{align*} \int_{\patharg[n]}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} &\geq \ln\left( \frac{\dist[n] + \arclength[n]}{\dist[n]} \right) \\ &= \ln\left( \frac{\dist[n] + \arclength - \patharg[n]}{\dist[n]} \right).\stepcounter{equation}\tag{\theequation}\label{eq:last-segment} \end{align*} Each of the segments between \( \patharg[1] \) and \( \patharg[n] \) can be viewed as a path with known clearance at the end states and is therefore lower bounded by the result of Theorem~\ref{thm:end-states}. Specifically, the segment from \( \patharg[i] \) to \( \patharg[i + 1] \) with \( i \in \{1, \ldots, n - 1\} \) is lower bounded by \begin{align*} \int_{\patharg[i]}^{\patharg[i + 1]} \frac{1}{\clearance{\path{\patharg}}} \diff{\patharg} &\geq \ln\left( \frac{{\left( \dist[i] + \dist[i + 1] + \arclength[i] \right)}^{2}}{4\dist[i]\dist[i + 1]} \right)\\ &= \ln\left( \frac{{\left( \dist[i] + \dist[i + 1] + \patharg[i + 1] - \patharg[i] \right)}^{2}}{4\dist[i]\dist[i + 1]} \right).\stepcounter{equation}\tag{\theequation}\label{eq:mid-segment} \end{align*} A lower bound on the path cost can be computed by adding the lower bounds~\eqref{eq:first-segment},~\eqref{eq:mid-segment}, and~\eqref{eq:last-segment}, \begin{align*} \cost{\path*{}} &= \int_{0}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} \\ &= \int_{0}^{\patharg[1]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} + \sum_{i = 1}^{n - 1} \int_{\patharg[i]}^{\patharg[i + 1]} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} + \int_{\patharg[n]}^{\arclength} \frac{1}{\clearance{\path{\patharg}}}\diff{\patharg} \\ &\geq \ln\left( \frac{\dist[1] + \patharg[1]}{\dist[1]} \right) + \sum_{i = 1}^{n - 1} \ln\left( \frac{{\left( \dist[i] + \dist[i + 1] + \patharg[i + 1] - \patharg[i] \right)}^{2}}{4\dist[i]\dist[i + 1]} \right) + \ln\left( \frac{\dist[n] + \arclength - \patharg[n]}{\dist[n]} \right). \end{align*} \end{proof} \input{figures/path_cost_heuristics} The accuracy of this heuristic improves as the number of states of known clearance increases (Figures~\ref{fig:many-states} and~\ref{fig:arbitrary-states}). If the start and end states are among the states of known clearance, then the path is a chain of paths whose end states have known clearance and Theorem~\ref{thm:arbitrary-num-states} simplifies to a sum of Theorem~\ref{thm:end-states} over all segments (Corollary~\ref{thm:path-cost-end-states}).
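\par Computationally, the bound of Theorem~\ref{thm:arbitrary-num-states} is a simple sum over segments. The following sketch is again purely illustrative (the scenario and names are assumptions) and shows how the bound tightens as more states of known clearance are added.
\begin{verbatim}
# Sketch of the multi-state path-cost heuristic (illustrative names).
import numpy as np

def path_cost_lb(ts, ds, length):
    # ts: sorted arc lengths of states with known clearance; ds: clearances.
    ts, ds = np.asarray(ts, float), np.asarray(ds, float)
    bound = np.log((ds[0] + ts[0]) / ds[0])               # first segment
    bound += np.sum(np.log((ds[:-1] + ds[1:] + np.diff(ts))**2
                           / (4.0 * ds[:-1] * ds[1:])))   # middle segments
    bound += np.log((ds[-1] + length - ts[-1]) / ds[-1])  # last segment
    return bound

# Straight-line path past a point obstacle; true cost is approx. 3.29.
obstacle, length = np.array([0.0, 0.0]), 5.0
path = lambda t: np.array([-2.5 + t, 1.0])
for n in (2, 5, 50):
    ts = np.linspace(0.0, length, n)
    ds = np.array([np.linalg.norm(path(t) - obstacle) for t in ts])
    print(n, path_cost_lb(ts, ds, length))  # tightens toward the true cost
\end{verbatim}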
\begin{corollary}[An admissible path-cost heuristic when the clearance of the end states and other states of the path is known]\label{thm:path-cost-end-states} Let \( \path*{} \in \paths \) be a path with arc length \( \arclength \). Let \( 0 = \patharg[1] < \patharg[2] < \cdots < \patharg[n] = \arclength \) be a sequence of \( n \) numbers between \( 0 \) and \( \arclength \) whose associated states on the path have known clearance, \( \dist[i] \coloneqq \clearance{\path{\patharg[i]}} \) for \( i = 1, 2, 3, \hdots, n \). The reciprocal clearance cost \( \cost{\path*{}} \) of the path \( \path*{} \) is then lower bounded by \begin{align*} \cost{\path*{}} \geq \sum_{i = 1}^{n - 1} \ln\left( \frac{{\left( \dist[i] + \dist[i + 1] + \patharg[i + 1] - \patharg[i] \right)}^{2}}{4\dist[i]\dist[i + 1]} \right). \end{align*} \end{corollary} \begin{proof} (Figure~\ref{fig:arbitrary-states}) The proof follows from Theorem~\ref{thm:arbitrary-num-states} by setting \( \patharg[1] = 0 \) and \( \patharg[n] = \arclength \). \end{proof} \bibliographystyle{plainnat}
1,116,691,500,592
arxiv
\section{Introduction} Imitation learning (IL), which tries to mimic the expert's behaviors, has been successfully applied to many tasks, \emph{e.g.}, self-driving \cite{bojarski2016end,li2017infogail}, navigation \cite{silver2008high}, and robot locomotion \cite{merel2017learning}. According to the information contained within the expert references, imitation learning can be divided into two categories, Learning from Demonstration (LfD) and Learning from Observation (LfO) \cite{goo2019one,wake2020learning}. Specifically, the only difference between LfD and LfO is the required input data, \emph{i.e.}, the former demands both states and actions while the latter only needs states. Intuitively, without actions from the expert, it would be more challenging to duplicate behaviors in the imitation learning scenario. However, the motivations for LfO are comparatively strong: (1) it is infeasible to record actions in various practical scenarios. For example, in the robot locomotion task, we often desire robots to clone human behaviors, but it is very difficult to obtain human actions (\emph{e.g.}, the forces and torques acting on the joints and actuators \cite{peng2018deepmimic,wang2019imitation,park2019learning}). In these cases, LfD is impracticable due to the absence of human actions. By contrast, LfO could still work effectively by exploiting massive online videos \cite{peng2018sfv}; (2) it is quite unnatural to conduct imitation learning using actions as the guidance. Intelligent creatures primarily learn skills by observing how others accomplish tasks \cite{douglas2006observational}. Thus, exploiting clues from actions in imitation learning would somewhat limit the intelligence of agents to a comparatively low level; (3) directly imitating actions is infeasible in situations where the expert and the learner are executing tasks under different environment dynamics \cite{Gangwani2020State-only}. Hence, naively asking the learner to copy the actions of the expert would lead to performance degeneration or even control failure. Though the three motivations mentioned above illustrate the necessity of LfO, a huge performance gap between LfO and LfD has been reported in previous works \cite{torabi2018behavioral,torabi2018generative,yang2019imitation}. The comparatively low performance of LfO imposes restrictions on its potential in real-world applications. Intuitively, the learning process for LfD is more straightforward since expert references provide direct action guidance to the imitator. On the contrary, in LfO, the agent is required to figure out the right action to generate state transitions similar to the expert ones. This is the standing conjecture for the existence of the performance disparity \cite{torabi2018generative}. Based on this conjecture, Yang et al. \cite{yang2019imitation} proved that the performance gap is primarily caused by the inverse dynamics disagreement between the imitator and the expert. They further pointed out that LfO is equivalent to LfD when the environment dynamics are injective. However, an apparently contradictory phenomenon has been observed: LfO can perform as well as LfD in simple environments whose dynamics are not injective (\emph{e.g.,} Pendulum-v2) \cite{torabi2018generative,sun2019adversarial}. Therefore, investigating the root of the performance gap and bridging the gap between these two methods are significantly beneficial for the employment of LfO in practice.
In this paper, we conduct a deep investigation into the difference between LfO and LfD from the perspective of control theory, and we prove that the inverse dynamics disagreement between the expert and the imitator approaches zero in deterministic robot tasks, or robot tasks with bounded randomness, which means that LfO is almost equivalent to LfD. Based on the most representative imitation learning algorithms, Generative Adversarial Imitation Learning (GAIL) \cite{ho2016generative} and Generative Adversarial Imitation from Observation (GAIfO) \cite{torabi2018generative}, we first use the deterministic property of the environment to analyze the value of the inverse dynamics model in a more rigorous way. Then, we establish the Euler-Lagrange dynamical equation for typical Mujoco tasks, and prove the existence and uniqueness of the action for a feasible state transition $(s,s')$. Combining the above two results, we prove that, in the same deterministic robot environment, the inverse dynamics models for the imitator and the expert are almost equal everywhere. Thus, the inverse dynamics disagreement between the expert and the imitator approaches zero, meaning that LfO is almost equivalent to LfD in deterministic robot tasks. To further relax the deterministic constraint and better adapt to environments in practice, we consider bounded randomness in the robot environment and prove that the optimizing targets for LfD and LfO remain almost the same in this more generalized setting. The primary contributions of our paper are summarized as follows: \begin{itemize} \item By exploiting the Euler-Lagrange dynamical equation from control theory, we are the first to prove that in a deterministic robot environment, the performance gap between LfO and LfD approaches zero. \item We further prove that the optimizing targets for LfO and LfD remain almost the same even if bounded randomness is considered in the robot dynamics, which extends our conclusion to more generalized settings. Moreover, this makes it possible to employ LfO in practical applications. \item Extensive experiments on various robot tasks illustrate that the performance of LfO is comparable to that of LfD. Furthermore, we give some discussions and suggestions on the factors which may affect the performance of LfO and LfD. \end{itemize} \section{Related Works} Imitation learning tries to reproduce a policy that can mimic an expert's specific behaviors using only expert references. From the perspective of the information contained in expert trajectories, imitation learning can be divided into Learning from Demonstration (LfD) and Learning from Observation (LfO) \cite{goo2019one,wake2020learning}. \paragraph{LfD} LfD utilizes both states and actions to conduct imitation learning and can be divided into two categories, \emph{i.e.}, Behavior Cloning (BC) and Inverse Reinforcement Learning (IRL). Behavior Cloning, as indicated by its name, aims to clone the expert's behaviors from expert trajectories which contain states and actions \cite{Bain95,Ross10,Ross11}. Thus, BC solves the imitation problem in a supervised learning manner, taking states as the inputs and actions as the labels. Given current states, the agent learns to predict actions as close as possible to the expert ones. Inverse Reinforcement Learning imitates the expert from the perspective of reward shaping. IRL first reconstructs a reward function from expert trajectories, and then uses this reward function to guide a standard reinforcement learning process.
A recent advance in IRL is Generative Adversarial Imitation Learning (GAIL) \cite{ho2016generative}, which makes no assumption on the form of the reward function. Instead, it utilizes a discriminator to measure the similarity of state-action pairs between the imitator and the expert, and takes the similarity as the reward to perform forward reinforcement learning. \paragraph{LfO} LfO is developed in a scenario where expert actions are absent. Similar to BC and GAIL in LfD, corresponding LfO versions have been proposed, \emph{i.e.}, Behavior Cloning from Observation (BCO) \cite{torabi2018behavioral} and Generative Adversarial Imitation from Observation (GAIfO) \cite{torabi2018generative}. BCO employs an inverse dynamics model to infer the likely expert actions, and then uses the inferred expert actions to conduct standard BC. Unlike GAIL, which uses state-action pairs to generate the reward, GAIfO exploits state transitions to obtain the reward. Generally, the performance of GAIfO is inferior to that of GAIL due to the lack of direct action guidance. Another approach in LfO is to design a hand-crafted reward function with expert states and then employ ordinary reinforcement learning to maximize the episode cumulative reward \cite{peng2018deepmimic,peng2018sfv,peng2020learning}. However, it is not easy to design a proper hand-crafted reward function, since there is no mature design paradigm for reward functions and the task requires background knowledge from the given application field. \paragraph{Relationship between LfD and LfO} Several studies have empirically shown that LfO is more difficult than LfD \cite{torabi2018behavioral,torabi2018generative}. In other words, the performance of LfO is inferior to that of LfD. An intuitive explanation is that expert references in LfD are able to teach the imitator the correct actions directly. By contrast, in LfO, the imitator is required to find out the right actions and generate state transitions that are similar to those of the expert via interacting with the environment. Evidently, this process brings higher complexity and difficulty. Specifically, Yang et al. \cite{yang2019imitation} investigated the reason for the performance gap between LfD and LfO using two representative IL algorithms, GAIL and GAIfO. They believed that, in complex real-world environments, the inverse dynamics disagreement is not zero, implying that a performance gap exists between LfD and LfO. They further presented a corollary that LfD is equivalent to LfO in injective dynamics. However, according to the definition of injectivity, injective dynamics indicate that a distinct state transition $(s,s')$ in the domain corresponds to a distinct action $a$ in the codomain. This is quite a harsh requirement, and we could not find corresponding environments in practice. Unlike previous works, we suggest that the performance gap between LfD and LfO approaches zero in deterministic robot systems (a common class of systems). In particular, we do not require the environment dynamics to be injective. By contrast, we only require that there exist a unique action $a$ for a feasible state transition $(s,s')$. On the other hand, an action $a$ could correspond to many distinct state transitions, which means that the environment dynamics need not be injective. As a result, our work greatly relaxes the prerequisite conditions for the equivalence between LfD and LfO, and extends the possible application domains of LfO in practice.
\section{Preliminaries} To derive our theorem, prerequisite definitions and concepts are presented in advance. \paragraph{Markov Decision Process} We consider a Markov Decision Process described by $(S,A,T,R,\gamma)$, where $S$ and $A$ represent the state space and action space respectively, $T=T(s'|s,a)$ is the environment dynamics modeling the probability of state transitions over actions, $R:S \times A \rightarrow \mathbb{R}$ is the reward function, and $\gamma$ is the discount factor. Let $\pi(a_t|s_t):S\times A\rightarrow[0,1]$ be a stochastic policy for the agent, where $t$ is the current timestep, and let $J(\pi)=\mathbb{E}_{s_{0},a_{0},\cdots}\left[R_0\right]$ denote the expected discounted reward, in which $R_t=\sum_{l=0}^{\infty}\gamma^l r_{t+l}$, $s_0\sim\rho_0(s_0)$, $a_t\sim\pi(a_t|s_t)$, $s_{t+1}\sim T(s_{t+1}|s_t,a_t)$, and $\rho_0(s_0)$ is the probability distribution of the initial state $s_0$. The goal of Reinforcement Learning (RL) algorithms is to find the optimal policy $\pi^{*}(a_t|s_t)$, which achieves the maximum episode cumulative reward $J^*(\pi)$. \par To measure the discrepancy between trajectories, we introduce the following definition. \begin{definition}[Occupancy Measure]\cite{puterman1994markov,torabi2018generative} The state occupancy measure of a policy $\pi_\theta$ is defined as \begin{align*} \rho_{\pi_\theta}(s) = \sum_{t=0}^{\infty} \gamma^t P(s_t = s|\pi_{\theta}), \end{align*} where $\gamma$ is the discount factor and $\pi_{\theta}(a|s)$ stands for the probability of the policy taking action $a$ given state $s$. For brevity, we omit the parameter of a policy and denote $\pi_{\theta}$ as $\pi$. \end{definition} In order to model the relationship between state transitions and actions, the inverse dynamics model is defined below. \begin{definition}[Inverse Dynamics Model]\cite{spong1990adaptive}\label{definition:inversedynamicsmodel} Given a policy $\pi$ and the dynamics of the environment $T(s'|s,a)$, we can define the density of the inverse dynamics model: \begin{align}\label{equation:inversedynamicsmodel} \rho_{\pi}(a|s,s') = \frac{T (s'|s,a)\pi(a|s)}{\int_{A} T (s'|s,\bar a)\pi(\bar a|s) d\bar a}. \end{align} \end{definition} \paragraph{GAIL} GAIL tries to minimize the divergence between the expert trajectory and the agent trajectory. When the trajectory generated from the agent's current policy matches that of the expert, the trained policy can achieve similar performance to the expert. The divergence could be the Kullback–Leibler divergence \cite{kullback1951information} or the Jensen–Shannon divergence \cite{lin1991divergence}, which measures the distance between trajectory distributions. The objective is formalized as follows: \begin{align*} \begin{split} \mathop{\min}\limits_{\theta} \mathop{\max}\limits_{w} \mathop{\mathbb{E}}\limits_{s,a \sim \rho_\theta}[\log D_w(s,a)]+\mathop{\mathbb{E}}\limits_{s,a \sim \rho_{E}}[\log(1- D_w(s,a))], \end{split} \end{align*} where $D_w(s,a)$ is a discriminator which judges the similarity of a state-action pair $(s,a)$ to the reference one, and $w$ represents the weights of the discriminator network.
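\par As a concrete illustration of this objective, a minimal discriminator update can be sketched as follows. This is only a PyTorch-style sketch with illustrative names, not the reference implementation; the GAIfO variant introduced next is obtained by feeding $(s,s')$ pairs instead of $(s,a)$ pairs.
\begin{verbatim}
# Minimal sketch of a GAIL discriminator update (illustrative names).
import torch
import torch.nn as nn

def make_discriminator(input_dim, hidden=100):
    # Logit of D_w; input_dim = dim(s) + dim(a) for GAIL,
    # dim(s) + dim(s') for GAIfO.
    return nn.Sequential(nn.Linear(input_dim, hidden), nn.Tanh(),
                         nn.Linear(hidden, hidden), nn.Tanh(),
                         nn.Linear(hidden, 1))

def discriminator_step(disc, opt, agent_batch, expert_batch):
    # Maximize E_agent[log D] + E_expert[log(1 - D)] by descending
    # the equivalent binary cross-entropy.
    bce = nn.BCEWithLogitsLoss()
    loss = (bce(disc(agent_batch), torch.ones(len(agent_batch), 1)) +
            bce(disc(expert_batch), torch.zeros(len(expert_batch), 1)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}
The generator (the policy, trained with TRPO in our experiments) then maximizes a reward derived from the discriminator output.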
\paragraph{GAIfO} GAIfO is a modified version of GAIL which only uses the observation pair $(s,s')$ to learn. Its objective is defined as follows: \begin{align*} \begin{split} \mathop{\min}\limits_{\theta} \mathop{\max}\limits_{w} \mathop{\mathbb{E}}\limits_{s, s' \sim \rho_\theta}[\log D_w(s,s')]+\mathop{\mathbb{E}}\limits_{s, s' \sim \rho_{E}}[\log(1- D_w(s, s'))]. \end{split} \end{align*} However, according to the literature \cite{torabi2018generative,yang2019imitation}, GAIfO performs worse than GAIL despite its merit of not requiring expert actions. An insight into the performance gap is presented below. \paragraph{Inverse Dynamics Disagreement between LfD and LfO} Yang et al. \cite{yang2019imitation} investigated the cause of the performance gap and revealed it with the following equation: \begin{align*} \begin{split} \mathbb{D}_{KL}(\rho_{\pi}(a|s,s')||\rho_{E}(a|s,s'))=\mathbb{D}_{KL}(\rho_{\pi}(s,a)||\rho_{E}(s,a)) - \mathbb{D}_{KL}(\rho_{\pi}(s,s')||\rho_{E}(s,s')), \end{split} \end{align*} where $\mathbb{D}_{KL}(\rho_{\pi}(a|s,s')||\rho_{E}(a|s,s'))$ is the inverse dynamics disagreement which induces the performance gap between LfD and LfO. Furthermore, they proved that LfD and LfO should be equivalent when the environment dynamics are injective. In this paper, we further analyze the inverse dynamics disagreement in a stricter manner by utilizing the properties of robot environment dynamics, and our analysis proves that the equivalence of LfD and LfO holds even when the environment dynamics are not injective. \section{Method} In this section, we present our theoretical results---the almost equivalence between LfD and LfO in both deterministic robot environments and robot environments with bounded randomness---organized into three subsections. First, some properties of the deterministic robot environment and the Euler-Lagrange dynamical equations are presented; then, we prove that on deterministic robot tasks, LfD and LfO are almost the same in terms of the optimizing target, which means that the performance gap between them approaches zero; finally, by exploiting the bounded randomness of the environment and the Lipschitz continuity of policies, the almost equivalence between LfO and LfD is still guaranteed. Theoretically, we can see that LfO could essentially perform as well as LfD, and our findings increase the potential for LfO to be applied in real-world robot systems. In the next section, experimental results validate our theoretical analysis. \par \subsection{Deterministic Robot Environments}\label{subsection:deterministic} For LfD and LfO as considered in Ho et al. \cite{ho2016generative} and Torabi et al. \cite{torabi2018generative}, respectively, the learner policy interacts with the same environment from which the expert reference is sampled. That is to say, we do not consider environment variations in the imitation learning process, and this setting is consistent with the previous research reporting the performance discrepancy. Thus, to formalize this common situation in imitation learning, an assumption is made consequently. \begin{assumption}\label{assum:samedynamics} The environment dynamics for the learner and the expert remain the same. \end{assumption} This assumption makes sense in reality because the most basic imitation task demands that the learner imitate the expert without environment dynamics changes or morphology inconsistency \cite{ho2016generative,sun2019adversarial}.
Furthermore, in this subsection, we focus on a more specific kind of environment, \emph{i.e.}, the deterministic robot environment, which is the most fundamental setting for robots and is thus significant for investigating the difference between LfO and LfD. The deterministic property means that there is no randomness in the environment, leading to the following lemma. \begin{lemma}\label{lemma:dynamics} For a deterministic environment, given the current state $s$, taking action $a$ will result in a certain next state $s'$. To distinguish this certain next state from the others, we denote it as $s_{d}'$, in which the subscript $d$ stands for determinacy. Consequently, the transition dynamics for both the imitator and the demonstrator can be further specified as: \begin{align*} T (s'|s,a)=\left\{ \begin{array}{rcl} 1.0 & & {s'=s_{d}' }\\ 0.0 & & {s'\in S \text{ and } s'\neq s_{d}'} \end{array} \right . \end{align*} \end{lemma} The zero-one property of the environment dynamics can help us simplify the inverse dynamics model in Eq. \eqref{equation:inversedynamicsmodel}. Besides, to present another feature of the deterministic robot environment, we borrow ideas from the field of control science. In MDPs, $T(s'|s,a)$ is employed to describe the environment dynamics, \emph{i.e}., the dynamical relationship between forces and motions. In contrast to $T(s'|s,a)$ in MDPs, the control science community uses differential equations to represent this relationship. Under Lagrangian mechanics \cite{brizard2nd}, we obtain the dynamics model in control science as follows: \begin{lemma}\label{lemma:lagrange} \cite{dubowsky1993kinematics,westervelt1st,nanos2012cartesian} The differential equation, called the Euler-Lagrange dynamical equation, which models the dynamics of deterministic robot environments, can be written in the following form: \begin{align}\label{equation:Euler-La} M(q)\ddot q + C(q,\dot q)\dot q + G(q)= u, \end{align} where $q$ is the generalized coordinates, $\dot q$ is the derivative of the generalized coordinates, and $u$ is the control input. $M(q)$ is the inertia matrix, which is positive definite, $C(q,\dot q)$ describes the centripetal and Coriolis torques, and $G(q)$ is the vector of gravitational torques. \end{lemma} From the above lemma, Eq. \eqref{equation:Euler-La} is able to describe the system dynamics of the robot. Besides, both $q$ and $\dot q$ are defined as system states in control theory. However, to distinguish the states in the MDP from those in the Euler-Lagrange dynamical equation, we will use the term states only for the MDP and generalized coordinates for the control model. The connection between $s$ and $(q,\dot q)$ is illustrated by the following remark. \begin{remark}\label{remark:equalqa} The generalized coordinates $(q,\dot q)$ have the capacity to express all the dynamical information of the robot. Hence, the state $s$ in the MDP is equivalent to $(q,\dot q)$ in control theory via some specific transforms. \end{remark} This means that $s$ and $(q,\dot q)$ can replace each other. As a result, we can investigate $(q,\dot q)$ rather than $s$ in the MDP. In robot control, the control inputs are often forces or torques acting on the joints. Under this setting, the control input $u$ is exactly the action $a$.
Hence, we can use the Euler-Lagrange dynamical equation to analyze the relationship between the generalized coordinates transition $((q_t,\dot q_t),(q_{t+1},\dot q_{t+1}))$ and the control input $u$ instead of analyzing the unknown environment dynamics $T(s'|s,a)$. \subsection{Almost Equivalence between LfO and LfD in Deterministic Robot Environments}\label{subsection:eqdeterministic} In this subsection, we first analyze the inverse dynamics models and the sufficient condition under which the inverse dynamics models of the imitator and the expert are equal everywhere. Then, we give the theorem proving that the inverse dynamics disagreement between LfD and LfO approaches zero, \emph{i.e.}, $\mathbb{D}_{KL}(\rho_{\pi}(a|s,s')||\rho_{E}(a|s,s'))\approx0$, leading to the almost equivalence of both considered approaches. \par The difference between LfD and LfO lies in the inverse dynamics disagreement, that is, \begin{align}\label{equation:detailinversedynamicsdis} \begin{split} \mathbb{D}_{KL}(\rho_{\pi}(a|s,s')||\rho_{E}(a|s,s')) = \int_{S\times A\times S} \rho_{\pi}(s,a,s') \log \frac{\rho_{\pi}(a|s,s')}{\rho_{E}(a|s,s')} \,ds\,da\,ds'. \end{split} \end{align} Eq. \eqref{equation:detailinversedynamicsdis} indicates that the inverse dynamics models of the learner and the expert determine how large the performance gap would be. As a result, based on Definition \ref{definition:inversedynamicsmodel} and Eq. \eqref{equation:detailinversedynamicsdis}, if the environment dynamics $T(s'|s,a)$ together with the imitator policy $\pi_{\theta}(a|s)$ are able to guarantee that $\rho_{\pi}(a|s,s')=\rho_{E}(a|s,s')$ for every $(s,s',a)\in S\times S\times A$, then $\mathbb{D}_{KL}(\rho_{\pi}(a|s,s')||\rho_{E}(a|s,s'))=0$ and no difference exists between the two approaches. For deterministic robot environments, the theorem on the almost equivalence of the inverse dynamics models for the learner and the expert is given as follows. \begin{theorem}\label{theorem1} Under Assumption \ref{assum:samedynamics}, for a deterministic robot plant, the inverse dynamics disagreement between the learner policy and the expert policy approaches zero. \end{theorem} \begin{proof} Our analysis relies on the deterministic property and the Euler-Lagrange dynamical equation. We first use the deterministic property to simplify the inverse dynamics models and obtain the sufficient condition that ensures the equality of the inverse dynamics models between LfO and LfD. Subsequently, we employ the Euler-Lagrange dynamical equation to prove the uniqueness of the action for a feasible state transition $(s,s')$. \par \textit{Sufficient Condition.} For a deterministic system, the state-action-state tuples $(s,a,s')$ recorded by interacting with the environment satisfy $T(s'|s,a)=1$. The actions in the set $A$ can be divided into two subsets: the actions belonging to the first subset can transfer state $s$ to $s'$, while the others cannot. To represent the first subset of actions, we use the symbol $A_{f}$, in which the subscript $f$ stands for feasibility.
Thus, the inverse dynamics models for the imitator and the demonstrator can be simplified as follows: \begin{align} &\rho_{\pi}(a|s,s')=\frac{\pi_{\theta}(a|s)}{\int_{A_f} \pi_{\theta}(\bar a|s) d\bar a},\label{euqation:inversemodel0}\\ &\rho_{E}(a|s,s')=\frac{\pi_{E}(a|s)}{\int_{A_f} \pi_{E}(\bar a|s) d\bar a}.\label{euqation:inversemodel1} \end{align} Since we have no access to the expert policy, the only way to guarantee $\rho_{\pi}(a|s,s')=\rho_{E}(a|s,s')$ is $\left| A_f\right|=1$, meaning that there is only one element in the set $A_f$. In other words, for a feasible state transition $(s,s')$, there is a unique action $a$ which can transfer the current state $s$ to the next state $s'$. Thus, the inverse dynamics models are obtained as: \begin{align*} \rho_{\pi}(a|s,s')=\frac{\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)}=1=\frac{\pi_{E}(a|s)}{\pi_{E}(a|s)}=\rho_{E}(a|s,s'). \end{align*} Now, the equality of $\rho_{\pi}(a|s,s')$ and $\rho_{E}(a|s,s')$ reduces to whether a unique action $a$ exists for a feasible state transition $(s,s')$.\par \textit{Uniqueness of the Action.} According to Remark \ref{remark:equalqa}, we focus on whether more than one control input $u$ can transfer the generalized coordinates from $(q_t,\dot q_t)$ to $(q_{t+1},\dot q_{t+1})$, where $t$ is the current timestep. We assume that there exist at least two possible control inputs $u_0$ and $u_1$ with $u_0 \neq u_1$, both of which can bring the current coordinates $(q_t,\dot q_t)$ to the next $(q_{t+1},\dot q_{t+1})$. When applying $u$ to the robot, the accelerations $\ddot q$ are generated from Eq. \eqref{equation:Euler-La}. Based on Euler methods \cite{butcher2008numerical} and the Euler-Lagrange dynamical equation (Lemma \ref{lemma:lagrange}), we prove that \begin{align*} u_0 \approx u_1. \end{align*} This result contradicts the assumption that there exist at least two different control inputs that can achieve the same generalized coordinates transition. Hence, given a transition of the generalized coordinates, there is only one corresponding control input $u_{uni}$. Consequently, based on Remark \ref{remark:equalqa}, the existence and uniqueness of the action $a$ for a feasible state transition $(s,s')$ have been proved. Returning to Eqs. \eqref{euqation:inversemodel0} and \eqref{euqation:inversemodel1}, we can guarantee that the inverse dynamics models for the learner and the expert are almost equal everywhere. In conclusion, for deterministic robot tasks, the almost equivalence between LfD and LfO holds. \end{proof} Theorem \ref{theorem1} tells us that LfO is almost equivalent to LfD, and thus there would be no performance difference in deterministic robot environments. Hence, researchers can safely employ LfO in such environments without worrying about performance degeneration, which greatly expands its application range. We provide a more detailed proof in the Appendix.
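\par To make the uniqueness argument concrete, consider the simplest instance of Eq. \eqref{equation:Euler-La}, a frictionless pendulum. The following sketch (all parameter values and names are illustrative assumptions, and a forward-Euler discretization is used) shows that a given transition of the generalized coordinates pins down the control input.
\begin{verbatim}
# Unique control input for a given coordinate transition, illustrated on
# a frictionless pendulum: M*qdd + G(q) = u with M = m*l^2 (positive
# definite) and G(q) = m*g*l*sin(q). Euler step of size dt is assumed.
import numpy as np

m, l, g, dt = 1.0, 0.5, 9.81, 1e-3
M = m * l**2
G = lambda q: m * g * l * np.sin(q)

def step(q, qd, u):
    # One Euler step of the pendulum dynamics.
    qdd = (u - G(q)) / M
    return q + dt * qd, qd + dt * qdd

def recover_u(q, qd, qd_next):
    # The transition fixes qdd = (qd_next - qd) / dt, and the invertible
    # M makes u = M*qdd + G(q) the unique consistent control input.
    return M * (qd_next - qd) / dt + G(q)

q, qd, u_true = 0.3, -0.2, 0.7
q_next, qd_next = step(q, qd, u_true)
assert np.isclose(recover_u(q, qd, qd_next), u_true)
\end{verbatim}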
\subsection{Almost Equivalence between LfO and LfD in Robot Environments with Bounded Randomness}\label{subsection:random} Eq. \eqref{equation:Euler-La} is an ideal analytical model of the robot system: it does not contain disturbances \cite{liu2000disturbance}, un-modeled dynamics \cite{nguyen2015adaptive}, or parameter uncertainties \cite{burkan2003upper}. These factors lead to stochasticity in the environment dynamics and impair the universality of Theorem \ref{theorem1}. To further cope with the situation in reality, we assume bounded randomness in the robot environment dynamics \cite{lu2010experimental,lin2011modeling,li2017robustness}. This means that the environment dynamics $T(s'|s,a)$ can randomly transfer the current state $s$ with action $a$ to a range of next states $\{s'|s'\in(s'_d-\frac{\epsilon}{2},s'_d+\frac{\epsilon}{2})\}$, where $s'_d$ is the certain next state in the deterministic setting and $\epsilon$ is the bound of the randomness, which is assumed to be small. This assumption is reasonable and is able to cover the environment dynamics in the real world. Then, a corollary on the inverse dynamics disagreement between LfD and LfO in random robot environments is presented below. \begin{corollary}\label{corollary1} When the randomness of the environment dynamics is bounded by a small number $\epsilon$ and the policies are Lipschitz continuous, the inverse dynamics disagreement between the learner policy and the expert policy approaches zero. \end{corollary} This means that even in robot environments with bounded randomness, LfO is almost equivalent to LfD. Corollary \ref{corollary1} extends Theorem \ref{theorem1} to the stochastic setting and enlarges the application fields of LfO. The proof of this corollary is in the Appendix. \par It should be noted that even though the optimizing targets for LfD and LfO are almost the same, this does not mean that the performance of LfD and LfO will always match in experiments, because neither LfD nor LfO is guaranteed to achieve the global optimum, especially with deep neural networks. \section{Experiments} In this section, we compare the experimental results of LfD and LfO on various Mujoco tasks ranging from toy tasks (\emph{e.g.,} InvertedPendulum-v2) to complicated ones (\emph{e.g.,} Humanoid-v2) in the OpenAI Gym \cite{brockman2016openai}. All of these environments satisfy Assumption \ref{assum:samedynamics} and Lemmas \ref{lemma:dynamics}-\ref{lemma:lagrange}, indicating that they are deterministic robot locomotion tasks. We also provide some conjectures on the ``performance gap'', \emph{i.e.}, the factors that may give rise to it. \subsection{Setup} In this paper, we use OpenAI Mujoco tasks (\emph{e.g.,} InvertedPendulum-v2 and Humanoid-v2) for our experiments, which are continuous deterministic robot locomotion tasks. Specifically, the observation and action dimensions for InvertedPendulum-v2 are $4$ and $1$, respectively. Humanoid-v2 has a 376-dimensional observation space and a 17-dimensional action space. To generate the expert data, we first train an expert policy using the reinforcement learning (RL) algorithm SAC \cite{haarnoja2018soft}; then we execute the policy deterministically in the environment to collect expert trajectories composed of state transition tuples or state-action pairs. Each trajectory consists of 1000 state transitions or state-action pairs.
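\par For concreteness, the collection step can be sketched as follows; a Gym-style interface and a trained deterministic \texttt{expert\_policy} are assumed, and all names are illustrative.
\begin{verbatim}
# Sketch of expert-data collection (illustrative names; a Gym-style env
# and a trained deterministic expert_policy are assumed).
def collect_trajectory(env, expert_policy, horizon=1000):
    # Roll out one trajectory and return both views of the data:
    # (s, a) pairs for LfD/GAIL and (s, s') pairs for LfO/GAIfO.
    state_actions, transitions = [], []
    s = env.reset()
    for _ in range(horizon):
        a = expert_policy(s)
        s_next, _, done, _ = env.step(a)
        state_actions.append((s, a))
        transitions.append((s, s_next))
        s = s_next
        if done:
            break
    return state_actions, transitions
\end{verbatim}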
As for the implementation of GAIfO and GAIL, we follow the implementation of GAIL in the OpenAI Baselines \cite{dhariwal2017baselines} and implement GAIfO accordingly. Following \cite{dhariwal2017baselines}, we use the RL algorithm TRPO \cite{schulman2015trust} as the generator. Commonly used hyper-parameters are as follows: the hidden size for all networks is $(100,100)$ and we use $\tanh$ as the activation function; the learning rate for the discriminator is $3 \times 10^{-4}$ while that for the value network is $1 \times 10^{-3}$; the ratio between the updates of the generator network and the discriminator network is set to $3:1$. We only give the hidden layer sizes of the neural networks since the input and output layer sizes vary across environments. The input of GAIL is defined by $(s,a)$ whereas that of GAIfO is determined by $(s,s')$, which distinguishes the discriminators in GAIL and GAIfO. More details are in the Appendix. \begin{figure*}[htbp] \centering \subfigure[InvertedPendulum-v2]{ \includegraphics[width=0.3\textwidth]{InvertedPendulumA.pdf} } \subfigure[Hopper-v2]{ \includegraphics[width=0.3\textwidth]{HopperA.pdf} } \subfigure[Walker2d-v2]{ \includegraphics[width=0.3\textwidth]{WalkerA.pdf} } \subfigure[HalfCheetah-v2]{ \includegraphics[width=0.3\textwidth]{HalfCheetahAnew.pdf} } \subfigure[Humanoid-v2]{ \includegraphics[width=0.3\textwidth]{HumanoidAnew.pdf} } \subfigure[HumanoidStandup-v2]{ \includegraphics[width=0.3\textwidth]{HumanoidStandupA.pdf} } \caption{Performance of each approach on MuJoCo tasks. Performance is measured by the average episode return, and the x-axis shows the number of time steps of interaction with the environment. Each algorithm is evaluated with 5 random seeds.} \label{learningcurve} \end{figure*} \begin{table*}[h!] \centering \caption{Environments and overall performances of the compared algorithms. The numbers before and after the plus-minus symbol are the average episode reward and the standard error, respectively.} \label{table2} \centering \begin{tabular}{c c c c} \hline Environment & Expert & GAIL & GAIfO\\ \hline InvertedPendulum-v2 & 1000.0$\pm$0.0 & 995.4$\pm$9.2 & 992.7$\pm$13.1 \\ Hopper-v2 & 3491.8$\pm$37.3 & 3229.6$\pm$221.6 & 3251.6$\pm$195.5\\ Walker2d-v2 & 3999.6$\pm$10.5 & 3600.9$\pm$201.2 & 3731.3$\pm$424.8\\ HalfCheetah-v2 & 4993.4$\pm$65.0 & 3270.4$\pm$458.6 & 3381.6$\pm$202.8 \\ Humanoid-v2 & 5600.4$\pm$10.0 & 4764.8$\pm$227.0 & 4896.9$\pm$507.1\\ HumanoidStandup-v2 & 155525.8$\pm$2179.9 & 151093.4$\pm$2672.0 & 152898.4$\pm$1333.7\\ \hline \end{tabular} \end{table*} \subsection{Results} In this subsection, we provide the qualitative and quantitative results of GAIL and GAIfO on MuJoCo tasks, shown in Fig. \ref{learningcurve} and Tab. \ref{table2}. Each task is trained with 5 random seeds, and the episode cumulative reward is employed to evaluate the performance of both methods. All other settings, including the hyper-parameters, are kept the same for fair comparison. More details are presented in the Appendix. In Fig. \ref{learningcurve}, the mean and the standard deviation of the episode cumulative rewards are illustrated with a solid line and a shaded area, respectively, while the numerical results for the expert, GAIL, and GAIfO are listed in Tab. \ref{table2}. From Fig. \ref{learningcurve} and Tab. \ref{table2}, it is clear that (1) both LfD and LfO can achieve expert-level performance at the end of training; (2) LfO achieves performance comparable to LfD, \emph{i.e.}, no performance gap between LfO and LfD is noticed. In addition, we adopt a third-party open-sourced implementation of GAIL and GAIfO to validate their performance \cite{ota2020tf2rl}; its learning curves are similar to ours and are presented in the Appendix.
\par In summary, in deterministic robot tasks, LfO is almost equivalent to LfD, which further confirms our theoretical finding in Theorem \ref{theorem1}. \subsection{Discussion and Suggestions} In this subsection, we aim to provide some conjectures and discussions on why previous studies hold the view that there exists a performance gap between LfD and LfO. As far as we are concerned, two factors may affect the performance of LfO and LfD. \begin{itemize} \item [1)] It is well known that current reinforcement learning strategies are comparatively unstable and require elaborate implementation of the algorithms \cite{henderson2017deep,hutson2018artificial, dasagi2019ctrl}. Theorem \ref{theorem1} and Corollary \ref{corollary1} demonstrate that the optimizing targets for LfD and LfO in deterministic robot environments or robot environments with bounded randomness remain almost the same, which proves that there would be almost no performance gap. However, in practice, many implementation factors may affect the final performance of these algorithms. Evidently, LfO is more difficult than LfD, since it needs to predict the right action from the expert trajectories. Thus, LfO is more likely to converge to a local optimum compared to LfD if some implementation tricks from reinforcement learning or deep learning are not employed. Specifically, the input normalization for the policy and value networks plays a very important role among them. To illustrate the importance of input normalization, we further conduct an ablation experiment by training LfO with and without input normalization. As can be seen in Fig. \ref{discussion}, the performance is clearly boosted with input normalization. Though this technique helps improve the performance, it does not affect the core of LfO. \item [2)] Another possible reason is that the performance of LfO might be affected by the training instability of GANs \cite{gulrajani2017improved, zhang2019consistency}, since the exploration in LfO is more difficult than that in LfD. Suffering from the training problems of GANs (\emph{e.g.,} vanishing gradients, mode collapse, \emph{etc.}), LfO may get stuck at a local optimum and perform poorly. To further confirm this point and take full advantage of LfO, we adopt Spectral Normalization \cite{miyato2018spectral}, which is a commonly used technique to stabilize GAN training. As shown in Fig. \ref{discussion}, after introducing Spectral Normalization, the performance of both LfO and LfD is improved. The introduction of spectral normalization does not change the optimizing target and just helps stabilize the training process. A minimal sketch of both techniques is given after this discussion. \end{itemize}\par In summary, we believe that the performance gap previously reported might be caused by implementation details or training instability rather than by the inverse dynamics disagreement. Our theoretical analysis and experiments suggest that LfO is almost equivalent to LfD. To further promote research, we will open source our code and dataset to help reproducibility.
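\par The following sketch shows the two stabilizers discussed above; it assumes a PyTorch setup, and all names and dimensions are illustrative, not our exact training code.
\begin{verbatim}
# Sketches of input normalization and spectral normalization
# (illustrative names; a PyTorch setup is assumed).
import torch
import torch.nn as nn

class RunningNorm:
    # Input normalization: track a running mean/variance of observations
    # and standardize network inputs with them.
    def __init__(self, dim, eps=1e-8):
        self.mean, self.var, self.count = torch.zeros(dim), torch.ones(dim), eps

    def update(self, batch):
        b_mean = batch.mean(0)
        b_var = batch.var(0, unbiased=False)
        n, b_n = self.count, len(batch)
        delta, total = b_mean - self.mean, n + b_n
        self.mean = self.mean + delta * b_n / total
        self.var = (self.var * n + b_var * b_n
                    + delta**2 * n * b_n / total) / total
        self.count = total

    def __call__(self, x):
        return (x - self.mean) / torch.sqrt(self.var + 1e-8)

# Spectral normalization: wrap each linear layer of the discriminator.
obs_dim = 11  # illustrative; input is (s, s') for GAIfO
disc = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(2 * obs_dim, 100)), nn.Tanh(),
    nn.utils.spectral_norm(nn.Linear(100, 100)), nn.Tanh(),
    nn.utils.spectral_norm(nn.Linear(100, 1)),
)
\end{verbatim}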
\begin{figure}[htbp] \centering \subfigure[Input Normalization]{ \includegraphics[width=0.4\textwidth]{InputNormAnew2.pdf} } \subfigure[Spectral Normalization]{ \includegraphics[width=0.4\textwidth]{SpectralNormAnew.pdf} } \caption{Impact of input normalization and spectral normalization on LfO and LfD. The performance measurement criterion is the same as described in Fig.~\ref{learningcurve}. In (a), we test the impact of input normalization on GAIL and GAIfO in Humanoid-v2, in which GAIL and GAIfO use input normalization in their policy and value networks while GAILw/oIN and GAIfOw/oIN do not; in (b), the learning curves of GAIL and GAIfO against their spectrally normalized versions GAIL-sn and GAIfO-sn in HalfCheetah-v2 are presented.} \label{discussion} \end{figure} \section{Conclusions} In this paper, we delve into the difference between LfD and LfO through two representative algorithms, GAIL and GAIfO, which are among the most commonly used imitation learning algorithms. From the perspective of control theory and the inverse dynamics model, we prove that, for deterministic robot systems, the performance gap between LfO and LfD approaches zero, \emph{i.e.}, the almost equivalence between LfO and LfD is proved. To further relax the deterministic constraint and better adapt to environments in practice, we consider bounded randomness in the robot environment and prove that the optimizing targets for LfD and LfO remain almost the same in this more generalized setting, which paves the way for the application of LfO in the real world. In other words, LfO is almost equivalent to LfD in deterministic robot environments or robot environments with bounded randomness. Extensive experiments on OpenAI MuJoCo tasks are conducted and empirically demonstrate that LfO achieves performance comparable to LfD. Based on our studies, we provide a state-of-the-art implementation of GAIfO, with the code and dataset open-sourced, which is beneficial for the imitation learning community. Our analysis gives a deep insight into imitation learning from observation and should promote the applicability of LfO in real-world robot tasks. Future work will focus on the analysis of the difficulty of finding the right action for given state transitions in LfO. \section{Ethics} Like most imitation learning algorithms, the generative adversarial imitation learning discussed in our paper has the potential to be applied in industrial production and robots for daily life. However, transferring algorithms from artificial intelligence research to real-world applications is not an easy task. Our work extends the possible application areas of imitation learning, and to some extent, our theoretical results mitigate the obstacles to employing imitation learning methods in reality. Since these learning methods do not require humans to intervene, fewer workers will be needed if they are successfully applied in practice. For example, the stable walking of the Atlas robot at Boston Dynamics is achieved through the efforts of many engineers who work on control, planning, and perception. If imitation learning algorithms could be employed to teach Atlas to walk like humans, Boston Dynamics would not need to hire so many engineers. However, on the other hand, imitation learning can help increase industrial productivity and reduce production costs simultaneously, which will create large social value that most people can benefit from. \bibliographystyle{ieee_fullname}
1,116,691,500,593
arxiv
\section{Introduction} General relativity predicts that black holes obey the laws of thermodynamics with the horizon area and the surface gravity playing the roles of entropy and temperature, respectively \cite{Bardeen1973,Bekenstein1973}. We may expect that in a quantum theory of gravitation these thermodynamic quantities such as mass and area are treated as operators. It is therefore reasonable to attempt to ``quantize'' thermodynamics. Another possible application of quantized thermodynamics is to systems where the thermal fluctuations are small, but the quantum fluctuations are not. Thermodynamics can be formulated in terms of contact geometry \cite{Arnold1990}, where the first law is used to define a contact structure and the equations of state pick out a Legendrian submanifold of the thermodynamic phase space. With the aim of quantizing thermodynamics, a quantization procedure for contact manifolds is established in \cite{Rajeev2008a}. Subsequently, a Hamilton-Jacobi formalism for thermodynamics is developed in \cite{Rajeev2008b}. In this formalism one extends a thermodynamic system of $n$ degrees of freedom into an $n$ parameter family; e.g. the ideal gas of fixed particle number can be extended into the van der Waals family with the parameters $a$ and $b$. This family can be described as a hypersurface in the phase space, defined by the vanishing of a function which we take to be the Hamiltonian. Given the Hamiltonian function $F$, one can formulate a Hamilton-Jacobi equation (HJE), the characteristic curves of which correspond to the dynamics generated by $F$. The equations of state can be obtained in principle by solving the HJE. In the same work the Hamilton-Jacobi formalism is applied to black holes of one thermodynamical degree of freedom, i.e. to electrically neutral non-rotating black holes. A negative cosmological constant is introduced to extend the Schwarzschild black hole into the Schwarzschild-AdS family. It is also proposed that the Born-Infeld action be used as a modification of the Einstein-Maxwell equations to describe the family of charged black holes (further modifications can be made to include the rotating ones as well). Following this suggestion we shall apply the Hamilton-Jacobi formalism to charged non-rotating black holes. We first summarize Hamiltonian dynamics in contact geometry and how it is applied to thermodynamics in \cite{Rajeev2008b}. Next we review the non-rotating black hole solutions in the Einstein-Born-Infeld (EBI) theory. We extract the thermodynamical quantities such as the mass and the surface gravity and find the hypersurface in the thermodynamical phase space which describes the EBIAdS family. Lastly, we write down the HJE and discuss its solutions. Throughout we work with units for which $c=G=4\pi\epsilon_{0}=1$. \section{Hamilton-Jacobi Formalism for Thermodynamics} Before describing the formalism that we shall use, we briefly review contact geometry and contact Hamiltonian dynamics. The proofs of the claims stated here and further reading on the subject can be found in \cite{Geiges2009}. \subsection{Contact Hamiltonian Dynamics} A contact structure $\xi$ on a $2n+1$ dimensional manifold $\mathcal{M}$ is a codimension one distribution which is maximally non-integrable. 
In other words it is (locally) the kernel of a 1-form \( \alpha \) such that \begin{equation} \alpha\wedge\left(\mathrm{d}\alpha\right)^{n}\neq0.\label{eq: contact condition} \end{equation} The contact form \( \alpha \) is not unique since \( \ker(f\alpha)=\ker(\alpha) \) if \( f \) is a non-vanishing function. The contact condition (\ref{eq: contact condition}) is, however, independent of the choice of \( \alpha \). For a fixed contact form \( \alpha \) there is a unique vector field \( R_{\alpha} \), called the \emph{Reeb vector field}, which satisfies \( i_{R_{\alpha}}\alpha=1 \) and \( i_{R_{\alpha}}\mathrm{d}\alpha=0 \). If a vector field \( X \) generates a \emph{contactomorphism} \( \mathcal{M}\rightarrow\mathcal{M} \), i.e. a diffeomorphism which preserves the contact structure, \[ \mathcal{L}_{X}\alpha=\mu\alpha \] where \( \mu \) is an arbitrary function, then \( X \) is called a \emph{contact vector field}. If the contact form \( \alpha \) is fixed, there is a one-to-one correspondence between the smooth functions \( F:\mathcal{M}\rightarrow\mathbb{R} \) and the contact vector fields \( X \) on \( \mathcal{M} \), which is given by \begin{itemize} \item \( X\rightarrow F_{X}:=i_{X}\alpha, \) \item \( F\rightarrow X_{F} \) defined as the unique solution of the equations \( i_{X_{F}}\alpha=F \) and \( i_{X_{F}}\mathrm{d}\alpha=R_{\alpha}(F)\alpha-\mathrm{d} F \). \end{itemize} In this context the function \( F \) is called the generating function or the Hamiltonian and \( X_{F} \) the corresponding Hamiltonian vector field. Thus, in close analogy to symplectic geometry, a Hamiltonian function \( F \) defines a dynamics on \( \mathcal{M} \) by the flow of \( X_{F} \). Note however that \[ \mathcal{L}_{X_{F}}F=FR_{\alpha}(F), \] so the Hamiltonian is in general not conserved. But if the initial value of \( F \) is zero, it remains zero. As a final remark we note that if \( L \) is a submanifold such that \( TL\subset\xi \), then \( \dim L\leq n \). Such a submanifold of maximal dimension \( n \) is called a \emph{Legendrian submanifold}. \subsection{Application to Thermodynamics} The phase space \( \mathcal{M} \) of a thermodynamic system of \( n \) degrees of freedom is \( 2n+1 \) dimensional. For concreteness, take a gas with a fixed number of particles. The variables \( U,S,V,T \) and \( p \) can be taken as the coordinates on \( \mathcal{M} \). The first law defines a contact form \[ \alpha=-\mathrm{d} U+T\mathrm{d} S-p\mathrm{d} V, \] whose kernel is the set of directions in which the energy is conserved. The equations of state define an \( n \) dimensional submanifold, all of whose tangent vectors are annihilated by \( \alpha \), i.e. a Legendrian submanifold: \[ U=U(S,V),\quad T=\left(\frac{\partial U}{\partial S}\right)_{V},\quad p=-\left(\frac{\partial U}{\partial V}\right)_{S}. \] Note that once the fundamental relation \( U=U(S,V) \) is given, the remaining \( n \) equations of state follow automatically by the vanishing of \( \alpha \). In general, let \[ \alpha=du+p_{i}\mathrm{d} q^{i} \] be a contact form on the thermodynamic phase space \( \mathcal{M} \) with coordinates \( u \), \( q^{1} \), \( \dots \), \( q^{n} \), \( p_{1} \), \( \dots \), \( p_{n} \)\footnote{By Darboux's theorem one can always find such coordinates for a given contact form \( \alpha \).}.
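\par This construction is easy to verify symbolically for a concrete fundamental relation. The sketch below is purely illustrative: it assumes a toy monatomic-ideal-gas-like relation \( U\propto V^{-2/3}e^{2S/3} \) (all constants set to one) and checks, tautologically by the chain rule, that \( \alpha \) vanishes on the submanifold defined by the equations of state.
\begin{verbatim}
# Symbolic check that the equations of state define a Legendrian
# submanifold of alpha = -dU + T dS - p dV. The fundamental relation
# below is a toy one (constants set to 1), chosen only for illustration.
import sympy as sp

S, V = sp.symbols('S V', positive=True)
U = V**sp.Rational(-2, 3) * sp.exp(sp.Rational(2, 3) * S)

T = sp.diff(U, S)        # equation of state: T = (dU/dS)_V
p = -sp.diff(U, V)       # equation of state: p = -(dU/dV)_S

# Pulled back to the submanifold, alpha has coefficients
# (-dU/dS + T) on dS and (-dU/dV - p) on dV; both must vanish.
assert sp.simplify(-sp.diff(U, S) + T) == 0
assert sp.simplify(-sp.diff(U, V) - p) == 0

print(sp.simplify(p * V - sp.Rational(2, 3) * U))  # 0: pV = 2U/3 here
\end{verbatim}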
A family of substances can be described by allowing the fundamental relation $u=\Phi(q^{1},\dots,q^{n})$ to depend on $n$ parameters\footnote{We \emph{choose} the number of parameters to be $n$ since this is going to allow us to define a Hamiltonian function, see \cite{Rajeev2008b}.} $a_{1},\dots,a_{n}$ as well: \begin{equation} u=\Phi(q^{1},\dots,q^{n};a_{1},\dots,a_{n}).\label{eq: fundamental with parameters} \end{equation} E.g. for the van der Waals family we have $U=U(S,V;a,b)$, where $a$ and $b$ are the van der Waals parameters. Given the fundamental relation (\ref{eq: fundamental with parameters}) and the other $n$ equations of state \[ p_{i}=-\frac{\partial\Phi}{\partial q^{i}}, \] we can eliminate the parameters $a_{1},\dots,a_{n}$ to get a single relation between the $2n+1$ coordinates: \[ F(u,q^{1},\dots,q^{n},p_{1},\dots,p_{n})=0. \] Given such a Hamiltonian $F$ for a family of substances, the equations of state may be recovered by solving the first order PDE \[ F(\Phi,q^{1},\dots,q^{n},-\frac{\partial\Phi}{\partial q^{1}},\dots,-\frac{\partial\Phi}{\partial q^{n}})=0. \] A complete integral of this PDE will depend on $n$ parameters $a_{1},\dots,a_{n}$. The characteristic curves of this PDE are precisely the integral curves of the Hamiltonian vector field $X_{F}$ \cite{Rajeev2008b,Courant1989}. \section{The Einstein-Born-Infeld-AdS Black Hole} The EBI field equations with a cosmological constant $\Lambda$ may be derived from the action \begin{equation} S[e^{a},A]=\frac{1}{16\pi}\int\left(R^{ab}\wedge\star e_{ab}-2\Lambda\star1\right)+\int\mathcal{L}\star1,\label{eq: action} \end{equation} with the Born-Infeld Lagrangian \cite{Born1934} \[ \mathcal{L}=\frac{1}{4\pi\lambda^{2}}\left(1-\sqrt{1-\lambda^{2}X-\lambda^{4}Y^{2}}\right), \] where the quadratic invariants of the electromagnetic field are given by \[ X=\star\left(F\wedge\star F\right),\quad Y=\frac{1}{2}\star\left(F\wedge F\right). \] Here $R^{ab}$ are the curvature 2-forms of the Levi-Civita connection, $e^{a}$ are the orthonormal coframe 1-forms, $e_{a}=\eta_{ab}e^{b}$ and $e_{a_{1}\cdots a_{r}}:=e_{a_{1}}\wedge\cdots\wedge e_{a_{r}}$. The Born-Infeld parameter $\lambda$ is to be seen as a new fundamental constant of nature. Note that in the limit $\lambda\to0$ the Born-Infeld Lagrangian $\mathcal{L}$ becomes the Maxwell Lagrangian $\mathcal{L}\to(8\pi)^{-1}\star(F\wedge\star F)$. The variational field equations for the case with no cosmological constant are derived using the invariant tensor notation in \cite{Dereli2010a}. The derivation with $\Lambda$ is analogous and here we simply note the results. Variation with respect to the coframe $e$ leads to the EBI field equations \begin{equation} -\frac{1}{2}R^{bc}\wedge\star e_{abc}+\Lambda\star e_{a}=8\pi\tau_{a},\label{eq: einstein-born-infeld eqn} \end{equation} with the stress-energy 3-forms \[ \tau_{a}=M\star e_{a}+N\tau_{a}^{\text{(LED)}}, \] where $M=\mathcal{L}-X\partial_{X}\mathcal{L}-Y\partial_{Y}\mathcal{L}$, $N=8\pi\partial_{X}\mathcal{L}$ and $\tau_{a}^{\text{(LED)}}$ are the stress-energy 3-forms of linear (i.e. Maxwell) electrodynamics: \[ \tau_{a}^{\text{(LED)}}=\frac{1}{8\pi}\left(i_{a}F\wedge\star F-F\wedge i_{a}\star F\right). \] Variation with respect to the electromagnetic potential $A$ yields the field equation $\mathrm{d} G=0$ where \[ G=\frac{1}{4\pi\sqrt{\Delta}}\left(\star F+\lambda^{2}YF\right). 
\] with $\Delta:=1-\lambda^{2}X-\lambda^{4}Y^{2}$. The static spherically symmetric solution without the cosmological constant was first found by Hoffmann \cite{Hoffmann1935} and then rediscovered by Demianski \cite{Demianski1986}. The solution in the presence of a cosmological constant was noted in \cite{Fernando2003}. Note that the action they use differs from ours by the absence of the $Y^{2}$ term. However, we shall see below that in the static case one has $Y=0$, hence our results agree. The solution is given by \begin{align*} g & =-f\left(r\right)\mathrm{d} t^{2}+\frac{\mathrm{d} r^{2}}{f\left(r\right)}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right),\\ F & =\frac{Q}{\sqrt{r^{4}+a^{4}}}\mathrm{d} r\wedge\mathrm{d} t, \end{align*} where \begin{align} f\left(r\right) & =-\frac{\Lambda}{3}r^{2}+1-\frac{2M}{r}+\frac{2Q^{2}}{ar}h\left(\frac{r}{a}\right),\label{eq: f(r)}\\ h\left(x\right) & :=\int_{x}^{\infty}\mathrm{d} y\left(\sqrt{y^{4}+1}-y^{2}\right), \end{align} $Q$ is the total electric charge, $a=(\lambda|Q|)^{1/2}$ and $M$ is the total mass\footnote{For the definition of the total mass in an asymptotically (A)dS spacetime, see \cite{Abbott1982}}. \begin{figure} \includegraphics[width=8cm]{h_x.pdf} \caption{Plot of the function $h(x)$ defined in (\ref{eq: f(r)}). It satisfies $h(0)\approx1.24$ and $\lim_{x\to\infty}h(x)=0$.} \end{figure} For simplicity we shall work with a negative cosmological constant $\Lambda=-3l^{-2}$ as in \cite{Rajeev2008b}. Then $f$ can be expanded for $a\ll r$ as \[ f(r)=\frac{r^{2}}{l^{2}}+1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\left[1-\frac{a^{4}}{20r^{4}}+\mathcal{O}\left(\frac{a^{8}}{r^{8}}\right)\right]. \] In particular the spacetime is asymptotically AdS and in the limit $\lambda\to0$ we recover the Reissner-Nordström-AdS (RNAdS) black hole as expected. To investigate the horizon structure we first note that \[ \lim_{r\rightarrow\infty}rf(r)=\infty\quad\text{and}\quad\left.rf(r)\right|_{r=0}=-2M+\frac{2Q^{2}}{a}h(0), \] where $h\left(0\right)\approx1.24$ is finite. We shall limit our attention to the case \begin{equation} 2M<\frac{2Q^{2}}{a}h(0)\quad\text{and}\quad\lambda<2Q,\label{eq: assumptions} \end{equation} which includes the RNAdS black hole. With the assumptions in (\ref{eq: assumptions}), the function $rf(r)$ has exactly one minimum whose position we denote by $r_{0}=r_{0}(\lambda,\Lambda,Q)$. Since $rf(r)$ is positive at $r=0$ and $r=\infty$, it will have no zeros, a double zero or two zeros if $r_{0}f(r_{0})$ is positive, zero or negative, respectively. Hence from (\ref{eq: f(r)}) we see that there is a critical mass \[ 2M_{\text{c}}=\frac{r_{0}^{3}}{l^{2}}+r_{0}+\frac{2Q^{2}}{a}h(r_{0}/a), \] such that (see Fig. \ref{f plot}) if \begin{enumerate} \item $M<M_{\text{c}}$, there is no horizon. In that case there is a naked singularity at $r=0$ where the Kretschmann scalar $\mathcal{K}=2\star(R_{\phantom{a}b}^{a}\wedge\star R_{\phantom{b}a}^{b})$ diverges. \item $M=M_{\text{c}}$, we have an extremal black hole with one horizon. \item $M>M_{\text{c}}$, there are two horizons. \end{enumerate} \begin{figure} \includegraphics[width=8cm]{rf_r.pdf} \caption{Typical plots of $rf(r)$ for the three cases $M<M_{\text{c}}$, $M=M_{\text{c}}$ and $M>M_{\text{c}}$. The number of horizons is different in each case, analogous to the RN black hole. The minimum occurs at $r_0$, which is independent of $M$.} \label{f plot} \end{figure} Thus the situation is completely analogous to the RN case.
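The horizon structure described above is straightforward to check numerically. The following sketch evaluates $h$ by quadrature and counts the horizons for masses below, at and above $M_{\text{c}}$; the parameter values $Q=1$, $\lambda=0.1$, $l=10$ are illustrative choices satisfying (\ref{eq: assumptions}) rather than values taken from the text:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

def h(x):
    # h(x) = int_x^infty (sqrt(y^4 + 1) - y^2) dy, integrand ~ 1/(2 y^2)
    return quad(lambda y: np.sqrt(y**4 + 1.0) - y**2, x, np.inf)[0]

print(h(0.0))                       # ~1.236, i.e. h(0) ~ 1.24

Q, lam, l = 1.0, 0.1, 10.0          # illustrative values obeying the assumptions
a = np.sqrt(lam * Q)

def rf(r, M):                       # r*f(r) with Lambda = -3/l^2
    return r**3 / l**2 + r - 2.0*M + (2.0*Q**2 / a) * h(r / a)

# position of the minimum of r*f(r); note that it is independent of M
r0 = minimize_scalar(lambda r: rf(r, 0.0), bounds=(1e-6, 10.0),
                     method='bounded').x
Mc = 0.5 * (r0**3 / l**2 + r0 + (2.0*Q**2 / a) * h(r0 / a))  # critical mass

for M in (0.9*Mc, Mc, 1.1*Mc):
    print(rf(r0, M))                # + / ~0 / -  <=>  0 / 1 / 2 horizons

M = 1.1 * Mc                        # two horizons, bracketed around r0
print(brentq(lambda r: rf(r, M), 1e-6, r0),
      brentq(lambda r: rf(r, M), r0, 50.0))
\end{verbatim}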
If $l$ is small, it may not be possible to satisfy the condition $M>M_{\text{c}}$ simultaneously with (\ref{eq: assumptions}), but it is possible if $l$ is sufficiently large. In the subsequent discussion we shall assume that $M>M_{\text{c}}$, which should cover the physically relevant cases. We denote the outer horizon radius by $r_{\text{H}}$. The thermodynamical equations of state were given in \cite{Rasheed1997} for the case with $\Lambda=0$ and then in \cite{Dey2004} for a non-zero $\Lambda$. In the following we note these equations and derive from them the HJE. Using (\ref{eq: f(r)}) and the fact that $f(r_{\text{H}})=0$ we can write the mass of the black hole as \begin{equation} 2M=\frac{r_{\text{H}}^{3}}{l^{2}}+r_{\text{H}}+\frac{2Q^{2}}{a}h(r_{\text{H}}/a).\label{eq: m} \end{equation} The surface gravity $\kappa=(1/2)\left.\mathrm{d} f/\mathrm{d} r\right|_{r=r_{\text{H}}}$ is given by \begin{align} \kappa & =\frac{r_{\text{H}}}{l^{2}}+\frac{M}{r_{\text{H}}^{2}}-\frac{Q^{2}}{ar_{\text{H}}^{2}}h(r_{\text{H}}/a)-\frac{Q^{2}}{a^{4}r_{\text{H}}}\left(\sqrt{r_{\text{H}}^{4}+a^{4}}-r_{\text{H}}^{2}\right),\label{eq: kappa} \end{align} and the electrostatic potential on the horizon is \begin{equation} \Phi=\int_{r_{\text{H}}}^{\infty}\frac{Q}{\sqrt{x^{4}+a^{4}}}dx.\label{eq: phi} \end{equation} Introducing the surface area $A=4\pi r_{\text{H}}^{2}$ and using the identity \[ h(x)=\frac{2}{3}\int_{x}^{\infty}\frac{\mathrm{d} y}{\sqrt{y^{4}+1}}-\frac{x}{3}\left(\sqrt{x^{4}+1}-x^{2}\right), \] it is straightforward to verify the first law \[ \mathrm{d} M=\frac{\kappa}{8\pi}\mathrm{d} A+\Phi\mathrm{d} Q, \] which defines our contact structure. Furthermore, from the three equations (\ref{eq: m})-(\ref{eq: phi}) we can eliminate $a$ and $l$ to get a single relation between the variables $M$, $A$, $\kappa$, $Q$ and $\Phi$: \[ 3M-\frac{\kappa A}{4\pi}-2\Phi Q=\sqrt{\frac{A}{4\pi}}. \] Using the first law, we therefore see that the EBIAdS family is described by the hypersurface $F(M,A,Q,p_{A},p_{Q})=0$, where $p_{A}=\kappa/8\pi$ and $p_{Q}=\Phi$ are the momenta conjugate to $A$ and $Q$, with the Hamiltonian \[ F(M,A,Q,p_{A},p_{Q})=3M-2p_{A}A-2p_{Q}Q-\sqrt{\frac{A}{4\pi}}. \] The HJE we get from this Hamiltonian is \begin{equation} 3M-2A\frac{\partial M}{\partial A}-2Q\frac{\partial M}{\partial Q}=\sqrt{\frac{A}{4\pi}}.\label{eq: hje} \end{equation} This is a first order quasi-linear PDE, and it is simple enough that we can do even better than finding a particular complete integral. One can indeed show that the most general solution is \[ 2M=\sqrt{\frac{A}{4\pi}}+\left(\frac{A}{4\pi}\right)^{3/2}u(4\pi Q/A), \] where $u$ is an arbitrary function which must be fixed by a boundary condition. The complete integral corresponding to the actual equation of state (\ref{eq: m}) is given by the choice \[ u\left(x\right)=\frac{1}{l^{2}}+\frac{2x^{3/2}}{\lambda^{1/2}}h\left(\frac{1}{\sqrt{\lambda x}}\right). \] It should be noted, however, that (\ref{eq: m}) is not the only complete integral of the PDE (\ref{eq: hje}) as, e.g., the choice $u(x)=l^{-2}+2x^{3/2}\lambda^{-1/2}$ also yields a complete integral. It is not exactly clear what this non-uniqueness means, but at the very least it shows that one must be careful when using the HJE to obtain the equations of state of a family of substances.
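The claimed general solution can be verified symbolically. The following sympy sketch (a minimal check with illustrative variable names, not part of the derivation above) substitutes the general form with an unspecified function $u$ into the left-hand side of (\ref{eq: hje}) and confirms that the residual vanishes for every $u$:

\begin{verbatim}
import sympy as sp

A, Q = sp.symbols('A Q', positive=True)
u = sp.Function('u')  # arbitrary function, fixed by a boundary condition

s = sp.sqrt(A / (4 * sp.pi))                 # sqrt(A / 4 pi)
M = (s + s**3 * u(4 * sp.pi * Q / A)) / 2    # candidate general solution

lhs = 3*M - 2*A*sp.diff(M, A) - 2*Q*sp.diff(M, Q)
print(sp.simplify(lhs - s))  # prints 0: the HJE holds for any u
\end{verbatim}

The check works because $\sqrt{A/4\pi}$ and $(A/4\pi)^{3/2}u(4\pi Q/A)$ are homogeneous of degree $1/2$ and $3/2$ under $(A,Q)\rightarrow(\lambda A,\lambda Q)$, so Euler's theorem reduces the left-hand side to $\sqrt{A/4\pi}$. \section{Conclusion} To study the thermodynamic HJE for charged black holes we have made the RN black hole into a two parameter family by introducing a (negative) cosmological constant and replacing the Maxwell Lagrangian by the Born-Infeld one.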
Under the assumption that the Born-Infeld parameter $\lambda$ is sufficiently small, the horizon structure of the resulting static black hole is quite similar to that of the RN one. By this we mean that there is a critical mass below which there is no horizon (Fig. \ref{f plot}). An element of the EBIAdS family has two thermodynamical degrees of freedom. Hence its phase space is five dimensional, which can be coordinatized by $M$, $\kappa$, $A$, $\Phi$ and $Q$, and the two parameters $\Lambda$ and $\lambda$ specify the particular element. Using the three equations of state (\ref{eq: m})-(\ref{eq: phi}) we were able to eliminate the parameters $\Lambda$ and $\lambda$ to get a single relation between the phase space variables. This relation defines a hypersurface by the vanishing of a Hamiltonian function and therefore yields an HJE. The HJE (\ref{eq: hje}) we get for the EBIAdS family is quasi-linear and we can find an analytic expression for the most general solution. It is interesting to note that the solution is far more general than the equation of state of the EBIAdS black hole. A whole function must be specified by a boundary condition to get the actual equation of state, not just two constants of integration. As we mentioned, the precise meaning of the existence of these solutions that do not correspond to the actual equation of state is not clear. This may become clearer once it is understood whether and how the HJE is related to the quantization of black holes. In any case it provides a caveat against the use of the Hamilton-Jacobi formalism to determine the equations of state of a family of substances. It should also be remarked that the above is not the only way of extending the RN black hole into a two parameter family; one may think of parameters other than the cosmological constant and the Born-Infeld parameter. In fact, we can introduce $n$ parameters to a system of $n$ degrees of freedom in a completely arbitrary manner. It is possible that there is another choice of extension which is free of this ambiguity. Moreover, the question of finding an HJE for rotating black holes is still open. \begin{acknowledgments} We thank C. Yetişmişoğlu for valuable discussions. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} Hyperparameter Optimization (HPO) is a major challenge for the Machine Learning community. Unfortunately, HPO is not yet feasible for Deep Learning (DL) methods due to the high cost of evaluating multiple configurations. Recently, Gray-box HPO (a.k.a. multi-fidelity HPO) has emerged as a promising paradigm for HPO in DL, discarding poorly-performing hyperparameter configurations after observing the validation error at the lower fidelities of the optimization procedure~\citep{Li2017,Falkner2018,Awad2021,Li2020}. The advantage of gray-box HPO compared to online HPO~\citep{Chen2017,Parker-Holder2020} or meta-gradient HPO~\citep{Maclaurin2015,Franceschi2017,Lorraine2020} is the ability to tune all types of hyperparameters. In recent years, a stream of papers has highlighted the fact that the performance of DL methods is predictable~\citep{Hestness2017}; concretely, the validation error rate is a power law function of the model size or the dataset size~\citep{Rosenfeld2020,Rosenfeld2021}. Such a power law relationship has subsequently been validated in the domain of NLP, too~\citep{Ghorbani2022}. In this paper, we demonstrate that the power-law principle has the potential to be a game-changer in HPO, because we can evaluate hyperparameter configurations in low-budget regimes (e.g. after a few epochs) and then estimate their performance at the full budget using dataset-specific power law models. Concretely, we hypothesize and empirically demonstrate that optimization curves (training epochs versus accuracy, or loss) can be efficiently modeled as simple power law functions. As a result, we introduce Deep Power Law (DPL) ensembles, a probabilistic surrogate for Bayesian optimization (BO) that estimates the performance of a hyperparameter configuration at future budgets using ensembles of deep power law functions. Subsequently, our novel formulation of BO dynamically decides which configurations to pause and which to train incrementally, relying on the performance estimations of the surrogate. We demonstrate that our method achieves the new state-of-the-art in HPO for DL by comparing against 7 strong HPO baselines on 57 datasets of three diverse modalities (tabular, image, and natural language processing). As a result, we believe the proposed method has the potential to finally make HPO for DL practical. Overall, our contributions can be summarized as follows: \begin{itemize} \item We introduce a novel probabilistic surrogate for gray-box HPO based on ensembles of deep power law functions. \item We derive a simple mechanism to combine our surrogate with Bayesian optimization. \item Finally, we demonstrate the empirical superiority of our method against the current state-of-the-art in HPO for Deep Learning, with a large-scale HPO experimental protocol. \end{itemize} \vfill \section{Related Work} \textbf{Multi-fidelity HPO} assumes a method has access to the learning curve of a hyperparameter configuration. Such a learning curve is the function that maps either training time or dataset size to the validation performance. The early performance of configurations (i.e. the first segment of the learning curve) can be used to discard unpromising configurations without waiting for full convergence. Successive halving~\citep{Jamieson2016} is a widely used multi-fidelity method that randomly samples hyperparameter configurations, starts evaluating them, and discards the worst-performing fraction of them upon reaching a predefined budget.
Afterward, the budget of the surviving configurations is increased and the process continues until the maximum budget is reached. Although the method relies only on the last observed value of the learning curve, it is very efficient. In recent years, various flavors of successive halving have been elaborated, including Hyperband~\citep{Li2017}, which effectively runs successive halving in parallel with different settings. A major improvement to Hyperband is replacing random search with a more efficient sampling strategy~\citep{Awad2021,Falkner2018}. However, the only assumption these methods make about the learning curve is that it will improve over time. In contrast, we fit surrogates that exploit a power law assumption on the curves. \textbf{Learning curve prediction} is a related topic, where the performance of a configuration is predicted based on a partially observed learning curve. Typically, the assumptions about the learning curve are much stronger than those described above. The prediction is often based on the assumption that the performance increases at the beginning and then flattens out towards the end. One way to model this behavior is to define a weighted set of parametric functions~\citep{Domhan2015,Klein2017}. Then, the parameters of all functions are determined so that the resulting prediction best matches the observed learning curve. Another approach is to use learning curves from already evaluated configurations and to find an affine transformation that leads to a well-matched learning curve~\citep{Chandrashekaran2017}. A more data-driven approach is to learn the typical learning curve behavior directly from learning curves across different datasets~\citep{Wistuba2020}. Learning curve prediction algorithms can be combined with successive halving~\citep{Baker2018}. In contrast to this line of research, we fit ensembles of power law surrogates for conducting multi-fidelity HPO with Bayesian optimization. \textbf{Scaling laws} describe the performance of deep learning models as a function of dataset size or model size. Concretely, \citet{Hestness2017} show empirically for different data modalities and neural architectures that a power law relationship holds when growing the dataset. Further work confirms this observation and extends it by demonstrating the power law relationship also with regard to the model size~\citep{Rosenfeld2020,Rosenfeld2021,Ghorbani2022}. From a practical angle, \citet{Yang2022} propose to tune hyperparameters on a small-scale model and then transfer them to a large-scale version. In contrast to these papers, we directly use the power law assumption for training surrogates in Bayesian optimization for HPO. \section{Preliminaries} \textbf{Hyperparameter Optimization (HPO)} demands finding the configurations $\lambda \in \Lambda$ of a Machine Learning method that achieve the lowest validation loss $\mathcal{L}^{(\text{Val})}$ of a model (e.g.
a neural network), which is parameterized with $\theta$ and learned to minimize the training loss $\mathcal{L}^{(\text{Train})}$ as: \begin{align} \label{eq:bilevel} \lambda^{*} &:= \argmin_{\lambda \in \Lambda} \;\; \mathcal{L}^{(Val)}\left(\lambda, \theta^{*}\left(\lambda\right) \right), \\ \nonumber & \text{s.t.} \;\;\; \theta^{*}\left(\lambda\right) := \argmin_{\theta \in \Theta} \; \mathcal{L}^{(Train)}\left(\lambda, \theta\right) \end{align} For simplicity we denote the validation loss as our function of interest $f(\lambda)=\mathcal{L}^{(Val)}\left(\lambda, \theta^{*}\left(\lambda\right) \right)$. The optimal hyperparameter configurations $\lambda^{*}$ of Equation~\ref{eq:bilevel} are found via \textbf{an HPO policy} $\mathcal{A}$ (also called an HPO method) that, given a history of $N$ evaluated configurations $H:=\left\{\lambda_i, f\left(\lambda_i\right)\right\}_{i=1}^N$, suggests the $(N+1)$-th configuration to evaluate as $\lambda_{N+1} := \mathcal{A}(H)$, where $\mathcal{A}: \left[\Lambda \times \mathbb{R}_{+}\right]^N \rightarrow \Lambda$. The search for an optimal HPO policy is a bi-objective problem in itself, aiming at (i) finding a configuration out of $N$ evaluations that achieves the smallest validation loss $f(\lambda)$, and (ii) ensuring that the costs of evaluating the $N$ configurations do not exceed a total budget $\Omega$, as shown in Equation~\ref{eq:hpopolicy}. \begin{align} \label{eq:hpopolicy} \argmin_{\mathcal{A}} &\min_{i \in \left\{1,\dots, N\right\}} f\left(\lambda_i = \mathcal{A}\left( H^{(i-1)} \right) \right), \\ \nonumber &\text{where: } \;\;\;\;\;\;H^{(i)} := \begin{cases}\left\{(\lambda_j, f(\lambda_j)) \right\}_{j=1}^{i} & i > 0 \\ \emptyset & i = 0\end{cases} \\ \nonumber &\text{subject to:} \;\;\; \Omega > \sum_{i=1}^N \text{cost}\left(f\left(\lambda_i\right)\right) \end{align} \textbf{Bayesian optimization (BO)} is the most popular type of policy for HPO, due to its ability to balance the exploration and exploitation aspects of minimizing the loss $f$. Technically speaking, BO fits a surrogate $\hat f(\lambda; \phi)$ parametrized with $\phi$ to approximate the observed loss $f(\lambda)$ using the history $H$, as $\phi^{*} := \argmax_{\phi} \mathbb{E}_{\left(\lambda, f(\lambda)\right) \sim p_H} \; p(f(\lambda) \mid \lambda, \phi)$. Afterwards, BO uses an acquisition/utility function $a: \Lambda \rightarrow \mathbb{R}_{+}$ to recommend the next configuration as $\lambda_{N+1}:=\mathcal{A}\left(H^{(N)}\right)=\argmax_{\lambda \in \Lambda} a\left(\lambda; \phi^{*}\right)$. A typical acquisition choice is the Expected Improvement~\citep{Mockus1978}. For a more detailed introduction to BO and HPO, we refer the interested reader to~\citet{Hutter2019}. \textbf{Gray-box (multi-fidelity) HPO} refers to the case where an approximation of the validation loss can be measured at a lower budget $b \in B$, where $B=(0,b^{\text{max}}]$. For instance, in Deep Learning we can measure the validation loss after a few epochs ($0<b\ll b^{\text{max}}$), rather than wait for full convergence ($b=b^{\text{max}}$). Throughout this paper, the term budget refers to a learning curve step. The evaluation of a configuration $\lambda$ for a budget $b$ is defined as $f\left(\lambda, b\right) := \mathcal{L}^{(Val)}\left(\lambda, \theta^{*}\left(\lambda, b\right) \right)$, where $f\left(\lambda, b\right): \Lambda \times B \rightarrow \mathbb{R}_{+}$. The concept of budgets alters the HPO problem definition slightly.
The history of $N$ configurations evaluated at different budgets becomes a set of $N$ triples (config, budget, eval) defined as $H := \left\{\left(\lambda_i, b_i, f\left(\lambda_i, b_i\right)\right)\right\}_{i=1}^{N}$. A gray-box HPO policy is still optimized for Equation~\ref{eq:hpopolicy}; however, the constraint is altered as $\Omega > \sum_{i=1}^N \text{cost}\left(f\left(\lambda_i, b_i\right)\right)$. \section{Power Law Surrogates for Bayesian Optimization} Prior work has demonstrated that the performance of Machine Learning methods as a function of budgets (e.g. dataset size, number of optimization epochs, model size, image resolution) follows a power law relationship~\citep{Rosenfeld2020, Rosenfeld2021}. In this work, we employ this power law dependence between the validation loss and the number of optimization epochs in Deep Learning. We propose a novel gray-box Hyperparameter Optimization method which is based on power law surrogates. We assume that every learning curve $f\left(\lambda, \cdot\right)$ can be described by a power law function defined by $(\alpha,\beta,\gamma)$. Concretely, we define a power law function for the validation loss of a configuration $\lambda$ at a budget $b$ (a.k.a. the number of epochs) as shown in Equation~\ref{eq:powerlaw}. \begin{align} \label{eq:powerlaw} \hat f\left(\lambda, b\right) := \alpha_\lambda + \beta_\lambda \; b^{-\gamma_\lambda}, \;\; \alpha_\lambda, \beta_\lambda, \gamma_\lambda \in \mathbb{R} \end{align} Instead of fitting one separate power law function to each learning curve, we fit a single \textbf{shared power law function} across all configurations by conditioning the power law coefficients $\alpha, \beta, \gamma$ on $\lambda$ using a parametric neural network $g$ that maps a configuration to the power law coefficients of its learning curve as $g: \Lambda \rightarrow \mathbb{R}^3$. The network $g$ has three output nodes, corresponding to the power law coefficients, denoted as $g(\lambda)_\alpha, g(\lambda)_\beta, g(\lambda)_\gamma$. The configuration-conditioned power law surrogate becomes: \begin{align} \label{eq:conditionedpowerlaw} \hat f\left(\lambda, {b}\right) := g(\lambda)_\alpha + g(\lambda)_\beta \; {b}^{-g(\lambda)_\gamma}, \;\; g: \Lambda \rightarrow \mathbb{R}^3 \end{align} Using a history of learning curve evaluations $H := \left\{\left(\lambda_i, b_i, f\left(\lambda_i, b_i\right)\right)\right\}_{i=1}^{N}$, we can train the power law surrogate to minimize the following loss function using stochastic gradient descent: \begin{align} \label{eq:powerlawobjective} \argmin_{\hat f} \; \mathbb{E}_{\left(\lambda, b, f\left(\lambda, {b}\right)\right) \sim p_H} \; \left| f\left(\lambda, {b}\right) - \hat f\left(\lambda, {b} \right)\right| \end{align} BO surrogates need to be probabilistic regression models because the acquisition functions require the posterior variance of the predictions. As a result, we train an ensemble of $K$ diverse surrogates $\hat f^{(1)}(\lambda,b), \dots, \hat f^{(K)}(\lambda,b)$ by initializing each surrogate with different weights and by training with a different sequence of mini-batches, as in Deep Ensembles~\citep{Lakshminarayanan2017}.
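To make the shared surrogate and the objective above concrete, the following PyTorch sketch implements a single ensemble member. The layer sizes (two hidden layers of 128 units, Leaky ReLU, a three-coefficient power law head trained with the L1 loss and Adam at learning rate $10^{-3}$) follow the implementation details in Appendix~\ref{app:dpl_details}; the exact placement of the GLU gating on the $\beta$ and $\gamma$ outputs is one plausible reading of that description, so this should be read as an illustrative sketch rather than the reference implementation:

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPLMember(nn.Module):
    """One ensemble member: maps a configuration lambda to the power law
    coefficients and evaluates f_hat(lambda, b) = alpha + beta * b**(-gamma)."""

    def __init__(self, dim_config: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim_config, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
        )
        # one unit for alpha, plus two gated units each for beta and gamma
        self.head = nn.Linear(hidden, 5)

    def forward(self, config, budget):
        out = self.head(self.body(config))
        alpha = out[:, 0]
        beta = F.glu(out[:, 1:3], dim=-1).squeeze(-1)   # GLU gating on beta
        gamma = F.glu(out[:, 3:5], dim=-1).squeeze(-1)  # GLU gating on gamma
        return alpha + beta * budget.pow(-gamma)

# L1 objective of the equation above on a mini-batch (configs, budgets,
# targets) drawn from the history H:
#   model = DPLMember(dim_config)
#   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
#   loss = (model(configs, budgets) - targets).abs().mean()
#   opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}

The ensemble is then obtained by instantiating $K$ such members with different random initializations and mini-batch orders.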
The posterior mean $\mu$ and the posterior variance $\sigma^2$ of the power law ensemble are trivially computed as: \begin{align} \label{eq:meanvar} \mu_{\hat f}(\lambda,{b}) &= \frac{1}{K} \sum_{k=1}^K \hat f^{(k)}(\lambda,{b}) \;\;\;\; \\ \nonumber \sigma^2_{\hat f}(\lambda,{b}) &=\frac{1}{K} \sum_{k=1}^K \left( \hat f^{(k)}(\lambda,{b}) - \mu_{\hat f}(\lambda,{b}) \right)^2 \end{align} A commonly used acquisition function in BO is Expected Improvement (EI), which incorporates both the mean and the uncertainty of the predictions, trading off exploration against exploitation. Consequently, in our work, we use the EI acquisition with the posterior mean and variance estimated at the full budget $b^{\text{max}}$. We briefly define the acquisition function in Equation~\ref{eq:acquisition}: \begin{align} \label{eq:acquisition} \lambda^{\text{next}} &:= \argmax_{\lambda \in \Lambda} \;\; \text{EI}\left(\lambda, b^{\text{max}}|H\right) \\ \nonumber \operatorname{EI}(\lambda, b^{\text{max}}|H)&=\mathbb{E}\left[\max\left\{f^{\text{best}}\left(b^{\text{max}}\right) - \hat f(\lambda,{b^{\text{max}}}), 0\right\}\right] \end{align} where the expectation is taken over the predictive distribution of the ensemble, and $f^{\text{best}}\left(b^{\text{max}}\right)$ corresponds to the best observed loss for any budget $b' \le b^{\text{max}}$ from the history $H$. However, after selecting a configuration with our variant of the EI acquisition, we do not naively run it until convergence. Instead, we propose a novel multi-fidelity strategy that advances the selected $\lambda^{\text{next}}$ of Equation~\ref{eq:acquisition} by a small budget of $b^{\text{step}}$, e.g. 1 epoch of training. Therefore, the selected $\lambda^{\text{next}}$ will be evaluated at $b^{\text{next}}$ as defined in Equation~\ref{eq:steps}. Notice that our proposed strategy also covers new configurations with no learning curve evaluations in $H$. \begin{align} \label{eq:steps} b^{\text{next}} := \begin{cases} b^{\text{step}}, & \nexists \lambda^{\text{next}}: \left(\lambda^{\text{next}}, \cdot, \cdot\right) \in H \\ b^{\text{step}}+\max\limits_{\left(\lambda^{\text{next}}, b, \cdot \right) \in H} b, & \text{otherwise} \\ \end{cases} \end{align} \input{Plots/cifar10_motivation.tex} \input{Plots/cifar10_different_schedulers.tex} \section{A Proof-of-Concept Example} To visually demonstrate the power law surrogate, we created a 1-dimensional search space where we train a Preact ResNet~\citep{he2016identity} on the CIFAR-10 dataset~\citep{krizhevsky2009learning}. We generate learning curves by training the model with different dropout hyperparameter values $\lambda \in [0.1, 0.85]$ for $50$ epochs, with a cosine annealed learning rate and an initial value of $10^{-3}$. We train our power law surrogate on a subspace of the hyperparameter search space corresponding to \{0.2, 0.45, 0.7\}. We use the full validation curves for the aforementioned hyperparameter subspace, except for the hyperparameter configuration corresponding to 0.45, where we use only a subset of the learning curve. As shown in Figure~\ref{fig:cifa10_power_law_modelling}, our power law surrogate fits the training data well and generalizes correctly across different hyperparameter configurations. Furthermore, the surrogate has a low predictive uncertainty at the evaluated hyperparameter configurations (circle points), and a higher uncertainty in the regions of the non-evaluated configurations.
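Before turning to the scheduler variants of this example, we note that the selection rule of Equations~\ref{eq:acquisition} and~\ref{eq:steps} admits a compact implementation. The sketch below assumes a finite candidate pool and a Gaussian approximation $\mathcal{N}(\mu_{\hat f}, \sigma^2_{\hat f})$ of the ensemble prediction at $b^{\text{max}}$, which yields the usual closed form of EI for minimization; the function and variable names are illustrative:

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # closed-form EI for minimization under a Gaussian N(mu, sigma^2)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def select_next(mu, sigma, f_best, budget_so_far, b_step=1):
    """mu, sigma: ensemble mean/std at b_max for each candidate (Eq. meanvar);
    budget_so_far[i]: largest budget of candidate i observed in H (0 if the
    candidate has not been started yet). Returns the index of the
    configuration to advance and the budget at which to evaluate it next."""
    i = int(np.argmax(expected_improvement(mu, sigma, f_best)))
    return i, budget_so_far[i] + b_step
\end{verbatim}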
Lastly, we repeated the experiment above with various learning rate schedulers, such as linear decay and cosine annealing (both with and without restarts), and illustrate the respective learning curves in Figure~\ref{fig:cifa10_different_schedulers}. We observe that power law functions can model the learning curves well even in the presence of learning rate schedulers. \section{Experimental Protocol} In our experiments, we standardize the data by performing min-max scaling for our method and each included baseline. If a baseline has a specific preprocessing protocol, we do not apply min-max scaling but we apply the protocol as suggested by the authors. The benchmarks do not support a common evaluation metric for configurations (i.e. the function $f$). As a consequence, the evaluation metric for LCBench is the balanced accuracy, for TaskSet the loss, and for PD1 the accuracy. Moreover, the benchmarks do not offer learning curves with a common step size. For LCBench and PD1, one step corresponds to one epoch, while for TaskSet one step corresponds to 200 batches. The HPO budget is defined as the maximum number of steps needed to fully evaluate 20 hyperparameter configurations. In that context, one unit step of the HPO budget means training a particular configuration for one more optimization step (e.g. 200 batches in TaskSet or 1 epoch in LCBench). In the following experiments, we report the regret of the best-found configuration: \begin{align} R &= f^{\text{best}}\left(b^{\text{max}}\right) - f^{\text{oracle}}\left(b^{\text{max}}\right) \end{align} where the oracle is given as $f^{\text{oracle}}\left( b^{\text{max}}\right) := \min \left\{ f\left(\lambda, b\right) \; | \; \left(\lambda, b, f\left(\lambda, b\right)\right) \in H_D \land b \leq b^{\text{max}}\right\}$, and $H_D$ corresponds to the set of all the exhaustively-evaluated hyperparameter configurations' performances on a dataset $D$. If the oracle configuration is not known in advance for the search space, then $H_D$ can be replaced with the history $H$ at the end of the HPO procedure. The only difference between $f^{\text{best}}$ and $f^{\text{oracle}}$ is that the former only considers the history up to the HPO step for which we are reporting the results. In short, the regret is the difference in the evaluation metric performance between the best hyperparameter configuration found during optimization and the best possible hyperparameter configuration (oracle) on the dataset (in a minimization setting). On a dataset level, we report the average regret across 10 repetitions with different seeds. To be able to aggregate results over datasets, we report the averaged normalized regret. As normalization, we divide the regret by the difference between the performances of the best and the worst hyperparameter configuration on a dataset. Then we compute the mean of the normalized regrets across all the datasets of a benchmark. Moreover, in the experiments that report the average normalized regret over time, we provide results over normalized wall clock time. The wall clock time includes both the method's overhead (i.e. training the surrogate $\hat f$ and selecting the next hyperparameter configuration to evaluate) and the time taken to evaluate the selected hyperparameter configuration (i.e. evaluating $f$). Since the methods have different run times, we normalize the individual times by the time it took Random Search (the fastest non-model-based method) to complete the HPO optimization process.
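As a small illustration of this protocol, the normalized regret on a single dataset can be computed as follows (the array shapes are assumptions made for the sketch):

\begin{verbatim}
import numpy as np

def normalized_regret(best_so_far, f_oracle, f_worst):
    """best_so_far: (n_seeds, n_steps) array with the incumbent loss f_best
    after every HPO step on one dataset (minimization setting). Returns the
    per-step regret averaged over seeds and normalized by the dataset's
    best-worst performance gap, ready for cross-dataset aggregation."""
    regret = best_so_far - f_oracle           # R = f_best - f_oracle
    return regret.mean(axis=0) / (f_worst - f_oracle)
\end{verbatim}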
To provide a fair any-time comparison, we report results until the time it took Random Search to evaluate 20 hyperparameter configurations. Furthermore, when reporting the learning curve (LC) length fraction, we mean the fraction of the total learning curve length. LCBench and TaskSet have LCs of a fixed length for all datasets, corresponding to 51 epochs for LCBench and 50 epochs for TaskSet. In contrast, PD1 has varying LC lengths for different datasets. In our experiments, all methods start with a history $H$ of 1 randomly sampled hyperparameter configuration, evaluated for 1 step/epoch in the case of gray-box techniques (Hyperband, BOHB, DEHB, SMAC, ASHA, Dragonfly; descriptions in Section~\ref{subsec:baselines}), or for the full budget for the black-box technique (Random Search). We ran experiments on a CPU cluster, where every node contains two Intel Xeon E5-2630v4 CPUs with 20 CPU cores running at 2.2 GHz. During the optimization procedure, DPL is only trained from scratch during the first few iterations; for the rest of the HPO steps the weights are only refined, with a restart from scratch whenever the convergence of the surrogate parameter updates stagnates. For more details regarding the settings of our method, we refer the reader to Appendix~\ref{app:dpl_details}. Our implementation of DPL is publicly available.\footnote{\url{https://github.com/releaunifreiburg/DPL}} \subsection{Benchmarks} \label{subsec:benchmarks} \textbf{LCBench:} A benchmark that features 2,000 hyperparameter configurations that parametrize the architecture of simple feedforward neural networks, as well as the training pipeline~\citep{ZimLin2021a}. The benchmark features 7 numerical hyperparameters and 35 different datasets from the AutoML benchmark~\citep{gijsbers2019open}. \textbf{PD1:} A deep learning benchmark~\citep{wang2022pre} that consists of recent DL (including Transformer) architectures run on large vision datasets such as CIFAR-10, CIFAR-100, and ImageNet, as well as statistical modeling corpora and protein sequence datasets from bioinformatics. Every search space includes varying learning curve lengths, ranging from 5 to 1414, and a different number of evaluated hyperparameter configurations, ranging from 807 to 2807. The search space includes hyperparameter configurations that parametrize the learning rate, the learning rate scheduler, and the momentum. \textbf{TaskSet:} A benchmark that features different optimization tasks evaluated in 5 different search spaces \citep{Metz2020}. For our work, we focus on the Adam8p search space, which is among the largest search spaces in the benchmark with 1000 hyperparameter configurations. Every hyperparameter configuration features 8 continuous hyperparameters. The hyperparameters control the learning rate, the learning rate schedulers, and the optimizer. For variety among our benchmarks, we focus on 12 NLP tasks. For a more detailed explanation of the benchmarks, we refer the reader to Appendix~\ref{app:benchmarks}. \subsection{Baselines} \label{subsec:baselines} \textbf{Random Search:} Randomly samples hyperparameter configurations for the largest possible budget. \textbf{Hyperband:} Uses multiple brackets with different trade-offs between the number of sampled configurations and the initial training budget~\citep{Li2017}, applying Successive Halving (SH)~\citep{Jamieson2016} within every bracket. \textbf{ASHA:} An asynchronous version of SH~\citep{li2018massively} that does not wait for all configurations to finish in an SH bracket before starting the next one.
\textbf{BOHB:} An extension of Hyperband that uses TPE~\citep{bergstra2011algorithms} to sample the initial hyperparameter configurations of a bracket~\citep{Falkner2018}. \textbf{DEHB:} Modifies Hyperband by using evolutionary strategies to sample the initial hyperparameter configurations~\citep{Awad2021}. \textbf{SMAC:} Extends Hyperband but uses random forests to sample the initial hyperparameter configurations for a bracket~\citep{lindauer-jmlr22a}. \textbf{Dragonfly:} We use the Dragonfly library~\citep{Kandasamy2020} to compare against BOCA~\citep{Kandasamy2017}, a multi-fidelity method that uses Gaussian processes to predict the next hyperparameter configuration to evaluate and the fidelity at which it should be evaluated. For all the baselines, we use their official public implementations. We provide additional details in Appendix~\ref{app:baselines}. \section{Research Hypotheses and Experiments} \paragraph{Hypothesis 1:} \textit{The power law assumption improves the quality of learning curve forecasting.} \input{Plots/conditioned_powerlaw_correlations} \input{Plots/power_law_performance_epochs} \input{Plots/power_law_diagrams} In this experiment, conducted on the LCBench benchmark, we evaluate the predictive performance of forecasting models that, given a fraction of the observed learning curve, estimate the remaining unobserved segment of the curve. The results of Figure~\ref{fig:conditioned_pl_correlations} compare three different forecasting models, concretely, neural networks (NN), Gaussian Processes (GP), and Power Law functions (PL). For the three variants (PL, NN, GP) we fitted one model on every learning curve of each hyperparameter configuration (i.e. given $b$ on the x-axis, estimate one $\hat f(b)$ separately for every $\lambda$). For the other two variants (DPL and Cond NN) we fit a single forecasting model for all configurations, by conditioning the surrogate on the configuration (i.e. given $b$ and $\lambda$, estimate $\hat f\left(\lambda, b\right)$). The purpose of the experiment is to assess whether a power law function regressor leads to superior predictive accuracy, compared to generic forecasting models, such as neural networks, or Gaussian processes. The evaluation metric of the experiment highlighted in Figure~\ref{fig:conditioned_pl_correlations} is the rank correlation between the estimated performances at the end of the learning curve and the true performances. We notice that although a Power Law regressor has significantly fewer parameters than a neural network (3 vs. 288 parameters), PL still achieves higher predictive performance than NN. Furthermore, our conditioning of the power law function on the hyperparameter configuration improves the predictive quality even further, as the difference between DPL and PL demonstrates. Lastly, we refer the reader to Appendix~\ref{app:plots}, where we provide an analysis of the distributions of the absolute error between the DPL predictions and the ground truth values over the different LC length fractions, showing that DPL not only preserves the ranks but also accurately predicts the final performance. Based on the results, we consider Hypothesis 1 to be validated and conclude that \textbf{DPL is accurate in terms of learning curve forecasting}.
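For intuition on how such a comparison can be set up, the sketch below implements the per-curve PL baseline of this experiment: it fits Equation~\ref{eq:powerlaw} independently to the observed prefix of each learning curve and rank-correlates the extrapolated final values with the ground truth. The array shapes and the initialization heuristic are assumptions of the sketch, not details taken from our experimental code:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

def power_law(b, alpha, beta, gamma):
    return alpha + beta * b ** (-gamma)

def final_rank_correlation(curves, frac=0.2):
    """curves: (n_configs, T) array of validation losses. Fit on the first
    frac of every curve, extrapolate to the final step, and rank-correlate
    the predictions with the true final performances."""
    n, T = curves.shape
    t_obs = max(3, int(frac * T))
    budgets = np.arange(1, T + 1, dtype=float)
    preds = []
    for y in curves:
        try:
            params, _ = curve_fit(power_law, budgets[:t_obs], y[:t_obs],
                                  p0=(y[t_obs - 1], y[0] - y[t_obs - 1], 0.5),
                                  maxfev=5000)
        except RuntimeError:          # fall back to the last observed value
            params = (y[t_obs - 1], 0.0, 1.0)
        preds.append(power_law(budgets[-1], *params))
    return spearmanr(preds, curves[:, -1]).correlation
\end{verbatim}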
\input{Plots/power_law_time_performances} \input{Plots/power_law_efficiency} \paragraph{Hypothesis 2:} \textit{Our method DPL achieves state-of-the-art results in HPO.} In Figure~\ref{fig:power_law_performance_epochs}, we show the performance of the considered methods over the HPO budget, where DPL manages to outperform all the rival baselines. In the case of LCBench, DPL quickly finds well-performing hyperparameter configurations compared to the competitor methods and continues to discover even better configurations until the HPO process ends. Furthermore, we observe the same trend with TaskSet and PD1, where after ca. 25\% of the HPO budget, our method DPL converges to a better regret compared to the baselines and increases its lead until the HPO budget is exhausted. For a detailed overview of the performances of all methods on all individual datasets, we point the reader to Appendix~\ref{app:plots}. In addition, Figure~\ref{fig:power_law_performance_diagrams} provides the critical difference diagrams of the per-dataset regret ranks at 50\% and 100\% of the HPO budget. Our method DPL outperforms all baselines in 5 out of 6 cases (in 4 of which with a statistically significant margin), while being second best only at 50\% of the HPO budget on the PD1 benchmark. Lastly, we analyze the performance of DPL over time in Figure~\ref{fig:power_law_performance_time}. As can be observed, DPL manages to outperform the competitors even when the method's overhead time is included, showing that the overhead of DPL (i.e. fitting the surrogate and running the acquisition) is negligible in terms of the quality of the HPO results. For more detailed information regarding the DPL overhead time, we refer to Appendix~\ref{app:dpl_overhead}. TaskSet is not included in Figure~\ref{fig:power_law_performance_time} since the benchmark does not offer runtimes. Given the results, we conclude that Hypothesis 2 is validated and that \textbf{DPL achieves state-of-the-art performance in HPO}. \textbf{Hypothesis 3:} \textit{DPL explores the search space more efficiently compared to the baselines.} We conduct further analyses to understand the source of the efficiency of DPL versus the baselines. As a result, we analyze two important aspects: the quality of the evaluated configurations, as well as the exploration capability of our gray-box HPO. Initially, we measure what fraction of the top 1\% configurations (ranked by accuracy) our method can discover. Figure~\ref{fig:power_law_efficiency} (left) shows that until convergence our method can discover significantly more top configurations compared to the baselines. The middle plots of Figure~\ref{fig:power_law_efficiency} show the average regret for each configuration promoted to the respective budget. According to the plot, DPL is more efficient and assigns the budget only to configurations with lower regret compared to the other methods. The precision and regret plots demonstrate that the quality of the evaluated configurations is considerably better than that of all baselines, therefore giving our method a significant lift in the performance rank. Last but not least, the right plot shows the percentage of configurations that were performing poorly in an earlier epoch (i.e. accuracy-wise in the bottom $2/3$ of configurations up to the epoch indicated on the x-axis) but performed better at later epochs (i.e. at the top $1/3$ of configurations). Furthermore, we added a line labeled "Baseline", which represents the fraction of such previously poor-performing configurations among all configurations.
Such behavior is often observed with learning curves; for instance, strongly regularized networks converge slowly. For the same analysis regarding the PD1 benchmark, we point the reader to Appendix~\ref{app:plots}. The results indicate that our method explores the initially unpromising configurations well, giving them a chance through the uncertainty estimation of our ensemble and the respective Bayesian optimization mechanism. The results validate Hypothesis 3 and confirm that \textbf{DPL explores the search space more efficiently.} \section{Conclusions} In this work, we introduce Deep Power Law (DPL), a probabilistic surrogate based on an ensemble of power law functions. The proposed surrogate is used within a novel gray-box HPO method based on Bayesian optimization. In contrast to prior work, we exploit scaling laws for estimating the performance of Deep Learning models. Through extensive experiments comprising 7 baselines, 57 datasets, and search spaces of diverse deep learning architectures, we showed that DPL outperforms strong HPO baselines for DL by a large margin. As an overarching contribution, we advanced the state-of-the-art in the important field of HPO for Deep Learning. \section{Limitations} Our proposed ensemble of power law functions sets the new state-of-the-art in gray-box HPO and highlights the efficiency of modeling learning curves with a power law assumption. However, we believe that further research is needed to calibrate the power law model for the beginning and the end parts of the learning curves. Recent work highlighted that the error rate at very small budgets (e.g. after a few mini-batches) does not follow a power law~\citep{Rosenfeld2021}. Contrary to the common perception, we observed that the uncertainty estimation arising from the Deep Ensemble approach~\citep{Lakshminarayanan2017} is suboptimal compared to standard BO surrogates such as Gaussian Processes. In addition, training an ensemble incurs additional computational cost, due to the necessity of training multiple power law models. In the future, we plan to investigate the combination of power laws with Gaussian Processes. \section{Baselines} \label{app:baselines} \textbf{Random Search:} We implemented random search by randomly sampling hyperparameter configurations from the benchmarks with the maximal budget. \textbf{Hyperband, BOHB, LCNet:} We use version 0.7.4 of the HpBandSter library as a common codebase for all 3 baselines~\footnote{\url{https://github.com/automl/HpBandSter}}. For the last approach mentioned (LCNet), despite heavy hyperparameter tuning of the method, we could not get stable results across all the benchmarks and hence dropped the method from our comparison. \textbf{ASHA:} For the implementation of ASHA we use the public implementation from the optuna library, version 2.10.0. \textbf{DEHB:} We use the public implementation offered by the authors~\footnote{\url{https://github.com/automl/DEHB/}}. \textbf{MF-DNN:} In our experiments we used the official implementation from the authors~\footnote{\url{https://github.com/shib0li/DNN-MFBO}}. However, the method crashes, which prevents obtaining full results on all benchmarks. \textbf{SMAC:} For our experiments with SMAC we used the official code base from the authors~\footnote{\url{https://github.com/automl/SMAC3}}. \textbf{Dragonfly:} We use version 0.1.6 of the publicly available Dragonfly library. For all the multi-fidelity methods considered in the experiments, we use the same minimal and maximal fidelities.
In more detail, for the LCBench, TaskSet and PD1 benchmarks we use a minimal fidelity of 1 and a maximal fidelity equal to the maximum budget. \section{Benchmarks} \label{app:benchmarks} \paragraph{LCBench:} We use the official implementation as the interface for the LCBench benchmark~\footnote{\url{https://github.com/automl/LCBench}}. As suggested by the authors, we use the benchmark information starting from the second step and we skip the last step of the curve since it is a repeat of the preceding step. \paragraph{TaskSet:} The TaskSet benchmark features 1000 diverse tasks. We decide to focus on only 12 NLP tasks from the TaskSet benchmark to add variety to our entire collection of datasets. We limit the number of included tasks due to our limited compute budget, as we are unable to run experiments on the entire suite of tasks offered in TaskSet. TaskSet features a set of 8 hyperparameters, consisting of i) optimizer-specific hyperparameters, such as the learning rate, the exponential decay rates $\beta_1$ and $\beta_2$, and Adam's constant for numerical stability $\varepsilon$, ii) hyperparameters that control the linear and exponential decay schedulers for the learning rate decay, and lastly iii) hyperparameters that control the L1 and L2 regularization terms. Every hyperparameter in TaskSet except $\beta_1$ and $\beta_2$ is sampled logarithmically. \paragraph{PD1:} We use the synetune library~\citep{salinas2022syne} as our interface to the PD1 benchmark. From the benchmark, we only include datasets that have a learning curve of length greater than 10. We furthermore only include datasets that have a learning curve length lower than or equal to 50, to have a fair comparison between all benchmarks by having approximately 20 full function evaluations. PD1 features 4 numerical hyperparameters, $lr\_initial\_value$, $lr\_power$, $lr\_decay\_steps\_factor$ and $one\_minus\_momentum$, where $lr\_initial\_value$ and $one\_minus\_momentum$ are log sampled. The learning rate decay is applied based on a polynomial schedule, with its hyperparameters taken from the search space. \section{Modeling} \label{app:dpl_modeling} \input{Tables/power_law_models.tex} To investigate the modeling aspect of our DPL surrogate, we consider different formulations for the ensemble members of our surrogate, as shown in Table~\ref{app:surrogate_models}. Initially, we consider Candidate 1, which can handle shifts in the learning curve by introducing $d$. Furthermore, we consider a more complex version, Candidate 2, which additionally allows scaling the budget by introducing the variable $e$. Lastly, we consider Broken Laws~\citep{brokenlaws}, which can handle break points in the power law curve; we use a version that can handle one break point, since the authors of the method suggest it as a sufficient formulation to approximate the majority of cases. We run the DPL surrogate with every individual formulation on the LCBench benchmark to investigate the performance of each formulation. Figure~\ref{app:dpl_modeling} presents the results: our chosen surrogate formulation, the simplest of all, outperforms all the other DPL formulations in the LCBench benchmark experiment. The DPL variant using the Candidate 1 formulation does not manage to outperform all competitor methods, while the variant with the Candidate 2 formulation manages to outperform all rival methods, although only by a marginal difference.
The DPL variant with the Broken Law formulation does manage to outperform all the rival baselines by a considerable margin; however, it still performs worse than the formulation we use for the surrogate. Lastly, we would like to point out that the alternative power law formulations are more difficult to optimize since they are prone to diverging and falling into a dead state for certain combinations of parameter values, most commonly through division by zero or taking the root of a negative number. \section{Plots} \label{app:plots} \input{Plots/mean_relative_error_fractions} \input{Plots/power_law_efficiency_pd1.tex} \input{Plots/lcbench_dataset_epoch_performances.tex} \input{Plots/lcbench_dataset_epoch_performances_2.tex} \input{Plots/taskset_dataset_epoch_performances.tex} \input{Plots/pd1_dataset_epoch_performances.tex} \section{Continuous Search Space} \label{app:dpl_continous_search_space} To study the modeling capabilities of Deep Power Law (DPL) in a continuous search space, we evaluate it in a Hyperparameter Optimization (HPO) task for finetuning. We choose the EfficientNetV2~\citep{efficientnetv2_github} model as a strong baseline, with a hyperparameter search space that follows the authors' experimental setup~\citep{efficientnetv2} of transfer from ImageNet~\citep{imagenet} to the CIFAR10 dataset~\citep{cifar10}. \begin{wraptable}{!tr}{0.4\textwidth} \centering \footnotesize \begin{adjustbox}{width=0.4\textwidth}{ \begin{tabular}{@{}lrrr@{}} \toprule \textbf{HP} & \textbf{Values} & \textbf{Baseline} & \textbf{DPL} \\ \midrule learning rate & $[10^{-5}, 10^{-2}]$ & $5 \cdot 10^{-4}$ & $10^{-4}$ \\ weight decay & $[0, 10^{-1}]$ & $10^{-5}$ & $10^{-7}$ \\ \bottomrule \end{tabular} }\end{adjustbox} \caption{\label{app:finetune_search_space} Hyperparameter search space for the finetuning task, along with the baseline hyperparameters and the hyperparameters found by DPL.} \vspace{-0.5cm} \end{wraptable} \paragraph{Baseline:} The lightweight variant EfficientNetV2-b0 is finetuned for $15$ epochs using the RMSprop optimizer with a learning rate of $5 \cdot 10^{-4}$ (warmstarted from $10^{-6}$ for a single epoch), $10^{-5}$ weight decay, and no momentum. Additionally, the dropout rate is set to $10^{-6}$ and the model exponential moving average decay to $0.9996$. The batch size is set to $64$ for the training phase and $8$ for the validation phase. We reproduce results using the timm library~\citep{rw2019timm}. The average validation accuracy across 5 random seeds is $95.9 \pm 0.2\%$. \paragraph{Experimental setup:} We choose the two most significant hyperparameters, the learning rate and the weight decay, and construct a search space comprising the baseline hyperparameter values (Table~\ref{app:finetune_search_space}). To simulate the continuous search space for the acquisition function, we take 100 equally-sized steps on a log interval from the lower bound to the upper bound of each dimension, effectively constructing a search space consisting of $10^4$ possible hyperparameter configurations. \paragraph{Discussion:} First and foremost, the aim of this experiment is to investigate the behavior that DPL exhibits in a continuous search space, since a possible failure mode would be to never accumulate a learning curve for any individual configuration, because the model keeps evaluating new high-uncertainty configurations in a good region.
Secondly, we aim to show the applicability of our method in finding, within a vast search space, good hyperparameters that yield a matching or higher performance compared to the baseline. We set an HPO budget of 10 full function evaluations (150 epochs). The EfficientNetV2-b0 model finetuned using hyperparameters optimized by DPL achieves a high performance of $97 \pm 0.2 \%$, outperforming the baseline. Therefore, we conclude that the DPL model can be effectively applied to an HPO task with a continuous search space. To further analyze DPL predictions, in Figure~\ref{fig:finetune_task} (left), we show learning curves of configurations suggested by our model. We observe the efficiency of DPL, which allocates the budget only to configurations with a strong final performance, not continuing the poor hyperparameter settings based on its learning curve predictions. To better visualize the exploratory capabilities of DPL, we show the configuration promotion schedule in Figure~\ref{fig:finetune_task} (right). \begin{figure}[ht] \centering \includegraphics[width=0.4\columnwidth]{Figures/appendix_c/rs_vs_dpl.pdf} \includegraphics[width=0.4\columnwidth]{Figures/appendix_c/config_promotion.pdf} \caption{ \textbf{Left:} Learning curves of configurations selected by DPL to be trained until the end, showing that only the well-performing configurations are exploited. \textbf{Right:} Each colored dot represents a distinct hyperparameter configuration explored by the model, and each line represents a promotion to a higher budget. We observe that the model initially queries many configurations for only one epoch, promoting the best-seen configuration only after 2.5 full function evaluations (which equates to around 30 configurations explored at the minimum budget). Later in the optimization, DPL allocates resources to the well-performing configurations, showcasing early-stopping termination capabilities.} \label{fig:finetune_task} \end{figure} \section{DPL Overhead} \label{app:dpl_overhead} \input{Tables/power_law_times.tex} To investigate the efficiency of DPL in terms of method runtime, we provide results in Table~\ref{app:power_law_times} regarding the time it takes DPL to pass a batch of examples through the surrogate on the different benchmarks. As can be observed, the runtime overhead of our method is negligible. We would like to point out that all the provided results are measured on the CPU, so a considerable speed-up can be expected when running on a GPU. \section{Implementation Details} \label{app:dpl_details} For our method, we use a 2-layer feedforward neural network with 128 units per layer. We use Leaky ReLU as our non-linearity. Our network has 3 output units, which are then combined to yield the power law output. We apply the GLU non-linearity activation only on the $\beta$ and $\gamma$ output units. We use the L1 loss to train our network, coupled with Adam with an initial learning rate of $10^{-3}$. We train every network of our ensemble for 250 epochs only at the beginning of the HPO optimization process, for a total of 10 iterations (a heuristic for having a decent sample size for the initial training of the weights), where the weights are initialized randomly at the beginning of every HPO iteration. Next, we continuously refine the model for 20 epochs every HPO iteration.
If learning stagnates (the performance does not improve) for more than the LC length plus a buffer of $0.2 \cdot$ LC length, the training procedure is restarted with random weights: the model is trained again for 250 epochs and afterwards only refined. Lastly, we use 5 models to build our ensemble. For the experiments, we use for DPL an initial history $H$ of 1 randomly sampled hyperparameter configuration evaluated for 1 epoch. \section*{Acknowledgements} \textbf{JG}, \textbf{AK} and \textbf{MJ} would like to acknowledge the grant awarded by the Eva-Mayr-Stihl Stiftung. In addition, this research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and grant INST 39/963-1 FUGG (bwForCluster NEMO). In addition, \textbf{JG} acknowledges the support of the BrainLinks-BrainTools center of excellence.
\section{Introduction} \label{sec:intro} Nowadays, task-oriented dialogue systems allow intuitive interaction through natural language, where natural language understanding~(NLU) is an essential part. Structured Query Language~(SQL) is a standard language for accessing knowledge bases or relational databases. Thus, SQL generation from text is crucial for many NLU applications. However, SQL is very difficult for users without technical training, so natural language interfaces to databases have been widely studied~\cite{DB:DBLP:journals/nle/AndroutsopoulosRT95, DBLP:conf/coling/PopescuAEKY04, NLI-DB:Li:2014:CIN:2735461.2735468}. Most of these works adopt one or more of the following techniques: rule-based pattern matching, parse tree mapping based on syntactic grammars, and constituent tree mapping based on semantic grammars. Some work~\cite{SemanticParsing:Clarke:2010:DSP:1870568.1870571, SemanticParsing:DBLP:conf/acl/LiangJK11, SemanticParsing:DBLP:conf/acl/CaiY13, SemanticParsing:Zettlemoyer:2005:LMS:3020336.3020416, SemanticParsing:Zettlemoyer07onlinelearning, SemanticParsing:DBLP:conf/emnlp/ArtziLZ15,SemanticParsing:DBLP:conf/acl/YihCHG15} treats the task as a subtask of semantic parsing. These techniques focus on grammar parsing for specific domains and cannot be easily generalized to different databases or application domains. Several works on SQL generation from natural language (NL) have been proposed recently. The SQL generation model Seq2SQL is proposed in \cite{Seq2SQL} based on pointer networks~\cite{PointerNetworks}, together with WikiSQL, a corpus of natural language questions, SQL queries and tables from Wikipedia. Some work~\cite{SQLNet,DBLP:journals/corr/abs-1804-09769} follows Seq2SQL and proposes various approaches to improve the performance on the WikiSQL task. \cite{AAAI18:DBLP:journals/corr/abs-1711-06061} proposes a SQL generation model integrated with the SQL grammar. All of these works need model training on datasets containing NL questions and the corresponding SQL queries. Such data is hard to collect, since SQL annotation requires full knowledge of SQL grammar and of the relations between all database tables. Therefore, we propose to learn SQL parsers from \textbf{indirect supervision}, where each NL sentence is labeled with the answer instead of the SQL query. This learning paradigm facilitates data acquisition, since the training data can be easily acquired from the Internet or from non-expert users' annotations. In this paper, we propose a reinforcement learning based SQL generator (SQLGen), learned from indirect supervision in the form of natural language questions and their corresponding answers. SQLGen takes COPYNET~\cite{DBLP:conf/acl/GuLLL16}, an encoder-decoder structure, as its neural network component. Policy-based reinforcement learning is used to guide the learning of SQL generation, and two types of rewards are proposed. The rewards reflect the extent of correctness of the generated SQL, integrating correctness in terms of both logic and query execution. In order to provide more precise supervision, the rewards are designed to be vectors instead of scalars, where each element is assigned to a corresponding word in the generated SQL query. The main contributions of this paper are as follows. (1) We propose a novel learning paradigm for SQL generation without annotated SQL queries for the first time. (2) We design an end-to-end neural model based on COPYNET with policy-based reinforcement learning for the answer-driven learning paradigm.
(3) We design a compound point-wise reward assignment mechanism for SQL generation policy learning. \section{Related Work} \label{sec:relatedwork} Semantic parsing has attracted researchers' attention in recent years; it refers to the problem of converting a natural language sentence to a formal meaning representation~\cite{SemanticParsing:Clarke:2010:DSP:1870568.1870571, SemanticParsing:DBLP:conf/acl/LiangJK11, SemanticParsing:DBLP:conf/acl/CaiY13}. Some research has focused on learning semantic parsers that generate logical forms executable on knowledge bases~\cite{SemanticParsing:Zelle:1996:LPD:1864519.1864543, SemanticParsing:Zettlemoyer:2005:LMS:3020336.3020416, SemanticParsing:Zettlemoyer07onlinelearning, SemanticParsing:DBLP:conf/emnlp/ArtziLZ15}. Recently, there has been some work attempting to learn parsers utilizing the results of query execution as indirect supervision~\cite{SemanticParsing:DBLP:journals/tacl/ReddyLS14, SemanticParsing:DBLP:conf/acl/YihCHG15, SemanticParsing:DBLP:conf/acl/PasupatL15, SemanticParsing:DBLP:conf/acl/GuuPLL17}. However, the grammar structure of SQL is much more complicated than the logical forms in semantic parsing~\cite{AAAI18:DBLP:journals/corr/abs-1711-06061}, and it is non-trivial to adapt semantic parsing techniques to the SQL generation domain. Although translating natural language into SQL queries has been extensively studied~\cite{DB:DBLP:journals/coling/WarrenP82, DB:DBLP:journals/nle/AndroutsopoulosRT95, DBLP:conf/coling/PopescuAEKY04, DBLP:conf/coling/GiordaniM12}, most work focuses on grammar parsing or on building interactive interfaces that rely heavily on the grammar, and the proposed methods are difficult to generalize to new databases. A neural system based on the Seq2Seq model~\cite{Seq2Seq} is proposed in~\cite{DBLP:conf/acl/IyerKCKZ17} to translate natural language to SQL queries with user feedback, which requires gathering feedback to improve accuracy or to adapt to new domains. There has also been some work on answering natural language questions based on knowledge bases~\cite{QueryTables:DBLP:journals/debu/LuLK16, QueryTables:DBLP:journals/corr/MouLLJ16}. The most relevant work includes the following. Seq2SQL~\cite{Seq2SQL} proposes a neural architecture based on pointer networks~\cite{PointerNetworks} to generate SQL queries with reinforcement learning. Seq2SQL also proposes a WikiSQL corpus of natural language questions, SQL queries and tables from Wikipedia. SQLNet~\cite{SQLNet} follows the work of Seq2SQL and proposes a sequence-to-set-based approach without reinforcement learning, which improves the performance on the WikiSQL task. TYPESQL~\cite{DBLP:journals/corr/abs-1804-09769} employs a slot filling model to predict the attribute values in SQL. All of these methods split a SQL query into several parts and predict each part using a different neural module. Furthermore, the WikiSQL task only considers generating SQL queries with respect to one table. \cite{AAAI18:DBLP:journals/corr/abs-1711-06061} proposes an encoder-decoder framework integrated with SQL grammatical structures for SQL generation. It requires a preprocessing step that annotates the potential attribute values in the natural language questions. Compared to these three methods, our approach has the following differences. (1) Our approach learns SQL queries with respect to multiple tables from indirect supervision of natural language question and answer pairs, instead of question and SQL pairs.
(2) Our approach adopts an end-to-end learning framework without segmenting SQL queries and learning each part separately. Our work is also related to the work on attentional Seq2Seq models, which show promising performance on neural machine translation~\cite{AttentionalNMT:DBLP:journals/corr/BahdanauCB14, AttentionalNMT:DBLP:conf/acl/TuLLLL16}, dialog generation~\cite{Dialog:DBLP:conf/aaai/SerbanSLCPCB17, Dialog:DBLP:conf/acl/ShangLL15}, question answering~\cite{QA:DBLP:conf/acl/ChenFWB17, QA:DBLP:journals/corr/XiongZS16}, etc. Our work adopts the framework of COPYNET~\cite{DBLP:conf/acl/GuLLL16}, which incorporates the copying mechanism into the attentional encoder-decoder model. The intuition is that words from the source sequence may appear in the target sequence, which is true for SQL generation. \section{Task Description} \label{sec:task} The SQL generation task from natural language questions is described as follows. The input is a natural language question querying the database. The output is a SQL query whose meaning should be equivalent to that of the input question. We show an example in Figure~\ref{fig:example}. The ``Movie'' table contains the information of ``\emph{name}'', ``\emph{genre}'', ``\emph{director}'', ``\emph{year}'', ``\emph{vote}'' and ``\emph{language}'' of each movie, with ``\emph{name}'' as the primary key. The input question asks for the names of movies from 2001 starring Jackie Chan, and the output SQL query, which requires a table join operation, is shown in the figure. \begin{figure}[t] \centering \setlength{\abovecaptionskip}{-5pt} \includegraphics[width=\linewidth]{example.pdf} \caption{An example of the SQL generation task. The two tables are sampled from a movie database. The question asks for the movies starring Jackie Chan in 2001, and the correct SQL query is shown. The information in the brackets of both tables gives translations of the Chinese words.} \label{fig:example} \end{figure} In order to make the problem more tractable, we make an assumption similar to that of WikiSQL, i.e., any non-SQL token in the generated SQL query should be a substring of the natural language question. Here the \textbf{SQL tokens} refer to all the SQL keywords (e.g. ``select'', ``from'', ``where'', etc.) and the names (including aliases) of tables and columns. For the example in Figure~\ref{fig:example}, the non-SQL tokens in the SQL query are ``Jackie Chan'' and ``2001'', which should appear in the question. This assumption also facilitates the utilization of the COPYNET model, which learns to extract useful keywords from the questions. Compared to the WikiSQL task, our task has the following differences. (1) Our task learns from indirect supervision, namely the answers to the natural language questions instead of SQL queries. (2) Our task considers generating a SQL query with respect to multiple tables, while WikiSQL considers only one table. \section{Approach} \label{sec:approach} In this section, we introduce our SQL generator SQLGen (shown in Figure~\ref{fig:model}), where an encoder-decoder based architecture, COPYNET, is employed for SQL generation. We also design a reward assignment mechanism based on the generated SQL queries and the answers. Thus, the generation policy can be supervised by reinforcement learning using the designed reward mechanism. \subsection{Copying Mechanism for SQL Generation} \label{subsec:copynet} \begin{figure*}[th] \centering \setlength{\abovecaptionskip}{-2pt} \includegraphics[width=\linewidth]{model.pdf} \caption{The overview of our SQL generator SQLGen.
An example of the SQL generation process is shown. The input natural language question asks to ``recommend some movies that are produced in China in 2012''. The SQL query is generated on the basis of a COPYNET structure, and the point-wise reward is computed for learning the generation policy by reinforcement learning.} \label{fig:model} \end{figure*} An encoder-decoder based framework, COPYNET, is employed, which incorporates the copying mechanism while decoding. As shown in Figure \ref{fig:model}, the input sequence of the natural language question is transformed by the encoder~(e.g. a bidirectional RNN) into a representation $M$, and the decoder generates the output SQL query by predicting words based on a probabilistic mixture of two modes, the generate-mode and the copy-mode. While decoding, COPYNET has not only an attentive read of $M$, but also a selective read of $M$, which enables word generation from both the designated vocabulary and the source sequence. \textbf{Vocabulary.} The vocabulary in the SQL generation domain consists of two parts, since the generated SQL query should contain both SQL tokens~(as defined in Section \ref{sec:task}) and non-SQL tokens~(the attribute values appearing in the source sequence). The first portion of the vocabulary is denoted by $V_{SQL}$, which contains the SQL keywords, operators and database symbols. \begin{itemize} \setlength{\itemsep}{-1pt} \item The SQL keyword set $V_{key}$ contains all the SQL keywords, such as ``select'', ``where''. \item The comparator set $V_{cmp}$ contains all the comparative operators, e.g. ``$=$'', ``$>$'', etc. \item The database symbol set $V_{db}$ contains all the names of database tables and columns. \end{itemize} Here we further introduce the constituents of the database symbol set $V_{db}$. Let the \textbf{table set} of the database be $T=\{T_1, T_2, \cdots, T_t\}$, where $T_i$ is the name of the $i$-th table. Let the \textbf{column set} with respect to table $T_i$ be $Col_i=\{col_{ij}\}$, where $col_{ij}$ is the name of the $j$-th column in table $T_i$. The elements in both $T$ and $\{Col_i\}$ are database symbols. In order to reduce the exploration space of reinforcement learning, we further disambiguate the column names by introducing the \textbf{attribute set} $Attr_i=\{T_i.col_{ij}|j=1,\cdots,|Col_i|\}$. Taking the example in Figure~\ref{fig:example}, the attribute set for the ``Movie'' table is $\{$``movie.movie\_id'', ``movie.movie\_name'', ``movie.director'', $\cdots\}$. The database symbol set $V_{db}$ covers the table set $T$ and the attribute sets $\{Attr_i\}$. Thus, $V_{SQL}=V_{key}\cup V_{cmp}\cup V_{db}$. The second portion of the vocabulary is denoted by $V_{NL}$, which covers all the unique words that appear in the natural language questions. Therefore, the whole vocabulary $\mathcal{V}$ is $V_{SQL}\cup V_{NL}$. \textbf{Encoder.} Let $\mathcal{X} = \{x_1,\cdots,x_n\}$ be the input sequence. As shown in Figure \ref{fig:model}, the input sequence $\mathcal{X}$ (``Recommend some movies that are produced in China in 2012'') is converted into a representation $M=\{h_1, \cdots, h_n\}$ by an RNN encoder as follows. Note that a bidirectional GRU~\cite{GRU} is used in this work. {\setlength\abovedisplayskip{6pt} \setlength\belowdisplayskip{6pt} \begin{equation} h_t=\text{BiGRU}(h_{t-1}, x_t) \end{equation}} The representation $M$ will be accessed by the decoder during the process of SQL generation. \textbf{Decoder.} A GRU layer is used as the decoder to predict the target sequence.
Let the decoder states be $\{s_t\}$ and the generated words be $\{y_t\}$. We apply a standard attention mechanism on $M$ and obtain a context vector sequence $C=\{c_t\}$. Given the decoder state $s_t$, the context vector $c_t$ and $M$, the probability of generating a word $y_t$ is computed as follows. {\setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{5pt} \begin{align} \nonumber p(y_t|s_t, y_{t-1}, c_t, M)& = p(y_t,\mathbf{g}|s_t, y_{t-1}, c_t, M) + \\ &p(y_t,\mathbf{c}|s_t, y_{t-1}, c_t, M) \end{align}} where $\mathbf{g}$ stands for the generate-mode, and $\mathbf{c}$ for the copy-mode. The probabilities for the two modes are computed as follows. {\setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation*} p(y_t , \mathbf{g} |\cdot) = \begin{cases} \frac{1}{Z}e^{\psi_g(y_t)}\quad\qquad\qquad &\text{if $y_t\in{V_{SQL}}$}\\ 0 &\text{otherwise} \end{cases} \end{equation*}} {\setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation*} p(y_t , \mathbf{c} |\cdot) = \begin{cases} \frac{1}{Z}\sum_{j:x_j=y_t}{e^{\psi_c(x_j)}} &\text{if $y_t\in \mathcal{X}$}\\ 0 &\text{otherwise} \end{cases} \end{equation*}} where $Z$ is the normalization term shared by the generate-mode and copy-mode as follows. {\setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{4pt} \begin{equation} Z=\sum_{v\in V_{SQL}}e^{\psi_g(v)} + \sum_{x\in \mathcal{X}}e^{\psi_c(x)} \end{equation}} $\psi_g(\cdot)$ and $\psi_c(\cdot)$ are scoring functions for the generate-mode and the copy-mode, respectively, defined as follows. {\setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{0pt} \begin{equation} \psi_g (y_t = v_i) = \mathbf{v_i^\mathrm{T}} W_o s_t \qquad v_i \in V_{SQL} \end{equation}} {\setlength\abovedisplayskip{0pt} \setlength\belowdisplayskip{5pt} \begin{equation} \psi_c (y_t = x_j ) = \sigma(h_j^\mathrm{T} W_c)s_t \qquad x_j \in \mathcal{X} \end{equation}} where $W_o$ and $W_c$ are learnable parameters, and $\mathbf{v_i}$ is the one-hot indicator vector for $v_i$. Note that a specific state update mechanism is introduced in COPYNET, which can be eliminated if Chinese word segmentation or English chunking is done during preprocessing, or retained otherwise. The state update mechanism helps to copy a consecutive sub-sequence of the source text, whereas in our task an attribute value to be copied is a single chunk after preprocessing. \textbf{Mask.} We rely on reinforcement learning to learn the generation policy, since there are no ground-truth SQL queries as direct supervision. However, the exploration space is enormous due to the complexity of natural language and SQL logic. To solve this problem, we introduce a masking mechanism to reduce the exploration space. When the decoder is predicting the next target word, a mask vector $\mathbf{m}=(m_1, \cdots, m_k)$ is introduced to indicate whether a word is legal for generation given the previous word(s), as illustrated in Figure \ref{fig:model}. The dimension $k$ of $\mathbf{m}$ is $|V_{SQL}|+|\mathcal{X}|$, and $m_t=1$ if word $v_t$ is legal, $m_t=0$ otherwise. The mask mechanism can be easily implemented based on the SQL grammar. For example, if the previously generated word is the SQL keyword ``from'', the current word should be the name of a table, and thus all other words are illegal. Therefore, the mask mechanism helps to generate grammatically correct SQL queries.
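For concreteness, the following Python sketch shows one way the two modes, the shared normalizer $Z$ and the grammar mask could be combined into a single output distribution. It is an illustration under our assumptions (e.g., that $V_{SQL}$ occupies the first ids of the vocabulary), and all identifiers are ours, not the exact implementation.
\begin{verbatim}
import torch

def output_distribution(score_gen, score_copy, src_ids, mask, vocab_size):
    # score_gen:  (|V_SQL|,) scores psi_g over the SQL-token vocabulary
    # score_copy: (n,)       scores psi_c over the n source positions
    # src_ids:    (n,) long  vocabulary id of each source word
    # mask:       (|V_SQL| + n,) 1 if legal under the SQL grammar
    logits = torch.cat([score_gen, score_copy])
    logits = logits.masked_fill(mask == 0, float("-inf"))
    probs = torch.softmax(logits, dim=-1)   # shared normalizer Z
    n_sql = score_gen.numel()
    p = torch.zeros(vocab_size)
    p[:n_sql] = probs[:n_sql]               # generate-mode portion
    # copy-mode: sum the probabilities of identical source words,
    # mirroring the sum over j with x_j = y_t in the copy-mode formula
    p.scatter_add_(0, src_ids, probs[n_sql:])
    return p
\end{verbatim}
Here the mask implements the grammar constraint described above; for instance, right after the keyword ``from'' it would keep only table names legal.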
\subsection{Reinforcement Learning with Compound Reward} \label{subsec:rl} We apply reinforcement learning to learn a SQL generation policy under the indirect supervision of answers. Unlike the work of \cite{Seq2SQL}, which assigns a scalar reward to a generated SQL query, we design a compound point-wise reward that acts on each token of the generated SQL query. This mechanism helps to guide the learning of the SQL generation policy more precisely. \begin{figure}[t] \centering \setlength{\abovecaptionskip}{-5pt} \includegraphics[width=\linewidth]{SQLdivision.pdf} \caption{An illustration of the two types of rewards, which act on different parts of the SQL query.} \label{fig:rewards} \end{figure} The point-wise reward mechanism is composed of two types of rewards, the coverage reward and the execution reward, which act on different portions of the SQL query. As illustrated in Figure~\ref{fig:rewards}, the coverage reward acts on the attribute-value words in the where-conditions, the operators~(``and'', ``or'') connecting where-conditions, and the end-of-sentence (EOS) token, while the execution reward acts on all the other words as well as on the same operators as the coverage reward. \textbf{Coverage reward.} The coverage reward aims to guide the learning of word selection from the source text; the procedure of its computation is shown in Algorithm 1. In order to better supervise the copy-mode learning of COPYNET, a vocabulary of attribute values is extracted from the database, which covers the possible values of queried attributes. Thus, the attribute values in the source text can be obtained based on this attribute-value vocabulary. The correctly copied words in the generated SQL query are assigned positive rewards of $1$, while incorrect words and duplicates of correct words are assigned negative rewards of $-1$. Similarly, the correct operators in the generated SQL query are assigned equal positive rewards, while incorrect operators are assigned non-positive rewards. Since there is no direct supervision by the correct SQL query, it is impossible to know whether a generated operator is semantically correct. What we do know is the number $K$ of attributes in the correct SQL, based on the source text and the attribute-value vocabulary. Hence, the \textbf{correct operators} here refer to the first $K-1$ operators in the generated SQL (the number needed to connect $K$ conditions), while the \textbf{incorrect operators} are the remaining redundant ones. The first incorrect operator is assigned a negative reward of $-1$, leaving the others without penalty so that operators are not excessively penalized. For the EOS token, we reward EOS in SQL queries with the correct number of attributes and penalize EOS in those with an insufficient number of attributes, leaving EOS without penalty in the other cases.
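As a complement to the formal procedure in Algorithm 1 below, the following is a small Python sketch of this assignment as we read it; the helper names and the per-token list representation are ours.
\begin{verbatim}
def coverage_reward(copied_words, operators, attr_values):
    # copied_words: words copied into the generated SQL, in order
    # operators:    the "and"/"or" tokens of the query, in order
    # attr_values:  set U of attribute values found in the source
    #               text via the attribute-value vocabulary
    U, seen, rewards = set(attr_values), set(), []
    for w in copied_words:
        if w in U and w not in seen:      # correct first occurrence
            rewards.append((w, 1.0)); seen.add(w)
        else:                             # incorrect or duplicate
            rewards.append((w, -1.0))
    for l, op in enumerate(operators, start=1):
        r = (1.0 / (len(U) - 1) if l < len(U)
             else -1.0 if l == len(U) else 0.0)
        rewards.append((op, r))
    n_op = len(operators)
    r_eos = (-1.0 if n_op < len(U) - 1
             else 1.0 if n_op == len(U) - 1 else 0.0)
    rewards.append(("<EOS>", r_eos))
    return rewards
\end{verbatim}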
\begin{algorithm} \label{alg:coverage} \caption{Coverage reward computation} \begin{algorithmic}[1] \REQUIRE SQL query $Q$, Source text $S$ \ENSURE Coverage reward $R_c$ \STATE $U \gets$\text{ the set of attribute values in source text} \STATE $V \gets \emptyset$ \FOR{$w$ in copied words in $Q$} \IF{$w \in U$ \AND $w \notin V$} \STATE $R_c(w)\gets 1, V\gets V \cup \{w\}$ \ELSE \STATE Set $R_c(w)$ to $-1$ \ENDIF \ENDFOR \FOR{$l$-th operator $op_l$ in $Q$} \STATE Set $R_c(op_l)=\begin{cases}1/(|U|-1) &\text{if } l<|U|\\-1 &\text{if } l=|U|\\0 &\text{if } l>|U|\end{cases}$ \ENDFOR \STATE $N_{op}\gets$ the number of operators in $Q$ \STATE Set $R_c(EOS)=\begin{cases}-1 &\text{if } N_{op}<|U|-1\\1 &\text{if } N_{op}=|U|-1\\0 &\text{if } N_{op}>|U|-1\end{cases}$ \end{algorithmic} \end{algorithm} \textbf{Execution reward.} The execution reward aims to guide the learning of the SQL representation of natural language logic. The procedure of execution reward computation is shown in Algorithm 2. The execution reward acts on three types of SQL segments: the text segment $b$ from ``select'' to ``where'', the condition-clauses $C'$ without attribute values, and the operators $O$ connecting condition-clauses. For the example in Figure \ref{fig:rewards}, $b$ is ``select $\cdots$where'', $C'$ is \{``MA.actor\_name='',``M.year=''\}, and $O$ is \{``and''\}. The words in these SQL segments constitute the targeted word set for the execution reward. The generated SQL query $Q$ is executed. If the query result is equal to the answer, it is believed that $Q$ is correctly generated, and the rewards for the targeted words of $Q$ are set to $1$. Otherwise, the rewards of the words in $b$ are set to $-1$, while those of the words in $O$ are set to $0$. For each $c'_i$ in $C'$, the SQL query with the corresponding single condition is executed. If the result and the answer set $A$ have common elements, the rewards are set to $1$, since the attribute-value pair in the condition should then be correct, and to $-1$ otherwise. In this way, the execution reward guides the reinforcement learning model by assigning higher rewards to correct SQL queries. Note that we assume the form of the condition clause to be ``attribute=value'', which restricts the comparator to ``=''. The reasons are twofold. First, the value types are mostly strings in our movie domain, so equality is the most common comparator, while data with other comparators is rare. Second, considering all comparators significantly raises the learning complexity, which we hope to study in future work. For a SQL query $Q$, the whole point-wise reward $R$ is a combination of the coverage reward $R_c$ and the execution reward $R_e$, which act on the word sets $V_c(Q)$ and $V_e(Q)$, respectively. As described above, $V_c(Q)\cap V_e(Q)=O(Q)$, the set of operators connecting condition clauses. The whole reward $R(w)$ for each $w$ in $Q$ is computed as follows.
{\setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation*} R(w) = \begin{cases} R_c(w) &\text{if } w\in V_c\setminus O\\ R_e(w) &\text{if } w\in V_e\setminus O\\ \min\{R_c(w), R_e(w)\} &\text{if } w\in O \end{cases} \end{equation*}} \begin{algorithm} \label{alg:execution} \caption{Execution reward computation} \begin{algorithmic}[1] \REQUIRE SQL query $Q$, Answer set $A$ \ENSURE Execution reward $R_e$ \STATE Segment $Q$ by ``where'' and operators \STATE $b \gets$\text{ text from ``select'' to ``where''} \STATE \textit{\# condition clause form: ``attribute=value''} \STATE $C \gets$\text{ the set of condition-clauses} \STATE $C' \gets \{c'_i\text{=substring ``attribute='' of } c|c\in C\}$ \STATE $O \gets \{\text{operators connecting clauses in } C\}$ \STATE Execute SQL query $Q$ and get result $Res$ \IF{$A=Res$} \STATE Set $R_e$ for words in $\{b\}\cup C'\cup O$ to $1$ \RETURN \ENDIF \STATE Set $R_e$ for words in $b$ to $-1$ \STATE Set $R_e$ for words in $O$ to $0$ \FOR{$c_i$ in $C$} \STATE Concatenate $b$ with $c_i$, get SQL query $Q_i$ \STATE Execute SQL query $Q_i$, get result $Res_i$ \IF{$Res_i\cap A \neq \emptyset$} \STATE Set $R_e$ for words in $c'_i$ to $1$ \ELSE \STATE Set $R_e$ for words in $c'_i$ to $-1$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \textbf{Learning.} We define the cumulative reward of a SQL query $Q=[q_1,\cdots,q_T]$ to be $\tilde{R}(Q)=\sum_{i}R(q_i)$. The loss function is the negative expected cumulative reward over possible SQL queries, i.e., $L=-\mathbb{E}(\tilde{R}(Q))$. We have the following equality, as shown in \cite{DBLP:conf/nips/SchulmanHWA15}. {\setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{4pt} \begin{equation*} \begin{aligned} \nabla_\Theta(\mathbb{E}_y(R(y))) = \mathbb{E}_y(R(y)\cdot\nabla_\Theta\log p(y;\Theta)) \end{aligned} \end{equation*}} Thus, the policy gradient of the loss function $L$ can be derived as follows. We approximate the expected gradient with a single Monte-Carlo sample $Q$ in the last step of the derivation. {\setlength\abovedisplayskip{4pt} \setlength\belowdisplayskip{-4pt} \begin{equation*} \begin{aligned} \nabla_\Theta(L) & = -\nabla_\Theta\mathbb{E}_{Q\sim p(Q)}(\sum_{i}R(q_i))\\ & = -\sum_{i}\nabla_\Theta\mathbb{E}_{Q\sim p(Q)}(R(q_i)) \\ & = -\sum_{i}\mathbb{E}_{Q\sim p(Q)}(R(q_i) \nabla_\Theta\log p_Q(q_i;\Theta)) \\ & = -\mathbb{E}_{Q\sim p(Q)}(\nabla_\Theta\sum_{i}(R(q_i) \log p_Q(q_i;\Theta))) \\ & \approx -\nabla_\Theta\sum_{i}(R(q_i) \log p_Q(q_i;\Theta)) \end{aligned} \end{equation*}} \section{Experiments} \label{sec:experiments} \textbf{Data.} We collect three datasets for evaluation: a Chinese dataset in the movie domain, and two English datasets in the domains of academic publications and movies. The datasets consist of natural language questions, corresponding answers and database tables. For comparison with directly supervised learning methods, we ask volunteers to label the questions with SQL queries. \emph{Movie-Chinese dataset}. The questions and answers are collected from a Chinese QA community (Baidu Zhidao), and the database is constructed using data collected from a Chinese movie community (Douban). There are 3 tables in the database, containing information on the actors, directors, types, areas and languages of movies. We preprocess the data to eliminate invalid entries, such as confusing questions and incorrect answers. The proportion of questions involving multiple tables is 78\%, while that involving multiple conditions is 43\%.
The proportion of questions involving multiple tables is high because most SQL queries contain at least the ``movie'' table, since users tend to ask for the names of movies they are interested in. Different from \textit{Movie-Chinese}, the other two datasets are synthetic: the databases are constructed from data collected on the Internet, and the question-answer pairs are generated from templates. \emph{Academic dataset}. The database is constructed using the data from \cite{Roy2013The}, from which we select 3 tables for our task, containing records of papers, researchers and conferences. \emph{Movie dataset}. The database is constructed using an open-source dataset\footnote{https://github.com/sundeepblue/movie\_rating\_prediction} of IMDB. The dataset contains the same attributes as \textit{Movie-Chinese}. Each dataset contains around 10,000 question-answer pairs and is randomly partitioned into training, validation and test sets with proportions of $80\%$, $10\%$ and $10\%$, respectively. \textbf{The datasets are provided in the supplementary materials of our submission.} \textbf{Baselines.} (1) Seq2Seq-RL is an attentional Seq2Seq model with reinforcement learning using our point-wise rewards. (2) CopyNet-Seq2SQL is a COPYNET model with reinforcement learning using the rewards of Seq2SQL~\cite{Seq2SQL}. (3) CopyNet-SL is a COPYNET model supervised by the annotated SQL queries. We also study the performance of SQLGen with pretraining on the annotated SQL queries, which we denote by SQLGen-Pretrain. \textbf{Evaluation.} Two evaluation metrics are used, accuracy and redundancy. \textit{Accuracy} refers to the ratio of correct SQL queries, where a query is correct if it executes to the correct result. \textit{Redundancy} refers to the ratio of redundant SQL queries, where a query is redundant if it joins tables that appear in none of the conditions. \textbf{Settings.} The hidden unit sizes of the encoder and decoder are 32 and 64, respectively. The embedding size is set to 50 due to the small vocabulary size. The models are trained for at most 100 epochs with early stopping, using the Adam optimizer. While decoding, we either randomly sample a word from the distribution with probability $\epsilon$, or pick the highest-scoring word with probability $1-\epsilon$, giving reinforcement learning more opportunities for exploration. We set $\epsilon=0.3$ in the experiments. \begin{table*}[htbp] \label{tab:result1} \setlength{\abovecaptionskip}{3pt} \setlength{\belowcaptionskip}{-1pt} \centering \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}*{Models} &\multicolumn{2}{c|}{\textit{Movie-Chinese}}&\multicolumn{2}{c|}{\textit{Academic}}&\multicolumn{2}{c}{\textit{Movie}}\\ \cline{2-7} & Accuracy &Redundancy & Accuracy &Redundancy& Accuracy &Redundancy\\ \hline Seq2Seq-RL & 8.1 & 24.7 &0.0 & - &0.0 & - \\ CopyNet-Seq2SQL & 17.0& 100.0&2.5 &29.4 &0.0 & - \\ CopyNet-SL & 56.9& 0.2 &62.6&0.0 &51.9 &0.0\\ \hline \textbf{SQLGen}& 59.8&68.6 &64.6&70.2 &70.0 &76.0\\ SQLGen-Pretrain& 80.4&75.4 &67.8&99.4 &73.6 &97.4\\ \hline \end{tabular} \caption{The accuracy and redundancy of SQLGen and the baselines on three datasets.} \end{table*} \subsection{Main Results} Table 1 shows the accuracy and redundancy of SQLGen and the baselines on the three datasets. The first two baselines, Seq2Seq-RL and CopyNet-Seq2SQL, have very low accuracy. This result shows the difficulty of the proposed learning paradigm with indirect supervision.
We also tried the Seq2Seq model with the typical Seq2SQL reward, which hardly learns anything and has an accuracy of $0$; thus we do not include it as a baseline model. CopyNet-SL performs better than the other two baselines since it learns from the direct supervision of correct SQL queries. SQLGen has higher accuracy than CopyNet-SL. A probable reason is that CopyNet-SL learns from supervised SQL queries but penalizes correct SQL queries whose table joins or conditions appear in a different order. SQLGen-Pretrain has higher accuracy than SQLGen by 1\%-34\% on the different datasets. This demonstrates that supervised pretraining helps improve the subsequent policy learning, but requires manual annotation. Thus, a suitable method can be selected based on a tradeoff between performance and annotation cost in practical scenarios. We study the redundancy of the different models when the accuracy is higher than $0$. SQLGen has a redundancy of 68\%-76\% on the different datasets. The reason is that the space of combinations of table joins and conditions is enormous, which makes indirect supervised learning very difficult. Thus, the exploration of reinforcement learning tends to join more potential tables, which yields a higher probability of generating correct SQL queries; this tendency results in relatively high redundancy. Note that, owing to the mask proposed in Section \ref{subsec:copynet}, SQLGen does not generate SQL queries with duplicate tables or conditions. CopyNet-SL has very low redundancy since the model learns from direct supervision of SQL queries, which rarely join redundant tables. SQLGen-Pretrain has higher redundancy than SQLGen. The reason is that the pretrained model has a stronger tendency to join more tables than a randomly initialized model, since the training data involving multiple tables accounts for a high proportion, as shown in the data description. Table 2 shows the accuracy on the \textit{Movie-Chinese} dataset in different cases, including SQL queries containing single and multiple conditions (tables). For SQLGen and the baselines, the accuracy of SQL queries with a single condition is higher than that with multiple conditions, because the natural language related to a single condition is easier to learn. SQLGen has much lower accuracy for SQL queries with a single table than for those with multiple tables. By examining the test cases, we find that most of the incorrect SQL queries predict the wrong attributes in the condition clauses. As shown in Figure \ref{fig:cases}, the attribute is difficult to learn since the patterns querying different attributes can be similar due to the characteristics of the Chinese language. In the \textit{Movie-Chinese} domain, such patterns mostly occur in cases where a single table is involved. \begin{table}[t] \label{tab:result2} \setlength{\abovecaptionskip}{3pt} \setlength{\belowcaptionskip}{0pt} \centering \begin{tabular}{p{3.05cm}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \hline Models & $\text{Acc}_\text{sc}$ &$\text{Acc}_\text{mc}$& $\text{Acc}_\text{st}$ &$\text{Acc}_\text{mt}$\\ \hline Seq2Seq-RL & 13.6 &0.7 &1.1 &9.7 \\ CopyNet-Seq2SQL & 29.7 & 0.0 &0.0 &21.0 \\ CopyNet-SL & 65.6 & 45.2 &90.4 &49.2 \\ \hline \textbf{SQLGen} & 61.8 & 57.1 &34.2 &65.7 \\ SQLGen-Pretrain & 91.1 &57.6 &90.9 &73.6 \\ \hline \end{tabular} \caption{The accuracy analysis on the \textit{Movie-Chinese} dataset.
$\text{Acc}_\text{sc}$ ($\text{Acc}_\text{mc}$) is the accuracy of SQL queries with single (multiple) condition(s). $\text{Acc}_\text{st}$ ($\text{Acc}_\text{mt}$) is that with single (multiple) table(s).} \end{table} \begin{figure}[t] \centering \setlength{\abovecaptionskip}{-15pt} \setlength{\belowcaptionskip}{-2pt} \includegraphics[width=\linewidth]{cases.pdf} \caption{An illustration of similar patterns.} \label{fig:cases} \end{figure} Compared to CopyNet-SL, SQLGen shows higher accuracy on SQL queries with multiple conditions (tables) but lower accuracy for a single condition (table), because CopyNet-SL penalizes correct SQL queries whose multiple conditions (tables) appear in a different order from the training data. SQLGen-Pretrain outperforms SQLGen by better learning the attributes of values in natural language, which helps improve the accuracy for single conditions and tables. \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we propose a learning paradigm for SQL generation from indirect supervision in the form of question-answer pairs. A COPYNET-based neural model integrating policy-based reinforcement learning is proposed, where a compound reward mechanism is designed to precisely learn the generation policy. Experimental results show that our model has higher accuracy than the baselines on various datasets. In future work, we would like to design models that can generate more complex SQL queries, e.g. queries with more operators and comparators in the condition clauses.
\section{Limitations and Future Work} Although {$f$-DM} enables diffusion with signal transformations, which greatly extends the scope of {DM}s to work in transformed space, there still exist limitations and opportunities for future work. First, it is an empirical question to find the optimal stage schedule for all transformations. Our ablation studies also show that different heuristics behave differently for DS-based and VAE-based models. A metric that can automatically determine the best stage schedule based on the properties of each transformation is needed and will be explored in the future. In addition, although the current method achieves faster inference when generating with transformations like down-sampling, the speed-up is not very significant as we still take the standard DDPM steps. How to further accelerate the inference process of DMs is a challenging and orthogonal direction. For example, it has great potential to combine {$f$-DM} with speed-up techniques such as knowledge distillation~\citep{salimans2022progressive}. Moreover, whether hand-designed or learned, all the transformations used in {$f$-DM} are fixed while training the {DM}. This is, however, different from typical VAEs, where both the encoder and decoder are jointly optimized during training. Therefore, starting from a random/imperfect transformation and training {$f$-DM} jointly with the transformations towards certain target objectives will be studied as future work. \section{Conclusion} We proposed {$f$-DM}, a generalized family of diffusion models that enables generation with signal transformations. As a demonstration, we apply {$f$-DM} to image generation tasks with a range of transformations, including downsampling, blurring and VAEs, where {$f$-DM}s outperform the baselines in terms of synthesis quality and semantic interpretation. \input{sections/statement.tex} \section*{Acknowledgements} Jiatao would like to thank Arnaud Autef, David Berthelot, Walter Talbott, Peiye Zhuang, Xiang Kong, Lingjie Liu, Kyunghyun Cho, Yinghao Xu for useful discussions, help and feedback. \section{Method} In this section, we dive into $f$-DM, an extended family of {DM}s that enables diffusion on transformed signals. We start by introducing the definition of the proposed multi-stage formulation with general signal transformations, followed by the modified training and generation algorithms (\Cref{sec.define}). Then, we specifically apply {$f$-DM} to three categories of transformations (\Cref{sec.application}). \subsection{Multi-stage Diffusion} \label{sec.define} \paragraph{Signal Transformations} We consider a sequence of deterministic functions ${\bm{f}}=\{f_0, \ldots, f_K\}$, where $f_0, \ldots, f_k$ progressively transform the input signal ${\bm{x}}\in \mathbb{R}^N$ into ${\bm{x}}^k=f_{0:k}({\bm{x}}) \in \mathbb{R}^{M_k}$. We assume ${\bm{x}}^0=f_0({\bm{x}})={\bm{x}}$. In principle, the $f_k$ can be arbitrary functions. In this work, we focus on transformations that gradually destroy the information contained in ${\bm{x}}$ (e.g., down-sampling), leading towards more compact representations. Without loss of generality, we assume $M_0 \geq M_1 \geq \ldots \geq M_K$. A sequence of \textit{inverse} mappings ${\bm{g}}=\{g_0, \ldots, g_{K-1}\}$ is used to connect consecutive pairs of spaces. Specifically, we define $\hat{{\bm{x}}}^k$ as: \begin{equation} \hat{{\bm{x}}}^{k} := \begin{cases} g_k\left(f_{k+1}({\bm{x}}^k)\right) \approx {\bm{x}}^k, & \text{if} \ \ k < K, \\ {\bm{x}}^k, & \text{if} \ \ k = K.
\end{cases}\label{eq.g_func} \end{equation} The approximation of \Eqref{eq.g_func} ($k<K$) is not necessarily (and sometimes cannot be) accurate. For instance, if $f_{k+1}$ downsamples an input image from $128^2$ to $64^2$ with average pooling, $g_k$ can be a bilinear interpolation that upsamples back to $128^2$, which is a lossy reconstruction. The definition of ${\bm{f}}$ and ${\bm{g}}$ can be seen as a direct analogy of the encoder ($\phi$) and decoder ($\theta$) in hierarchical VAEs (see Figure~\ref{fig:motivation} (b)). However, there are still major differences: (1) the VAE encoder/decoder is stochastic, and the encoder's outputs are regularized by the prior. In contrast, ${\bm{f}}$ and ${\bm{g}}$ are deterministic, and the encoder output ${\bm{x}}^K$ does not necessarily follow a simple prior; (2) VAEs directly use the decoder for generation, while ${\bm{f}}, {\bm{g}}$ are fused in the diffusion steps of {$f$-DM}. \vspace{-5pt} \paragraph{Forward Diffusion} We extend the continuous-time {DM}s to signal transformations. We split the diffusion time $0\rightarrow1$ into $K+1$ stages, where for each stage a partial diffusion process is performed. More specifically, we define a set of time boundaries $0 = \tau_0 < \tau_1 < \ldots < \tau_K < \tau_{K+1}=1$, and for $t \in [0, 1]$, the latent ${\bm{z}}_t$ has the following marginal probability: \begin{equation} q({\bm{z}}_t | {\bm{x}}) = \mathcal{N}({\bm{z}}_t;\alpha_t {\bm{x}}_t, \sigma_t^2I), \ \; \ \; \ \text{where} \ \ {\bm{x}}_t = \frac{(t-\tau_k)\hat{{\bm{x}}}^k + (\tau_{k+1}-t){\bm{x}}^k}{\tau_{k+1}-\tau_k}, \ \ \ \tau_k \leq t < \tau_{k+1}. \label{eq.marginal_multi} \end{equation} As shown above, ${\bm{x}}_t$ is the interpolation of ${\bm{x}}^k$ and its approximation $\hat{{\bm{x}}}^{k}$ when $t$ falls in stage $k$. We argue that interpolation is crucial as it creates a \textit{continuous} transformation that slowly corrupts information inside each stage. In this way, such changes can be easily reversed by our model. Also, it is non-trivial to find the optimal stage schedule $\tau_k$ for each model, as it highly depends on how much information is destroyed by each transformation $f_k$. In this work, we tested two heuristics: (1) \textit{linear} schedule $\tau_k = k/(K+1)$; (2) \textit{cosine} schedule $\tau_k=\cos(1-k/(K+1))$. Note that the standard {DM}s can be seen as a special case of our {$f$-DM} when there is only one stage ($K=0$). \Eqref{eq.marginal_multi} does not guarantee a Markovian transition. Nevertheless, our formulation only needs $q({\bm{z}}_t|{\bm{z}}_s, {\bm{x}})$, which has the following simple form focusing on diffusion steps within a stage: \begin{equation} \begin{split} q({\bm{z}}_t|{\bm{z}}_s, {\bm{x}}) &= \mathcal{N}({\bm{z}}_t; \alpha_{t|s}{\bm{z}}_s + \alpha_t\cdot \left({\bm{x}}_t - {\bm{x}}_s \right), \sigma^2_{t|s}I) , \ \ \ \tau_k \leq s < t < \tau_{k+1}. \end{split} \vspace{-4pt} \label{eq.transition_multi} \end{equation} From \Eqref{eq.transition_multi}, we further re-write ${{\bm{x}}}_t - {{\bm{x}}}_s=-\bm{\delta}_t\cdot(t-s)/(t-\tau_k)$, where $\bm{\delta}_t = {\bm{x}}^k - {{\bm{x}}}_t$ is the signal degradation. \Eqref{eq.transition_multi} also indicates that the reverse diffusion distribution $q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}})\propto q({\bm{z}}_t|{\bm{z}}_s,{\bm{x}})q({\bm{z}}_s|{\bm{x}})$ can be written as a function of ${{\bm{x}}}_t$ and $\bm{\delta}_t$, which will be our learning objectives.
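To make the forward process concrete, the following is a minimal Python sketch of sampling ${\bm{z}}_t\sim q({\bm{z}}_t|{\bm{x}})$ according to \Eqref{eq.marginal_multi}; the transformation stack, inverse mappings and noise schedule are caller-supplied placeholders, and the identifiers are ours.
\begin{verbatim}
import bisect
import torch

def sample_z_t(x, t, taus, f_stack, g, alpha, sigma):
    # x: clean input; t in [0, 1); taus = [0, tau_1, ..., tau_K, 1]
    # f_stack(k, x) -> x^k = f_{0:k}(x); g(k, u) -> inverse map g_k(u)
    # alpha(t), sigma(t): the (possibly rescaled) noise schedule
    k = bisect.bisect_right(taus, t) - 1        # stage containing t
    x_k = f_stack(k, x)
    if k < len(taus) - 2:                       # k < K: use g_k(f_{k+1}(.))
        x_hat = g(k, f_stack(k + 1, x))
    else:                                       # k = K: x_hat = x^K
        x_hat = x_k
    w = (t - taus[k]) / (taus[k + 1] - taus[k]) # interpolation weight
    x_t = (1 - w) * x_k + w * x_hat             # Eq. (marginal_multi)
    return alpha(t) * x_t + sigma(t) * torch.randn_like(x_t)
\end{verbatim}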
\vspace{-5pt} \paragraph{Boundary Condition} To enable diffusion across stages, we need the transition at the stage boundaries $\tau_k$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/rescaling_v2} \vspace{-20pt} \caption{Left: an illustration of the proposed SNR computation for different sampling rates; Right: the comparison of rescaling the noise level for progressive down-sampling. Without noise rescaling, the diffused images at low resolution quickly become too noisy to distinguish the underlying signal.} \vspace{-10pt} \label{fig:rescaling} \end{figure} More specifically, when the step approaches the boundary $\tau^-$ (the left limit of $\tau$), the transition $q({\bm{z}}_{\tau}|{\bm{z}}_{\tau^-}, {\bm{x}})$ should be as deterministic (ideally invertible) \& smooth as possible to minimize information loss\footnote{For simplicity, we omit the subscript $k$ for $\tau_k$ in the following paragraphs.}. First, we can easily expand ${\bm{z}}_\tau$ and ${\bm{z}}_{\tau^-}$ as combinations of signal and noise: \begin{equation} \begin{split} \textit{Before:}& \ \ {\bm{z}}_{\tau^-} = {\color{blue} \alpha_{\tau^-}\cdot {\bm{x}}_{\tau^-}} + {\color{red} \sigma_{\tau^-}\cdot{\bm{\epsilon}}}, \ \ p({\bm{\epsilon}}) = \mathcal{N}(\bm{0}, I), \\ \textit{After:}& \ \, \, \ {\bm{z}}_{\tau} \ = {\color{blue} \alpha_{\tau}\cdot {\bm{x}}_\tau} \ \ \ \, \ + {\color{red} \sigma_{\tau}\cdot\zeta({\bm{\epsilon}})}, \ \ p(\zeta({\bm{\epsilon}})) = \mathcal{N}(\bm{0}, I). \end{split} \label{eq.zeta} \end{equation} By definition, ${\bm{x}}_{\tau^-}=\hat{{\bm{x}}}^{k-1}=g({\bm{x}}^k)=g({\bm{x}}_\tau)$, which means the signal part is invertible. Therefore we only need to find $\zeta$. Under the initial assumption of $M_k \leq M_{k-1}$, this can be achieved easily by dropping elements from ${\bm{\epsilon}}$. Take down-sampling ($M_{k-1}=4M_k$) as an example: we can directly drop $3$ out of every $2\times 2$ block of values from ${\bm{\epsilon}}$. More details are included in Appendix~\ref{sec.noise_at_boundary}. The second requirement, a smooth transition, is not as straightforward as it looks: it asks the ``noisiness'' of the latents ${\bm{z}}$ to remain unchanged across the boundary. We argue that the conventional measure in the {DM} literature -- the signal-to-noise ratio (SNR) -- is not compatible with resolution changes, as it averages the signal/noise power element-wise. In this work, we propose a generalized \textit{resolution-agnostic} SNR by viewing data as points sampled from a continuous field: \begin{equation} \texttt{SNR}({\bm{z}}) = \frac{\mathbb{E}_{\Omega\sim I}\|\mathbb{E}_{i\sim\Omega}\texttt{SIGNAL}({\bm{z}})\|^2} {\mathbb{E}_{\Omega\sim I}\|\mathbb{E}_{i\sim\Omega}\texttt{NOISE}({\bm{z}})\|^2}, \label{eq.snr} \end{equation} where $I$ is the data range, and $\Omega$ is the minimal patch of interest relative to $I$, which is invariant to different sampling rates (resolutions). As shown in Figure~\ref{fig:rescaling} (left), we can obtain a reliable measure of noisiness by averaging the signal/noise inside patches. We derive $\alpha_{\tau}, \sigma_{\tau}$ from $\alpha_{\tau^-}, \sigma_{\tau^-}$ for any transformation by forcing $\texttt{SNR}({\bm{z}}_{\tau}) =\texttt{SNR}({\bm{z}}_{\tau^-})$ under this new definition. Specifically, if the dimensionality change is solely caused by a change of sampling rate (e.g., down-sampling, averaging RGB channels, deconvolution), we can get the following relation: \begin{equation} \vspace{-4pt} \left.{\alpha_{\tau}^2} \middle/ {\sigma_{\tau}^2}\right.
= d_k \cdot \gamma_k \cdot \left.{\alpha_{\tau^-}^2}\middle/{\sigma_{\tau^-}^2}\right., \label{eq.rescaling} \end{equation} where $d_k = M_{k-1} / M_k$ is the total dimension change, and $\gamma_k = \mathbb{E}||\hat{{\bm{x}}}^{k-1}||^2 / \mathbb{E}||{\bm{x}}^k||^2$ is the change of signal power. For example, we have $d_k=4, \gamma_k\approx 1$ for down-sampling. Following \Eqref{eq.rescaling}, the straightforward rule is to rescale the magnitude of the noise and keep the signal part unchanged: $\alpha \leftarrow \alpha, \sigma \leftarrow \sigma /{\sqrt{d_k}}$, which we refer to as signal-preserved (SP) rescaling. Note that, to ensure the noise schedule is continuous over time and close to the original schedule, such rescaling is applied to the noise of the entire stage and accumulates when multiple transformations are used. As shown in the comparison in Figure~\ref{fig:rescaling} (right), the resulting images are visually closer to those of the standard {DM}. However, the variance of ${\bm{z}}_t$ becomes very small, especially when $t\rightarrow 1$, which might be hard for the neural networks to distinguish. Therefore, we propose the variance-preserved (VP) alternative, which further normalizes the rescaled $\alpha, \sigma$ so that $\alpha^2+\sigma^2=1$. We show the visualization in Figure~\ref{fig:rescaling} (right). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/pipeline.png} \caption{An illustration of the training pipeline. } \label{fig:pipeline} \end{figure} \vspace{-5pt} \paragraph{Training} We train a neural network $\theta$ to denoise. An illustration of the training pipeline is shown in Figure~\ref{fig:pipeline}. In {$f$-DM}, noise is caused by two factors: (1) the perturbation ${\bm{\epsilon}}$ from noise injection; (2) the degradation $\bm{\delta}$ due to signal transformation. Thus, we propose to predict ${{\bm{x}}}_\theta$ and $\bm{\delta}_\theta$ jointly, which simultaneously removes both corruptions from ${\bm{z}}_t$ with a ``double reconstruction'' loss: \begin{equation} \mathcal{L}_\theta = \mathbb{E}_{{\bm{z}}_t\sim q({\bm{z}}_t | {\bm{x}}), t \sim [0, 1]}\left[\omega_t\cdot \left( \|{{\bm{x}}}_\theta({\bm{z}}_t, t) - {{\bm{x}}}_t\|_2^2 + \|\bm{\delta}_\theta({\bm{z}}_t, t) - \bm{\delta}_t\|_2^2 \right)\right], \label{eq.learn_double} \end{equation} where the denoised output is ${{\bm{x}}}_\theta({\bm{z}}_t, t)+\bm{\delta}_\theta({\bm{z}}_t, t)$. Unlike in standard {DM}s, the denoising goals are the transformed signals of each stage rather than the final real images, which are generally simpler targets to recover. As in standard {DM}s, we also choose to predict ${\bm{\epsilon}}_\theta$ and compute ${\bm{x}}_\theta=({\bm{z}}_t - \sigma_t{\bm{\epsilon}}_\theta)/\alpha_t$. We adopt the same U-Net architecture for all stages, where the input ${\bm{z}}_t$ is directed to the corresponding inner layer based on its spatial resolution (see Appendix Figure~\ref{fig:architecture} for details). \vspace{-5pt} \paragraph{Unconditional Generation} We present the generation steps in Algorithm~\ref{alg.sampling}, where ${{\bm{x}}}_t$ and $\bm{\delta}_t$ are replaced by the model's predictions ${{\bm{x}}}_\theta, \bm{\delta}_\theta$. Thanks to the interpolation formulation (\Eqref{eq.marginal_multi}), generation is independent of the transformations ${\bm{f}}$. Only the inverse mappings ${\bm{g}}$ -- which might be simple and easy to compute -- are needed to map the signals at boundaries. This brings flexibility and efficiency to learning complex or even test-time inaccessible transformations.
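To illustrate how the predictions enter a single within-stage reverse step, the following is a minimal Python sketch; it is our paraphrase combining the relation ${\bm{x}}_t-{\bm{x}}_s=-\bm{\delta}_t\cdot(t-s)/(t-\tau_k)$ with the DDPM/DDIM-style update of \Eqref{eq.dpm_sampling}, not a verbatim excerpt of Algorithm~\ref{alg.sampling}, and all identifiers are ours.
\begin{verbatim}
import torch

def reverse_step(z_t, t, s, tau_k, model, alpha, sigma, eta=1.0):
    # One within-stage reverse step (tau_k <= s < t); the boundary
    # noise-resampling discussed next is intentionally omitted.
    # model(z_t, t) -> (x_pred, delta_pred, eps_pred)
    x_pred, delta_pred, eps_pred = model(z_t, t)
    # Shift the predicted signal from time t back to time s using
    #   x_t - x_s = -delta_t * (t - s) / (t - tau_k):
    x_s = x_pred + delta_pred * (t - s) / (t - tau_k)
    a_s, s_s, s_t = alpha(s), sigma(s), sigma(t)
    a_ts = alpha(t) / a_s                    # alpha_{t|s}
    s_ts = (s_t**2 - a_ts**2 * s_s**2) ** 0.5
    sbar = s_s * s_ts / s_t                  # DDPM/DDIM noise scale
    return (a_s * x_s
            + (s_s**2 - (eta * sbar)**2) ** 0.5 * eps_pred
            + eta * sbar * torch.randn_like(z_t))
\end{verbatim}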
In addition, Algorithm~\ref{alg.sampling} includes a ``noise-resampling step'' at each stage boundary, which is the reverse process for $\zeta({\bm{\epsilon}})$ in \Eqref{eq.zeta}. While $\zeta$ is deterministic, the reverse process needs additional randomness. For instance, if $\zeta$ drops elements in the forward process, then the reverse step should inject standard Gaussian noise back at the dropped locations. Because we assume $M_0\geq\ldots\geq M_K$, we propose to sample a full-size noise ${\bm{\epsilon}}_\mathrm{full}$ before generation, and to gradually add subsets of ${\bm{\epsilon}}_\mathrm{full}$ at each stage. Thus, ${\bm{\epsilon}}_\mathrm{full}$ encodes multi-scale information similar to RealNVP~\citep[][]{dinh2016density}. \vspace{-5pt} \paragraph{Conditional Generation} Given an unconditional {$f$-DM}, we can do conditional generation by replacing the denoised output ${\bm{x}}_\theta$ with any condition ${\bm{x}}_c$ at a suitable time ($T$), and starting diffusion from $T$. For example, if ${\bm{f}}$ is \texttt{downsample} and ${\bm{x}}_c$ is a low-resolution image, {$f$-DM} enables super-resolution (SR) without additional training. To achieve that, it is critical to initialize ${\bm{z}}_T$, which implicitly asks ${\bm{z}}_T\approx\alpha_{T}{\bm{x}}_c + \sigma_{T}{{\bm{\epsilon}}}_\theta({\bm{z}}_T)$. In practice, we choose $T$ to be the corresponding stage boundary, and initialize ${\bm{z}}_T$ by adding random noise $\sigma_{T}{\bm{\epsilon}}$ to $\alpha_{T}{\bm{x}}_c$. A \textit{gradient-based} method is used to iteratively update ${\bm{z}}_T\leftarrow{\bm{z}}_T - \lambda\nabla_{{\bm{z}}_T}\|{\bm{x}}_\theta({\bm{z}}_T)-{\bm{x}}_c\|^2_2$ for a few steps before the diffusion starts. \input{sections/sampling_algorithm.tex} \subsection{Applications on Various Transformations} \label{sec.application} With the definition in \Cref{sec.define}, we now show {$f$-DM} applied to different transformations. In this paper, we consider the following three categories of transformations: \begin{itemize}[leftmargin=*] \vspace{-2pt} \item \textbf{Downsampling} \ \ As in the motivating example of \Cref{sec.define}, we let ${\bm{f}}$ be a sequence of \texttt{downsample} operations that transform a given image (e.g., $256^2$) progressively down to $16^2$, where each $f_k(.)$ reduces the side length by a factor of $2$, and correspondingly each $g_k(.)$ upsamples by $2$. Thus, the generation starts from low-resolution noise and progressively performs super-resolution. We denote the model as {$f$-DM}-DS, where $d_k=4, \gamma_k=1$ in \Eqref{eq.rescaling} and $K=4$ for $256^2$ images. \item \textbf{Blurring} \ \ {$f$-DM} also supports general \texttt{blur} transformations. Unlike recent works~\citep{rissanen2022generative,hoogeboom2022blurring} that focus on continuous-time blur (heat dissipation), \Eqref{eq.marginal_multi} can be seen as an instantiation of a progressive blurring function if we treat $\hat{{\bm{x}}}^k$ as a blurred version of ${\bm{x}}^k$. This design brings more flexibility in choosing any kind of blurring function and in using the blurred versions as stages. In this paper, we experiment with two types of blurring functions: (1) {$f$-DM}-Blur-U, utilizing the same downsample operators as {$f$-DM}-DS, while always up-sampling the images back to the original sizes; (2) {$f$-DM}-Blur-G, applying standard Gaussian blurring kernels following~\cite{rissanen2022generative}. In both cases, we use $g_k({\bm{x}})={\bm{x}}$. As the dimension does not change, no rescaling or noise resampling is required.
\item \textbf{Image $\rightarrow$ Latent Trans.} \ \ We further consider diffusion with learned non-linear transformations such as VAEs (see Figure~\ref{fig:motivation}~(b), ${\bm{f}}$: VAE encoder, ${\bm{g}}$: VAE decoder). By inverting such an encoding process, we are able to generate data from a low-dimensional latent space, similar to~\citet[LDM,][]{rombach2021highresolution}. As a major difference, LDM operates only on the latent variables, while {$f$-DM} learns diffusion in the latent and image spaces jointly. Because of this, our performance is not bounded by the quality of the VAE decoder. In this paper, we consider VQVAE~\citep{van2017neural} together with its GAN variant~\citep[VQGAN,][]{esser2021taming}. For both cases, we transform $256^2\times3$ images into a $32^2\times4$ (i.e., $d_k=48$) latent space. The VQVAE encoder/decoder is trained on ImageNet~\citep{5206848} and is frozen for the rest of the experiments. For {$f$-DM}-VQGAN, we directly take the checkpoint provided by~\cite{rombach2021highresolution}. Besides, we need to tune $\gamma_k$ separately for each encoder due to the change in signal magnitude. \end{itemize} \vspace{-2pt} Generated examples with the above transformations are shown in Figure~\ref{fig:teaser}. We use the \textit{cosine} stage schedule for all DS- and Blur-based models, and the \textit{linear} schedule for the VQVAE/VQGAN models. \section{Introduction} Diffusion probabilistic models~\citep[{DM}s,][]{sohl2015deep,ho2020denoising,nichol2021improved} and score-based~\citep{song2020score} generative models have become increasingly popular as tools for high-quality image~\citep{dhariwal2021diffusion}, video~\citep{ho2022video}, text-to-speech~\citep{popov2021grad} and text-to-image~\citep{rombach2021highresolution,ramesh2022hierarchical,saharia2022photorealistic} synthesis. Despite the empirical success, conventional {DM}s are restricted to operate in the ambient space throughout the Gaussian noising process. On the other hand, common generative models like VAEs~\citep{kingma2013auto} and GANs~\citep{Goodfellow14,karras2021alias} employ a coarse-to-fine process that hierarchically generates high-resolution outputs. We are interested in combining the best of the two worlds: the expressivity of {DM}s and the benefit of hierarchical features. To this end, we propose $f$-DM, a generalized multi-stage framework of {DM}s that incorporates progressive transformations of the inputs. As an important property of our formulation, {$f$-DM} does not make any assumptions about the types of transformations. This makes it compatible with many possible designs, ranging from domain-specific ones to generic neural networks. In this work, we consider representative types of transformations, including down-sampling, blurring, and neural-based transformations. What these functions have in common is that they allow one to derive increasingly more global, coarse, and/or compact representations, which we believe can lead to better sampling quality as well as reduced computation. Incorporating arbitrary transformations into {DM}s also brings immediate modeling challenges. For instance, certain transformations destroy the information drastically, and some might also change the dimensionality. For the former, we derive an interpolation-based formulation to smoothly bridge consecutive transformations. For the latter, we verify the importance of rescaling the noise level, and propose a \textit{resolution-agnostic} signal-to-noise ratio (SNR) as a practical guideline for noise rescaling.
Extensive experiments are performed on image generation benchmarks, including FFHQ, AFHQ, LSUN Bed/Church and ImageNet. {$f$-DM}s consistently match or outperform the baseline performance, while requiring relatively less computation thanks to the progressive transformations. Furthermore, given a pre-trained {$f$-DM}, we can readily manipulate the learned latent space, and perform conditional generation tasks (e.g., super-resolution) without additional training. \section{Background} \begin{wrapfigure}{R}{0.5\textwidth} \centering \vspace{-10pt} \includegraphics[width=\linewidth]{figures/motivation_v4} \caption{(a) the standard {DM}s; (b) a bottom-up hierarchical VAE; (c) our proposed {$f$-DM}.} \label{fig:motivation} \end{wrapfigure} \textbf{Diffusion Models}~\citep[DMs,][]{sohl2015deep,song2019generative,ho2020denoising} are deep generative models defined by a Markovian Gaussian process. In this paper, we consider diffusion in continuous time, similar to~\cite{song2020score,kingma2021variational}. Given a datapoint ${\bm{x}} \in \mathbb{R}^N$, a {DM} models time-dependent latent variables ${\bm{z}}=\{{\bm{z}}_t | t\in [0, 1], {\bm{z}}_0={\bm{x}}\}$ based on a fixed signal-noise schedule $\{\alpha_t, \sigma_t\}$: \begin{equation*} q({\bm{z}}_t | {\bm{z}}_s) = \mathcal{N}({\bm{z}}_t; \alpha_{t|s}{\bm{z}}_s, \sigma^2_{t|s}I), \end{equation*} where $\alpha_{t|s} = \alpha_t/\alpha_s, \sigma^2_{t|s}=\sigma_t^2-\alpha_{t|s}^2\sigma_s^2$, $s < t$. It also defines the marginal distribution $q({\bm{z}}_t|{\bm{x}})$ as: \begin{equation*} q({\bm{z}}_t | {\bm{x}}) = \mathcal{N}({\bm{z}}_t;\alpha_t{\bm{x}}, \sigma_t^2I). \label{eq.standard_marginal} \end{equation*} By default, we assume the variance preserving form~\citep{ho2020denoising}. That is, $\alpha^2_t+\sigma_t^2=1, \alpha_0 = \sigma_1 = 1$, and the signal-to-noise ratio (SNR, ${\alpha_t^2}/{\sigma_t^2}$) decreases monotonically with $t$. For generation, a parametric function $\theta$ is optimized to reverse the diffusion process by denoising ${\bm{z}}_t=\alpha_t{\bm{x}}+\sigma_t{\bm{\epsilon}}$ to the clean input ${\bm{x}}$, with a weighted reconstruction loss $\mathcal{L}_\theta$. For example, the ``simple loss'' proposed in \cite{ho2020denoising} is equivalent to weighting the residuals by $\omega_t = {\alpha_t^2}/{\sigma_t^2}$: \begin{equation} \mathcal{L}_\theta = \mathbb{E}_{{\bm{z}}_t\sim q({\bm{z}}_t | {\bm{x}}), t \sim [0, 1]}\left[\omega_t\cdot\|{\bm{x}}_\theta({\bm{z}}_t, t) - {\bm{x}}\|_2^2\right]. \label{eq.learn_dpm} \end{equation} In practice, $\theta$ is parameterized as a U-Net~\citep{ronneberger2015u}. As suggested in \cite{ho2020denoising}, predicting the noise ${{\bm{\epsilon}}}_\theta$ empirically achieves better performance than predicting ${{\bm{x}}}_\theta$, where ${\bm{x}}_\theta({\bm{z}}_t,t) = ({\bm{z}}_t - \sigma_t{\bm{\epsilon}}_\theta({\bm{z}}_t,t))/\alpha_t$. Sampling from such a learned model can be performed with ancestral sampling~\citep[DDPM,][]{ho2020denoising} or with a deterministic DDIM sampler~\citep{song2021denoising}.
Sampling from such a learned model can be performed via ancestral sampling~\citep[DDPM,][]{ho2020denoising} or a deterministic DDIM sampler~\citep{song2021denoising}. Starting from ${\bm{z}}_1\sim \mathcal{N}(\bm{0}, I)$, a sequence of timesteps $1 = t_0 > \ldots > t_N = 0$ is sampled for iterative generation, and we can summarize both methods for each step as follows: \begin{equation} {\bm{z}}_s = \alpha_s\cdot{\bm{x}}_\theta({\bm{z}}_t) + \sqrt{\sigma_s^2 - \eta^2\bar{\sigma}^2}\cdot{\bm{\epsilon}}_\theta({\bm{z}}_t) + \eta\bar{\sigma}\cdot{\bm{\epsilon}}, \ \ \ {\bm{\epsilon}} \sim \mathcal{N}(\bm{0}, I), \ \ \ s < t, \label{eq.dpm_sampling} \end{equation} where $\bar{\sigma} = \sigma_s \sigma_{t|s}/ \sigma_t$, and $\eta$ controls the proportion of additional noise (DDIM corresponds to $\eta=0$). As the score function ${\bm{\epsilon}}_\theta$ is defined in the ambient space, all the latent variables ${\bm{z}}$ are forced to have the same shape as the input data ${\bm{x}}$ ($\mathbb{R}^N$). This not only leads to inefficient training, especially for steps with a high noise level~\citep{jing2022subspace}, but also makes it hard for {DM}s to learn an abstract and semantically meaningful latent space, as pointed out by~\citet{preechakul2022diffusion}. \textbf{Variational Autoencoders}~\citep[VAEs,][]{kingma2013auto} are a broader class of generative models with latent variables, and {DM}s can be seen as a special case with a fixed encoder and ``infinite'' depth. In contrast, the encoder ($q_\phi({\bm{z}}|{\bm{x}})$) and decoder ($p_\theta({\bm{x}},{\bm{z}})$) of VAEs are optimized jointly for a variational lower bound similar to that of {DM}s. Unlike {DM}s, there are no restrictions on the dimensions of the latent space. Therefore, VAEs are widely known as tools for learning semantically meaningful representations in various domains. Figure~\ref{fig:motivation}(a,b) shows a comparison between {DM}s and ``bottom-up style'' hierarchical VAEs. Recently, deep VAEs~\citep{vahdat2020nvae,child2020very} have made progress on high-quality generation with top-down architectures; however, the visual outputs still show gaps relative to other generative models such as GANs or {DM}s. Inspired by hierarchical VAEs, the goal of this work is to incorporate hierarchical structures into DMs, enabling high-quality generation with signal transformations in the synthesis process. \section{Related Work} \paragraph{Progressive Generation with {DM}s} Conventional {DM}s generate images at a single resolution. Therefore, existing work generally adopts \textit{cascaded} approaches~\citep{nichol2021improved,ho2022cascaded,saharia2022photorealistic} that chain a series of conditional {DM}s to generate coarse-to-fine, and such cascades have also been used in super-resolution~\citep[SR3,][]{saharia2022image}. However, cascaded models tend to suffer from error-propagation problems. More recently, \cite{ryu2022pyramidal} dropped the need for conditioning, and proposed to generate images in a pyramidal fashion with additional reconstruction guidance; \cite{jing2022subspace} explored learning subspace DMs and connecting them to the full space with Langevin dynamics. By contrast, the proposed {$f$-DM} is distinct from all the above types in that it requires only one diffusion process, and the images get naturally up-sampled through reverse diffusion. \vspace{-5pt} \paragraph{Blurring {DM}s} Several concurrent studies~\citep{rissanen2022generative,daras2022soft,lee2022progressive} have recently looked into {DM} alternatives that combine blurring with the diffusion process, some of which also showed the possibility of deterministic generation~\citep{bansal2022cold}.
Although sharing similarities, our work starts from a different view based on signal transformations. Furthermore, our empirical results also show that stochasticity plays a critical role in high-quality generation. \vspace{-5pt} \paragraph{Latent Space {DM}s} Existing work has also investigated combining {DM}s with standard latent variable models. To the best of our knowledge, most of these works adopt {DM}s for learning the prior of the latent space, where sampling is followed by a pre-trained~\citep{rombach2021highresolution} or jointly optimized~\citep{vahdat2021score} decoder. Conversely, {$f$-DM} does not rely on the quality of the decoder. \section{Experiments} \subsection{Experimental Settings} \paragraph{Datasets} We evaluate {$f$-DM}s on five commonly used benchmarks testing generation on a range of domains: FFHQ~\citep[][]{karras2019style}, AFHQ~\citep[][]{choi2020stargan}, LSUN Church \& Bed~\citep[][]{yu2015lsun}, and ImageNet~\citep[][]{5206848}. All images are center-cropped and resized to $256\times 256$. \vspace{-5pt} \paragraph{Training Details} We implement the three types of transformations with the same architecture and hyper-parameters except for the stage-specific adapters. We adopt a lighter version of ADM~\citep{dhariwal2021diffusion} as the main U-Net architecture. For all experiments, we adopt the same training scheme using the AdamW~\citep{kingma2014adam} optimizer with a learning rate of \num{2e-5} and an EMA decay factor of $0.9999$. We set the weight $\omega_t=\mathrm{sigmoid}(-\log(\alpha^2_t/\sigma^2_t))$ following P2-weighting~\citep{choi2022perception} (a short sketch of this weighting is given at the end of this subsection). The cosine noise schedule $\alpha_t=\cos(0.5\pi t)$ is adopted for diffusion in the $256^2\times 3$ image space. As proposed in \Eqref{eq.rescaling}, noise rescaling (VP by default) is applied for {$f$-DM}s when the resolution changes. All our models are trained with a batch size of $32$ images for $500$K (FFHQ, AFHQ, LSUN Church), $1.2$M (LSUN Bed) and $2.5$M (ImageNet) iterations, respectively. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/samplesv6} \caption{$\uparrow$ Random samples from {$f$-DM}-DS trained on various datasets; $\downarrow$ Comparison of {$f$-DM}s and the corresponding baselines under various transformations. Best viewed when zoomed in. All faces presented are synthesized by the models, and are not real identities.} \label{fig:gen_examples} \end{figure} \vspace{-5pt} \paragraph{Baselines \& Evaluation} We compare {$f$-DM}s against a standard DM~\citep[DDPM,][]{ho2020denoising} on all five datasets. To ensure a fair comparison, we train DDPM following the same settings and continuous-time formulation as our approaches. We also include transformation-specific baselines: (1) we re-implement the cascaded {DM}~\citep[Cascaded,][]{ho2022cascaded} to adapt the {$f$-DM}-DS setup from $16^2$ progressively to $256^2$, where for each stage a separate {DM} is trained conditioned on the consecutive downsampled image; (2) we re-train a latent-diffusion model~\citep[LDM,][]{rombach2021highresolution} on the latents extracted from our pretrained VQVAE; (3) to compare with {$f$-DM}-Blur-G, we include the scores and synthesized examples of IHDM~\citep{rissanen2022generative}. We set $250$ timesteps ($\Delta t=0.004$) for {$f$-DM}s and the baselines with $\eta=1$ (Algorithm~\ref{alg.sampling}). We use the Fr\'echet Inception Distance~\citep[FID,][]{heusel2017gans} and Precision/Recall~\citep[PR,][]{kynkaanniemi2019improved} as measures of visual quality, based on $50$K samples and the entire training set.
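As referenced in the training details above, the P2-style weight admits a simple closed form, since $\mathrm{sigmoid}(-\log x) = 1/(1+x)$ (a minimal sketch; variable names are ours):
\begin{verbatim}
import math

def p2_weight(alpha_t, sigma_t):
    # sigmoid(-log SNR) = 1 / (1 + SNR): down-weights low-noise
    # (high-SNR) timesteps relative to the "simple" objective.
    snr = (alpha_t / sigma_t) ** 2
    return 1.0 / (1.0 + snr)

# e.g., with the cosine schedule alpha_t = cos(0.5 * pi * t):
t = 0.3
alpha = math.cos(0.5 * math.pi * t)
print(p2_weight(alpha, math.sqrt(1.0 - alpha ** 2)))
\end{verbatim}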
\subsection{Results} \paragraph{Qualitative Comparison} To demonstrate the capability of handling various complex datasets, Figure~\ref{fig:gen_examples} ($\uparrow$) presents an uncurated set of images generated by {$f$-DM}-DS. We show more samples from all types of {$f$-DM}s in Appendix~\ref{sec.additional_results}. We also show a comparison between {$f$-DM}s and the baselines with various transformations on FFHQ (Figure~\ref{fig:gen_examples} $\downarrow$). Our methods consistently produce better visual results with more coherence and without noticeable artifacts. \input{tables/main_comparison.tex} \begin{figure}[t] \centering \vspace{-5pt} \includegraphics[width=0.95\linewidth]{figures/cond_gen_v3} \vspace{-2pt} \caption{Random DDIM samples ($\eta=0$) from (a) {$f$-DM}s on AFHQ and LSUN-Church given \{downsampled, blurred, latent\} images as conditions; (b) {$f$-DM}-VQVAE by interpolating the initial noise of the latent stage; (c) {$f$-DM}-DS starting from the same initial noise of the $16\times 16$ stage. For (c), we also show the ``mean image'' of $300$ random samples using the same initial noise.} \label{fig:cond_gen} \end{figure} \vspace{-5pt} \paragraph{Quantitative Comparison} We measure the generation quality (FID and precision/recall) and relative inference speed of {$f$-DM}s and the baselines in Table~\ref{tab.comparison}. Across all five datasets, {$f$-DM}s consistently achieve similar or even better results than the DDPM baselines, while gaining nearly $2\times$ inference speed-ups for {$f$-DM}-\{DS, VQVAE, VQGAN\} due to the nature of the transformations. As a comparison, using fewer timesteps (DDPM 1/2) greatly hurts the generation quality of DDPM. We also show comparisons with transformation-specific baselines on FFHQ \& AFHQ. \vspace{-5pt} \paragraph{v.s. Cascaded DMs} Although cascaded DMs have been shown effective in the literature~\citep{nichol2021improved,ho2022cascaded}, applying cascades over a sequence of consecutive resolutions ($16\rightarrow32\rightarrow64\rightarrow\ldots$) as we do remains underexplored. In such cases, prediction errors easily accumulate during generation, yielding serious artifacts at the final resolution. To ease this, \cite{ho2022cascaded} proposed to apply ``noise conditioning augmentation'', which reduces the domain gap between stages by adding random noise to the input condition. However, it is not straightforward to tune the noise level for both training and inference. By contrast, {$f$-DM} is non-cascaded by design, and there are no domain gaps between stages. That is, we can train our model end-to-end without worrying about additional tuning parameters, and achieve stable results. \vspace{-5pt} \paragraph{v.s. LDMs} We show comparisons with LDMs~\citep{rombach2021highresolution} in Table~\ref{tab.comparison}. LDMs generate more efficiently as the diffusion happens only in the latent space. However, the generation is heavily biased by the behavior of the fixed decoder. For instance, it is challenging for VQVAE decoders to synthesize sharp images, which leads to low scores in Table~\ref{tab.comparison}. In contrast, LDM with a VQGAN decoder is able to generate sharp details, which are typically favored by the InceptionV3~\citep{szegedy2016rethinking} features used in FID and PR. Therefore, despite having artifacts (see Figure~\ref{fig:gen_examples}, below, right-most) in the output, LDMs (GAN) still obtain good scores.
In contrast, {$f$-DM}, as a pure {DM}, naturally bridges the latent and image spaces, where the generation is not restricted by the decoder. \vspace{-5pt} \paragraph{v.s. Blurring DMs} Table~\ref{tab.comparison} compares with a recently proposed blurring-based method~\citep[IHDM,][]{rissanen2022generative}. Different from our approach, IHDM formulates a fully deterministic forward process. We conjecture that this lack of randomness is the cause of its poor generation quality. Instead, {$f$-DM} provides a natural way of incorporating blurring with stochastic noise, yielding better quantitative and qualitative results. \vspace{-5pt} \paragraph{Conditional Generation} In Figure~\ref{fig:cond_gen}(a), we demonstrate using pre-trained {$f$-DM}s to perform conditional generation based on the learned transformations. We downsample and blur the sampled real images, and start the reverse diffusion following \Cref{sec.define} with {$f$-DM}-DS and -Blur-U, respectively. Despite differences in fine details, both our models faithfully generate high-fidelity outputs close to the real images. The same algorithm is applied to the extracted latent representations. Compared with the original VQVAE output, {$f$-DM}-VQVAE is able to obtain better reconstructions. We provide additional conditional generation samples with an ablation of the ``gradient-based'' initialization method in Appendix~\ref{sec.conditional_generation}. \vspace{-5pt} \paragraph{Latent Space Manipulation} To demonstrate that {$f$-DM}s learn certain abstract representations by modeling with signal transformations, we show results of latent manipulation in Figure~\ref{fig:cond_gen}. Here we assume DDIM sampling ($\eta=0$), so the only stochasticity comes from the initially sampled noise ${\bm{\epsilon}}_\mathrm{full}$. In (b), we obtain a semantically smooth transition between two cat faces when linearly interpolating the low-resolution noises; on the other hand, we show samples of the same identity with different fine details (e.g., expressions, poses) in (c), which is achieved easily by sampling {$f$-DM}-DS with the low-resolution ($16^2$) noise fixed. This implies that {$f$-DM} is able to allocate high-level and fine-grained information to different stages via learning with downsampling. \subsection{Ablation Studies} \input{tables/ablation.tex} Table~\ref{tab:ablation} presents an ablation of the key design choices. As expected, the interpolation formulation (\Eqref{eq.marginal_multi}) effectively bridges the information gap between stages, without which prediction errors accumulate, resulting in blurry outputs and poor scores. Table~\ref{tab:ablation} also demonstrates the importance of applying the correct scaling. For both models, rescaling improves the FID and recall by large margins, with SP working slightly worse than VP. In addition, we also empirically explore the effect of different stage schedules. Compared to VAE-based models, we usually have more stages in DS/Blur-based models to generate high-resolution images. The \textit{cosine} schedule helps diffusion move faster in regions with low information density (e.g., low-resolution, heavily blurred). \section*{Ethics Statement} Our work focuses on technical development, i.e., synthesizing high-quality images with a range of signal transformations (e.g., downsampling, blurring). Our approach has various applications, such as movie post-production, gaming, helping artists reduce workload, and generating synthetic data as training data for other computer vision tasks.
Our approach can be used to synthesize human-related images (e.g., faces), and it is not biased towards any specific gender, race, region, or social class. However, the ability of generative models, including our approach, to generate high-quality images that are indistinguishable from real images raises concerns about the misuse of these methods, e.g., for generating fake images. To address these concerns, we need to mark all generated results as ``synthetic''. In addition, we believe it is crucial to have authenticity assessment, such as fake image detection and identity verification, which will alleviate the potential for misuse. We hope our approach can be used to foster the development of technologies for authenticity assessment. Finally, we believe that creating a set of appropriate regulations and laws would significantly reduce the risks of misuse while bolstering positive effects on technology development. \section*{Reproducibility Statement} We assure that all the results shown in the paper and supplemental materials can be reproduced. We believe we have provided enough implementation details in the paper and supplemental materials for the readers to reproduce the results. Furthermore, we will open-source our code together with pre-trained checkpoints upon the acceptance of the paper. \section*{APPENDIX} \label{app} \section{Detailed Derivation of {$f$-DM}s} \subsection{$q(\mathbf{z}_t|\mathbf{z}_s,\mathbf{x})$} We derive the definition in \Eqref{eq.transition_multi} with the change-of-variable trick, given the fact that ${{\bm{x}}}_t, {{\bm{x}}}_s$ and ${\bm{x}}^k$ are all deterministic functions of ${\bm{x}}$. More precisely, suppose ${\bm{z}}_t\sim\mathcal{N}(\alpha_t{{\bm{x}}}_t, \sigma^2_tI), {\bm{z}}_s\sim\mathcal{N}(\alpha_s{{\bm{x}}}_s, \sigma^2_sI)$, where $\tau_{k}\leq s < t <\tau_{k+1}$. Thus, it is equivalent to have ${\bm{u}}_t \sim \mathcal{N}(\alpha_t{\bm{x}}^k, \sigma^2_tI)$, ${\bm{u}}_s \sim \mathcal{N}(\alpha_s{\bm{x}}^k, \sigma^2_sI)$, ${\bm{u}}_t = {\bm{z}}_t-\alpha_t({{\bm{x}}}_t - {\bm{x}}^k), {\bm{u}}_s = {\bm{z}}_s-\alpha_s({{\bm{x}}}_s - {\bm{x}}^k)$. From the above definition, it is reasonable to assume that ${\bm{u}}_t, {\bm{u}}_s$ follow the standard {DM} transition, which means that: \begin{equation*} \begin{split} {\bm{u}}_t &= \alpha_{t|s}{\bm{u}}_s + \sigma_{t|s}{\bm{\epsilon}}, \ \ {\bm{\epsilon}}\sim \mathcal{N}(\bm{0}, I) \\ \Rightarrow {\bm{z}}_t - \alpha_t({{\bm{x}}}_t - {\bm{x}}^k) &= \alpha_{t|s}\left( {\bm{z}}_s - \alpha_s({{\bm{x}}}_s - {\bm{x}}^k) \right) + \sigma_{t|s}{\bm{\epsilon}}, \ \ {\bm{\epsilon}}\sim \mathcal{N}(\bm{0}, I) \\ \Rightarrow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\bm{z}}_t &= \alpha_{t|s}{\bm{z}}_s + \alpha_t({{\bm{x}}}_t - {{\bm{x}}}_s) + \sigma_{t|s}{\bm{\epsilon}}, \ \ {\bm{\epsilon}}\sim \mathcal{N}(\bm{0}, I) \end{split} \end{equation*} Since typically ${{\bm{x}}}_t \neq {{\bm{x}}}_s$, and both ${{\bm{x}}}_t, {{\bm{x}}}_s$ are functions of ${\bm{x}}^k$, ${\bm{z}}_t$ depends on both ${\bm{z}}_s$ and ${\bm{x}}^k=f_{0:k}({\bm{x}})$, resulting in a non-Markovian transition: \begin{equation*} \begin{split} q({\bm{z}}_t|{\bm{z}}_s, {\bm{x}}) = \mathcal{N}({\bm{z}}_t; \alpha_{t|s}{\bm{z}}_s + \alpha_t\cdot \left({{\bm{x}}}_t - {{\bm{x}}}_s \right),\sigma^2_{t|s}I), \end{split} \end{equation*} Note that this equation holds only when ${{\bm{x}}}_t, {{\bm{x}}}_s$ and ${\bm{x}}^k$ are in the same space, and that we make no specific assumptions about the form of ${{\bm{x}}}_t$.
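This consistency can also be checked numerically: composing $q({\bm{z}}_s|{\bm{x}})$ with the transition above recovers the marginal $\mathcal{N}(\alpha_t{{\bm{x}}}_t, \sigma_t^2I)$. Below is a minimal Monte-Carlo sketch with scalar signals (all values are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a_s, a_t = 0.9, 0.7                            # alpha_s, alpha_t (s < t)
s_s, s_t = np.sqrt(1 - a_s**2), np.sqrt(1 - a_t**2)
a_ts = a_t / a_s                               # alpha_{t|s}
s_ts = np.sqrt(s_t**2 - a_ts**2 * s_s**2)      # sigma_{t|s}
x_s, x_t = 1.3, 0.8                            # deformed signals (scalars here)

n = 1_000_000
z_s = a_s * x_s + s_s * rng.standard_normal(n)       # q(z_s | x)
z_t = a_ts * z_s + a_t * (x_t - x_s) + s_ts * rng.standard_normal(n)

print(z_t.mean(), a_t * x_t)                   # both approx alpha_t * x_t
print(z_t.std(), s_t)                          # both approx sigma_t
\end{verbatim}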
\subsection{$q(\mathbf{z}_s|\mathbf{z}_t,\mathbf{x})$} The reverse diffusion distribution follows Bayes' Theorem: $q({\bm{z}}_s|{\bm{z}}_t, {\bm{x}})\propto q({\bm{z}}_s|{\bm{x}})q({\bm{z}}_t|{\bm{z}}_s,{\bm{x}})$, where both $q({\bm{z}}_s|{\bm{x}})$ and $q({\bm{z}}_t|{\bm{z}}_s,{\bm{x}})$ are Gaussian distributions with the general forms $\mathcal{N}({\bm{z}}_s|\bm{\mu}, \sigma^2I)$ and $\mathcal{N}({\bm{z}}_t|A{\bm{z}}_s+{\bm{b}}, \sigma'^2I)$, respectively. Based on \cite{bishop2006pattern}~(2.116), we can derive: \begin{equation*} q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}}) = \mathcal{N}({\bm{z}}_s|\bar{\sigma}^{-2}\left( \sigma'^{-2}A^\top({\bm{z}}_t - {\bm{b}}) + \sigma^{-2}\bm{\mu} \right), \bar{\sigma}^2I), \end{equation*} where $\bar{\sigma}^2=(\sigma^{-2}+\sigma'^{-2} \|A\|^2)^{-1}$. Plugging our variables $\bm{\mu}=\alpha_s{{\bm{x}}}_s$, $\sigma=\sigma_s$, $A = \alpha_{t|s}I$, ${\bm{b}}=\alpha_t\cdot ({{\bm{x}}}_t-{{\bm{x}}}_s)$, $\sigma'=\sigma_{t|s}$ into the above equation, we get: \begin{equation*} q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}}) = \mathcal{N}({\bm{z}}_s|\alpha_s{{\bm{x}}}_s + \sqrt{\sigma_s^2-\bar{\sigma}^2}{\bm{\epsilon}}_t, \bar{\sigma}^2I), \label{eq.reverse0} \end{equation*} where ${\bm{\epsilon}}_t = ({\bm{z}}_t - \alpha_t{{\bm{x}}}_t) / \sigma_t$ and $\bar{\sigma}={\sigma_s\sigma_{t|s}}/{\sigma_t}$. Alternatively, if we assume ${{\bm{x}}}_t$ takes the interpolation formulation in \Eqref{eq.marginal_multi}, we can also re-write ${{\bm{x}}}_s$ as ${{\bm{x}}}_t + \frac{t-s}{t-\tau_k}\bm{\delta}_t$, where we define a new variable $\bm{\delta}_t={\bm{x}}^k - {{\bm{x}}}_t$. As stated in the main text (\Cref{sec.define}), this change allows $q({\bm{z}}_s|{\bm{z}}_t, {\bm{x}})$ to avoid computing ${{\bm{x}}}_s$, which may be potentially costly. In this way, we re-write the above equation as follows: \begin{equation} q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}}) = \mathcal{N}({\bm{z}}_s| \alpha_s ({{\bm{x}}}_t + \bm{\delta}_t\cdot (t-s)/(t-\tau_k)) + \sqrt{\sigma_s^2-\bar{\sigma}^2}{\bm{\epsilon}}_t, \bar{\sigma}^2I). \label{eq.reverse} \end{equation} \subsection{Diffusion inside stages} At inference time, we generate data by iteratively sampling from the conditional distribution $p({\bm{z}}_s|{\bm{z}}_t) = \mathbb{E}_{{\bm{x}}}\left[q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}})\right]$ based on \Eqref{eq.reverse}. In practice, the expectation over ${\bm{x}}$ is approximated by our model's prediction. As shown in \Eqref{eq.learn_double}, in this work we propose a ``double-prediction'' network $\theta$ that reads ${\bm{z}}_t$ and simultaneously predicts ${{\bm{x}}}_t$ and $\bm{\delta}_t$ with ${{\bm{x}}}_\theta$ and $\bm{\delta}_\theta$, respectively. The predicted Gaussian noise is denoted as ${{\bm{\epsilon}}}_\theta=({\bm{z}}_t - \alpha_t{{\bm{x}}}_\theta) / \sigma_t$. Note that the predictions ${{\bm{x}}}_\theta$ and ${{\bm{\epsilon}}}_\theta$ are interchangeable, which means that we can readily derive one from the other. Therefore, by replacing ${{\bm{x}}}_t, \bm{\delta}_t, {\bm{\epsilon}}_t$ with ${{\bm{x}}}_\theta, \bm{\delta}_\theta, {{\bm{\epsilon}}}_\theta$ in \Eqref{eq.reverse}, we obtain the sampling algorithm shown in Algorithm~\ref{alg.sampling}: Line 6. \subsection{Noise at boundaries} \label{sec.noise_at_boundary} In this paper, the overall principle for handling the transition across stage boundaries is to ensure that the forward diffusion is deterministic and smooth, so that almost no information is lost during the stage change.
This requirement is important as it directly correlates with the denoising performance: failing to recover the lost information will directly affect the diversity of the model's generations. \begin{wrapfigure}{R}{0.5\textwidth} \centering \vspace{-10pt} \includegraphics[width=\linewidth]{figures/example_eps} \caption{Two na\"ive ways for down-sampling.\label{fig:example_eps}} \vspace{-5pt} \end{wrapfigure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/rescale.png} \caption{Illustration of the noise schedule ($\alpha_t$ and $\sigma_t$) for {$f$-DM}-DS models with $5$ stages ($16^2\rightarrow 256^2$). We use the standard cosine noise schedule $\alpha_t=\cos(0.5\pi t)$. We also show the difference between the linear/cosine stage schedules, as well as the proposed SP/VP re-scaling methods. } \label{fig:illust_rescale} \end{figure} \paragraph{Forward diffusion} As described in \Cref{sec.define}, since we control the signal and the noise separately, we can directly apply the deterministic transformation to the signal and drop noise elements. Alternatively, we also implemented a different $\zeta({\bm{\epsilon}})$ based on averaging. As shown in Figure~\ref{fig:example_eps}, if the transformation is down-sampling, we can use the fact that the mean of Gaussian noises is still Gaussian with lower variance: ${({\bm{\epsilon}}_0 + {\bm{\epsilon}}_1 + {\bm{\epsilon}}_2 + {\bm{\epsilon}}_3)}/{4} \sim \mathcal{N}(\bm{0}, \frac{1}{4}I)$. Therefore, a $\times 2$ rescaling is needed on the resulting noise. \paragraph{Reverse diffusion} Similarly, we can also define the reverse process if $\zeta$ is chosen to be averaging. Different from ``dropping'', where the reverse process simply adds independent Gaussian noises, the reverse of ``averaging'' requires sampling $\sum_{i=0}^3{\bm{\epsilon}}_i = 2{\bm{\epsilon}}$ given the input noise ${\bm{\epsilon}}$, while having $p({\bm{\epsilon}}_i) = \mathcal{N}(\bm{0}, I), i=0,1,2,3$. This problem has a closed-form solution and can be implemented in an autoregressive fashion: \begin{equation*} \begin{split} &{\bm{a}} = 2{\bm{\epsilon}}; \\ &{\bm{\epsilon}}_0 = {\bm{a}} / 4 + \sqrt{3/4}\cdot\hat{{\bm{\epsilon}}}_1, \ {\bm{a}} = {\bm{a}} - {\bm{\epsilon}}_0, \ \hat{{\bm{\epsilon}}}_1\sim \mathcal{N}(\bm{0}, I); \\ &{\bm{\epsilon}}_1 = {\bm{a}} / 3 + \sqrt{2/3}\cdot\hat{{\bm{\epsilon}}}_2, \ {\bm{a}} = {\bm{a}} - {\bm{\epsilon}}_1, \ \hat{{\bm{\epsilon}}}_2\sim \mathcal{N}(\bm{0}, I); \\ &{\bm{\epsilon}}_2 = {\bm{a}} / 2 + \sqrt{1/2}\cdot\hat{{\bm{\epsilon}}}_3, \ {\bm{a}} = {\bm{a}} - {\bm{\epsilon}}_2, \ \hat{{\bm{\epsilon}}}_3\sim \mathcal{N}(\bm{0}, I); \\ &{\bm{\epsilon}}_3 = {\bm{a}} \end{split} \end{equation*} Similar to the case of ``dropping'', we also need $3$ additional samples $\hat{{\bm{\epsilon}}}_{1:3}$ to contribute to the four noises, so it can be implemented in the same way as described in \Cref{sec.define}. Empirically, reversing the ``averaging'' steps tends to produce samples with better FID scores. However, it introduces correlations into the added noise, which may cause undesired biases, especially in DDIM sampling.
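The autoregressive construction above has a compact implementation that generalizes to any number $k$ of averaged noises, with target sum $\sqrt{k}\,{\bm{\epsilon}}$ (a sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def split_noise(eps, k=4, rng=np.random.default_rng()):
    # Draw eps_0, ..., eps_{k-1}, each marginally N(0, I), conditioned
    # on sum_i eps_i = sqrt(k) * eps (for k = 4 this is 2 * eps).
    a = np.sqrt(k) * eps           # remaining sum to be distributed
    out = []
    for i in range(k, 1, -1):      # i noises still to be drawn
        e = a / i + np.sqrt((i - 1) / i) * rng.standard_normal(eps.shape)
        out.append(e)
        a = a - e
    out.append(a)                  # the last one is fully determined
    return out
\end{verbatim}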
\paragraph{Intuition behind Re-scaling} Here we present a simple justification for applying noise rescaling. Suppose the signal dimensionality changes from $M_{k-1}$ to $M_{k}$ when crossing the stage boundary, and that this change is caused by different sampling rates. Based on the proposed resolution-agnostic SNR (\Eqref{eq.snr}), the number of sampled points inside $\Omega$ is proportional to its dimensionality. Generally, it is safe to assume signals are mostly low-frequency. Therefore, averaging signals will not change their variance. By contrast, as shown above, averaging Gaussian noises results in lower variance, where in our case the variance is proportional to $M^{-1}$. Therefore, assuming the signal magnitude does not change, we can obtain the re-scaling law by forcing $\texttt{SNR}({\bm{z}}_{\tau}) =\texttt{SNR}({\bm{z}}_{\tau^-})$ at the stage boundary: \begin{equation*} \sigma_{\tau^-}^2 \cdot M^{-1}_{k-1} = \sigma_{\tau}^2 \cdot M^{-1}_{k}, \end{equation*} which derives the signal-preserving (SP) rescaling in \Eqref{eq.rescaling}. In Figure~\ref{fig:illust_rescale}, we show an example of how $\alpha$ and $\sigma$ change with and without applying the re-scaling technique for {$f$-DM}-DS models. \subsection{DDIM sampling} The above derivations only describe the standard ancestral sampling ($\eta=1$), where $q({\bm{z}}_s|{\bm{z}}_t,{\bm{x}})$ is determined by Bayes' Theorem. Optionally, one can define any proper reverse diffusion distribution as long as the marginal distributions match the definition. For example, {$f$-DM} can also perform deterministic DDIM~\citep{song2021denoising} by setting $\eta=0$ in Algorithm~\ref{alg.sampling}. Similar to~\cite{song2021denoising}, we can also obtain the proof based on an induction argument. Figure~\ref{fig:ddim_example} shows a comparison of DDIM sampling between standard DMs and the proposed {$f$-DM}. In DDIM sampling ($\eta=0$), the only randomness comes from the initial noise at $t=1$. Due to the proposed noise resampling technique, {$f$-DM} enables a multi-scale noising process where the sampled noises are split and sent to different steps of the diffusion process. In this case, compared to standard DMs, we gain the ability to control image generation at different levels, resulting in smooth semantic interpolation. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/example_ddim.png} \caption{We show a comparison of DDIM sampling.} \label{fig:ddim_example} \end{figure} \section{Detailed Information of Transformations} \label{sec.transformation_details} We show the differences among all the transformations used in this paper in Figure~\ref{fig:f_example}. \subsection{Downsampling} In the early development of this work, we explored various combinations for performing down-sampling: ${\bm{f}}=\{\text{bilinear}, \text{nearest}, \text{Gaussian blur $+$ subsample}\}$, ${\bm{g}} = \{\text{bilinear}, \text{bicubic}, \text{nearest}, \text{neural-based}\}$. While all these combinations produced similar results, we empirically found on FFHQ that choosing \textit{bilinear interpolation} for both ${\bm{f}}$ and ${\bm{g}}$ achieves the most stable results. Therefore, all the main experiments of {$f$-DM}-DS are conducted with bilinear interpolation. As discussed in \Cref{sec.application}, we choose $K=4$, which progressively downsamples a $256^2$ image to $16^2$. \subsection{Blurring} We experimented with two types of blurring functions. For upsampling-based blurring, we use the same number of stages as the downsampling case; for Gaussian-based blurring, we adopt $K=7$ with corresponding kernel sizes $\sigma_B = 15 \sin^2(\frac{\pi}{2}\tau_k)$, where $\tau_k$ follows the \textit{cosine} stage schedule. In practice, we implement the blurring function in the frequency domain following~\cite{rissanen2022generative}, based on the discrete cosine transform (DCT).
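Below is a minimal sketch of such a frequency-domain Gaussian blur (our reading of the heat-dissipation formulation, where a blur of width $\sigma_B$ corresponds to dissipation time $t=\sigma_B^2/2$; the exact eigenvalue convention of the released implementation may differ):
\begin{verbatim}
import numpy as np
from scipy.fft import dctn, idctn

def dct_gaussian_blur(img, sigma_b):
    # Heat-equation blurring in the DCT basis: attenuate coefficients by
    # exp(-lambda * t) with dissipation time t = sigma_b**2 / 2, where
    # lambda are Laplacian eigenvalues under Neumann boundary conditions.
    h, w = img.shape[-2:]
    lam = ((np.pi * np.arange(h) / h) ** 2)[:, None] + \
          ((np.pi * np.arange(w) / w) ** 2)[None, :]
    coef = dctn(img, norm='ortho', axes=(-2, -1))
    return idctn(coef * np.exp(-lam * sigma_b ** 2 / 2),
                 norm='ortho', axes=(-2, -1))
\end{verbatim}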
\subsection{VAEs} In this paper, we only consider vector-quantized (VQ) models with a single-layer latent space, while our methods can be readily applied to hierarchical~\citep{razavi2019generating} and KL-regularized~\citep{vahdat2020nvae} VAE models. Following~\cite{rombach2021highresolution}, we take the feature vectors before the quantization layers as the latent space, and keep the quantization step in the decoder (${\bm{g}}$) when training diffusion models. We follow an open-sourced implementation~\footnote{\url{https://github.com/rosinality/vq-vae-2-pytorch}} to train our VQVAE model on ImageNet. The model consists of two strided convolution blocks which by default downsample the input image by a factor of $8$. We use the default hyper-parameters and train the model for $50$ epochs with a batch size of $128$. For a fair comparison that matches the latent size of VQVAE, we use the pre-trained autoencoding model~\citep{rombach2021highresolution} with the setting of \{f=8, VQ (Z=256, d=4)\}. We directly use the checkpoint~\footnote{\url{https://ommer-lab.com/files/latent-diffusion/vq-f8-n256.zip}} provided by the authors. Note that the above setting is not the best-performing model (LDM-4) in the original paper. Therefore, it generates more artifacts when reconstructing images from the latents. Before training, we compute the signal magnitude ratio $\gamma_k$ (\Eqref{eq.rescaling}) over the entire training set of FFHQ, where we empirically set $\gamma_k=2.77$ for VQ-GAN and $\gamma_k=2.0$ for VQ-VAE, respectively. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/example_f.png} \caption{We show examples of the five transformations (downsample, blur, VAEs) used in this paper. For downsampling, we resize the image with a nearest upsampler; for VQ-VAE/VQ-GAN, we visualize the first 3 channels of the latent feature maps.} \label{fig:f_example} \end{figure} \section{Dataset Details} \label{sec.dataset} \paragraph{FFHQ} (\url{https://github.com/NVlabs/ffhq-dataset}) contains 70k images of real human faces at a resolution of $1024^2$. For most of our experiments, we resize the images to $256^2$. In addition, we also test our {$f$-DM}-DS model at the original $1024^2$ resolution, with generated results shown in Figure~\ref{fig:ffhq_1024}. \paragraph{AFHQ} (\url{https://github.com/clovaai/stargan-v2#animal-faces-hq-dataset-afhq}) contains 15k images of animal faces in three categories (cat, dog, and wild) at a resolution of $512^2$. We train conditional diffusion models by merging all training images with the label information. All images are resized to $256^2$. \paragraph{LSUN} (\url{https://www.yf.io/p/lsun}) is a collection of large-scale image datasets containing 10 scenes and 20 object categories. Following previous work~\cite{rombach2021highresolution}, we choose two categories -- Church ($126$k images) and Bed ($3$M images) -- and train separate unconditional models on them. As LSUN-Bed is relatively larger, we train it for more iterations than the other datasets. All images are resized to $256^2$ with center-crop. \paragraph{ImageNet} (\url{https://image-net.org/download.php}) We use the standard ImageNet-1K dataset, which contains $1.28$M images across $1000$ classes. We directly merge all the training images with class labels. All images are resized to $256^2$ with center-crop.
For both {$f$-DM} and the baseline models, we adopt classifier-free guidance~\citep{ho2022classifier} with an unconditional probability of $0.2$. At inference time, we use a guidance scale of $s=2$ for computing FIDs, and $s=3$ to synthesize examples for comparison. \section{Implementation Details} \subsection{Architecture Configurations} We implement {$f$-DM} strictly following the standard U-Net architecture in~\cite{nichol2021improved}. As shown in Figure~\ref{fig:architecture}, the input ${\bm{z}}_t$ is directed to the corresponding inner layer based on its spatial resolution, and a stage-specific adapter is adopted to transform the channel dimension. This architecture also allows memory-efficient batching across stages, where we can create a batch with various resolutions and split the computation based on the resolutions. \subsection{Hyper-parameters} In our experiments, we adopt the following two sets of parameters based on the complexity of the dataset: \textit{base} (FFHQ, AFHQ, LSUN-Church/Bed) and \textit{big} (ImageNet). For \textit{base}, we use $1$ residual block per resolution, with a base dimension of $128$. For \textit{big}, we use $2$ residual blocks with a base dimension of $192$. Given one dataset, all the models with various transformations, including the baseline DMs, share the same hyper-parameters except for the adapters. We list the hyperparameter details in Table~\ref{tab:hparams}. \begin{table}[h] \small \centering \begin{tabular}{l |c|c|c|c|c|c} \toprule Hyper-param. & FFHQ* & FFHQ & AFHQ & LSUN-Church & LSUN-Bed & ImageNet \\ \midrule image res. & $1024^2$& $256^2$ & $256^2$ & $256^2$ & $256^2$& $256^2$ \\ \# of classes & None & None & 3 & None & None & 1000 \\ c.f. guidance & - & - & No & - & - & Yes \\ \midrule \#channels & $128$ & $128$ & $128$ & $128$ & $128$ & $192$\\ \#res-blocks & $1$ & $1$ & $1$ & $1$ & $1$ & $2$ \\ channel multi. & $[\frac{1}{4},\frac{1}{2},1,1,2,2,4,4]$ & \multicolumn{5}{c}{$[1,1,2,2,4,4]$} \\ attention res. & \multicolumn{6}{c}{$16,8$} \\ \midrule batch size & $32$ & $32$ & $32$ & $32$ & $32$ & $64$ \\ lr & \multicolumn{6}{c}{\num{2e-5}} \\ iterations & $500K$ & $500K$ & $500K$ & $500K$ & $1200K$ & $2500K$ \\ \bottomrule \end{tabular} \caption{Hyperparameters and settings for {$f$-DM} on different datasets. *We also include experiments of training models at $1024^2$ resolution.} \label{tab:hparams} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/arch_v2.png} \caption{An illustration of the modified U-Net architecture. Time conditioning is omitted. The parameters are partially shared across stages based on the resolutions. Stage-specific adapters are adopted to transform the input dimensions. } \label{fig:architecture} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_vsCSC} \caption{Additional comparisons with Cascaded DM on AFHQ. $\uparrow$ Comparison of the reverse diffusion process from $16^2$ to $256^2$. We visualize the denoised outputs (${\bm{x}}_t$) and the corresponding next noised input (${\bm{z}}_s$) near the start \& end of each resolution diffusion. $\downarrow$ Comparison of random samples generated by Cascaded DM and {$f$-DM}-DS.} \label{fig:vs_csc_afhq} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_vsLDM} \caption{Additional comparisons with LDMs on AFHQ.} \label{fig:vs_ldm_afhq} \end{figure} \section{Additional Results} \subsection{v.s.
Transformation-specific Baselines} We include more comparisons in Figures~\ref{fig:vs_csc_afhq} and \ref{fig:vs_ldm_afhq}. In Figure~\ref{fig:vs_csc_afhq}, we compare the generation processes of {$f$-DM} and the cascaded DM. It is clear that {$f$-DM} conducts coarse-to-fine generation in a more natural way, and its results do not suffer from error propagation. As shown in Figure~\ref{fig:vs_ldm_afhq}, LDM outputs are easily affected by the chosen decoder: the VQVAE decoder tends to output blurry images, while the output of the VQGAN decoder has much finer details but retains noticeable artifacts (e.g., eyes, furs). By contrast, {$f$-DM} performs stably for both latent spaces. \subsection{Conditional Generation} \label{sec.conditional_generation} We include additional results of conditional generation, i.e., super-resolution (Figure~\ref{fig:sr_afhq}) and de-blurring (Figure~\ref{fig:deblur_afhq}). We also show the comparison with and without the proposed gradient-based initialization, which greatly improves the faithfulness of conditional generation when the input noise is high (e.g., a $16\times 16$ input). \subsection{Additional Qualitative Results} \label{sec.additional_results} Finally, we provide additional qualitative results for our unconditional models on FFHQ (Figures~\ref{fig:ffhq_samples},\ref{fig:ffhq_1024}), AFHQ (Figure~\ref{fig:afhq_samples}), LSUN (Figure~\ref{fig:lsun_samples}) and our class-conditional ImageNet model (Figures~\ref{fig:imagenet},\ref{fig:imagenet2}). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_SR_comp} \caption{Additional examples of super-resolution (SR) with the unconditional {$f$-DM}-DS trained on AFHQ. $\uparrow$ The same input image at various resolutions $16^2,32^2,64^2,128^2$. We sample $3$ random seeds for each resolution input. We also show the difference with and without applying gradient-based initialization (Grad-Init) on ${\bm{z}}$. $\downarrow$ SR results of various $16^2$ inputs. } \label{fig:sr_afhq} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_DeBlur_comp} \caption{Additional examples of de-blurring with the unconditional {$f$-DM}-Blur-G trained on AFHQ. $\uparrow$ The same input image with various Gaussian kernel sizes $\sigma=15,9,4,1.4$. We sample $3$ random seeds for each input. We also show the difference with and without applying gradient-based initialization (Grad-Init) on ${\bm{z}}$. $\downarrow$ Deblurred results of various blurred images. } \label{fig:deblur_afhq} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_FFHQ_v1} \caption{Random samples generated by five {$f$-DM}s trained on FFHQ $256\times 256$. All faces presented are synthesized by the models, and are not real identities.} \label{fig:ffhq_samples} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_AFHQ_v1} \caption{Random samples generated by five {$f$-DM}s trained on AFHQ $256\times 256$.} \label{fig:afhq_samples} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_LSUN_v1} \caption{Random samples generated by {$f$-DM}s trained on LSUN-Church \& -Bed $256\times 256$.} \label{fig:lsun_samples} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_ImageNet_v1} \caption{Random samples generated by {$f$-DM}-DS/VQVAE trained on ImageNet $256\times 256$ with classifier-free guidance ($s=3$).
Classes from top to bottom: {\it red panda, robin, daisy, valley, trifle, comic book}.} \label{fig:imagenet} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/Appendix_ImageNet2_v1} \caption{Random samples generated by {$f$-DM}-DS/VQVAE trained on ImageNet $256\times 256$ with classifier-free guidance ($s=3$). Classes from top to bottom: {\it school bus, pizza, seashore, photocopier, golden retriever, axolotl}.} \label{fig:imagenet2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/debug_ffhq2} \caption{Random samples generated by {$f$-DM}-DS trained on FFHQ $1024\times 1024$. All faces presented are synthesized by the models, and are not real identities.} \label{fig:ffhq_1024} \end{figure}
\section{Introduction} Open-domain question answering (OpenQA) aims to answer factoid questions without a pre-specified domain and has numerous real-world applications. In OpenQA, a large collection of documents (\eg, Wikipedia) is often used to seek information pertaining to the questions. One of the most common approaches uses a retriever-reader architecture \cite{chen-etal-2017-reading}, which first retrieves a small subset of documents \textit{using the question as the query} and then reads the retrieved documents to extract (or generate) an answer. The retriever is crucial as it is infeasible to examine every piece of information in the entire document collection (\eg, millions of Wikipedia passages) and the retrieval accuracy bounds the performance of the (extractive) reader. Early OpenQA systems \cite{chen-etal-2017-reading} use classic retrieval methods such as TF-IDF and BM25 with sparse representations. Sparse methods are lightweight and efficient, but unable to perform semantic matching, and they fail to retrieve relevant passages without lexical overlap. More recently, methods based on dense representations \cite{guu2020realm,karpukhin2020dense} learn to embed queries and passages into a latent vector space, in which text similarity beyond lexical overlap can be measured. Dense retrieval methods can retrieve semantically relevant but lexically different passages and often achieve better performance than sparse methods. However, the dense models are more computationally expensive and suffer from information loss as they condense the entire text sequence into a fixed-size vector that does not guarantee exact matching \cite{luan2020sparse}. There have been some recent studies on query reformulation with text generation for other retrieval tasks, which, for example, rewrite the queries to context-independent~\cite{yu2020few,lin2020query,vakulenko2020question} or well-formed~\cite{liu2019generative} ones. However, these methods require either task-specific data (\eg, conversational contexts, ill-formed queries) or external resources such as paraphrase data~\cite{zaiem2019sequence,wang2020deep} that cannot or do not transfer well to OpenQA. Also, some rely on time-consuming training processes like reinforcement learning (RL)~\cite{nogueira-cho-2017-task,liu2019generative,wang2020deep} that are not efficient enough for OpenQA (more discussions in Sec.~\ref{sec:related_work}). In this paper, we propose Generation-Augmented Retrieval (\textsc{Gar}\xspace), which augments a query through text generation of a pre-trained language model (PLM). Different from prior studies that reformulate queries, \textsc{Gar}\xspace does not require external resources or downstream feedback via RL as supervision, because it does not \textit{rewrite} the query but \textit{expands} it with heuristically discovered relevant contexts, which are fetched from PLMs and provide richer background information (Table~\ref{tab:generation_eg}). For example, by prompting a PLM to generate the title of a relevant passage given a query and appending the generated title to the query, it becomes easier to retrieve that relevant passage. Intuitively, the generated contexts explicitly express the search intent not present in the original query.
As a result, \textsc{Gar}\xspace with sparse representations achieves performance comparable to or even better than state-of-the-art approaches~\cite{karpukhin2020dense,guu2020realm} with dense representations of the original queries, while being more lightweight and efficient in terms of both training and inference (including the cost of the generation model) (Sec.~\ref{sec:runtime}). Specifically, we expand the query (question) by adding relevant contexts as follows. We conduct seq2seq learning with the question as the input and various freely accessible in-domain contexts as the output, such as \textit{the answer, the sentence where the answer belongs to}, and \textit{the title of a passage that contains the answer}. We then append the generated contexts to the question as the \textit{generation-augmented query} for retrieval. We demonstrate that using multiple contexts from diverse generation targets is beneficial, as fusing the retrieval results of different generation-augmented queries consistently yields better retrieval accuracy. We conduct extensive experiments on the Natural Questions (NQ) \cite{kwiatkowski-etal-2019-natural} and TriviaQA (Trivia) \cite{joshi-etal-2017-triviaqa} datasets. The results reveal four major advantages of \textsc{Gar}\xspace: (1) \textsc{Gar}\xspace, combined with BM25, achieves significant gains over the same BM25 model that uses the original queries or existing unsupervised query expansion (QE) methods. (2) \textsc{Gar}\xspace with sparse representations (BM25) achieves comparable or even better performance than the current state-of-the-art retrieval methods, such as DPR \cite{karpukhin2020dense}, that use dense representations. (3) Since \textsc{Gar}\xspace uses sparse representations to measure lexical overlap\footnote{Strictly speaking, \textsc{Gar}\xspace with sparse representations handles semantics before retrieval by enriching the queries, while maintaining the advantage of exact matching.}, it is complementary to dense representations: by fusing the retrieval results of \textsc{Gar}\xspace and DPR (denoted as $\textsc{Gar}^{\texttt{+}}$\xspace), we obtain consistently better performance than either method used individually. (4) \textsc{Gar}\xspace outperforms DPR in end-to-end QA performance (EM) when the same extractive reader is used: EM=41.8 (43.8 for $\textsc{Gar}^{\texttt{+}}$\xspace) on NQ and 62.7 on Trivia, creating new state-of-the-art results for extractive OpenQA. \textsc{Gar}\xspace also outperforms other retrieval methods under the generative setup when the same generative reader is used: EM=38.1 (45.3 for $\textsc{Gar}^{\texttt{+}}$\xspace) on NQ and 62.2 on Trivia. \start{Contributions} (1) We propose Generation-Augmented Retrieval (\textsc{Gar}\xspace), which augments queries with heuristically discovered relevant contexts through text generation, without external supervision or time-consuming downstream feedback. (2) We show that using generation-augmented queries achieves significantly better retrieval and QA results than using the original queries or existing unsupervised QE methods. (3) We show that \textsc{Gar}\xspace, combined with a simple BM25 model, achieves new state-of-the-art performance on two benchmark datasets in extractive OpenQA and competitive results in the generative setting.
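To preview the method, the following is a minimal sketch of the \textsc{Gar}\xspace retrieval flow with the simple equal-share fusion of Sec.~\ref{sec:retrieval} (hypothetical helper interfaces; any seq2seq generator and BM25 implementation can be plugged in):
\begin{verbatim}
def gar_retrieve(question, generators, bm25_search, k=100):
    # generators: one fine-tuned seq2seq model per context type
    # (answer / sentence / title); bm25_search(query, k) -> passage ids.
    runs = []
    for gen in generators.values():
        context = gen.generate(question)     # generated query context
        query = question + " " + context     # generation-augmented query
        runs.append(bm25_search(query, k))
    # simple fusion: take an equal share of top passages from each run
    fused, seen = [], set()
    for depth in range(k):
        for run in runs:
            pid = run[depth]
            if pid not in seen:
                seen.add(pid)
                fused.append(pid)
            if len(fused) == k:
                return fused
    return fused
\end{verbatim}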
\section{Related Work} \label{sec:related_work} \start{Conventional Query Expansion} \textsc{Gar}\xspace shares some merits with query expansion (QE) methods based on pseudo relevance feedback \cite{rocchio1971relevance,abdul2004umass,lv2010positional} in that they both expand the queries with relevant contexts (terms) without the use of external supervision. \textsc{Gar}\xspace is superior as it expands the queries with knowledge stored in the PLMs rather than the retrieved passages, and its expanded terms are learned through text generation. \start{Recent Query Reformulation} There are recent or concurrent studies \cite{nogueira-cho-2017-task,zaiem2019sequence,yu2020few,vakulenko2020question,lin2020query} that reformulate queries with generation models for other retrieval tasks. However, these studies are not easily applicable or efficient enough for OpenQA because: (1) They require external resources such as paraphrase data~\cite{zaiem2019sequence}, search sessions~\cite{yu2020few}, or conversational contexts~\cite{lin2020query,vakulenko2020question} to form the reformulated queries, which are not available or showed inferior domain-transfer performance in OpenQA~\cite{zaiem2019sequence}; (2) They involve time-consuming training processes such as RL. For example, \citet{nogueira-cho-2017-task} reported a training time of 8 to 10 days, as their method uses retrieval performance in the reward function and conducts retrieval at each iteration. In contrast, \textsc{Gar}\xspace uses freely accessible in-domain contexts like passage titles as the generation targets and standard seq2seq learning, which, despite its simplicity, is not only more efficient but also effective for OpenQA. \start{Retrieval for OpenQA} Existing sparse retrieval methods for OpenQA \cite{chen-etal-2017-reading} rely solely on the information in the questions. \textsc{Gar}\xspace extends the queries with contexts relevant to the questions by extracting information from PLMs, and helps sparse methods achieve comparable or better performance than dense methods~\cite{guu2020realm,karpukhin2020dense}, while enjoying the simplicity and efficiency of sparse representations. \textsc{Gar}\xspace can also be used with dense representations to seek even better performance, which we leave as future work. \start{Generative QA} Generative QA generates answers through seq2seq learning instead of extracting answer spans. Recent studies on generative OpenQA \cite{lewis2020retrieval,min2020ambigqa,izacard2020leveraging} are orthogonal to \textsc{Gar}\xspace in that they focus on improving the reading stage and directly reuse DPR \cite{karpukhin2020dense} as the retriever. Unlike generative QA, the goal of \textsc{Gar}\xspace is not to generate perfect answers to the questions but pertinent contexts that are helpful for retrieval. Another line of generative QA learns to generate answers without relevant passages as evidence, using solely the question itself and PLMs \cite{roberts2020much,brown2020language}. \textsc{Gar}\xspace further confirms that one can extract factual knowledge from PLMs, which is not limited to the answers as in prior studies but also includes other relevant contexts. \section{Generation-Augmented Retrieval} \subsection{Task Formulation} OpenQA aims to answer factoid questions without pre-specified domains.
We assume that a large collection of documents $C$ (\ie, Wikipedia) is given as the resource to answer the questions, and that a retriever-reader architecture is used to tackle the task, where the retriever retrieves a small subset of the documents $D \subset C$ and the reader reads the documents $D$ to extract (or generate) an answer. Our goal is to improve the effectiveness and efficiency of the retriever and consequently improve the performance of the reader. \subsection{Generation of Query Contexts} In \textsc{Gar}\xspace, queries are augmented with various heuristically discovered relevant contexts in order to retrieve more relevant passages in terms of both quantity and quality. For the task of OpenQA, where the query is a question, we take the following three freely accessible contexts as the generation targets. We show in Sec.~\ref{res_retrieval} that having multiple generation targets is helpful in that fusing their results consistently brings better retrieval accuracy. \start{Context 1: The default target (answer)} The default target is the label in the task of interest, which is the answer in OpenQA. The answer to the question is clearly useful for the retrieval of relevant passages that contain the answer itself. As shown in previous work \cite{roberts2020much,brown2020language}, PLMs are able to answer certain questions solely by taking the questions as input (\ie, closed-book QA). Instead of using the generated answers directly as in closed-book QA, \textsc{Gar}\xspace treats them as contexts of the question for retrieval. The advantage is that even if the generated answers are partially correct (or even incorrect), they may still benefit retrieval as long as they are relevant to the passages that contain the correct answers (\eg, co-occur with the correct answers). \start{Context 2: Sentence containing the default target} The sentence in a passage that contains the answer is used as another generation target. Similar to using answers as the generation target, the generated sentences are still beneficial for retrieving relevant passages even if they do not contain the answers, as their semantics is highly related to the questions/answers (examples in Sec.~\ref{sec:exp_gen}). One can take the relevant sentences in the ground-truth passages (if any) or those in the positive passages of a retriever as the reference, depending on the trade-off between reference quality and diversity. \start{Context 3: Title of passage containing the default target} One can also use the titles of relevant passages as the generation target, if available. Specifically, we retrieve Wikipedia passages using BM25 with the question as the query, and take the page titles of positive passages that contain the answers as the generation target (a sketch of this construction is given below). We observe that the page titles of positive passages are often entity names of interest, and sometimes (but not always) the answers to the questions. Intuitively, if \textsc{Gar}\xspace learns which Wikipedia pages the question is related to, the queries augmented by the generated titles naturally have a better chance of retrieving those relevant passages. While it is likely that some of the generated query contexts involve unfaithful or nonfactual information due to hallucination in text generation \cite{mao2020constrained} and introduce noise during retrieval, they are beneficial rather than harmful overall, as our experiments show that \textsc{Gar}\xspace improves both retrieval and QA performance over BM25 significantly. Moreover, since we generate 3 different (complementary) query contexts and fuse their retrieval results, the distraction of hallucinated content is further alleviated.
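As referenced above, constructing the title targets (Context 3) can be sketched as follows (hypothetical data structures and helper names; the answer and sentence targets are built analogously, and multiple targets are later joined with [SEP] tokens as described in the implementation details):
\begin{verbatim}
def build_title_target(question, answers, bm25_search, passages, n=10):
    # Retrieve with BM25 and keep page titles of positive passages,
    # i.e., passages containing one of the ground-truth answer spans.
    hits = bm25_search(question, n)              # top-n passage ids
    titles = [passages[pid].title for pid in hits
              if any(ans in passages[pid].text for ans in answers)]
    return " [SEP] ".join(titles)                # seq2seq reference
\end{verbatim}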
\subsection{Retrieval with Generation-Augmented Queries} \label{sec:retrieval} After generating the contexts of a query, we append them to the query to form a \textit{generation-augmented query}.\footnote{One may create a title field during document indexing and conduct multi-field retrieval, but here we append the titles to the questions as other query contexts for generalizability.} We observe that conducting retrieval with the generated contexts (\eg, answers) alone as queries, instead of concatenation, is ineffective because (1) some of the generated answers are rather irrelevant, and (2) a query consisting of the correct answer alone (without the question) may retrieve false positive passages with unrelated contexts that happen to contain the answer. Such low-quality passages may lead to potential issues in the following passage reading stage. If there are multiple query contexts, we conduct retrieval using queries with different generated contexts separately and then fuse their results. The performance of one-time retrieval with all the contexts appended is slightly but not significantly worse. For simplicity, we fuse the retrieval results in a straightforward way: an equal number of passages are taken from the top-retrieved passages of each source. One may also use weighted or more sophisticated fusion strategies such as reciprocal rank fusion \cite{cormack2009reciprocal}, the results of which are slightly better according to our experiments.\footnote{We use the fusion tools at \url{https://github.com/joaopalotti/trectools}.} Next, one can use any off-the-shelf retriever for passage retrieval. Here, we use a simple BM25 model to demonstrate that \textsc{Gar}\xspace with sparse representations can already achieve comparable or better performance than state-of-the-art dense methods while being more lightweight and efficient (including the cost of the generation model), closing the gap between sparse and dense retrieval methods. \section{OpenQA with \textsc{Gar}\xspace} To further verify the effectiveness of \textsc{Gar}\xspace, we equip it with both extractive and generative readers for end-to-end QA evaluation. We follow the reader design of the major baselines for a fair comparison, while virtually any existing QA reader can be used with \textsc{Gar}\xspace. \subsection{Extractive Reader} For the extractive setup, we largely follow the design of the extractive reader in DPR \cite{karpukhin2020dense}. Let $D = [d_1, d_2, ..., d_k]$ denote the list of retrieved passages with passage relevance scores $\mathbf{D}$. Let $S_i = [s_1, s_2, ..., s_N]$ denote the top $N$ text spans in passage $d_i$, ranked by span relevance scores $\mathbf{S_i}$. Briefly, the DPR reader uses BERT-base \cite{devlin-etal-2019-bert} for representation learning, where it estimates the passage relevance score $\mathbf{D}_k$ for each retrieved passage $d_k$ based on the [CLS] tokens of all retrieved passages $D$, and assigns span relevance scores $\mathbf{S_i}$ for each candidate span based on the representations of its start and end tokens. Finally, the span with the highest span relevance score from the passage with the highest passage relevance score is chosen as the answer. We refer the readers to \citet{karpukhin2020dense} for more details.
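In code, this independent selection rule is simply the following (a sketch with hypothetical inputs; the passage-level span voting introduced next refines this rule at inference):
\begin{verbatim}
import torch

def dpr_select(passage_scores, span_scores, span_strings):
    # passage_scores: [k] relevance scores D over retrieved passages;
    # span_scores[i]: [N] scores S_i of the top-N spans in passage i;
    # span_strings[i][j]: surface form of the j-th span in passage i.
    i = int(passage_scores.argmax())   # most relevant passage
    j = int(span_scores[i].argmax())   # best span within that passage
    return span_strings[i][j]
\end{verbatim}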
\start{Passage-level Span Voting} Many extractive QA methods \cite{chen-etal-2017-reading,min2019knowledge,guu2020realm,karpukhin2020dense} measure the probability of span extraction in different retrieved passages independently, even though their collective signals may provide more evidence in determining the correct answer. We propose a simple yet effective passage-level span voting mechanism, which aggregates the predictions of the spans with the same surface form from different retrieved passages. Intuitively, if a text span is considered as the answer multiple times in different passages, it is more likely to be the correct answer. Specifically, \textsc{Gar}\xspace calculates a normalized score $p(S_i[j])$ for the j-th span in passage $d_i$ during inference as follows: $p (S_i[j]) = \text{softmax} (\mathbf{D})[i] \times \text{softmax} (\mathbf{S_i})[j]$. \textsc{Gar}\xspace then aggregates the scores of the spans with the same surface string among all the retrieved passages as the collective passage-level score.\footnote{We find that the number of spans used for normalization in each passage does not have a significant impact on the final performance (we take $N=5$), and using the raw or normalized strings for aggregation also performs similarly.} \subsection{Generative Reader} For the generative setup, we use a seq2seq framework where the input is the concatenation of the question and the top-retrieved passages, and the target output is the desired answer. Such generative readers are adopted in recent methods such as SpanSeqGen~\citep{min2020ambigqa} and Longformer~\citep{beltagy2020longformer}. Specifically, we use BART-large \cite{lewis2019bart} as the generative reader, which concatenates the question and top-retrieved passages up to its length limit (1,024 tokens, 7.8 passages on average). Generative \textsc{Gar}\xspace is directly comparable with SpanSeqGen \cite{min2020ambigqa}, which uses the retrieval results of DPR, but not comparable with Fusion-in-Decoder (FID) \cite{izacard2020leveraging}, since the latter encodes 100 passages rather than 1,024 tokens and involves more model parameters. \section{Experiment Setup} \subsection{Datasets} We conduct experiments on the open-domain versions of two popular QA benchmarks: Natural Questions (NQ) \cite{kwiatkowski-etal-2019-natural} and TriviaQA (Trivia) \cite{joshi-etal-2017-triviaqa}. The statistics of the datasets are listed in Table~\ref{tab:dataset}. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{llrrr} \toprule \textbf{Dataset} & \textbf{Train / Val / Test} & \textbf{Q-len} & \textbf{A-len} & \textbf{\#-A}\\ \midrule NQ &79,168 / 8,757 / 3,610 &12.5 & 5.2 & 1.2\\ Trivia & 78,785 / 8,837 / 11,313 & 20.2 & 5.5 & 13.7\\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{Dataset statistics that show the number of samples per data split, the average question (answer) length, and the number of answers for each question. } \label{tab:dataset} \vspace{-.1cm} \end{table} \subsection{Evaluation Metrics} Following prior studies \cite{karpukhin2020dense}, we use top-k retrieval accuracy to evaluate the performance of the retriever and the Exact Match (EM) score to measure the performance of the reader. \textit{Top-k retrieval accuracy} is defined as the proportion of questions for which the top-k retrieved passages contain at least one answer span, which is an upper bound of how many questions are ``answerable'' by an extractive reader.
\textit{Exact Match (EM)} is the proportion of the predicted answer spans being exactly the same as (one of) the ground-truth answer(s), after string normalization such as article and punctuation removal. \subsection{Compared Methods} For passage retrieval, we mainly compare with BM25 and DPR, which represent the most used state-of-the-art methods of sparse and dense retrieval for OpenQA, respectively. For query expansion, we re-emphasize that \textsc{Gar}\xspace is the first QE approach designed for OpenQA and most of the recent approaches are not applicable or efficient enough for OpenQA since they have task-specific objectives, require external supervision that was shown to transfer poorly to OpenQA, or take many days to train (Sec.~\ref{sec:related_work}). We thus compare with a classic unsupervised QE method RM3 \cite{abdul2004umass} that does not need external resources for a fair comparison. For passage reading, we compare with both extractive~\citep{min-etal-2019-discrete,asai2019learning,lee-etal-2019-latent,min2019knowledge,guu2020realm,karpukhin2020dense} and generative~\citep{brown2020language,roberts2020much,min2020ambigqa,lewis2020retrieval,izacard2020leveraging} methods when equipping \textsc{Gar}\xspace with the corresponding reader. \input{generation_eg} \subsection{Implementation Details} \start{Retriever} We use Anserini \cite{yang2017anserini} for text retrieval of BM25 and \textsc{Gar}\xspace with its default parameters. We conduct grid search for the QE baseline RM3 \cite{abdul2004umass}. \start{Generator} We use BART-large \cite{lewis2019bart} to generate query contexts in \textsc{Gar}\xspace. When there are multiple desired targets (such as multiple answers or titles), we concatenate them with [SEP] tokens as the reference and remove the [SEP] tokens in the generation-augmented queries. For Trivia, in particular, we use the value field as the generation target of answer and observe better performance. We take the checkpoint with the best ROUGE-1 F1 score on the validation set, while observing that the retrieval accuracy of \textsc{Gar}\xspace is relatively stable to the checkpoint selection since we do not directly use the generated contexts but treat them as augmentation of queries for retrieval. \start{Reader} Extractive \textsc{Gar}\xspace uses the reader of DPR with largely the same hyperparameters, which is initialized with BERT-base \cite{devlin-etal-2019-bert} and takes 100 (500) retrieved passages during training (inference). Generative \textsc{Gar}\xspace concatenates the question and top-10 retrieved passages, and takes at most 1,024 tokens as input. Greedy decoding is adopted for all generation models, which appears to perform similarly to (more expensive) beam search. \section{Experiment Results} We evaluate the effectiveness of \textsc{Gar}\xspace in three stages: \textit{generation} of query contexts (Sec.~\ref{sec:exp_gen}), \textit{retrieval} of relevant passages (Sec.~\ref{res_retrieval}), and passage \textit{reading} for OpenQA (Sec.~\ref{sec:exp_read}). Ablation studies are mostly shown on the NQ dataset to understand the drawbacks of \textsc{Gar}\xspace since it achieves better performance on Trivia. \subsection{Query Context Generation} \label{sec:exp_gen} \start{Automatic Evaluation} To evaluate the quality of the generated query contexts, we first measure their lexical overlap with the ground-truth query contexts. 
As suggested by the nontrivial ROUGE scores in Table~\ref{tab:generation_num}, \textsc{Gar}\xspace does learn to generate meaningful query contexts that could help the retrieval stage. We next measure the lexical overlap between the query and the ground-truth passage. The ROUGE-1/2/L F1 scores between the original query and ground-truth passage are 6.00/2.36/5.01, and those for the generation-augmented query are 7.05/2.84/5.62 (answer), 13.21/6.99/10.27 (sentence), 7.13/2.85/5.76 (title) on NQ, respectively. Such results further demonstrate that the generated query contexts significantly increase the word overlap between the queries and the positive passages, and thus are likely to improve retrieval results.\footnote{We use F1 instead of recall to avoid the unfair favor of (longer) generation-augmented query.} \begin{table}[ht] \centering \scalebox{.8}{ \begin{tabular}{lrrr} \toprule \textbf{Context} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L}\\ \midrule Answer &33.51 & 20.54 & 33.30\\ Sentence & 37.14 & 24.71 & 33.91\\ Title & 43.20 & 32.11 & 39.67\\ \bottomrule \end{tabular} } \vspace{-.0cm} \caption{\textbf{ROUGE F1 scores of the generated query contexts} on the validation set of the NQ dataset. } \label{tab:generation_num} \vspace{-.1cm} \end{table} \begin{table*}[ht] \centering \resizebox{2\columnwidth}{!}{ \begin{tabular}{l ccccc | ccccc} \toprule \multirow{2}{*}{ \textbf{Method}} & \multicolumn{5}{c|}{\textbf{NQ}} & \multicolumn{5}{c}{\textbf{Trivia}} \\ & Top-5 & Top-20 & Top-100 & Top-500 & Top-1000 & Top-5 & Top-20 & Top-100 & Top-500 & Top-1000 \\ \midrule BM25 (ours) & 43.6 & 62.9& 78.1 & 85.5 & 87.8 & 67.7 & 77.3 & 83.9 & 87.9 & 88.9 \\ BM25 +RM3 & 44.6 & 64.2& 79.6 & 86.8 & 88.9 & 67.0 & 77.1 & 83.8 & 87.7 & 88.9 \\ DPR & \underline{68.3} & \underline{80.1} & \underline{86.1} & 90.3 & 91.2 & 72.7 & 80.2 & 84.8 & - & - \\ \textsc{Gar}\xspace &60.9 & 74.4 & 85.3 & \underline{90.3} & \underline{91.7} & \underline{73.1} & \underline{80.4} & \underline{85.7} & \textbf{88.9} & \textbf{89.7} \\ $\textsc{Gar}^{\texttt{+}}$\xspace &\textbf{70.7} & \textbf{81.6} & \textbf{88.9} & \textbf{92.0} & \textbf{93.2} & \textbf{76.0} & \textbf{82.1} & \textbf{86.6} & - & - \\ \bottomrule \end{tabular} } \caption[Caption]{\textbf{Top-k retrieval accuracy on the test sets}. The baselines are evaluated by ourselves and better than reported in \citet{karpukhin2020dense}. \textsc{Gar}\xspace helps BM25 to achieve comparable or better performance than DPR. Best and second best methods are \textbf{bold} and \underline{underlined}, respectively.} \label{tab:top_k_acc} \end{table*} \start{Case Studies} In Table~\ref{tab:generation_eg}, we show several examples of the generated query contexts and their ground-truth references. In the first example, the correct album release date appears in both the generated answer and the generated sentence, and the generated title is the same as the Wikipedia page title of the album. In the last two examples, the generated answers are wrong but fortunately, the generated sentences contain the correct answer and (or) other relevant information and the generated titles are highly related to the question as well, which shows that different query contexts are complementary to each other and the noise during query context generation is thus reduced. \subsection{Generation-Augmented Retrieval} \label{res_retrieval} \start{Comparison w. the state-of-the-art} We next evaluate the effectiveness of \textsc{Gar}\xspace for retrieval. 
In Table~\ref{tab:top_k_acc}, we show the top-k retrieval accuracy of BM25, BM25 with query expansion (+RM3) \cite{abdul2004umass}, DPR~\citep{karpukhin2020dense}, \textsc{Gar}\xspace, and $\textsc{Gar}^{\texttt{+}}$\xspace (\textsc{Gar}\xspace+DPR). On the NQ dataset, while BM25 clearly underperforms DPR regardless of the number of retrieved passages, the gap between \textsc{Gar}\xspace and DPR is significantly smaller and negligible when $k \geq 100$. When $k \geq 500$, \textsc{Gar}\xspace is slightly better than DPR despite that it simply uses BM25 for retrieval. In contrast, the classic QE method RM3, while showing marginal improvement over the vanilla BM25, does not achieve comparable performance with \textsc{Gar}\xspace or DPR. By fusing the results of \textsc{Gar}\xspace and DPR in the same way as described in Sec.~\ref{sec:retrieval}, we further obtain consistently higher performance than both methods, with top-100 accuracy 88.9\% and top-1000 accuracy 93.2\%. On the Trivia dataset, the results are even more encouraging -- \textsc{Gar}\xspace achieves consistently better retrieval accuracy than DPR when $k \geq 5$. On the other hand, the difference between BM25 and BM25 +RM3 is negligible, which suggests that naively considering top-ranked passages as relevant (\ie, pseudo relevance feedback) for QE does not always work for OpenQA. Results on more cutoffs of $k$ can be found in App.~\ref{sec_app:top_k_acc}. \start{Effectiveness of diverse query contexts} In Fig.~\ref{fig:fuse}, we show the performance of \textsc{Gar}\xspace when different query contexts are used to augment the queries. Although the individual performance when using each query context is somewhat similar, fusing their retrieved passages consistently leads to better performance, confirming that different generation-augmented queries are complementary to each other (recall examples in Table~\ref{tab:generation_eg}). \start{Performance breakdown by question type} In Table~\ref{tab:breakdown}, we show the top-100 accuracy of the compared retrieval methods per question type on the NQ test set. Again, \textsc{Gar}\xspace outperforms BM25 on all types of questions significantly and $\textsc{Gar}^{\texttt{+}}$\xspace achieves the best performance across the board, which further verifies the effectiveness of \textsc{Gar}\xspace. \begin{figure}[ht] \centering \includegraphics[width=0.99\linewidth]{fig/fuse_plot} \vspace{-.0cm} \vspace{-.2cm} \vspace{-.2cm} \caption{\textbf{Top-k retrieval accuracy} on the test set of NQ when fusing retrieval results of different generation-augmented queries.} \label{fig:fuse} \vspace{-.1cm} \end{figure} \begin{table}[ht] \centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{l rcccc } \toprule \textbf{Type} & \textbf{Percentage} & \textbf{BM25} & \textbf{DPR} & \textbf{\textsc{Gar}\xspace} & \textbf{$\textsc{Gar}^{\texttt{+}}$\xspace} \\ \midrule Who & 37.5\%& 82.1 & \underline{88.0} & 87.5 & \textbf{90.8} \\ When & 19.0\%& 73.1 & \underline{86.9} & 83.8 & \textbf{88.6} \\ What & 15.0\%& 76.5 & \underline{82.6} & 81.5 & \textbf{86.0} \\ Where & 10.9\%& 77.4 & \underline{89.1} & 87.0 & \textbf{90.8} \\ Other & 9.1\%& 79.3 & 78.1 & \underline{81.8} & \textbf{84.2} \\ How & 5.0\%& 78.2 & \underline{83.8} & 83.2 & \textbf{85.5} \\ Which & 3.3\%& 89.0 & 90.7 & \underline{94.1} & \textbf{94.9} \\ Why & 0.3\%& 90.0 & 90.0 & 90.0 & 90.0 \\ \bottomrule \end{tabular} } \caption[Caption]{\textbf{Top-100 retrieval accuracy breakdown of question type on NQ}. 
Best and second best methods in each category are \textbf{bold} and \underline{underlined}, respectively.} \label{tab:breakdown} \end{table} \begin{table}[t] \centering \resizebox{1.05\columnwidth}{!}{ \begin{tabular}{clcccc} \cmidrule[0.06em]{2-5} &\textbf{Method} & \textbf{NQ} & \multicolumn{2}{c}{\textbf{Trivia}} \\ \cmidrule{2-5} \multirow{9}{*}{ \rotatebox[origin=c]{90}{Extractive}} &Hard EM~\citep{min-etal-2019-discrete} & 28.1 & 50.9 & - \\ &Path Retriever~\citep{asai2019learning} & 32.6 & - & - \\ &ORQA~\citep{lee-etal-2019-latent} & 33.3 & 45.0 & - \\ &Graph Retriever~\citep{min2019knowledge} & 34.5 & 56.0 & - \\ &REALM~\citep{guu2020realm} & 40.4 & - & - & \\ &DPR~\citep{karpukhin2020dense} & 41.5 & 57.9 & - \\ &BM25 (ours) & 37.7 & 60.1 & -& \\ &\textsc{Gar}\xspace & \textbf{41.8} & \textbf{62.7} & \textbf{74.8} & \\ &$\textsc{Gar}^{\texttt{+}}$\xspace & \textbf{43.8}& - & - & \\ \cmidrule{2-5} \multirow{8}{*}{ \rotatebox[origin=c]{90}{Generative}} &GPT-3~\citep{brown2020language} & 29.9 & - & 71.2 \\ &T5~\citep{roberts2020much} & 36.6 & 60.5 & - \\ &SpanSeqGen~\citep{min2020ambigqa} & 42.2 & - & - \\ &RAG~\citep{lewis2020retrieval} & 44.5 & 56.1 & 68.0 \\ &FID \cite{izacard2020leveraging} & \textbf{51.4} & \textbf{67.6} & \textbf{80.1} \\ &BM25 (ours) & 35.3 & 58.6 & -& \\ &\textsc{Gar}\xspace & 38.1 & \textbf{62.2} & - & \\ &$\textsc{Gar}^{\texttt{+}}$\xspace & \textbf{45.3} & - & - & \\ \cmidrule[0.06em]{2-5} \end{tabular} } \caption[Caption]{\textbf{End-to-end comparison with the state-of-the-art methods in EM}. For Trivia, the left column denotes the open-domain test set and the right is the hidden Wikipedia test set on the public leaderboard.} \label{tab:sota} \end{table} \subsection{Passage Reading with \textsc{Gar}\xspace} \label{sec:exp_read} \start{Comparison w. the state-of-the-art} We show the comparison of end-to-end QA performance of extractive and generative methods in Table~\ref{tab:sota}. Extractive \textsc{Gar}\xspace achieves state-of-the-art performance among extractive methods on both NQ and Trivia datasets, despite that it is more lightweight and computationally efficient. Generative \textsc{Gar}\xspace outperforms most of the generative methods on Trivia but does not perform as well on NQ, which is somewhat expected and consistent with the performance at the retrieval stage, as the generative reader only takes a few passages as input and \textsc{Gar}\xspace does not outperform dense retrieval methods on NQ when $k$ is very small. However, combining \textsc{Gar}\xspace with DPR achieves significantly better performance than both methods or baselines that use DPR as input such as SpanSeqGen~\citep{min2020ambigqa} and RAG~\citep{lewis2020retrieval}. Also, \textsc{Gar}\xspace outperforms BM25 significantly under both extractive and generative setups, which again shows the effectiveness of the generated query contexts, even if they are heuristically discovered without any external supervision. The best performing generative method FID \cite{izacard2020leveraging} is not directly comparable as it takes more (100) passages as input. As an indirect comparison, \textsc{Gar}\xspace performs better than FID when FID encodes 10 passages (cf. Fig.~2 in \citet{izacard2020leveraging}). 
Moreover, since FID relies on the retrieval results of DPR as well, we believe that it is a low-hanging fruit to replace its input with \textsc{Gar}\xspace or $\textsc{Gar}^{\texttt{+}}$\xspace and further boost the performance.\footnote{This claim is later verified by the best systems in the NeurIPS 2020 EfficientQA competition \cite{min2021neurips}.} We also observe that, perhaps surprisingly, extractive BM25 performs reasonably well, especially on the Trivia dataset, outperforming many recent state-of-the-art methods.\footnote{We find that taking 500 passages during reader inference instead of 100 as in \citet{karpukhin2020dense} improves the performance of BM25 but not DPR.} Generative BM25 also performs competitively in our experiments. \start{Model Generalizability} Recent studies \cite{lewis2020question} show that there are significant question and answer overlaps between the training and test sets of popular OpenQA datasets. Specifically, 60\% to 70\% test-time answers also appear in the training set and roughly 30\% test-set questions have a near-duplicate paraphrase in the training set. Such observations suggest that many questions might have been answered by simple question or answer memorization. To further examine model generalizability, we study the per-category performance of different methods using the annotations in \citet{lewis2020question}. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{lcccc} \toprule \textbf{Method} & \textbf{Total} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{Question Overlap}} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{Answer Overlap Only}} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{No Overlap}}\\ \midrule DPR & 41.3 & \textbf{69.4} & 34.6 & 19.3\\ $\textsc{Gar}^{\texttt{+}}$\xspace (E) &\textbf{43.8} &66.7 &\textbf{38.1} &\textbf{23.9} \\ \midrule BART & 26.5 & 67.6 & 10.2 & 0.8\\ RAG & 44.5 & \textbf{70.7} & 34.9 & 24.8\\ $\textsc{Gar}^{\texttt{+}}$\xspace (G) &\textbf{45.3} &67.9 &\textbf{38.1} &\textbf{27.0} \\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{\textbf{EM scores with question-answer overlap category breakdown on NQ.} (E) and (G) denote extractive and generative readers, respectively. Results of baseline methods are taken from \citet{lewis2020question}. The observations on Trivia are similar and omitted. } \label{tab:overlap} \vspace{-.1cm} \end{table} As listed in Table~\ref{tab:overlap}, for the \textit{No Overlap} category, $\textsc{Gar}^{\texttt{+}}$\xspace (E) outperforms DPR on the extractive setup and $\textsc{Gar}^{\texttt{+}}$\xspace (G) outperforms RAG on the generative setup, which indicates that better end-to-end model generalizability can be achieved by adding \textsc{Gar}\xspace for retrieval. $\textsc{Gar}^{\texttt{+}}$\xspace also achieves the best EM under the \textit{Answer Overlap Only} category. In addition, we observe that a closed-book BART model that only takes the question as input performs much worse than additionally taking top-retrieved passages, \ie, $\textsc{Gar}^{\texttt{+}}$\xspace (G), especially on the questions that require generalizability. Notably, all methods perform significantly better on the \textit{Question Overlap} category, which suggests that the high \textit{Total} EM is mostly contributed by question memorization. 
That said, $\textsc{Gar}^{\texttt{+}}$\xspace appears to be less dependent on question memorization given its lower EM for this category.\footnote{The same ablation study is also conducted on the retrieval stage and similar results are observed. More detailed discussions can be found in App.~\ref{sec_app:top_k_acc}.} \subsection{Efficiency of \textsc{Gar}\xspace } \label{sec:runtime} \textsc{Gar}\xspace is efficient and scalable since it uses sparse representations for retrieval and does not involve time-consuming training process such as RL \cite{nogueira-cho-2017-task,liu2019generative}. The only overhead of \textsc{Gar}\xspace is on the generation of query contexts and the retrieval with generation-augmented (thus longer) queries, whose computational complexity is significantly lower than other methods with comparable retrieval accuracy. \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{lccc} \toprule \textbf{} & \textbf{Training} & \textbf{Indexing} & \textbf{Retrieval} \\ \midrule DPR & 24h w. 8 GPUs & 17.3h w. 8 GPUs & 30 min w. 1 GPU \\ \textsc{Gar}\xspace & 3 $\sim$ 6h w. 1 GPU & 0.5h w. 35 CPUs & 5 min w. 35 CPUs \\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{\textbf{Comparison of computational cost between DPR and \textsc{Gar}\xspace at different stages.} The training time of \textsc{Gar}\xspace is for one generation target but different generators can be trained in parallel. } \label{tab:runtime} \vspace{-.1cm} \end{table} We use Nvidia V100 GPUs and Intel Xeon Platinum 8168 CPUs in our experiments. As listed in Table~\ref{tab:runtime}, the training time of \textsc{Gar}\xspace is 3 to 6 hours on 1 GPU depending on the generation target. As a comparison, REALM \cite{guu2020realm} uses 64 TPUs to train for 200k steps during pre-training alone and DPR \cite{karpukhin2020dense} takes about 24 hours to train with 8 GPUs. To build the indices of Wikipedia passages, \textsc{Gar}\xspace only takes around 30 min with 35 CPUs, while DPR takes 8.8 hours on 8 GPUs to generate dense representations and another 8.5 hours to build the FAISS index \cite{JDH17}. For retrieval, \textsc{Gar}\xspace takes about 1 min to generate one query context with 1 GPU, 1 min to retrieve 1,000 passages for the NQ test set with answer/title-augmented queries and 2 min with sentence-augmented queries using 35 CPUs. In contrast, DPR takes about 30 min on 1 GPU. \section{Conclusion} In this work, we propose Generation-Augmented Retrieval and demonstrate that the relevant contexts generated by PLMs without external supervision can significantly enrich query semantics and improve retrieval accuracy. Remarkably, \textsc{Gar}\xspace with sparse representations performs similarly or better than state-of-the-art methods based on the dense representations of the original queries. \textsc{Gar}\xspace can also be easily combined with dense representations to produce even better results. Furthermore, \textsc{Gar}\xspace achieves state-of-the-art end-to-end performance on extractive OpenQA and competitive performance under the generative setup. \section{Future Extensions} \start{Potential improvements} There is still much space to explore and improve for \textsc{Gar}\xspace in future work. For query context generation, one can explore multi-task learning to further reduce computational cost and examine whether different contexts can mutually enhance each other when generated by the same generator. 
\section{Introduction} Open-domain question answering (OpenQA) aims to answer factoid questions without a pre-specified domain and has numerous real-world applications. In OpenQA, a large collection of documents (\eg, Wikipedia) is often used to seek information pertaining to the questions. One of the most common approaches uses a retriever-reader architecture \cite{chen-etal-2017-reading}, which first retrieves a small subset of documents \textit{using the question as the query} and then reads the retrieved documents to extract (or generate) an answer. The retriever is crucial as it is infeasible to examine every piece of information in the entire document collection (\eg, millions of Wikipedia passages) and the retrieval accuracy bounds the performance of the (extractive) reader. Early OpenQA systems \cite{chen-etal-2017-reading} use classic retrieval methods such as TF-IDF and BM25 with sparse representations. Sparse methods are lightweight and efficient, but unable to perform semantic matching and fail to retrieve relevant passages without lexical overlap. More recently, methods based on dense representations \cite{guu2020realm,karpukhin2020dense} learn to embed queries and passages into a latent vector space, in which text similarity beyond lexical overlap can be measured. Dense retrieval methods can retrieve semantically relevant but lexically different passages and often achieve better performance than sparse methods.
However, the dense models are more computationally expensive and suffer from information loss as they condense the entire text sequence into a fixed-size vector that does not guarantee exact matching \cite{luan2020sparse}. There have been some recent studies on query reformulation with text generation for other retrieval tasks, which, for example, rewrite the queries to context-independent~\cite{yu2020few,lin2020query,vakulenko2020question} or well-formed~\cite{liu2019generative} ones. However, these methods require either task-specific data (\eg, conversational contexts, ill-formed queries) or external resources such as paraphrase data~\cite{zaiem2019sequence,wang2020deep} that cannot or do not transfer well to OpenQA. Also, some rely on time-consuming training processes like reinforcement learning (RL)~\cite{nogueira-cho-2017-task,liu2019generative,wang2020deep} that are not efficient enough for OpenQA (more discussion in Sec.~\ref{sec:related_work}). In this paper, we propose Generation-Augmented Retrieval (\textsc{Gar}\xspace), which augments a query through text generation with a pre-trained language model (PLM). Different from prior studies that reformulate queries, \textsc{Gar}\xspace does not require external resources or downstream feedback via RL as supervision, because it does not \textit{rewrite} the query but \textit{expands} it with heuristically discovered relevant contexts, which are fetched from PLMs and provide richer background information (Table~\ref{tab:generation_eg}). For example, by prompting a PLM to generate the title of a relevant passage given a query and appending the generated title to the query, it becomes easier to retrieve that relevant passage. Intuitively, the generated contexts explicitly express the search intent that is not present in the original query. As a result, \textsc{Gar}\xspace with sparse representations achieves comparable or even better performance than state-of-the-art approaches~\cite{karpukhin2020dense,guu2020realm} with dense representations of the original queries, while being more lightweight and efficient in terms of both training and inference, including the cost of the generation model (Sec.~\ref{sec:runtime}). Specifically, we expand the query (question) by adding relevant contexts as follows. We conduct seq2seq learning with the question as the input and various freely accessible in-domain contexts as the output, such as \textit{the answer}, \textit{the sentence in which the answer appears}, and \textit{the title of a passage that contains the answer}. We then append the generated contexts to the question as the \textit{generation-augmented query} for retrieval. We demonstrate that using multiple contexts from diverse generation targets is beneficial, as fusing the retrieval results of different generation-augmented queries consistently yields better retrieval accuracy.
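As a concrete illustration, the following is a minimal sketch of how generation-augmented queries could be formed with off-the-shelf seq2seq tooling; the checkpoint names are hypothetical placeholders for BART generators fine-tuned on the respective context types, and greedy decoding mirrors our setup.
\begin{verbatim}
# Minimal sketch of generation-augmented query construction.
# The checkpoint names are hypothetical placeholders for BART
# generators fine-tuned on each context type (answer/sentence/title).
from transformers import BartForConditionalGeneration, BartTokenizer

def generate_context(model, tokenizer, question, max_len=64):
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    ids = model.generate(**inputs, max_length=max_len, num_beams=1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

question = "when did the album come out?"
augmented_queries = []
for ckpt in ["gar-answer", "gar-sentence", "gar-title"]:
    tok = BartTokenizer.from_pretrained(ckpt)
    model = BartForConditionalGeneration.from_pretrained(ckpt)
    # Expand (not rewrite) the query with the generated context.
    augmented_queries.append(question + " " +
                             generate_context(model, tok, question))
\end{verbatim}
Each augmented query is then issued to the retriever separately and the results are fused (Sec.~\ref{sec:retrieval}).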
(3) Since \textsc{Gar}\xspace uses sparse representations to measure lexical overlap\footnote{Strictly speaking, \textsc{Gar}\xspace with sparse representations handles semantics before retrieval by enriching the queries, while maintaining the advantage of exact matching.}, it is complementary to dense representations: by fusing the retrieval results of \textsc{Gar}\xspace and DPR (denoted as $\textsc{Gar}^{\texttt{+}}$\xspace), we obtain consistently better performance than either method used individually. (4) \textsc{Gar}\xspace outperforms DPR in the end-to-end QA performance (EM) when the same extractive reader is used: EM=41.8 (43.8 for $\textsc{Gar}^{\texttt{+}}$\xspace) on NQ and 62.7 on Trivia, creating new state-of-the-art results for extractive OpenQA. \textsc{Gar}\xspace also outperforms other retrieval methods under the generative setup when the same generative reader is used: EM=38.1 (45.3 for $\textsc{Gar}^{\texttt{+}}$\xspace) on NQ and 62.2 on Trivia. \start{Contributions} (1) We propose Generation-Augmented Retrieval (\textsc{Gar}\xspace), which augments queries with heuristically discovered relevant contexts through text generation without external supervision or time-consuming downstream feedback. (2) We show that using generation-augmented queries achieves significantly better retrieval and QA results than using the original queries or existing unsupervised QE methods. (3) We show that \textsc{Gar}\xspace, combined with a simple BM25 model, achieves new state-of-the-art performance on two benchmark datasets in extractive OpenQA and competitive results in the generative setting. \section{Related Work} \label{sec:related_work} \start{Conventional Query Expansion} \textsc{Gar}\xspace shares some merits with query expansion (QE) methods based on pseudo relevance feedback \cite{rocchio1971relevance,abdul2004umass,lv2010positional} in that they both expand the queries with relevant contexts (terms) without the use of external supervision. \textsc{Gar}\xspace is superior as it expands the queries with knowledge stored in the PLMs rather than the retrieved passages, and its expanded terms are learned through text generation. \start{Recent Query Reformulation} There are recent or concurrent studies \cite{nogueira-cho-2017-task,zaiem2019sequence,yu2020few,vakulenko2020question,lin2020query} that reformulate queries with generation models for other retrieval tasks. However, these studies are not easily applicable or efficient enough for OpenQA because: (1) they require external resources such as paraphrase data~\cite{zaiem2019sequence}, search sessions~\cite{yu2020few}, or conversational contexts~\cite{lin2020query,vakulenko2020question} to form the reformulated queries, which are either unavailable or show inferior domain-transfer performance in OpenQA~\cite{zaiem2019sequence}; (2) they involve time-consuming training processes such as RL. For example, \citet{nogueira-cho-2017-task} reported a training time of 8 to 10 days as it uses retrieval performance in the reward function and conducts retrieval at each iteration. In contrast, \textsc{Gar}\xspace uses freely accessible in-domain contexts like passage titles as the generation targets and standard seq2seq learning, which, despite its simplicity, is not only more efficient but also effective for OpenQA. \start{Retrieval for OpenQA} Existing sparse retrieval methods for OpenQA \cite{chen-etal-2017-reading} rely solely on the information in the questions.
\textsc{Gar}\xspace extends the queries with contexts relevant to the questions by extracting information from PLMs and helps sparse methods achieve performance comparable to or better than dense methods~\cite{guu2020realm,karpukhin2020dense}, while enjoying the simplicity and efficiency of sparse representations. \textsc{Gar}\xspace can also be used with dense representations to seek even better performance, which we leave as future work. \start{Generative QA} Generative QA generates answers through seq2seq learning instead of extracting answer spans. Recent studies on generative OpenQA \cite{lewis2020retrieval,min2020ambigqa,izacard2020leveraging} are orthogonal to \textsc{Gar}\xspace in that they focus on improving the reading stage and directly reuse DPR \cite{karpukhin2020dense} as the retriever. Unlike generative QA, the goal of \textsc{Gar}\xspace is not to generate perfect answers to the questions but pertinent contexts that are helpful for retrieval. Another line of generative QA learns to generate answers using PLMs with the question alone as input, without relevant passages as evidence \cite{roberts2020much,brown2020language}. \textsc{Gar}\xspace further confirms that one can extract factual knowledge from PLMs, which is not limited to the answers as in prior studies but also covers other relevant contexts. \section{Generation-Augmented Retrieval} \subsection{Task Formulation} OpenQA aims to answer factoid questions without pre-specified domains. We assume that a large collection of documents $C$ (\ie, Wikipedia) is given as the resource to answer the questions and a retriever-reader architecture is used to tackle the task, where the retriever retrieves a small subset of the documents $D \subset C$ and the reader reads the documents $D$ to extract (or generate) an answer. Our goal is to improve the effectiveness and efficiency of the retriever and consequently improve the performance of the reader. \subsection{Generation of Query Contexts} In \textsc{Gar}\xspace, queries are augmented with various heuristically discovered relevant contexts in order to retrieve more relevant passages in terms of both quantity and quality. For the task of OpenQA where the query is a question, we take the following three freely accessible contexts as the generation targets. We show in Sec.~\ref{res_retrieval} that having multiple generation targets is helpful in that fusing their results consistently brings better retrieval accuracy. \start{Context 1: The default target (answer)} The default target is the label in the task of interest, which is the answer in OpenQA. The answer to the question is naturally useful for the retrieval of relevant passages that contain the answer itself. As shown in previous work \cite{roberts2020much,brown2020language}, PLMs are able to answer certain questions solely by taking the questions as input (\ie, closed-book QA). Instead of using the generated answers directly as in closed-book QA, \textsc{Gar}\xspace treats them as contexts of the question for retrieval. The advantage is that even if the generated answers are partially correct (or even incorrect), they may still benefit retrieval as long as they are relevant to the passages that contain the correct answers (\eg, co-occur with the correct answers). \start{Context 2: Sentence containing the default target} The sentence in a passage that contains the answer is used as another generation target.
Similar to using answers as the generation target, the generated sentences are still beneficial for retrieving relevant passages even if they do not contain the answers, as their semantics is highly related to the questions/answers (examples in Sec.~\ref{sec:exp_gen}). One can take the relevant sentences in the ground-truth passages (if any) or those in the positive passages of a retriever as the reference, depending on the trade-off between reference quality and diversity. \start{Context 3: Title of passage containing the default target} One can also use the titles of relevant passages as the generation target if available. Specifically, we retrieve Wikipedia passages using BM25 with the question as the query, and take the page titles of positive passages that contain the answers as the generation target. We observe that the page titles of positive passages are often entity names of interest, and sometimes (but not always) the answers to the questions. Intuitively, if \textsc{Gar}\xspace learns which Wikipedia pages the question is related to, the queries augmented by the generated titles would naturally have a better chance of retrieving those relevant passages. While it is likely that some of the generated query contexts involve unfaithful or nonfactual information due to hallucination in text generation \cite{mao2020constrained} and introduce noise during retrieval, they are beneficial rather than harmful overall, as our experiments show that \textsc{Gar}\xspace improves both retrieval and QA performance over BM25 significantly. Also, since we generate 3 different (complementary) query contexts and fuse their retrieval results, the distraction of hallucinated content is further alleviated. \subsection{Retrieval with Generation-Augmented Queries} \label{sec:retrieval} After generating the contexts of a query, we append them to the query to form a \textit{generation-augmented query}.\footnote{One may create a title field during document indexing and conduct multi-field retrieval, but here we append the titles to the questions as other query contexts for generalizability.} We observe that conducting retrieval with the generated contexts (\eg, answers) alone as queries instead of concatenation is ineffective because (1) some of the generated answers are rather irrelevant, and (2) a query consisting of the correct answer alone (without the question) may retrieve false positive passages with unrelated contexts that happen to contain the answer. Such low-quality passages may mislead the subsequent passage reading stage. When there are multiple query contexts, we conduct retrieval using the queries with different generated contexts separately and then fuse their results; one-time retrieval with all the contexts appended performs slightly, but not significantly, worse. For simplicity, we fuse the retrieval results in a straightforward way: an equal number of passages are taken from the top-retrieved passages of each source. One may also use weighted or more sophisticated fusion strategies such as reciprocal rank fusion \cite{cormack2009reciprocal}, the results of which are slightly better according to our experiments.\footnote{We use the fusion tools at \url{https://github.com/joaopalotti/trectools}.}
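The following is a minimal sketch of this equal-split (round-robin) fusion, with reciprocal rank fusion as an alternative; it assumes only that each augmented query has already been issued to a retriever that returns a ranked list of passage ids.
\begin{verbatim}
# Minimal sketch of fusing the rankings retrieved with different
# generation-augmented queries. `rankings` holds one ranked list of
# passage ids per augmented query (the retriever itself is assumed).

def fuse_equal(rankings, k=100):
    # Take an equal number of passages from each source, round-robin.
    fused, seen = [], set()
    for depth in range(max(len(r) for r in rankings)):
        for ranking in rankings:
            if depth < len(ranking) and ranking[depth] not in seen:
                seen.add(ranking[depth])
                fused.append(ranking[depth])
        if len(fused) >= k:
            break
    return fused[:k]

def fuse_rrf(rankings, k=100, c=60):
    # Reciprocal rank fusion (Cormack et al., 2009).
    scores = {}
    for ranking in rankings:
        for rank, pid in enumerate(ranking):
            scores[pid] = scores.get(pid, 0.0) + 1.0 / (c + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:k]
\end{verbatim}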
Next, one can use any off-the-shelf retriever for passage retrieval. Here, we use a simple BM25 model to demonstrate that \textsc{Gar}\xspace with sparse representations can already achieve performance comparable to or better than state-of-the-art dense methods while being more lightweight and efficient (including the cost of the generation model), closing the gap between sparse and dense retrieval methods. \section{OpenQA with \textsc{Gar}\xspace} To further verify the effectiveness of \textsc{Gar}\xspace, we equip it with both extractive and generative readers for end-to-end QA evaluation. We follow the reader design of the major baselines for a fair comparison, while virtually any existing QA reader can be used with \textsc{Gar}\xspace. \subsection{Extractive Reader} For the extractive setup, we largely follow the design of the extractive reader in DPR \cite{karpukhin2020dense}. Let $D = [d_1, d_2, ..., d_k]$ denote the list of retrieved passages with passage relevance scores $\mathbf{D}$. Let $S_i = [s_1, s_2, ..., s_N]$ denote the top $N$ text spans in passage $d_i$ ranked by span relevance scores $\mathbf{S_i}$. Briefly, the DPR reader uses BERT-base \cite{devlin-etal-2019-bert} for representation learning, where it estimates the passage relevance score $\mathbf{D}_k$ for each retrieved passage $d_k$ based on the [CLS] tokens of all retrieved passages $D$, and assigns span relevance scores $\mathbf{S_i}$ to each candidate span based on the representations of its start and end tokens. Finally, the span with the highest span relevance score from the passage with the highest passage relevance score is chosen as the answer. We refer the readers to \citet{karpukhin2020dense} for more details. \start{Passage-level Span Voting} Many extractive QA methods \cite{chen-etal-2017-reading,min2019knowledge,guu2020realm,karpukhin2020dense} measure the probability of span extraction in different retrieved passages independently, although their collective signals may provide more evidence in determining the correct answer. We propose a simple yet effective passage-level span voting mechanism, which aggregates the predictions of the spans with the same surface form from different retrieved passages. Intuitively, if a text span is considered as the answer multiple times in different passages, it is more likely to be the correct answer. Specifically, \textsc{Gar}\xspace calculates a normalized score $p(S_i[j])$ for the $j$-th span in passage $d_i$ during inference as follows: $p(S_i[j]) = \text{softmax}(\mathbf{D})[i] \times \text{softmax}(\mathbf{S_i})[j]$. \textsc{Gar}\xspace then aggregates the scores of the spans with the same surface string among all the retrieved passages as the collective passage-level score.\footnote{We find that the number of spans used for normalization in each passage does not have a significant impact on the final performance (we take $N=5$) and using the raw or normalized strings for aggregation also performs similarly.}
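A minimal sketch of the voting step follows; the list-of-lists layout of the span strings and raw scores is assumed purely for exposition.
\begin{verbatim}
# Minimal sketch of passage-level span voting. `passage_scores` holds
# the raw relevance scores D of the retrieved passages;
# `span_strings[i][j]` and `span_scores[i][j]` hold the j-th candidate
# span (its text and raw score S_i[j]) in passage d_i.
import numpy as np
from collections import defaultdict

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def vote_answer(passage_scores, span_strings, span_scores):
    votes = defaultdict(float)
    p_psg = softmax(np.array(passage_scores))
    for i, (strings, scores) in enumerate(zip(span_strings, span_scores)):
        p_span = softmax(np.array(scores))
        for j, s in enumerate(strings):
            # p(S_i[j]) = softmax(D)[i] * softmax(S_i)[j]; spans with
            # the same surface string accumulate scores across passages.
            votes[s] += p_psg[i] * p_span[j]
    return max(votes, key=votes.get)
\end{verbatim}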
\subsection{Generative Reader} For the generative setup, we use a seq2seq framework where the input is the concatenation of the question and top-retrieved passages and the target output is the desired answer. Such generative readers are adopted in recent methods such as SpanSeqGen~\cite{min2020ambigqa} and Longformer~\cite{beltagy2020longformer}. Specifically, we use BART-large \cite{lewis2019bart} as the generative reader, which concatenates the question and top-retrieved passages up to its length limit (1,024 tokens, 7.8 passages on average). Generative \textsc{Gar}\xspace is directly comparable with SpanSeqGen \cite{min2020ambigqa}, which uses the retrieval results of DPR, but not with Fusion-in-Decoder (FID) \cite{izacard2020leveraging}, since FID encodes 100 passages rather than 1,024 tokens and involves more model parameters. \section{Experiment Setup} \subsection{Datasets} We conduct experiments on the open-domain version of two popular QA benchmarks: Natural Questions (NQ) \cite{kwiatkowski-etal-2019-natural} and TriviaQA (Trivia) \cite{joshi-etal-2017-triviaqa}. The statistics of the datasets are listed in Table~\ref{tab:dataset}. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{llrrr} \toprule \textbf{Dataset} & \textbf{Train / Val / Test} & \textbf{Q-len} & \textbf{A-len} & \textbf{\#-A}\\ \midrule NQ &79,168 / 8,757 / 3,610 &12.5 & 5.2 & 1.2\\ Trivia & 78,785 / 8,837 / 11,313 & 20.2 & 5.5 & 13.7\\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{Dataset statistics: the number of samples per data split, the average question (answer) length, and the number of answers per question. } \label{tab:dataset} \vspace{-.1cm} \end{table} \subsection{Evaluation Metrics} Following prior studies \cite{karpukhin2020dense}, we use top-k retrieval accuracy to evaluate the performance of the retriever and the Exact Match (EM) score to measure the performance of the reader. \textit{Top-k retrieval accuracy} is defined as the proportion of questions for which the top-k retrieved passages contain at least one answer span, which is an upper bound of how many questions are ``answerable'' by an extractive reader. \textit{Exact Match (EM)} is the proportion of predicted answer spans that are exactly the same as (one of) the ground-truth answer(s), after string normalization such as article and punctuation removal.
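For reference, both metrics can be computed as in the sketch below; the normalization shown (lower-casing, article and punctuation removal) is the standard SQuAD-style procedure and a simplification of the exact evaluation scripts.
\begin{verbatim}
# Minimal sketch of the two evaluation metrics.
import re
import string

def normalize(text):
    # Lower-case, drop punctuation and articles, fix whitespace.
    text = "".join(ch for ch in text.lower()
                   if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    return float(any(normalize(prediction) == normalize(a)
                     for a in gold_answers))

def top_k_accuracy(all_passages, all_answers, k):
    # A question is a hit if any top-k passage contains an answer span.
    hits = sum(
        any(normalize(a) in normalize(p)
            for p in passages[:k] for a in answers)
        for passages, answers in zip(all_passages, all_answers))
    return hits / len(all_passages)
\end{verbatim}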
\subsection{Compared Methods} For passage retrieval, we mainly compare with BM25 and DPR, which represent the most used state-of-the-art methods of sparse and dense retrieval for OpenQA, respectively. For query expansion, we re-emphasize that \textsc{Gar}\xspace is the first QE approach designed for OpenQA, and most of the recent approaches are not applicable or efficient enough for OpenQA since they have task-specific objectives, require external supervision that was shown to transfer poorly to OpenQA, or take many days to train (Sec.~\ref{sec:related_work}). We thus compare with a classic unsupervised QE method RM3 \cite{abdul2004umass} that does not need external resources for a fair comparison. For passage reading, we compare with both extractive~\citep{min-etal-2019-discrete,asai2019learning,lee-etal-2019-latent,min2019knowledge,guu2020realm,karpukhin2020dense} and generative~\citep{brown2020language,roberts2020much,min2020ambigqa,lewis2020retrieval,izacard2020leveraging} methods when equipping \textsc{Gar}\xspace with the corresponding reader. \input{generation_eg} \subsection{Implementation Details} \start{Retriever} We use Anserini \cite{yang2017anserini} for text retrieval of BM25 and \textsc{Gar}\xspace with its default parameters. We conduct grid search for the QE baseline RM3 \cite{abdul2004umass}. \start{Generator} We use BART-large \cite{lewis2019bart} to generate query contexts in \textsc{Gar}\xspace. When there are multiple desired targets (such as multiple answers or titles), we concatenate them with [SEP] tokens as the reference and remove the [SEP] tokens in the generation-augmented queries. For Trivia, in particular, we use the value field of the answer annotations as the generation target and observe better performance. We take the checkpoint with the best ROUGE-1 F1 score on the validation set, while observing that the retrieval accuracy of \textsc{Gar}\xspace is relatively insensitive to checkpoint selection, since we do not directly use the generated contexts but treat them as augmentation of the queries for retrieval. \start{Reader} Extractive \textsc{Gar}\xspace uses the reader of DPR with largely the same hyperparameters, which is initialized with BERT-base \cite{devlin-etal-2019-bert} and takes 100 (500) retrieved passages during training (inference). Generative \textsc{Gar}\xspace concatenates the question and top-10 retrieved passages, and takes at most 1,024 tokens as input. Greedy decoding is adopted for all generation models, which appears to perform similarly to (more expensive) beam search. \section{Experiment Results} We evaluate the effectiveness of \textsc{Gar}\xspace in three stages: \textit{generation} of query contexts (Sec.~\ref{sec:exp_gen}), \textit{retrieval} of relevant passages (Sec.~\ref{res_retrieval}), and passage \textit{reading} for OpenQA (Sec.~\ref{sec:exp_read}). Ablation studies are mostly conducted on the NQ dataset to understand the drawbacks of \textsc{Gar}\xspace, since it achieves better performance on Trivia. \subsection{Query Context Generation} \label{sec:exp_gen} \start{Automatic Evaluation} To evaluate the quality of the generated query contexts, we first measure their lexical overlap with the ground-truth query contexts. As suggested by the nontrivial ROUGE scores in Table~\ref{tab:generation_num}, \textsc{Gar}\xspace does learn to generate meaningful query contexts that could help the retrieval stage. We next measure the lexical overlap between the query and the ground-truth passage. The ROUGE-1/2/L F1 scores between the original query and the ground-truth passage are 6.00/2.36/5.01, and those for the generation-augmented query are 7.05/2.84/5.62 (answer), 13.21/6.99/10.27 (sentence), and 7.13/2.85/5.76 (title) on NQ, respectively. Such results further demonstrate that the generated query contexts significantly increase the word overlap between the queries and the positive passages, and thus are likely to improve retrieval results.\footnote{We use F1 instead of recall to avoid the unfair favor of the (longer) generation-augmented query.} \begin{table}[ht] \centering \scalebox{.8}{ \begin{tabular}{lrrr} \toprule \textbf{Context} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L}\\ \midrule Answer &33.51 & 20.54 & 33.30\\ Sentence & 37.14 & 24.71 & 33.91\\ Title & 43.20 & 32.11 & 39.67\\ \bottomrule \end{tabular} } \vspace{-.0cm} \caption{\textbf{ROUGE F1 scores of the generated query contexts} on the validation set of the NQ dataset.
} \label{tab:generation_num} \vspace{-.1cm} \end{table} \begin{table*}[ht] \centering \resizebox{2\columnwidth}{!}{ \begin{tabular}{l ccccc | ccccc} \toprule \multirow{2}{*}{ \textbf{Method}} & \multicolumn{5}{c|}{\textbf{NQ}} & \multicolumn{5}{c}{\textbf{Trivia}} \\ & Top-5 & Top-20 & Top-100 & Top-500 & Top-1000 & Top-5 & Top-20 & Top-100 & Top-500 & Top-1000 \\ \midrule BM25 (ours) & 43.6 & 62.9 & 78.1 & 85.5 & 87.8 & 67.7 & 77.3 & 83.9 & 87.9 & 88.9 \\ BM25 +RM3 & 44.6 & 64.2 & 79.6 & 86.8 & 88.9 & 67.0 & 77.1 & 83.8 & 87.7 & 88.9 \\ DPR & \underline{68.3} & \underline{80.1} & \underline{86.1} & 90.3 & 91.2 & 72.7 & 80.2 & 84.8 & - & - \\ \textsc{Gar}\xspace & 60.9 & 74.4 & 85.3 & \underline{90.3} & \underline{91.7} & \underline{73.1} & \underline{80.4} & \underline{85.7} & \textbf{88.9} & \textbf{89.7} \\ $\textsc{Gar}^{\texttt{+}}$\xspace & \textbf{70.7} & \textbf{81.6} & \textbf{88.9} & \textbf{92.0} & \textbf{93.2} & \textbf{76.0} & \textbf{82.1} & \textbf{86.6} & - & - \\ \bottomrule \end{tabular} } \caption[Caption]{\textbf{Top-k retrieval accuracy on the test sets}. The baselines are evaluated by ourselves and perform better than reported in \citet{karpukhin2020dense}. \textsc{Gar}\xspace helps BM25 to achieve comparable or better performance than DPR. Best and second best methods are \textbf{bold} and \underline{underlined}, respectively.} \label{tab:top_k_acc} \end{table*} \start{Case Studies} In Table~\ref{tab:generation_eg}, we show several examples of the generated query contexts and their ground-truth references. In the first example, the correct album release date appears in both the generated answer and the generated sentence, and the generated title is the same as the Wikipedia page title of the album. In the last two examples, the generated answers are wrong, but fortunately the generated sentences contain the correct answer and/or other relevant information, and the generated titles are highly related to the question as well, which shows that different query contexts are complementary to each other and that the noise during query context generation is thus reduced. \subsection{Generation-Augmented Retrieval} \label{res_retrieval} \start{Comparison w. the state-of-the-art} We next evaluate the effectiveness of \textsc{Gar}\xspace for retrieval. In Table~\ref{tab:top_k_acc}, we show the top-k retrieval accuracy of BM25, BM25 with query expansion (+RM3) \cite{abdul2004umass}, DPR~\citep{karpukhin2020dense}, \textsc{Gar}\xspace, and $\textsc{Gar}^{\texttt{+}}$\xspace (\textsc{Gar}\xspace+DPR). On the NQ dataset, while BM25 clearly underperforms DPR regardless of the number of retrieved passages, the gap between \textsc{Gar}\xspace and DPR is significantly smaller and negligible when $k \geq 100$. When $k \geq 500$, \textsc{Gar}\xspace is slightly better than DPR even though it simply uses BM25 for retrieval. In contrast, the classic QE method RM3, while showing marginal improvement over the vanilla BM25, does not achieve comparable performance with \textsc{Gar}\xspace or DPR. By fusing the results of \textsc{Gar}\xspace and DPR in the same way as described in Sec.~\ref{sec:retrieval}, we further obtain consistently higher performance than both methods, with a top-100 accuracy of 88.9\% and a top-1000 accuracy of 93.2\%. On the Trivia dataset, the results are even more encouraging -- \textsc{Gar}\xspace achieves consistently better retrieval accuracy than DPR when $k \geq 5$.
On the other hand, the difference between BM25 and BM25 +RM3 is negligible, which suggests that naively considering top-ranked passages as relevant (\ie, pseudo relevance feedback) for QE does not always work for OpenQA. Results on more cutoffs of $k$ can be found in App.~\ref{sec_app:top_k_acc}. \start{Effectiveness of diverse query contexts} In Fig.~\ref{fig:fuse}, we show the performance of \textsc{Gar}\xspace when different query contexts are used to augment the queries. Although the individual performance when using each query context is somewhat similar, fusing their retrieved passages consistently leads to better performance, confirming that different generation-augmented queries are complementary to each other (recall the examples in Table~\ref{tab:generation_eg}). \start{Performance breakdown by question type} In Table~\ref{tab:breakdown}, we show the top-100 accuracy of the compared retrieval methods per question type on the NQ test set. Again, \textsc{Gar}\xspace significantly outperforms BM25 on all types of questions, and $\textsc{Gar}^{\texttt{+}}$\xspace achieves the best performance across the board, which further verifies the effectiveness of \textsc{Gar}\xspace. \begin{figure}[ht] \centering \includegraphics[width=0.99\linewidth]{fig/fuse_plot} \vspace{-.4cm} \caption{\textbf{Top-k retrieval accuracy} on the test set of NQ when fusing retrieval results of different generation-augmented queries.} \label{fig:fuse} \vspace{-.1cm} \end{figure} \begin{table}[ht] \centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{l rcccc } \toprule \textbf{Type} & \textbf{Percentage} & \textbf{BM25} & \textbf{DPR} & \textbf{\textsc{Gar}\xspace} & \textbf{$\textsc{Gar}^{\texttt{+}}$\xspace} \\ \midrule Who & 37.5\%& 82.1 & \underline{88.0} & 87.5 & \textbf{90.8} \\ When & 19.0\%& 73.1 & \underline{86.9} & 83.8 & \textbf{88.6} \\ What & 15.0\%& 76.5 & \underline{82.6} & 81.5 & \textbf{86.0} \\ Where & 10.9\%& 77.4 & \underline{89.1} & 87.0 & \textbf{90.8} \\ Other & 9.1\%& 79.3 & 78.1 & \underline{81.8} & \textbf{84.2} \\ How & 5.0\%& 78.2 & \underline{83.8} & 83.2 & \textbf{85.5} \\ Which & 3.3\%& 89.0 & 90.7 & \underline{94.1} & \textbf{94.9} \\ Why & 0.3\%& 90.0 & 90.0 & 90.0 & 90.0 \\ \bottomrule \end{tabular} } \caption[Caption]{\textbf{Top-100 retrieval accuracy breakdown of question type on NQ}.
Best and second best methods in each category are \textbf{bold} and \underline{underlined}, respectively.} \label{tab:breakdown} \end{table} \begin{table}[t] \centering \resizebox{1.05\columnwidth}{!}{ \begin{tabular}{clcccc} \cmidrule[0.06em]{2-5} &\textbf{Method} & \textbf{NQ} & \multicolumn{2}{c}{\textbf{Trivia}} \\ \cmidrule{2-5} \multirow{9}{*}{ \rotatebox[origin=c]{90}{Extractive}} &Hard EM~\citep{min-etal-2019-discrete} & 28.1 & 50.9 & - \\ &Path Retriever~\citep{asai2019learning} & 32.6 & - & - \\ &ORQA~\citep{lee-etal-2019-latent} & 33.3 & 45.0 & - \\ &Graph Retriever~\citep{min2019knowledge} & 34.5 & 56.0 & - \\ &REALM~\citep{guu2020realm} & 40.4 & - & - \\ &DPR~\citep{karpukhin2020dense} & 41.5 & 57.9 & - \\ &BM25 (ours) & 37.7 & 60.1 & - \\ &\textsc{Gar}\xspace & \textbf{41.8} & \textbf{62.7} & \textbf{74.8} \\ &$\textsc{Gar}^{\texttt{+}}$\xspace & \textbf{43.8} & - & - \\ \cmidrule{2-5} \multirow{8}{*}{ \rotatebox[origin=c]{90}{Generative}} &GPT-3~\citep{brown2020language} & 29.9 & - & 71.2 \\ &T5~\citep{roberts2020much} & 36.6 & 60.5 & - \\ &SpanSeqGen~\citep{min2020ambigqa} & 42.2 & - & - \\ &RAG~\citep{lewis2020retrieval} & 44.5 & 56.1 & 68.0 \\ &FID \cite{izacard2020leveraging} & \textbf{51.4} & \textbf{67.6} & \textbf{80.1} \\ &BM25 (ours) & 35.3 & 58.6 & - \\ &\textsc{Gar}\xspace & 38.1 & \textbf{62.2} & - \\ &$\textsc{Gar}^{\texttt{+}}$\xspace & \textbf{45.3} & - & - \\ \cmidrule[0.06em]{2-5} \end{tabular} } \caption[Caption]{\textbf{End-to-end comparison with the state-of-the-art methods in EM}. For Trivia, the left column denotes the open-domain test set and the right is the hidden Wikipedia test set on the public leaderboard.} \label{tab:sota} \end{table} \subsection{Passage Reading with \textsc{Gar}\xspace} \label{sec:exp_read} \start{Comparison w. the state-of-the-art} We show the comparison of end-to-end QA performance of extractive and generative methods in Table~\ref{tab:sota}. Extractive \textsc{Gar}\xspace achieves state-of-the-art performance among extractive methods on both NQ and Trivia datasets, even though it is more lightweight and computationally efficient. Generative \textsc{Gar}\xspace outperforms most of the generative methods on Trivia but does not perform as well on NQ, which is somewhat expected and consistent with the performance at the retrieval stage, as the generative reader only takes a few passages as input and \textsc{Gar}\xspace does not outperform dense retrieval methods on NQ when $k$ is very small. However, combining \textsc{Gar}\xspace with DPR achieves significantly better performance than either method alone, as well as baselines that use DPR results as input, such as SpanSeqGen~\citep{min2020ambigqa} and RAG~\citep{lewis2020retrieval}. Also, \textsc{Gar}\xspace outperforms BM25 significantly under both extractive and generative setups, which again shows the effectiveness of the generated query contexts, even though they are heuristically discovered without any external supervision. The best-performing generative method, FID \cite{izacard2020leveraging}, is not directly comparable as it takes more (100) passages as input. As an indirect comparison, \textsc{Gar}\xspace performs better than FID when FID encodes 10 passages (cf. Fig.~2 in \citet{izacard2020leveraging}).
Moreover, since FID relies on the retrieval results of DPR as well, we believe that it is a low-hanging fruit to replace its input with \textsc{Gar}\xspace or $\textsc{Gar}^{\texttt{+}}$\xspace and further boost the performance.\footnote{This claim is later verified by the best systems in the NeurIPS 2020 EfficientQA competition \cite{min2021neurips}.} We also observe that, perhaps surprisingly, extractive BM25 performs reasonably well, especially on the Trivia dataset, outperforming many recent state-of-the-art methods.\footnote{We find that taking 500 passages during reader inference instead of 100 as in \citet{karpukhin2020dense} improves the performance of BM25 but not DPR.} Generative BM25 also performs competitively in our experiments. \start{Model Generalizability} Recent studies \cite{lewis2020question} show that there are significant question and answer overlaps between the training and test sets of popular OpenQA datasets. Specifically, 60\% to 70\% of test-time answers also appear in the training set, and roughly 30\% of test-set questions have a near-duplicate paraphrase in the training set. Such observations suggest that many questions might have been answered by simple question or answer memorization. To further examine model generalizability, we study the per-category performance of different methods using the annotations in \citet{lewis2020question}. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{lcccc} \toprule \textbf{Method} & \textbf{Total} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{Question Overlap}} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{Answer Overlap Only}} & \multicolumn{1}{m{1.5cm}}{\centering \textbf{No Overlap}}\\ \midrule DPR & 41.3 & \textbf{69.4} & 34.6 & 19.3\\ $\textsc{Gar}^{\texttt{+}}$\xspace (E) &\textbf{43.8} &66.7 &\textbf{38.1} &\textbf{23.9} \\ \midrule BART & 26.5 & 67.6 & 10.2 & 0.8\\ RAG & 44.5 & \textbf{70.7} & 34.9 & 24.8\\ $\textsc{Gar}^{\texttt{+}}$\xspace (G) &\textbf{45.3} &67.9 &\textbf{38.1} &\textbf{27.0} \\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{\textbf{EM scores with question-answer overlap category breakdown on NQ.} (E) and (G) denote extractive and generative readers, respectively. Results of baseline methods are taken from \citet{lewis2020question}. The observations on Trivia are similar and omitted. } \label{tab:overlap} \vspace{-.1cm} \end{table} As listed in Table~\ref{tab:overlap}, for the \textit{No Overlap} category, $\textsc{Gar}^{\texttt{+}}$\xspace (E) outperforms DPR on the extractive setup and $\textsc{Gar}^{\texttt{+}}$\xspace (G) outperforms RAG on the generative setup, which indicates that better end-to-end model generalizability can be achieved by adding \textsc{Gar}\xspace for retrieval. $\textsc{Gar}^{\texttt{+}}$\xspace also achieves the best EM under the \textit{Answer Overlap Only} category. In addition, we observe that a closed-book BART model that only takes the question as input performs much worse than additionally taking top-retrieved passages, \ie, $\textsc{Gar}^{\texttt{+}}$\xspace (G), especially on the questions that require generalizability. Notably, all methods perform significantly better on the \textit{Question Overlap} category, which suggests that the high \textit{Total} EM is mostly contributed by question memorization.
That said, $\textsc{Gar}^{\texttt{+}}$\xspace appears to be less dependent on question memorization given its lower EM for this category.\footnote{The same ablation study is also conducted on the retrieval stage and similar results are observed. More detailed discussions can be found in App.~\ref{sec_app:top_k_acc}.} \subsection{Efficiency of \textsc{Gar}\xspace} \label{sec:runtime} \textsc{Gar}\xspace is efficient and scalable since it uses sparse representations for retrieval and does not involve time-consuming training processes such as RL \cite{nogueira-cho-2017-task,liu2019generative}. The only overhead of \textsc{Gar}\xspace is on the generation of query contexts and the retrieval with generation-augmented (thus longer) queries, whose computational cost is significantly lower than that of other methods with comparable retrieval accuracy. \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \scalebox{1}{ \begin{tabular}{lccc} \toprule \textbf{} & \textbf{Training} & \textbf{Indexing} & \textbf{Retrieval} \\ \midrule DPR & 24h w. 8 GPUs & 17.3h w. 8 GPUs & 30 min w. 1 GPU \\ \textsc{Gar}\xspace & 3 $\sim$ 6h w. 1 GPU & 0.5h w. 35 CPUs & 5 min w. 35 CPUs \\ \bottomrule \end{tabular} } } \vspace{-.0cm} \caption{\textbf{Comparison of computational cost between DPR and \textsc{Gar}\xspace at different stages.} The training time of \textsc{Gar}\xspace is for one generation target but different generators can be trained in parallel. } \label{tab:runtime} \vspace{-.1cm} \end{table} We use Nvidia V100 GPUs and Intel Xeon Platinum 8168 CPUs in our experiments. As listed in Table~\ref{tab:runtime}, the training time of \textsc{Gar}\xspace is 3 to 6 hours on 1 GPU depending on the generation target. As a comparison, REALM \cite{guu2020realm} uses 64 TPUs to train for 200k steps during pre-training alone and DPR \cite{karpukhin2020dense} takes about 24 hours to train with 8 GPUs. To build the indices of Wikipedia passages, \textsc{Gar}\xspace only takes around 30 min with 35 CPUs, while DPR takes 8.8 hours on 8 GPUs to generate dense representations and another 8.5 hours to build the FAISS index \cite{JDH17}. For retrieval, \textsc{Gar}\xspace takes about 1 min to generate one query context with 1 GPU, 1 min to retrieve 1,000 passages for the NQ test set with answer/title-augmented queries and 2 min with sentence-augmented queries using 35 CPUs. In contrast, DPR takes about 30 min on 1 GPU. \section{Conclusion} In this work, we propose Generation-Augmented Retrieval and demonstrate that the relevant contexts generated by PLMs without external supervision can significantly enrich query semantics and improve retrieval accuracy. Remarkably, \textsc{Gar}\xspace with sparse representations performs similarly to or better than state-of-the-art methods based on the dense representations of the original queries. \textsc{Gar}\xspace can also be easily combined with dense representations to produce even better results. Furthermore, \textsc{Gar}\xspace achieves state-of-the-art end-to-end performance on extractive OpenQA and competitive performance under the generative setup. \section{Future Extensions} \start{Potential improvements} There is still much room to explore and improve for \textsc{Gar}\xspace in future work. For query context generation, one can explore multi-task learning to further reduce computational cost and examine whether different contexts can mutually enhance each other when generated by the same generator. 
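For concreteness, the current retrieve-and-fuse pipeline can be sketched as follows. This is a schematic rather than the actual implementation: \texttt{generators} and \texttt{bm25\_search} are hypothetical stand-ins for the fine-tuned seq2seq generators (one per generation target) and a sparse retriever, and the rank-interleaving fusion of the per-target runs shown below is an assumption about the simple baseline.

\begin{verbatim}
# Schematic of GAR retrieval with multiple generation targets (Python).
# `generators` and `bm25_search` are hypothetical stand-ins for the
# fine-tuned generators (answer / title / sentence) and a BM25 retriever.

def gar_retrieve(question, generators, bm25_search, k=1000):
    runs = []
    for generate in generators:
        context = generate(question)        # generated query context
        query = question + " " + context    # augmented (longer) query
        runs.append(bm25_search(query, k))  # ranked list of passage ids
    # Simple fusion: interleave the runs rank by rank, skip duplicates.
    fused, seen = [], set()
    for rank in range(k):
        for run in runs:
            pid = run[rank]
            if pid not in seen:
                seen.add(pid)
                fused.append(pid)
                if len(fused) == k:
                    return fused
    return fused
\end{verbatim}

More advanced fusion techniques, as discussed below, would replace the rank-interleaving step with a function of both the ranks and the retrieval scores of the passages.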
One may also sample multiple contexts instead of greedy decoding to enrich a query. For retrieval, one can adopt more advanced fusion techniques based on both the ranking and score of the passages. As the generator and retriever are largely independent now, it is also interesting to study how to jointly or iteratively optimize generation and retrieval such that the generator is aware of the retriever and generates query contexts more beneficial for the retrieval stage. Last but not least, it is very likely that better results can be obtained by more extensive hyper-parameter tuning. \start{Applicability to other tasks} Beyond OpenQA, \textsc{Gar}\xspace also has great potential for other tasks that involve text matching such as conversation utterance selection \cite{lowe2015ubuntu,dinan2020second} or information retrieval \cite{nguyen2016ms,craswell2020overview}. The default generation target is always available for supervised tasks. For example, for conversation utterance selection one can use the reference utterance as the default target and then match the concatenation of the conversation history and the generated utterance with the provided utterance candidates. For article search, the default target could be (part of) the ground-truth article itself. Other generation targets are more task-specific and can be designed as long as they can be fetched from the latent knowledge inside PLMs and are helpful for further text retrieval (matching). Note that by augmenting (expanding) the queries with heuristically discovered relevant contexts extracted from PLMs instead of reformulating them, \textsc{Gar}\xspace bypasses the need for external supervision to form the original-reformulated query pairs. \section*{Acknowledgments} We thank Vladimir Karpukhin, Sewon Min, Gautier Izacard, Wenda Qiu, Revanth Reddy, and Hao Cheng for helpful discussions. We thank the anonymous reviewers for valuable comments. \bibliographystyle{acl_natbib}
\section{Condensate Recovery} In the main text of the paper, we have asserted that after the merons pass from the system into the contacts, the system recovers full interlayer coherence when it is in the PFM-PPM oscillatory phase. This is difficult to observe in the main text, as the interaction strength of $U=-5~eV$ led to a long coherence length and a non-zero spatial overlap between successive meron bound states. This overlap leads to noticeable discrepancies between contact currents that should be absent when the condensate recovers full interlayer coherence. To demonstrate this full recovery of interlayer coherence, we increase the interaction strength to $U=-7~eV$, thereby reducing the coherence length from 12 points in the $\hat{x}$-direction to 4 points and eliminating the spatial overlap between merons. In Fig. (\ref{fig:fullrec}), we plot the terminal current flow at each contact at the increased interlayer interaction strength and set $V_{TL} = 3~V$ to place the system firmly within the PFM-PPM oscillatory phase. Once again, the dips in the terminal current at times of $20~fs$ and $40~fs$ signal that a charged meron vortex has passed through the contact. By examining the terminal currents, we find that the current in the bottom layer is fully recovered after each vortex passes, as the terminal currents satisfy $I_{TL}=-I_{BL}$ and $I_{TR}=-I_{BR}$. \begin{figure} \includegraphics[width=0.45\textwidth]{suppl2} \caption{\label{fig:fullrec} Current flow at each contact in the oscillatory phase at $U=-7~eV$ and $V_{TL}=3~V$. In the high-$U$ limit, the charged vortex states are well localized and allow for the full recovery of interlayer coherence after each oscillation.} \end{figure} \section{Charge of The Vortex State} In this section, we endeavor to clarify the charge of the vortex state within our system. The charge of the vortex-trapped state is numerically confirmed to be $\frac{q}{2}$ in each of the layers via the following procedure. At a given time $t>0$, the density fluctuations within each layer are calculated from the diagonal part of the density matrix, with the results plotted in Fig. (\ref{fig:vor}). For illustrative purposes, we have set the interlayer interaction strength to $U=-7~eV$, reducing the numerical coherence length to 4 lattice points and thereby ensuring that the vortex states do not overlap with one another. In order to best observe the vortex dynamics, we set $V_{TL}=3~V$, ensuring the persistent launching of charged merons associated with the metastable PFM-PPM phase. In Fig. \ref{fig:vor}(a) and Fig. \ref{fig:vor}(b), we observe that the density fluctuations associated with the merons, centered at the vortex cores, propagate along with the current flow in the top and bottom layer, respectively. By numerically integrating the density fluctuations and removing the background quasiparticle density, as shown in Fig. \ref{fig:vor}(c) and Fig. \ref{fig:vor}(d), we ascertain that our numerical method yields vortex charges of $q_{top}=0.49(5)e$ and $q_{bot}=0.49(1)e$, confirming that half an electron charge is localized within each layer. \begin{figure} \includegraphics[width=0.45\textwidth]{suppl1} \caption{\label{fig:vor} Time evolution of the electron occupation number within the (a) top layer and (b) bottom layer. Each successive snapshot in the figure (blue, green, orange, and red) is taken at times of 13.8, 15.8, 17.8, and 19.7~fs, respectively. 
Results of the integration scheme used to obtain the vortex charge within the (c) top layer and (d) bottom layer. The density profile is integrated after removing the background density (black solid line) at a time of 15.8~fs at an applied bias of $V_{TL}=3~V$.} \end{figure} \section{Frequency Dependence of Metastable PFM-PPM State} In order to understand the frequency dependence of our results, we use the Gross-Pitaevskii equation, \begin{equation} \label{eq:gpeq} i\hbar \frac{\partial \psi_{ps}}{\partial t}=(-\frac{1}{2m}\nabla^2+V(r)+g|\psi_{ps}|^2)\psi_{ps}. \end{equation} For a generic 1D system of length $L$, we input our voltage profile of $V(0)=(V-V_c)$ and $V(L)=0~V$. From this we have \begin{equation} \label{eq:gpeq_int} \frac{2e}{\hbar}(V-V_c)=\frac{\partial}{\partial t}(\arg(\psi(L))-\arg(\psi(0))). \end{equation} To obtain a stable condensate, there must be additional winding of the phase of the order parameter, induced by the chemical potential difference across the system and the associated $\pi$ phase discontinuity of the order parameter in the merons. Therefore, we obtain the typical expression for the resultant AC Josephson frequency, \begin{equation} \label{eq:acfreq} f_{ps}=\frac{e}{h}(V-V_c). \end{equation} In Fig. \ref{fig:fre}, we plot the frequency dependence of the terminal currents past $I_c$. We find the slope of the linear fit to be $\frac{1}{2\pi}$, which confirms that the vortex generation follows the AC Josephson frequency predicted in Eq. (\ref{eq:acfreq}). At low bias, which governs the linear portion of Fig. \ref{fig:fre}, we observe that the stable oscillations occur with increasing frequency as the interlayer voltage increases. The critical voltage is naturally proportional to the interaction strength, $\Delta$, which is implicitly included in $V_c$ of Eq. (\ref{eq:acfreq}), as the strength of $U$ determines the location of the metastable PFM-PPM phase boundary. \begin{figure} \includegraphics[width=0.45\textwidth]{suppl3} \caption{\label{fig:fre} Plot of the voltage dependence of the oscillation frequency of the terminal currents. The red dashed line is the $1/2\pi$ fit, while the points are data taken from numerical simulations of the DEC. We observe an AC Josephson frequency dependence of the terminal currents past $I_c$.} \end{figure} \section{Phase Boundary Dependence on the Interaction Strength} Based on our results, it is clear that the locations of the phase transitions depend on the strength of the interlayer interactions. We explore this relationship in Fig. \ref{fig:udep}, which shows the interaction-strength dependence of the phases. We find that the location of the phase boundary of the PFM-PPM metastable transition is set by $V_c$ from Eq. (\ref{eq:acfreq}) in the Supplementary text. In Fig. \ref{fig:udep}, we find a clear linear dependence of $V_c$ on $\Delta$ when we examine the location of the phase transition between the PFM and PFM-PPM metastable phases. As the gap size increases with increasing interaction strength, the stability of the PFM phase against the interlayer voltage increases accordingly, with the critical voltage, $V_c$, moving to higher interlayer voltages. Additionally, we observe a similar trend in the transition between the metastable and PPM regions. In the zero-gap limit, we know that both the PFM and the PFM-PPM metastable phases must vanish. Therefore, in the limit of infinite time response, the intersection of the two boundaries must lie at the origin of the plot. In Fig. 
\ref{fig:udep}, the intersection of the two lines is shifted from the origin due to the nature of the time-dependent simulation. Since the simulation methodology associated with the TDKB formalism always involves a finite time window, it is inevitable to define the phase transitions through criteria applied to a finite-time simulation. This introduces a time-scale cutoff which shifts the phase boundaries away from the infinite-time-response limit. To be more precise, the Metastable-PPM transition is defined to be the point at which the interlayer coherence decreases to 30\% of its initial self-consistently obtained value. Meanwhile, the PFM-oscillation transition is defined to be the point at which two merons are launched within 10~fs. \begin{figure} \includegraphics[width=0.45\textwidth]{suppl4} \caption{\label{fig:udep} Phase boundaries of the PFM-Metastable transition (blue circles) and the Metastable-PPM transition (green diamonds). The phases are determined within a given time window of 10~fs. For each point of the blue curve, we use interaction strengths of $U=-5,-5.5,-6,-6.5,-7,-7.5,-8~eV$, while, for the green curve, $U=-5,-5.15,-5.3~eV$ are used. The strength of the Hartree term is doubled in order to resolve the location of the phase boundary within a short-time simulation.} \end{figure} \end{document} \section{Acknowledgements} E.M.H. thanks the German Research Foundation (DFG) for financial support under grant HA 5893/4-1 within SPP 1666, and the ENB graduate school ``Topological insulators''. M.J.P. and M.J.G. acknowledge financial support from the Office of Naval Research (ONR) under grant N0014-11-1-0123 and the National Science Foundation (NSF) under grant CAREER EECS-1351871. M.J.P. acknowledges useful discussions with Gil Young Cho and Brian Dellabetta.
\section{Introduction} The physics of neutron stars entered a new era \cite{Watts2016,Ozel2016} with the observation of massive (two-solar-mass) pulsars \cite{Demorest2010, Fonseca2016, Antoniadis2013}, which set stringent constraints on the stiffness of the equation of state (EoS) of cold and dense baryonic matter, further tightened by the recent Shapiro delay measurement \cite{Cromartie2019} of a neutron star with mass $M \approx 2.17 M_\odot$. The detection of gravitational wave signals from two merging neutron stars \cite{Abbott2017} has furnished additional important information on the EoS, by providing limits for the tidal deformability and neutron star radii \cite{Most2018,De2018}. The composition and properties of strongly interacting matter at high densities and temperatures are a topic of continuing interest. Lattice QCD provides information on the properties of QCD matter at high temperatures and vanishing or small baryon chemical potential. However, owing to the sign problem, lattice QCD cannot at present reliably address the physics of highly compressed cold matter as it is realized in the central region of neutron stars. In this context, a broad variety of options are under discussion. A viable hypothesis invokes the description of matter in the core of neutron stars in terms of hadronic degrees of freedom (baryons and mesons) with strong many-body correlations \cite{APR1998}. Alternative descriptions involve a (possibly smooth) transition from hadronic matter to some form of quark matter \cite{Baym2018, McLerran2018}. The former option suggests that neutron star matter can be viewed as a relativistic Fermi liquid \cite{Landau1,Landau2,Landau3}, composed of neutron quasiparticles plus a small fraction of protons. This is the basic theme of the present study, in which we explore how neutron star matter behaves as a strongly coupled fermionic many-body system and how it compares with textbook Fermi systems \cite{Abrikosov1959,BP1991}, such as liquid $^3$He. Relativistic Landau Fermi-liquid theory \cite{BC1976} is an appropriate framework to address such questions. In the present study we are guided by a microscopic equation of state of neutron star matter derived from a chiral nucleon-meson (ChNM) field theory within the nonperturbative functional renormalization group (FRG) framework \cite{DW2015,DW2017}. The low-density neutron star crust region is parametrized by the Skyrme-Lyon (SLy) EoS \cite{DH2001}. The ChNM-FRG equation of state takes over at baryon densities $\rho > 0.3~\rho_0$ (with $\rho_0 = 0.16$ fm$^{-3}$, the equilibrium density of normal nuclear matter) and extends into the inner core region of the neutron star. Beta equilibrium conditions with both electrons and muons are routinely incorporated and imply a proton fraction of about 5\% in the core. The resulting EoS satisfies all empirical constraints from astrophysical observations. It is also constructed such that properties of symmetric and asymmetric nuclear matter are consistent with nuclear physics constraints (e.g., the binding energy and equilibrium density of $N=Z$ nuclear matter, the compressibility modulus, the critical temperature and density of the liquid-gas phase transition, etc.) \cite{DW2017}. For neutron star matter, the presence of hyperons (in particular $\Lambda$ hyperons) would soften the equation of state such that a maximum mass around $2\,M_\odot$ could not possibly be reached~\cite{Djapo:2008au,Lonardoni:2014bwa}. 
Several recent and ongoing investigations~\cite{Lonardoni:2014bwa,Haidenbauer:2016vfq} point out that repulsive hyperon-nuclear three-body forces may be capable of preventing the appearance of hyperons in neutron stars altogether and thus maintain the necessary stiffness of the EoS. Given its nonperturbative nature, the ChNM-FRG equation of state is in principle not limited to low densities, in contrast to the EoS derived from (perturbative) chiral effective field theory (ChEFT). The two schemes are mutually consistent in the low-density range, $\rho\lesssim 2\,\rho_0$. The window of applicability for the ChNM-FRG model potentially extends considerably beyond these densities. Its limitation is ultimately determined by the transition from the (spontaneously broken) Nambu-Goldstone phase to the (restored) Wigner-Weyl realization of chiral symmetry. In the presence of multipion fluctuations and nuclear many-body correlations, all treated nonperturbatively by solving the FRG equations, it turns out that this transition from spontaneously broken to restored chiral symmetry is, in this model, shifted to baryon densities well above five times $\rho_0$. For comparison, the central density of a $2\,M_\odot$ neutron star, computed with such a model, does not exceed about $5\,\rho_0$. In such an approach the neutron star core is thus composed entirely of ``nonexotic'' (nucleonic and pionic) degrees of freedom in the presence of strongly repulsive correlations. The input for the effective action of the ChNM-FRG model \cite{DW2015,DW2017} is prepared such that it is consistent with well-known ground state properties of nuclear matter, nuclear thermodynamics including the critical point of the liquid-gas phase transition \cite{Elliot2013}, and {\it ab initio} neutron matter computations using realistic nucleon-nucleon interactions \cite{Gandolfi2014, Heb2013, HK2017,LH2018,Dri2019}. With isospin-dependent interactions that reproduce the empirical asymmetry energy of $(32\pm 2)$ MeV at nuclear saturation density \cite{Baldo2016}, the resulting EoS does indeed yield stable neutron stars that satisfy the $2\,M_\odot$ constraint. Moreover, it is located well within the band of equations of state, $P({\cal E})$ [pressure as a function of energy density], that have been extracted in Refs.\,\cite{Annala2018,Vuorinen2018}. It is also consistent with the EoS band deduced by the LIGO and Virgo Collaborations from the gravitational wave signals generated by the neutron star merger event GW170817 \cite{Abbott2018}, as well as with the EoS recently deduced from neutron star data using neural network techniques \cite{FFM2019}. The emerging picture of a neutron star with such a generic chiral EoS is that of a relativistic liquid of nucleons (primarily neutrons) correlated by strong multipion fields and short-distance repulsive forces. A description in terms of fermionic quasiparticles thus suggests a treatment using Landau Fermi-liquid theory. The (small) proton fraction in neutron star matter plays a non-negligible role in quantitative astrophysical considerations. It is of minor importance, however, for our purpose of studying Fermi-liquid properties; the analysis can therefore be focused primarily on pure neutron matter. 
The aim of the present work is therefore to identify quasiparticle properties of the neutrons as the dominant fermionic degrees of freedom in the microscopic chiral FRG equation of state~\cite{DW2015,DW2017} that satisfies all observational constraints, and to study and interpret the corresponding Fermi-liquid properties. In the following sections we prepare the framework for such a Fermi-liquid description. We then proceed to deduce and interpret the lowest Landau parameters of the spin-independent quasiparticle interaction. These quantify the strength of the correlations and characterize the bulk properties of matter in the deep interior of neutron stars. \section{Landau theory of relativistic Fermi liquids \label{sec:Landau}} \subsection{Reminder of Landau Fermi-liquid theory} It is useful to begin by recalling the nonrelativistic theory. For nuclear many-body systems, this limit is realized at low baryon densities $\rho$ where the Fermi momentum, $p_F= (6\pi^2\rho/\nu)^{1/3}$ (with spin-isospin degeneracy $\nu = 2$ for neutron matter and $\nu = 4$ for symmetric nuclear matter), is small compared to the quasiparticle effective mass. Most of our discussion will be restricted to vanishing temperatures, i.e., $T=0$. Superfluidity is ignored since the EoS in the density range of interest is practically unaffected by pairing. In Landau's theory of normal Fermi liquids \cite{Landau1,Landau2,Landau3,Abrikosov1959,BP1991}, the variation of the energy of the system with changes of the quasiparticle occupation numbers is given by \begin{equation}\label{eq:Landau-energy} \delta E = \sum_p \varepsilon_p\,\delta n_p + \frac{1}{2\,V}\,\sum_{p,p^\prime} {\cal F}_{pp^\prime}\,\delta n_p\,\delta n_{p^\prime}~. \end{equation} Here $V$ is the volume, $\varepsilon_p$ is the quasiparticle energy, ${\cal F}_{pp^\prime}$ is the quasiparticle interaction, and $\delta n_p=n_p-n^{(0)}_p$ is the deviation of the quasiparticle distribution function from the ground state distribution \begin{equation} n^{(0)}_p=\left\{\begin{array}{cc}1,&~~~\varepsilon_p<\mu~\\ 0,&~~~\varepsilon_p>\mu~,\end{array}\right. \end{equation} where $\mu$ is the chemical potential, or equivalently the energy of a quasiparticle on the Fermi surface. Consequently, in the ground state of a uniform system, the distribution function equals unity for quasiparticle momenta below the Fermi momentum $p_F$ and vanishes for momenta above $p_F$. The energy of a quasiparticle with momentum $p$ is given by the first variation of the energy with respect to the occupation number $n_p$ \begin{equation} \varepsilon_p=\frac{\delta E}{\delta n_p}\,, \end{equation} while the quasiparticle interaction is determined by the second variation \begin{equation}\label{eq:qp-interaction} {\cal F}_{pp^\prime}=V\frac{\delta^2 E}{\delta n_p\,\delta n_{p^\prime}}=V\frac{\delta\varepsilon_p}{\delta n_{p^\prime}}\,. \end{equation} For the low-lying excitations of interest in Fermi-liquid theory, the relevant quasiparticle states are near the Fermi surface. Hence, in the quasiparticle interaction one in general can set $|\boldsymbol{p}|=|\boldsymbol{p}^\prime|=p_F$. The velocity of a quasiparticle on the Fermi surface is given by \begin{equation} v_F=\left(\frac{\partial \varepsilon_p}{\partial p}\right)_{p=p_F}, \end{equation} and defines the quasiparticle effective mass $m^*$ through \begin{equation} v_F=\frac{p_F}{m^*}\,. \end{equation} Thus, near the Fermi surface, the quasiparticle energy takes the form \begin{equation} \varepsilon_p=\mu+v_F(p-p_F)\,. 
\end{equation} The density of quasiparticle states at the Fermi surface is given by \begin{equation}\label{eq:density-of-states} N(0)=\frac{1}{V}\sum_p\delta(\varepsilon_p-\mu)\,. \end{equation} Replacing the sum over $p$ by an integral (including the sums over spin and isospin degrees of freedom), one finds: \begin{equation} N(0)=\frac{\nu\, m^*p_F}{2 \pi^2}\,. \end{equation} In particular, for neutron matter, $N(0)=m^*p_F/\pi^2$. We now focus on pure neutron matter. For simplicity we ignore noncentral forces (e.g. spin-orbit interactions). These are non-leading effects in neutron matter which contribute only in $p$- and higher partial waves. With these restrictions the spin-dependent quasiparticle interaction is of the form \begin{equation}\label{eq:qp-int-spin} {\cal F}_{pp^\prime}=f_{pp^\prime}+g_{pp^\prime}\,\boldsymbol{\sigma}\cdot\boldsymbol{\sigma^\prime}. \end{equation} The momentum dependence of the functions $f_{pp^\prime}$ and $g_{pp^\prime}$ is expanded in Legendre polynomials according to \begin{equation}\label{eq:qp-int-legendre} f_{pp^\prime}=\sum_{\ell=0}^{\infty}f_\ell\,P_\ell(\cos \theta)\,,~~~g_{pp^\prime}=\sum_{\ell=0}^{\infty}g_\ell\,P_\ell(\cos \theta)~, \end{equation} where $\theta$ is the angle between the two momenta $\boldsymbol{p}$ and $\boldsymbol{p}^\prime$. The coefficients in this expansion are the Landau Fermi-liquid parameters. It is useful to define dimensionless Landau parameters: \begin{equation} F_\ell=N(0)\,f_\ell~,~~~G_\ell=N(0)\,g_\ell~. \end{equation} In the present context, as we focus on the ground state of spin-saturated neutron matter, only the spin-independent parameters $F_\ell$ are of prime interest. Basic properties of the Fermi liquid can be expressed in terms of the first few Landau parameters. For example, the quasiparticle effective mass, $m^*$, is given by the spin-independent Landau parameter $F_1$ which is a measure of the velocity dependence of the quasiparticle interaction: \begin{equation}\label{eq:F1} \frac{m^*}{M_0}=1+\frac{F_1}{3}\,, \end{equation} with the free (vacuum) particle mass $M_0$. The specific heat of a Fermi liquid at low temperatures is determined by the quasiparticle effective mass as follows: \begin{equation} c_V=\frac{m^*p_F}{3}\,T. \end{equation} Finally, the incompressibility of the Fermi liquid is given by \begin{equation} K=9\rho\,\frac{\partial^2 {\cal E}}{\partial \rho^2}=6\frac{p_F^2}{2 m^*}(1+F_0)=\frac{3\,p_F^2}{M_0}\,\,\frac{1+F_0}{1+F_1/3}, \end{equation} where ${\cal E}=E/V$ is the energy density, and the speed of (first) sound in the system is given by \begin{equation} c_1^2=\frac{p_F^2}{3\,M_0^2}\,\,\frac{1+F_0}{1+F_1/3}. \end{equation} This is the nonrelativistic sound speed with $c_1 \ll 1$. \subsection{Relativistic Fermi liquids} Baryonic matter at the high densities encountered in the core of neutron stars makes a relativistic treatment mandatory. For example, the Fermi momentum in neutron matter at $\rho = 5\,\rho_0$ is $p_F\approx 0.57$\,GeV, i.e., of a magnitude comparable to the effective mass. In a relativistic Fermi liquid \cite{BC1976}, the speed of sound is given by \begin{equation}\label{eq:soundspeed} c_1^2=\frac{\partial P}{\partial{\cal E}}=\frac{\rho}{\mu}\frac{\partial \mu}{\partial \rho}, \end{equation} where $P$ is the pressure, ${\cal E}$ the energy density, $\mu$ the baryon chemical potential, and we have used $d P=\rho\, d\mu$ and $d{\cal E}=\mu \,d\rho$. 
With \begin{equation} \rho =\frac{1}{V}\sum_p \theta(\mu-\varepsilon_p), \end{equation} one finds that \begin{equation}\label{eq:d-rho-d-mu} \frac{\partial \rho}{\partial \mu}=\frac{1}{V}\sum_p\delta(\mu-\varepsilon_p)\left(1-\frac{\partial\varepsilon_p}{\partial \mu}\right). \end{equation} Now, at zero temperature we have \begin{equation}\label{eq:d-epsilon-d-mu} \frac{\partial\varepsilon_p}{\partial \mu}=\frac{\partial\varepsilon_p}{\partial \rho}\frac{\partial\rho}{\partial \mu}. \end{equation} In order to compute $\partial \varepsilon_p/\partial\rho$, we introduce a variation of the quasiparticle occupation number, which is spin independent and spherically symmetric, i.e., $\delta n_p=\eta\, \delta(p-p_F)$, and satisfies \begin{equation}\label{eq:delta-rho} \delta \rho=\frac{1}{V}\sum_p \delta n_p=\frac{\nu}{(2\pi)^3}\int d^3 p\,\eta\,\delta(p-p_F)=\frac{\nu\,\eta\, p_F^2}{2\pi^2}. \end{equation} From (\ref{eq:qp-interaction}) and (\ref{eq:qp-int-legendre}) it follows that \begin{equation} \delta \varepsilon_p=\frac{1}{V}\sum_{p^\prime}{\cal F}_{pp^\prime}\,\delta n_{p^\prime} =\frac{\nu\,\eta\, p_F^2}{2\pi^2}f_0~, \end{equation} and consequently that \begin{equation}\label{eq:d-epsilon-d-rho} \frac{\partial \varepsilon_p}{\partial \rho}=f_0~. \end{equation} Inserting (\ref{eq:d-epsilon-d-rho}) and (\ref{eq:d-epsilon-d-mu}) in (\ref{eq:d-rho-d-mu}) and solving for $\partial\rho/\partial\mu$, one finds \begin{equation} \frac{\partial\rho}{\partial \mu}=\frac{N(0)}{1+N(0)f_0}=\frac{p_F\,m^*}{\pi^2(1+F_0)}\,. \label{eq:24} \end{equation} We note that the definition of the Landau effective mass, $m^*=p_F\left(\frac{\partial \varepsilon_p}{\partial p}\right)^{-1}_{p=p_F}$, remains unchanged in the relativistic formulation, while the corresponding effective mass relation is given by~\cite{BC1976} \begin{equation}\label{eq:rel-eff-mass-relation} \frac{m^*}{\mu}=1+F_1/3\,. \end{equation} Finally, the squared speed of sound becomes \begin{equation} \label{eq:sound-speed-Landau} c_1^2=\frac{p_F^2}{3\,\mu\, m^*}(1+F_0)=\frac{p_F^2}{3\,\mu^2}\,\,\frac{1+F_0}{1+F_1/3}\,, \end{equation} and the incompressibility is \begin{equation} K=6\,\frac{p_F^2}{2 \mu}\,\frac{1+F_0}{1+F_1/3} = 9\mu\,c_1^2\,. \end{equation} One notes that in these expressions, the relativistic treatment simply replaces the fermion mass $M_0$ in the nonrelativistic forms by the baryon chemical potential \begin{equation} \mu = \frac{\partial{\cal E}}{\partial\rho} = \varepsilon_{p=p_F}\,. \end{equation} \subsection{Relativistic quasiparticles} Central to Landau Fermi-liquid theory is the notion of quasiparticles dressed by their interactions with the surrounding many-body system. Consider first a simplified example with fermions moving in a scalar field. This scalar field modifies the fermion mass at nonzero densities. The quasiparticle energy is given by \begin{equation}\label{eq:rel-qp-energy1} \varepsilon_p=\sqrt{p^2+M^2}~, \end{equation} where $M(\rho)$ is a function of the scalar field and thus of the density, with $M(\rho = 0) \equiv M_0$, the vacuum mass. The variation of the energy of the system is again given by (\ref{eq:Landau-energy}). Using (\ref{eq:rel-qp-energy1}) and (\ref{eq:qp-interaction}) we find \begin{equation} \delta E=\sum_p \sqrt{p^2+M^2}\,\delta n_p+\frac{1}{2}\sum_{pp^\prime}\frac{\delta \sqrt{p^2+M^2}}{\delta n_{p^\prime}}\, \delta n_p\,\delta n_{p^\prime}~. 
\end{equation} Thus, in this model the quasiparticle interaction is \begin{equation} {\cal F}_{pp^\prime}=V\,\frac{M}{\sqrt{p^2+M^2}}\,\frac{\delta M}{\delta n_{p^\prime}}~. \end{equation} Using \begin{equation} \frac{\delta M}{\delta n_{p^\prime}}=\frac{\partial M}{\partial\rho}\,\frac{\delta\rho}{\delta n_{p^\prime}}~, \end{equation} together with the first equality in (\ref{eq:delta-rho}), \begin{equation}\label{eq:rho-variation} \frac{\delta\rho}{\delta n_{p^\prime}}=\frac{1}{V}~, \end{equation} one finds: \begin{equation}\label{eq:qp-scalar-int} {\cal F}_{pp^\prime}=\frac{M(\rho)}{\sqrt{p_F^2+M^2(\rho)}}\,\frac{\partial M}{\partial \rho}~, \end{equation} where we have put the quasiparticles on the Fermi surface. Since the quasiparticle interaction is independent of the angle between $\boldsymbol{p}$ and $\boldsymbol{p}^\prime$ and independent of spin, the only nonzero Fermi-liquid parameter is $f_0$. This means that, in particular, $F_1=0$ and that the effective mass at the Fermi surface is equal to the chemical potential: \begin{equation} m^*=\mu=\sqrt{p_F^2+M^2(\rho)}. \end{equation} Hence, the dimensionless Fermi-liquid parameter is given by \begin{equation} F_0=\frac{p_F\,M}{\pi^2}\,\frac{\partial M}{\partial \rho}=\frac{M}{p_F}\,\frac{\partial M}{\partial p_F} \end{equation} and the squared speed of sound is \begin{equation} c_1^2=\frac{p_F^2}{3\,(p_F^2+M^2)}\left(1+\frac{p_F\, M}{\pi^2}\frac{\partial M}{\partial \rho}\right)\,. \end{equation} The in-medium nucleon mass $M(\rho)$ is usually a decreasing function of density. With $\partial M/\partial \rho <0$, the Landau parameter $F_0$ stays negative. The squared speed of sound will always be $c_1^2 < 1/3$ and approach the limit of a free ultrarelativistic gas from below. As we shall see, the equation of state of neutron star matter implies instead $c_1^2>1/3$ starting from some intermediate density, reflecting increasingly strong repulsive correlations between the quasiparticles as the density increases. In a relativistic theory, a repulsive short-distance interaction is most naturally viewed as being mediated by a vector field. Consider therefore quasiparticles interacting with a vector field $V^\mu$ in addition to the scalar field. The vector field is in turn assumed to be generated by the baryon four-current $j^\mu =(\rho,\boldsymbol{j})$: \begin{equation} V^\mu=(V_0,\boldsymbol{V})=h\,j^\mu\,. \end{equation} In the mean-field approximation the vector field is a linear function of the current, i.e., $h$ is a constant. Effects beyond the mean-field limit imply a nonlinear behavior that can be incorporated by a generalized {\it ansatz}: \begin{equation} V^\mu=h(j^2)\,j^\mu\,, \end{equation} with $h$ now assumed to be a function of the Lorentz invariant $j^2=j_\mu\,j^\mu$. In the rest frame of a system in its ground state, only the zeroth component of the baryon current, $j^0=\rho$, is nonzero. The current in the frame of an observer moving relative to the system is obtained by a Lorentz transformation. In a general frame, the quasiparticle energy in this model can be written: \begin{equation}\label{eq:qp-energy-II} \varepsilon_p=\sqrt{(\boldsymbol{p}-\boldsymbol{V})^2+M^2}+V_0~, \end{equation} where $\boldsymbol{p}-\boldsymbol{V}$ is the kinetic momentum. Thus, the velocity of a quasiparticle is given by \begin{equation}\label{eq:qp-velocity} \boldsymbol{v}_p=\frac{\boldsymbol{p}-\boldsymbol{V}}{\sqrt{(\boldsymbol{p}-\boldsymbol{V})^2+M^2}}~. 
\end{equation} In the rest frame of the system, (\ref{eq:qp-energy-II}) reduces to \begin{equation} \varepsilon_p=\sqrt{\boldsymbol{p}^2+M^2}+V_0~. \end{equation} In general, a variation of the occupation number leads to nonzero spatial components of the baryon current, \begin{equation}\label{eq:qp-current} \boldsymbol{j}=\frac{1}{V}\sum_p\boldsymbol{v}_p\, n_p~. \end{equation} The quasiparticle interaction is again obtained by varying the quasiparticle energy with respect to the occupation number: \begin{equation} {\cal F}_{pp^\prime}=V\,\frac{\delta\,\varepsilon_p}{\delta n_{p^\prime}}~. \end{equation} The following discussion is similar to that of Ref.\,\cite{Matsui1981}, although our assumptions are more general. Given the quasiparticle energy (\ref{eq:qp-energy-II}), one thus finds: \begin{equation}\label{eq:qp-int-II} \frac{f_{pp^\prime}}{V}=\frac{M}{\sqrt{(\boldsymbol{p}-\boldsymbol{V})^2+M^2}}\left(\frac{\delta M}{\delta n_{p^\prime}}\right)+\frac{\delta V_0}{\delta n_{p^\prime}}-\frac{\boldsymbol{p}-\boldsymbol{V}}{\sqrt{(\boldsymbol{p}-\boldsymbol{V})^2+M^2}}\left(\frac{\delta\boldsymbol{V}}{\delta n_{p^\prime}}\right)~. \end{equation} At this level there are only spin-independent contributions to the quasiparticle interaction. In the rest frame of the fluid, the first term in (\ref{eq:qp-int-II}) reduces to the result already found in Eq.\,(\ref{eq:qp-scalar-int}), \begin{equation} f_{pp^\prime}(1)=\frac{M}{\sqrt{\boldsymbol{p}^2+M^2}}\,\frac{\partial M}{\partial \rho}~. \end{equation} The variation of the zeroth component of the vector field yields \begin{equation}\label{eq:variation-V0} \frac{\delta V_0}{\delta n_{p^\prime}}=\frac{\partial V_0}{\partial\rho}\,\frac{\delta \rho}{\delta n_{p^\prime}}+ \frac{\partial V_0}{\partial\boldsymbol{j}}\,\frac{\delta \boldsymbol{j}}{\delta n_{p^\prime}}~. \end{equation} The second term in (\ref{eq:variation-V0}) vanishes in the rest frame of the system, since \begin{equation} \frac{\partial V_0}{\partial \boldsymbol{j}}=-h^\prime(j^2)\,\boldsymbol{j}\,\rho~, \end{equation} and $\boldsymbol{j}=0$ in that frame. Hence, using (\ref{eq:rho-variation}), we find \begin{equation} \frac{\delta V_0}{\delta n_{p^\prime}}=\frac{1}{V}\,\frac{\partial V_0}{\partial\rho}~, \end{equation} and for the contribution of the second term to the quasiparticle interaction (\ref{eq:qp-int-II}): \begin{equation} f_{pp^\prime}(2)=\frac{\partial V_0}{\partial \rho}~. \end{equation} The detailed derivation of the last term on the right-hand side of Eq.\,(\ref{eq:qp-int-II}) is relegated to Appendix A. Its contribution to the quasiparticle interaction is: \begin{equation} f_{pp^\prime}(3)=-h(j^2)\frac{\boldsymbol{p}\cdot\boldsymbol{p}^\prime}{\mu\,\sqrt{p_F^2+M^2}}~. \label{eq:3rdterm} \end{equation} Now, collecting all pieces, the resulting Landau Fermi-liquid parameters are given by \begin{eqnarray} F_0&=&\frac{p_F\,}{\pi^2}\,\left(M(\rho)\,\frac{\partial M}{\partial\rho}+\sqrt{p_F^2+M^2(\rho)}\,\,\frac{\partial V_0}{\partial \rho}\right)\,\label{eq:xF0}~,\\ F_1&=&-3\,\frac{h(j^2)\,\rho}{\mu}=-\frac{3V_0(\rho)}{\mu}~. \label{eq:xF1} \end{eqnarray} An equivalent way of deriving the result for $F_0$ proceeds directly through the derivative of the chemical potential with the assumed ansatz \begin{equation} \mu = \sqrt{p_F^2+M^2(\rho)} + V_0(\rho) \equiv m^*(\rho) + V_0(\rho)\,. 
\label{eq:mu} \end{equation} Observing that \begin{equation} \frac{\partial\mu}{\partial\rho} = \frac{1}{ m^*}\left[\frac{\pi^2}{ p_F} + M\frac{\partial M}{\partial\rho}\right]+\frac{\partial V_0}{ \partial\rho}\,, \end{equation} leads to \begin{equation} 1+F_0(\rho) = \frac{p_F\,m^*}{ \pi^2}\left(\frac{\partial\mu}{\partial\rho}\right)\,, \label{eq:yF0} \end{equation} [see also Eq.\,(\ref{eq:24})], while the relation for $F_1$, \begin{equation} 1+\frac{F_1(\rho)}{ 3} = 1-\frac{V_0(\rho)}{\mu} = \frac{m^*}{\mu}\,, \label{eq:yF1} \end{equation} is consistent with the nonrelativistic Eq.\,(\ref{eq:F1}). \section{Neutron Star Equation of State and Fermi-Liquid Theory} \subsection{Chiral FRG equation of state} The starting point is now an equation of state, pressure $P({\cal E})$ as a function of energy density ${\cal E}$, derived from a chiral field theory of nucleons and mesons (the ChNM model), based on a linear sigma model with a nonlinear effective potential. The basic Lagrangian involves the isospin-doublet field of the nucleon, $N = (p,n)^\top$, and the chiral boson field $\phi = (\sigma,\boldsymbol{\pi})$ composed of a heavy scalar $\sigma$ and the pseudoscalar Nambu-Goldstone boson $\boldsymbol{\pi}$ of spontaneously broken chiral $SU(2)_L\times SU(2)_R$ symmetry: \begin{eqnarray} {\cal L} &=& \bar{N}\left[i\gamma_\mu\partial^\mu - g(\sigma + i\gamma_5\,\boldsymbol{\tau\cdot\pi})\right]N \nonumber \\ &+& {\frac12}\left(\partial_\mu \sigma \partial^\mu \sigma + \partial_\mu \boldsymbol{\pi}\cdot\partial^\mu \boldsymbol{\pi}\right) - {\cal U}(\sigma, \boldsymbol{\pi})+ \Delta{\cal L}~. \label{eq:ChNM} \end{eqnarray} The $\Delta{\cal L}$ term of this Lagrangian represents short-distance dynamics expressed in terms of isoscalar and isovector vector fields coupled to nucleons, corresponding to contact interactions in chiral effective field theory (ChEFT). The potential ${\cal U}(\sigma, \boldsymbol{\pi})$ is written as a polynomial up to fourth order in the chiral invariant, $\chi \equiv\phi^\dagger\phi = \sigma^2 + \boldsymbol{\pi}^2$, plus a symmetry breaking piece proportional to $m_\pi^2\sigma$. This potential is constructed such as to be consistent with pion-nucleon data and selected ground state properties of nuclear matter. (For details see Refs.\,\cite{DW2015,DW2017} and a brief overview in Appendix B.) The action $S=\int d^4x\,{\cal L}$ of the ChNM Lagrangian (\ref{eq:ChNM}) serves as input for a functional renormalization group (FRG) calculation starting at a UV scale (the chiral symmetry breaking scale $4\pi f_\pi \approx 1$ GeV, with the pion decay constant $f_\pi \approx 0.09$ GeV). This is then evolved down to the full effective action at the low momentum (IR) limit using the FRG flow equations \cite{Wet1993}. The grand-canonical potential $\Omega(T,\mu)$ is constructed \cite{DW2015,DW2017}, from which the pressure $P = -\Omega$ and the energy density ${\cal E} = -P+\mu\rho$ (at $T=0$) are derived, resulting in the equation of state $P({\cal E})$. Figure\,\ref{fig:1} shows this equation of state, including an estimated band of uncertainties. These uncertainties arise primarily from the input parameters of the ChNM model, in particular from varying the nuclear asymmetry energy, $A_S$, in the empirical range between 30 and 34 MeV. The solid curve with $A_S = 32$ MeV serves as our prototype EoS, referred to as the chiral FRG equation of state in the following. A rough estimate of uncertainties in the pressure $P$ amounts to $\pm 15\, \%$. 
For practical purposes, an accurate Pad\'e fit and a table of $P({\cal E})$ are given in Appendix C. The squared velocity of (first) sound (\ref{eq:soundspeed}) is shown in Fig.\,\ref{fig:2}. Notably, the sound speed exceeds its canonical value for a noninteracting ultrarelativistic Fermi gas, $c_1^2 = 1/3$, at densities $\rho\gtrsim 4\,\rho_0$, mainly as a consequence of repulsive multi-nucleon correlations which govern the stiffness of the EoS. A similar behavior of $c_1^2$ is reported in \cite{FFM2019} and also discussed in \cite{Tews2018}. Important input for the chiral FRG EoS is set by nuclear physics constraints at baryon densities $\rho\lesssim 2\rho_0$. The nonperturbative chiral FRG approach is designed to be consistent with perturbative chiral effective field theory calculations \cite{Heb2013, HK2017, LH2018, Dri2019} for both symmetric nuclear matter and neutron matter in this density range. In fact, whereas the chiral FRG starts from a Lagrangian based on a linear sigma model, the pion sector of the ChEFT is built on a nonlinear sigma model, where the heavy scalar $\sigma$ field has been eliminated. While the linear and nonlinear sigma models are not equivalent at any perturbative level, resummations to all orders in the nonperturbative FRG treatment of the ChNM model should yield results that match those of the perturbative ChEFT at sufficiently low momentum scales and densities. This is indeed demonstrated by explicit calculations in Refs.\,\cite{DW2015, DW2017}. Figure \ref{fig:3} displays the resulting EoS, $P({\cal E})$, in comparison with the families of equations of state obtained in Refs.\,\cite{Annala2018,Vuorinen2018}. The latter interpolate between the low-density ChEFT EoS and the high-density EoS of perturbative QCD and satisfy the astrophysical constraints from neutron star masses and the tidal deformability deduced from the recently observed neutron star merger GW170817. Clearly, the ChNM-FRG EoS fits well within the allowed band. \begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig1} \caption{The equation of state of neutron star matter in beta equilibrium (pressure $P$ as a function of energy density ${\cal E}$) derived from chiral nucleon-meson field theory combined with functional renormalization group equations \cite{DW2015,DW2017}. The shaded band gives an uncertainty estimate when varying the symmetry energy in the range $30 - 34$\,MeV. In the limit of very low densities ($\rho < 0.3\,\rho_0$ corresponding to ${\cal E} < 0.045$\,GeV/fm$^3$), the EoS is matched to the Skyrme-Lyon (SLy) parametrization \cite{DH2001}.} \label{fig:1} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=60mm,angle=-00]{fig2} \caption{The squared speed of sound, $c_1^2 = \partial P({\cal E})/\partial{\cal E}$, derived from the neutron star matter equation of state, Fig.\,\ref{fig:1}. } \label{fig:2} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=75mm,angle=-00]{fig3} \caption{Pressure as a function of energy density for neutron star matter with constraints from perturbative QCD and nuclear EFT calculations at the upper and lower ends of the density scale, adapted from Refs.\,\cite{Annala2018,Vuorinen2018}. The area between the upper and lower limits marks the range of acceptable equations of state subject to empirical conditions on neutron star maximum mass and tidal deformability, the latter from LIGO and Virgo gravitational wave analysis. 
The solid curve represents the chiral FRG equation of state $P({\cal E})$ as in Fig.\,\ref{fig:1}. } \label{fig:3} \end{center} \end{figure*} \subsection{Connection to relativistic Fermi-liquid theory and quasiparticles} The observation that chiral nucleon-meson field theory in combination with FRG appears to successfully describe neutron star matter at core densities suggests a study of the Fermi-liquid properties of neutron matter in a relativistic framework \cite{BC1976}. Given the neutron star EoS of the chiral FRG model, the aim is to deduce the spin-independent Fermi-liquid parameters $F_0$ and $F_1$ and examine their density dependence. The speed of sound (\ref{eq:sound-speed-Landau}) involves a combination of these Landau parameters. In order to determine $F_0(\rho)$ and $F_1(\rho)$ separately, additional information on the quasiparticle properties is required. In the core of neutron stars the relevant quasiparticles are dominantly neutrons dressed by the strong interactions with the surrounding matter, plus a small fraction of protons. Once again, we assume that the small proton fraction in neutron star matter can be neglected in the analysis of the spin-independent Fermi-liquid properties. The single-particle motion in a relativistic theory of neutron matter is described by the in-medium Dirac equation \cite{Brockmann:1996xy} \begin{equation} \left[\slashed{p}-M_0-\Sigma_s(p;\rho)-\slashed{\Sigma}_v(p;\rho)\right]u(\mathbf{p},s)=0, \end{equation} where in standard notation $\slashed{p}=\gamma_\mu \,p^\mu$, $p^\mu=(p_0,\mathbf{p})$, $\Sigma_s$ is the scalar and $\Sigma_v^\mu=(\Sigma_0,\mathbf{\Sigma})$ the vector neutron self-energy, and $u(\mathbf{p},s)$ is the corresponding Dirac spinor. The quasiparticle energies (in the rest frame of the system) are defined by the self-consistent solutions of the equation \cite{vanDalen:2005ns} \begin{equation}\label{eq:qp-energy-full} \varepsilon_p=\sqrt{\mathbf{p}^2+[M_0+\Sigma_s(\mathbf{p},\varepsilon_p;\rho)]^2}+\Sigma_0(\mathbf{p},\varepsilon_p;\rho) . \end{equation} Here, the spatial component of the vector self-energy $\mathbf{\Sigma}$ vanishes as in the mean-field treatment discussed above. In order to obtain a tractable scheme, we set $|\mathbf{p}|=p_F$ in the self-energies and use the following {\em ansatz} for the quasiparticle energy \begin{equation}\label{eq:qp-energy-ansatz} \varepsilon_p=\sqrt{\mathbf{p}^2+M(\rho)^2}+U(\rho) , \end{equation} with a density-dependent fermion (nucleon) mass $M(\rho)=M_0+\Sigma_s(p_F,\varepsilon_F;\rho)$ and an effective vector potential $U(\rho)=\Sigma_0(p_F,\varepsilon_F;\rho)$. In Sec.\,\ref{sec:results} we discuss possible implications of this approximation. Consider again the Fermi velocity \cite{BC1976}, \begin{equation} v_F = \left(\frac{\partial\varepsilon_p}{\partial p}\right)_{p=p_F} = \frac{p_F }{ \mu\left(1+\frac13 F_1\right)}~. \label{eq:vf1} \end{equation} Thus, the Fermi velocity as a function of baryon density $\rho$ yields $F_1(\rho)$. The parameter $F_0(\rho)$ is then obtained by insertion into Eq.\,(\ref{eq:sound-speed-Landau}) (cf.~Eq.~(10) in \cite{Landau2}): \begin{equation} F_0(\rho) = \frac{3\mu}{ p_F\,v_F}\,c_1^2(\rho) -1~. \label{eq:F0} \end{equation} A key quantity for further steps is obviously the baryon chemical potential $\mu(\rho)$ as a function of baryon density. This is obtained by first constructing the energy density ${\cal E}(\rho)$ as a function of $\rho$. 
The (zero temperature) thermodynamic relation \begin{equation} {\cal E} + P({\cal E}) = \rho\frac{\partial{\cal E}}{\partial\rho}~, \end{equation} yields the density as a function of ${\cal E}$ \begin{equation} \rho({\cal E}) = \rho^{(0)}\exp\left[\int_{{\cal E}^{(0)}}^{\cal E}\frac{d{\cal E}'}{{\cal E}' +P({\cal E}')}\right]~. \label{eq:xdensity} \end{equation} Here the lower limit of the integral is chosen at a very low density $\rho^{(0)}$, where the energy density is well approximated by the mass density, \begin{equation} {\cal E}^{(0)} \equiv {\cal E}(\rho^{(0)})= M_0\,\rho^{(0)}~.\nonumber \end{equation} By inverting Eq.\,(\ref{eq:xdensity}) with $P({\cal E})$ as input, we obtain the energy density ${\cal E}(\rho)$ and also the baryon chemical potential, $\mu = \partial{\cal E}/\partial\rho$, as functions of the density. With our {\em ansatz} (\ref{eq:qp-energy-ansatz}) for the quasiparticle energy, the Fermi energy is given by \begin{equation}\label{eq:qp-ansatz} \mu = \varepsilon_{p=p_F} = \sqrt{p_F^2 + M^2(\rho)} + U(\rho)~. \end{equation} It follows that \begin{equation} v_F(\rho) = \frac{p_F}{\sqrt{p_F^2 + M^2(\rho)}}~. \label{eq:vf2} \end{equation} In terms of the energy per particle, $E(\rho)/A$, the energy density is given by: \begin{equation} {\cal E}(\rho) = \left(M_0 +\frac{E(\rho)}{ A}\right)\rho~, \label{eq:energyden} \end{equation} and the chemical potential by \begin{equation} \mu = M_0 + \left(1 + \rho\frac{\partial}{\partial\rho}\right)\frac{E(\rho)}{ A}~. \label{eq:chempot} \end{equation} Equating the two expressions for $\mu$, we find \begin{equation} M_0 + \left(1 + \rho\frac{\partial}{\partial\rho}\right)\frac{E(\rho)}{ A} = m^*(\rho) + U(\rho)~, \label{eq:chempot-eq} \end{equation} where we have identified the Landau effective mass \begin{equation}\label{eq:Landau-mstar} m^*(\rho) = \sqrt{p_F^2 + M^2(\rho)} . \end{equation} Thus, the density dependence of $E(\rho)/A$ is reflected in the density-dependent mass $M(\rho)$ and the potential $U(\rho)$. In the chiral FRG model, the in-medium nucleon mass $M(\rho)$ scales with the expectation value of the scalar $\sigma$ field, which is identified with the in-medium pion decay constant $f_\pi^*(\rho)$, associated with the time component of the axial current. This plays the role of a chiral order parameter. Consequently, in this model we have: \begin{equation} M(\rho) = M_0\,\frac{\langle\sigma\rangle}{ f_\pi}= M_0\,\frac{f_\pi^*(\rho)}{ f_\pi}~. \end{equation} The proportionality of the nucleon mass to the pion decay constant originates from current algebra and the Goldberger-Treiman relation. The chiral FRG approach maintains this property for a nucleon in a dense medium. Finally, given $M(\rho)$ and $U(\rho)$ one can return to Eqs.\,(\ref{eq:xF0},\ref{eq:xF1}) or (\ref{eq:yF0},\ref{eq:yF1}) and determine the Landau parameters $F_0$ and $F_1$. \section{Results: Quasiparticles and Fermi-Liquid Parameters}\label{sec:results} \subsection{Bulk and quasiparticle properties} Using Eq.\,(\ref{eq:xdensity}), with the chiral FRG equation of state $P({\cal E})$ as input, we compute the energy density, ${\cal E}(\rho)$, and the energy per particle, shown in Figs.\,\ref{fig:4} and \ref{fig:5}. The corresponding baryon chemical potential $\mu(\rho)= \partial{\cal E}/\partial\rho$ is shown in Fig.\,\ref{fig:6}. 
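To make the numerical procedure explicit, the following minimal Python sketch reconstructs $\rho({\cal E})$, $\mu(\rho)$, and the Landau parameters from a tabulated equation of state according to Eqs.\,(\ref{eq:xdensity}), (\ref{eq:vf1}), and (\ref{eq:F0}). It is an illustration only: the EoS table and the parametrization of $M(\rho)$ below are placeholders standing in for the Pad\'e fit of Appendix C and the chiral FRG result for the in-medium mass.

\begin{verbatim}
import numpy as np

hbarc = 0.1973   # GeV fm
M0 = 0.93957     # vacuum neutron mass in GeV

# Tabulated EoS: energy density E and pressure P in GeV/fm^3
# (placeholder values, not the table of Appendix C).
E = np.linspace(0.02, 1.2, 400)
P = 0.3 * E**2 / (1.0 + E)

# Eq. (xdensity): rho(E) = rho_ref * exp( int dE' / (E' + P') ),
# anchored at the lowest table entry, where E ~ M0*rho approximately.
f = 1.0 / (E + P)
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(E))))
rho = (E[0] / M0) * np.exp(integral)     # baryon density in fm^-3

mu = (E + P) / rho                       # chemical potential (GeV) at T = 0
c1sq = np.gradient(P, E)                 # squared sound speed, dP/dE

pF = hbarc * (3.0 * np.pi**2 * rho)**(1.0 / 3.0)   # Fermi momentum (GeV)

# Placeholder for the decreasing in-medium mass M(rho) ~ M0 fpi*(rho)/fpi.
M = M0 / (1.0 + 0.25 * rho / 0.16)

mstar = np.sqrt(pF**2 + M**2)            # Landau effective mass
vF = pF / mstar                          # Fermi velocity, Eq. (vf2)
F1 = 3.0 * (pF / (mu * vF) - 1.0)        # from Eq. (vf1)
F0 = 3.0 * mu * c1sq / (pF * vF) - 1.0   # from Eq. (F0)
\end{verbatim}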
\begin{figure*}[t] \begin{center} \includegraphics[height=60mm,angle=-00]{fig4} \caption{Energy density of neutron star matter from chiral FRG calculations \cite{DW2015,DW2017} as a function of density given in units of $\rho_0 = 0.16$ fm$^{-3}$.} \label{fig:4} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=60mm,angle=-00]{fig5} \caption{Energy per particle deduced from the chiral FRG equation of state as a function of baryon density given in units of $\rho_0 = 0.16$ fm$^{-3}$.} \label{fig:5} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=65mm,angle=-00]{fig6} \caption{Chemical potential of neutron matter from ChNM-FRG calculations \cite{DW2015,DW2017} as a function of density given in units of $\rho_0 = 0.16$ fm$^{-3}$. } \label{fig:6} \end{center} \end{figure*} The next step is the decomposition of the baryon chemical potential (\ref{eq:chempot}) in terms of the density-dependent quasiparticle mass and effective potential. As mentioned, the mass $M(\rho)$ scales with the density dependence of the chiral order parameter $\langle\sigma\rangle = f_\pi^*(\rho)$. Even at $\rho\approx 5\,\rho_0$, i.e., at densities encountered in the inner core region of neutron stars, the in-medium nucleon mass is still almost half of its vacuum value. This indicates that the spontaneous breaking of the chiral symmetry remains strong at such large densities. Here the nonperturbative treatment of fluctuations beyond the mean-field approximation in the chiral FRG approach is of crucial importance \cite{DW2015, DW2017}. In a mean-field calculation one finds instead a first-order chiral phase transition at densities as low as $\rho\approx 3\,\rho_0$. With inclusion of fluctuating fields and, in particular, many-body correlations featuring repulsive Pauli effects that become increasingly active with increasing density, the transition to chiral symmetry restoration is shifted to densities beyond $5\,\rho_0$. Once $M(\rho)$ is determined (see Appendix C for numerical details), the effective potential is obtained by \begin{equation} U(\rho) = \mu - m^*(\rho)~, \end{equation} with the Landau effective mass $m^*(\rho)$, Eq.~(\ref{eq:Landau-mstar}). Results for the in-medium mass $M(\rho)$ and the effective potential $U(\rho)$ are summarized in Fig.\,\ref{fig:7}. The deviations in $M(\rho)$ from linear density dependence reflect correlations and fluctuations beyond the mean field approximation. The Landau effective mass, $m^*(\rho)$, is shown in Fig.\,\ref{fig:8}. This is the quantity that determines the Landau parameter $F_1$, see Eq.\,(\ref{eq:yF1}). At large densities, the neutrons at the Fermi surface become relativistic, with an effective mass $M(\rho)$ comparable to or smaller than $p_F$. Nevertheless, at the densities relevant to neutron stars, $M(\rho)$ is still large compared to values expected when approaching chiral symmetry restoration. It is important to point out that $M(\rho)$ and $U(\rho)$ behave quite differently in comparison with the corresponding quantities of standard relativistic mean-field (RMF) models, which yield much stronger scalar and vector mean fields. This difference comes primarily from the explicit treatment of multipion exchange processes in the chiral FRG theory. For example, we have $U(\rho = \rho_0) \approx 0.14$ GeV whereas the corresponding RMF vector potential would be almost three times as large. 
Similarly, the density-dependent mass $M(\rho)$ in the chiral FRG approach drops to $M(\rho=\rho_0) \approx 0.83\,M_0$ at normal nuclear matter density. The equivalent attractive scalar potential is $U_s(\rho=\rho_0)\equiv M(\rho_0)-M_0 \approx -0.16$ GeV, whereas the typical scalar potential in RMF models is about twice as strong. In the chiral FRG scheme, much of the intermediate-range attraction between nucleons in the medium is generated by two-pion exchange processes treated explicitly, with inclusion of Pauli effects. \begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig7} \caption{Neutron quasiparticle mass $M(\rho)$ and effective potential $U(\rho)$ as functions of density. The effective mass is given in units of the vacuum neutron mass, $M_0 = 939.57$ MeV. } \label{fig:7} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=60mm,angle=-00]{fig8} \caption{Landau effective mass $m^*(\rho) = \sqrt{p_F^2 + M^2(\rho)}$ as a function of density in units of $\rho_0 = 0.16$ fm$^{-3}$.} \label{fig:8} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig9} \caption{Landau Fermi-liquid parameters $F_0$ and $F_1$ for neutron star matter as functions of density $\rho/\rho_0$, with $\rho_0 = 0.16$ fm$^{-3}$.} \label{fig:9} \end{center} \end{figure*} \subsection{Landau parameters \texorpdfstring{$F_0$}{F0} and \texorpdfstring{$F_1$}{F1}} The input is now prepared to evaluate the dimensionless Fermi-liquid parameters $F_0$ and $F_1$ as functions of density, using Eqs.\,(\ref{eq:xF0}) and (\ref{eq:xF1}). The results for these Landau parameters are plotted in Fig.\,\ref{fig:9}. At low densities, $\rho < \rho_0$, the parameter $F_0$ starts out negative and then turns positive around $\rho_0$. This behavior is consistent with a perturbative ChEFT calculation for neutron matter \cite{HKW2013}. With next-to-next-to-next-to-leading order (N$^3$LO) chiral NN interactions and N$^2$LO three-body forces, a second-order many-body calculation gives $f_0(\rho=0.2\,\rho_0) = - 1.25$ fm$^2$ and $f_0(\rho=\rho_0) = 0.7$ fm$^2$. The resulting dimensionless $F_0 = N(0)f_0$ with the density of states $N(0)= m^*p_F/\pi^2$ is indeed close to the one deduced from the chiral FRG EoS at low densities. (Numerically, with $p_F(\rho_0)\approx 1.7$ fm$^{-1}$ and $m^*\approx M_0$, one has $N(0)\approx 0.8$ fm$^{-2}$ and hence $F_0(\rho_0)\approx 0.6$.) On the other hand, the Landau effective mass $m^*$ from the ChEFT calculation \cite{HKW2013} does not decrease as fast as the one found in the relativistic chiral FRG model. The resulting $F_1(\rho = \rho_0)$ is close to zero but with a negative slope indicating a decreasing effective mass at higher density, as observed in the chiral FRG result. Historically, the behavior of the nucleon effective mass close to the Fermi surface has in fact been the subject of many detailed investigations. A representative overview can be found in Ref.\,\cite{Mahaux1985}. While $m^*$ in mean-field calculations is generally less than the free mass, correlations involving particle-hole excitations tend to increase the effective mass near the Fermi surface. Thus, at least at densities $\rho\lesssim \rho_0$, the resulting Landau effective mass in neutron matter remains close to its vacuum value \cite{HKW2013,Ismail:2019rjg}. Our parametrization of the quasiparticle energy (\ref{eq:qp-energy-ansatz}) is presumably not able to fully capture these detailed correlation effects. Consequently, at low densities we find that the Landau effective mass, and consequently also $F_1$, is somewhat smaller than obtained in ChEFT \cite{HKW2013}. 
A more detailed treatment of the correlations near the Fermi surface, e.g., retaining the explicit momentum dependence of the neutron self-energies in (\ref{eq:qp-energy-full}), is expected to yield an $F_1$ somewhat larger than in the present analysis \cite{vanDalen:2005ns,HKW2013}. Consequently, the $F_0(\rho)$ shown in Fig.\,\ref{fig:9} is presumably a lower bound. The overall strong increase of $F_0$ at high densities reflects the growing importance of repulsive many-body correlations as the matter gets more and more compact. Part of this effect is due to the action of the Pauli principle on nucleons fluctuating around the Fermi surface. Such repulsive correlations are at the same time responsible for the increase of the sound velocity at high densities beyond its canonical value, $c_1 > 1/\sqrt{3}$. \subsection{Upper bound for \texorpdfstring{$F_0$}{F0}} So far we have assumed that the density-dependent neutron mass, $M(\rho)$, is proportional to the in-medium pion decay constant, $f_\pi^*(\rho)$, and decreases continuously with increasing density. In order to assess the uncertainties possibly implied by this assumption, it is instructive to examine a limiting case, replacing $M(\rho)$ by the constant free neutron mass $M_0$ at all densities. Given the speed of sound (\ref{eq:sound-speed-Landau}) constrained by astrophysical observations, such an extreme limit reduces the magnitude of the (negative) $F_1$, balanced by a corresponding increase of $F_0$. The resulting $F_0$ can be considered as an upper limit that provides an estimate of the uncertainties in the determination of the Landau parameters. With the constant (vacuum) mass $M_0 = g f_\pi$ as input, the chemical potential is \begin{equation}\label{eq:chem-pot-free-mass} \mu = \sqrt{p_F^2 + M_0^2} + V_0(\rho)\,, \end{equation} where the previously used vector potential $U(\rho)$ in Eq.\,(\ref{eq:qp-ansatz}) is now replaced by $V_0(\rho)$, subject to the condition that $\mu = \partial{\cal E}/\partial\rho$ remains unchanged, given by the values listed in Table \ref{t1} and shown in Fig.\,\ref{fig:6}. This limiting vector potential is plotted in Fig.\,\ref{fig:10}. Note that $V_0(\rho)$ is weaker in magnitude than $U(\rho)$ because the attraction, previously manifest in the decreasing $M(\rho)$, is now effectively transferred to $V_0$. Also shown in Fig.\,\ref{fig:10} is the Landau effective mass divided by the chemical potential, $m^*/\mu = \sqrt{p_F^2 + M_0^2}/\mu = 1-V_0/\mu$. This ratio is close to unity at densities up to about $2\rho_0$. Hence $F_1$ stays close to zero within that density range, in qualitative agreement with the results obtained in ChEFT~\cite{HKW2013}. \begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig10} \caption{The effective vector potential $V_0(\rho)$ and the ratio of Landau effective mass over chemical potential, $m^*/\mu$, for the limiting case $M(\rho)\equiv M_0$, as functions of baryon density in units of $\rho_0 = 0.16$ fm$^{-3}$. The dashed line indicates the range of densities where the {\em ansatz} (\ref{eq:chem-pot-free-mass}) yields $V_0<0$, at variance with the model assumptions (see main text). } \label{fig:10} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig11} \caption{Landau parameters $F_0$ and $F_1$ as functions of baryon density in units of $\rho_0 = 0.16$ fm$^{-3}$.
The upper and lower bounds of the shaded areas correspond to the following limiting cases for the choice of the baryon mass: (a) lower boundary lines: $M(\rho)/M_0 = \langle\sigma\rangle(\rho)/f_\pi$ as in Fig.\,\ref{fig:9}; (b) upper boundary lines: $M(\rho) \equiv M_0$ at all densities. } \label{fig:11} \end{center} \end{figure*} The Landau parameters are given by [see Eqs.\,(\ref{eq:xF0}) and (\ref{eq:xF1})]: \begin{equation} F_0(\rho) = {p_F\over\pi^2} \sqrt{p_F^2 + M_0^2}~ {\partial V_0(\rho)\over\partial\rho}~~,~~~~F_1(\rho) = -{3 V_0(\rho)\over \mu(\rho)}~. \end{equation} They are shown in Fig.\,\ref{fig:11} together with the ones previously computed. The areas between the upper and lower boundary curves for $F_0$ and $F_1$ can be considered as uncertainty measures covering the two limiting cases, $M(\rho) \propto f_\pi^*(\rho)$ (lower bounds) and $M(\rho)\equiv M_0$ (upper bounds). We note that at low densities, where the effective vector potential is negative, the {\em ansatz} (\ref{eq:chem-pot-free-mass}) is strictly speaking not consistent with the model assumptions, since the interaction between two baryons mediated by the exchange of an isoscalar vector boson is repulsive.\footnote{This inconsistency would presumably not occur in a more refined approach in which the momentum dependence of the potential $V_0$ is accounted for (see, e.g.,~\cite{vanDalen:2005ns}).} Nevertheless, as already indicated, the resulting values for the Landau effective mass at low densities are consistent with ChEFT. We can therefore conclude that our estimate of the upper bound on the Landau parameters remains valid also at low densities. \subsection{Zero sound} Cold Fermi liquids can develop a sound-like collective mode, {\it zero sound}\,\cite{Landau2}. The velocity of zero sound, $c_0 = \omega/q$ (in terms of the frequency and wave vector of the mode), is yet another characteristic property of the fluid that is linked to $F_0$ and $F_1$. In particular, for the model considered here, where $F_\ell=0$ for $\ell \geq 2$, the zero-sound velocity is determined by real solutions of the equation \cite{Abrikosov1959,BP1991} \begin{equation} \label{eq:zerosound} \left(F_0 + {F_1\over 1+ F_1/3}\,s^2\right)\Omega_{00}(s) = -1~, \end{equation} where \begin{equation} \Omega_{00}(s) = 1+{s\over 2}\ln\left({s-1\over s+1}\right) \end{equation} is the long-wavelength limit of the Lindhard function \cite{Lindhard:1954va}\footnote{Note that in Ref.\,\cite{Matsui1981} $\Phi(s) = - \Omega_{00}(s)$ is used.} and $s = c_0/v_F$ is the velocity of the sound mode divided by the Fermi velocity. The zero sound velocity in units of the velocity of light is shown in Fig.\,\ref{fig:12}. A comparison is once again made between the standard case with density-dependent mass $M(\rho) = g\langle\sigma\rangle$ and the limit $M(\rho) \equiv M_0$. With the former choice it turns out that Eq.\,(\ref{eq:zerosound}) permits real solutions for $c_0$ only in a restricted range of densities $\rho$. Outside this range the zero-sound velocity is complex, and its imaginary part indicates Landau damping. From Fig.\,\ref{fig:12} one concludes that, while neutron star matter becomes a relativistic fluid at high densities, it strictly satisfies the causality constraint, $c_0 < c$. This is implied by the strong repulsion encoded in the vector field that grows linearly at high density and contributes to both $F_0$ and $F_1$. Notably, setting $F_1= 0$ would not be consistent, as it would lead to a superluminal velocity of zero sound at high densities.
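For orientation, the following minimal Python sketch (ours, not part of the published analysis) shows how Eq.\,(\ref{eq:zerosound}) can be solved numerically for $s=c_0/v_F$ at given $(F_0,F_1)$. The sample Landau parameters are purely illustrative, not the values extracted from the chiral FRG equation of state; when no real root exists the function returns \texttt{None}, corresponding to the Landau-damped regime mentioned above.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def omega00(s):
    # Long-wavelength limit of the Lindhard function, valid for s > 1
    return 1.0 + 0.5 * s * np.log((s - 1.0) / (s + 1.0))

def zero_sound_s(F0, F1):
    """Solve (F0 + F1*s^2/(1 + F1/3)) * omega00(s) = -1 for s = c0/vF > 1.
    Returns None when no real solution exists (Landau damping)."""
    def eq(s):
        return (F0 + F1 * s**2 / (1.0 + F1 / 3.0)) * omega00(s) + 1.0
    lo, hi = 1.0 + 1e-10, 100.0
    if eq(lo) * eq(hi) > 0.0:
        return None   # no real root: zero sound is Landau-damped
    return brentq(eq, lo, hi)

# Illustrative parameter sets only:
for F0, F1 in [(1.0, 0.0), (5.0, 0.0), (5.0, 2.0), (-0.3, 0.0)]:
    print(F0, F1, zero_sound_s(F0, F1))
\end{verbatim}
With growing $F_0$ and $F_1$ the root $s$ moves upward, while for weak or negative $F_0$ no real root survives, in line with the restricted density range of real solutions noted above.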
\begin{figure*}[t] \begin{center} \includegraphics[height=70mm,angle=-00]{fig12} \caption{Velocity $c_0$ (in units of the speed of light) of the zero-sound collective mode, as a function of baryon density in units of $\rho_0 = 0.16$ fm$^{-3}$. The two curves correspond to different choices of quasiparticle masses. Curve (a): baryon mass $M(\rho) \propto f_\pi^*(\rho)$; curve (b): using $M(\rho)\equiv M_0$. At densities below and above the end points indicated in curve (a) the zero-sound velocity is complex.} \label{fig:12} \end{center} \end{figure*} \subsection{Comparison with liquid helium-3} By their magnitudes, the Landau parameters are measures of the correlation strength in the Fermi liquid. At this point a comparison with another example of a strongly correlated fermionic many-body system, liquid $^3$He, is quite instructive \cite{Vollhardt1984,Leggett2016}. Normal $^3$He at low temperature is a high-density liquid in which the average distance between the helium atoms is of the same order as the atomic diameter. Their interaction has an attractive van der Waals part with a range of a few angstroms, and a strongly repulsive short-range core. Apart from a re-scaling of distances by a factor of $10^{-5}$ between {\AA} and fm, this is qualitatively reminiscent of the situation in neutron matter at densities $\rho \approx 5\,\rho_0$, where the average distance between baryons is of the same order as the diameter of their compact valence quark cores, and the interaction is also characterized by the combination of intermediate-range attraction and strong short-range repulsion. The dimensionless Landau parameters $F_0$ and $F_1$ can thus give, by comparison, an impression of how strong the correlations are in these two systems. A qualitative difference between liquid $^3$He and neutron matter is seen in the quasiparticle effective masses. The Landau effective mass $m^*$ in liquid $^3$He is a factor of approximately 3 to 6 larger (depending on pressure) than the mass of an isolated $^3$He atom \cite{Wheatley1975,Greywall1986}, indicating the presence of strongly repulsive correlations. This implies that $F_1$ for liquid $^3$He is positive and increasing with pressure, in contrast to the much more moderate modifications of $m^*$ with increasing pressure in neutron matter. Another qualitative distinction between the matter in neutron star cores and liquid $^3$He is the highly nonrelativistic nature of the latter. Characteristic Fermi velocities $v_F$ for $^3$He reported in a wide range of pressures from zero up to about 35 bar \cite{Wheatley1975} are on the order of $10^{-7}$ in units of the speed of light and actually decrease with increasing pressure as a consequence of the increasing effective mass. On the other hand, the sound velocity $c_1$ in liquid $^3$He is typically an order of magnitude larger than $v_F$, and their ratio \begin{equation} \left({c_1\over v_F}\right)^2 = {1\over 3}(1+F_0)(1+F_1/3)~,\nonumber \end{equation} is then reflected in large values of the Landau parameters. The following values are reported for liquid $^3$He (cf. Appendix C of Ref.\,\cite{BP1991}): at zero pressure, $F_0 \approx 9.3$ and $F_1 \approx 5.4$; at a pressure of $P = 27$ bar, $F_0 \approx 68.2$ and $F_1 \approx 12.8$. Understanding such large Fermi liquid parameters requires resummations to all orders and inclusion of collective modes in the quasiparticle interaction (the induced interaction \cite{Babu-Brown1978}). 
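As a quick consistency check of these numbers (ours, for orientation only), inserting the reported values into the ratio given above yields \begin{equation} {c_1\over v_F}\bigg\vert_{P=0} = \sqrt{{1\over 3}(1+9.3)\left(1+{5.4\over 3}\right)} \simeq 3.1~,\qquad {c_1\over v_F}\bigg\vert_{P=27\,{\rm bar}} = \sqrt{{1\over 3}(1+68.2)\left(1+{12.8\over 3}\right)} \simeq 11~,\nonumber \end{equation} consistent with the statement that $c_1$ exceeds $v_F$ by up to an order of magnitude, increasingly so with rising pressure.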
We note that the neutron matter parameters $F_0$ and $F_1$ shown in Figs.\,\ref{fig:9} and \ref{fig:11} turn out to be strikingly smaller in magnitude than those for liquid $^3$He. The reason for this qualitative difference can be traced to the scales at work in the respective quasiparticle interactions. Both neutron star matter and liquid $^3$He are governed by repulsive interactions with characteristic ranges, $r_c$. This correlation scale is to be compared with the average distance between fermions in the medium, $d\propto p_F^{-1}$. Their ratio $r_c/d\propto r_c\,p_F$ is a parameter that measures the growing importance of the repulsive forces as the density increases \cite{BM1969}. For neutron matter, even at densities several times larger than $\rho_0$, this parameter is still much smaller than in liquid $^3$He over a broad range of pressures. Thus, although the repulsive correlations in dense neutron matter are sufficiently strong to support two-solar-mass neutron stars against gravitational collapse, they appear relatively moderate in comparison with those in liquid $^3$He. From this perspective, the dense baryonic matter encountered in the core of neutron stars is perhaps not as extreme as sometimes imagined. \section{Summary and Concluding Remarks} This work has focused on the Fermi liquid properties of dense baryonic matter as it may be realized in the center of neutron stars. While the detailed composition of strongly interacting matter at densities encountered in neutron star cores continues to be open to discussion, with scenarios ranging from conventional hadronic matter to quark matter, our starting point in this work has been an equation of state based on a chiral nucleon-meson field theory in combination with a nonperturbative approach using functional renormalization group methods. The input of the effective Lagrangian is tuned to pion-nucleon and pion-pion interactions in vacuum and to nuclear physics observables at densities around the equilibrium density of normal nuclear matter, $\rho_0\approx 0.16$\,fm$^{-3}$. The output is consistent with state-of-the-art chiral effective field theory calculations at baryon densities $\rho\lesssim 2\rho_0$. Moreover, the resulting EoS of neutron star matter is in agreement with observations, including the existence of two-solar-mass stars and information from the gravitational wave signals produced by the merger of two neutron stars. It turns out that dense matter at zero temperature described by this EoS remains in the hadronic phase characterized by spontaneously broken chiral symmetry up to baryon densities even beyond five times the density of normal nuclear matter. This shift of the chiral transition to extremely high densities, even above those encountered in neutron star cores, is a consequence of the important role played by fluctuations and correlations beyond the mean-field approximation that are treated nonperturbatively in the FRG framework. A special feature of this EoS is its stiffness, produced by repulsive multi-nucleon correlations that become increasingly important with increasing baryon density and drive the sound velocity significantly beyond the massless Fermi gas limit, $c_1^2 = 1/3$. Repulsive short-range correlations and the action of the Pauli principle in the dense medium are important ingredients behind this mechanism. It is then interesting to explore the quasiparticle properties of the dense Fermi system under such conditions.
Relativistic Fermi-liquid theory is applied to deduce the Landau parameters $F_0$ and $F_1$ and to investigate their density dependence. A relativistic treatment is mandatory because, at high densities, the Fermi momentum becomes large and comparable to the quasiparticle mass. The density-dependent neutron mass decreases in the compressed medium but still remains at about half of its vacuum value at densities typically reached in the neutron star core region. The magnitudes of the dimensionless Landau Fermi-liquid parameters then provide a measure of the many-body correlations as they grow continuously in strength with rising density. Remaining uncertainties in $F_0$ are correlated with uncertainties in $F_1$, i.e., in the quasiparticle effective mass and its density dependence. Thus, an estimate of the upper limit for $F_0$ is obtained by setting the in-medium nucleon mass equal to the free mass at all densities. Such a choice reduces the magnitude of $F_1$ and consequently increases $F_0$, under the condition that the sound velocity is left unchanged. At densities up to $2\,\rho_0$, the resulting $F_1$ remains close to zero, in agreement with ChEFT results~\cite{HKW2013}. On the other hand, at higher baryon densities one may expect a more strongly negative $F_1$ if the chiral order parameter (the in-medium pion decay constant) is reduced from its vacuum value. The results for the dimensionless parameters $F_0(\rho)$ and $F_1(\rho)$ display the typical behavior of a strongly correlated Fermi system. However, it is interesting to observe, by comparison with the Landau parameters of a system such as liquid $^3$He, that the correlations in neutron star matter are still fairly moderate. For example, while one finds $F_0 \approx 3$ at baryon densities $\rho\approx 0.8$ fm$^{-3}$ in neutron matter, the $F_0$ in normal liquid $^3$He is already more than three times larger even at zero pressure, and it strongly increases further with increasing pressure. Even in the extreme case of setting the in-medium baryon mass equal to its mass in vacuum, this Landau parameter does not exceed $F_0\approx 5$ at $\rho \approx 5\,\rho_0$, the densities characteristic of neutron star cores. Extrapolations of the chiral FRG equation of state~\cite{DW2015,DW2017} to densities beyond the range realized in neutron stars indicate the existence of a continuous chiral crossover transition at $\rho \gtrsim 8\,\rho_0$. Under such conditions the valence quark cores of the nucleons begin to overlap. It is then an interesting issue how to describe a possible hadron-quark continuity region in an actual model, such as the one developed in Ref.\,\cite{Baym2018}. The sound velocity is again a quantity of prime interest in this context. While the present work deals with neutron matter, extensions to symmetric and asymmetric nuclear matter in order to study isospin-dependent Fermi-liquid parameters at high densities are certainly of interest. At densities close to equilibrium nuclear matter, such investigations have previously been performed with nuclear interactions based on chiral effective field theory in conjunction with many-body perturbation theory~\cite{Holt:2011yj}. The nonperturbative chiral FRG approach~\cite{DW2015,DW2017} used in the present work is in fact designed to be compatible with in-medium ChEFT at low densities, $\rho\lesssim 2\rho_0$, and thus suitable for extrapolations to higher densities. Such considerations might motivate future studies. \section*{APPENDICES}
\section{INTRODUCTION} \label{sec:intro} Since Green and Schwarz's anomaly cancellation \cite{GSW} established the importance of superstring theory (SST), it has been studied in detail. One unfortunate feature of SST is the fact that its typical energy scale, $10^{19}$ GeV, is far beyond our experimental access. However, this does not necessarily imply that SST allows no experimental test. The most important feature of SST is that it unifies the theories of matter and gravity. This implies that SST in principle has the ability to determine the structure of the space time in which the strings themselves live. If SST is the true theory of the whole universe, it is conceivable that SST has left some relics in our universe which are observable even today. In fact the presently observed isotropic, uniform and almost flat universe must have been determined by SST. {}From this point of view string cosmology has been studied by several authors \cite{Witten,Ah,BV,MM,TV}. Brandenberger and Vafa \cite{BV} proposed an interesting scenario of string cosmology. The starting point of their scenario is the heterotic string theory in a universe consisting of a nine dimensional torus of Planckian size and a time dimension, $T^9\times R$. They argued that this small universe oscillated for some period until eventually three of the nine dimensions began to expand, resulting in the present large universe. In order to gain a deeper theoretical understanding of their scenario, we perform a detailed thermodynamical analysis of it in this paper. Our strategy is to use the microcanonical formalism to follow the entire thermal history. So far several authors have employed the microcanonical formalism to examine the thermodynamical functions of the string gas. However, the relation of their results to the thermal history of the string universe seems unclear to us. In order to clarify this situation, we give in this paper a concrete framework by which we can follow the thermal history of the string universe using the thermodynamical functions of the microcanonical formalism. According to this framework, and based on some assumptions such as local thermal equilibrium and others which we will state precisely later, we determine the thermal history of the nine dimensional torus universe of Brandenberger and Vafa as follows. In the initial epoch, during which the torus universe is oscillating, very high energy strings occasionally emit zero modes (massless point particles) due to the cosmological expansion and sometimes absorb zero modes due to the cosmological contraction. This process is shown to be adiabatic. This adiabaticity, or in other words the reversibility, ensures that the oscillation is not damped. Thus our result is quite consistent with the picture of an initially oscillating universe which is followed by a three dimensionally expanding epoch. After three directions start to expand, with the remaining six dimensions kept at the Planckian scale, the energy of the zero modes and of the strings having no winding along the three expanding dimensions (which we call non-winding strings) grows roughly in proportion to the expanding volume. The temperature is shown to be fixed at the Hagedorn temperature in this period. This inflation-like energy growth is possible because the highest energy strings (which we call winding strings) continue to supply the energy through their decay. This epoch ends when the high energy strings decay away.
At this stage what is left are the dominant zero modes along with a few non-winding strings. {}From that time on, the redshift of the zero modes becomes effective, and the temperature falls. The remaining non-winding string modes are shown to decay away quickly because of their high specific heat. In this way the string universe is shown to make the transition to the conventional radiation dominated universe. The exposition of this thermal history is the main result of this paper. Our plan of discussion is as follows. In the next section we explain the setting in which we proceed in this paper and summarize the approximations and assumptions to be used. In Sec.\,\ref{sec:scenario} we give a brief review of Brandenberger and Vafa's scenario to fix the notation. In order to follow the thermal history of the string universe, we need to calculate the multi string state density. In the ideal string approximation the multi state density is evaluated from the single state density. In Sec.\,\ref{sec:singlestate} we give a detailed discussion of the single string state density. In particular, we explain the change in the single state density induced by the cosmological expansion, which is of importance in discussing the thermal history. In Sec.\,\ref{sec:big} we give a remark on the value of the total energy of our microcanonical ensemble. This value turns out to be the key parameter which determines the thermal history. In Sec.\,\ref{sec:framework} we provide a framework of the microcanonical formalism within which we can follow the thermal history. From Sec.\,\ref{sec:non} to Sec.\,\ref{sec:zero} we evaluate the multi state densities from the single state densities. In these calculations a novel technique is introduced and used extensively. In Sec.\,\ref{sec:thermal} the thermal history of our string universe is deduced by gathering all the knowledge obtained in the preceding sections. The last section is devoted to some discussions. In the Appendix we ascertain the validity of the Maxwell-Boltzmann approximation used in this paper. \section{Setting} \label{sec:set} In this section we make clear what kind of tools and approximations we use in this paper. As is well known, the thermodynamical treatment needs special care for the string theory because of the exponentially growing state density \cite{Hagedorn,GSW}. One method to treat such a system is to extend the temperature to complex values \cite{Hagedorn,Frau,Sund} in the canonical formalism; the other is to abandon the canonical formalism in favor of the microcanonical formalism \cite{BV,MT,AT,Deo,BG,DeoII}. The two methods are actually connected through the Laplace transformation. We take the latter approach in this paper. Our interest in this paper is in the fundamental string theory, not in cosmic strings. However, so as to clarify our setting, it is useful to review what is known from studies of cosmic strings. An ensemble of cosmic or fundamental strings is in general subject to both the statistical mechanics and the dynamics of the theory. In the case of the cosmic string theory the dynamics is shown to prevail over the statistics \cite{MT,AT}. This is a consequence of the following settings. First of all, for the cosmic string theory one assumes Einstein gravity with a Robertson-Walker metric as a background, since the relevant energy scale is not so close to the Planckian scale.
One describes the string as a Nambu-Goto string in the radiation or matter dominated background. Then the description of the system simplifies considerably thanks to the one scale principle \cite{MT,AT}. This principle ensures that sooner or later the system is attracted to a scaling solution irrespective of the initial configuration of the strings. It is shown that this behavior of the string ensemble is far from thermal equilibrium. However, in our case of the fundamental string theory, it is no longer a natural assumption that simple Einstein gravity is applicable, because the relevant energy scale is as high as the Planck mass. The dynamics of the fundamental strings is poorly understood at present, so we cannot proceed as in the case of the cosmic string. That is why we focus on the thermodynamical analysis and use it to follow the thermal history in this paper. This strategy is essentially the one proposed by Brandenberger and Vafa \cite{BV}. Below we clarify the approximations used in this paper. What we actually investigate in this paper are the properties of the string ideal gas. Thus we need some assumptions in order to identify our system with the real universe. We summarize them here. First we have to assume that local thermal equilibrium holds and that the ideal gas approximation is reasonably good for the string universe in the period of interest. If either of these fails, our system of the string ideal gas is not guaranteed to be a good approximation of our universe. Next we have to assume that special roles of (quantum) gravity, if any, are not important in considering the thermal history of our universe. Of course string theory is by birth a quantum theory unifying gravity. Therefore some special effects concerning gravity may well exist. But our present knowledge is so poor that we have to assume their unimportance. Even under this difficult circumstance we think that it is much more meaningful to do something rather than nothing. Some foothold may well be found by such a trial. The main purpose of this paper is to provide the zeroth approximation to the whole story of the string universe. Within these approximations the thermodynamical functions have been calculated by several authors in various models of SST \cite{Hagedorn,Frau,Sund,MT,Deo,BG,BV}. In following the thermal history we actually need another assumption to determine the history uniquely. As the last assumption we require that the usual mechanism of redshift works for the massless particles even at Planckian times. We call this assumption the normal energy loss. In the initial epoch this condition is shown to be equivalent to the equi-entropy condition which is also adopted in \cite{BV}. \section{Cosmological scenario} \label{sec:scenario} In order to fix the notation we present a brief review of Brandenberger and Vafa's \cite{BV,TV} scenario in this section. They started with the heterotic string theory \cite{Gross} on the nine dimensional torus, $ T^9\times R$. For this model the single string spectrum reads \cite{GSW} \begin{eqnarray} \varepsilon^2&=&2 r^2+4(n_R+n_L) M_s^2 \nonumber\\ r^2&=&\sum_{i=1}^9 \left[ \left( {n_i\over a_i}\right)^2+ \left( {m_i a_i M_s^2}\right)^2 \right] \nonumber\\ \label{eq:spec} \end{eqnarray} where \begin{eqnarray} m_i,\,n_i&=&0,\pm 1,\pm 2,\cdots \nonumber\\ n_R,\,n_L&=&0,1,2,\cdots\nonumber\\ M_s&=&1/\sqrt{2 \alpha^\prime}.\nonumber \end{eqnarray} In these expressions $\sqrt{2}\pi a_i$ is the linear size of the torus.
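As a concrete numerical illustration of the spectrum (\ref{eq:spec}), the following minimal Python sketch (ours; the quantum numbers are arbitrary examples and the level matching condition is not imposed) evaluates $\varepsilon$ in units $M_s=1$ and verifies that the energy is unchanged under $a_i\rightarrow 1/a_i$ combined with the interchange $n_i\leftrightarrow m_i$, the duality discussed next.
\begin{verbatim}
import numpy as np

def energy(n, m, nR, nL, a, Ms=1.0):
    # Single-string energy from the spectrum (eq:spec)
    n, m, a = map(np.asarray, (n, m, a))
    r2 = np.sum((n / a)**2 + (m * a * Ms**2)**2)
    return np.sqrt(2.0 * r2 + 4.0 * (nR + nL) * Ms**2)

a = np.array([1.5] * 3 + [1.0] * 6)        # three directions stretched
n = np.array([1, 0, 2, 0, 0, 1, 0, 0, 0])  # momentum numbers (example)
m = np.array([0, 1, 0, 2, 0, 0, 0, 1, 0])  # winding numbers (example)

e      = energy(n, m, 1, 1, a)
e_dual = energy(m, n, 1, 1, 1.0 / a)       # a -> 1/a with n <-> m
assert np.isclose(e, e_dual)               # spectrum is duality invariant
print(e)
\end{verbatim}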
In the string theory, $M_s$, which is of the order of the Planck mass, is the only dimensionful constant. We frequently set $M_s$ to unity in the sequel. The significant feature of this model is the duality $a \leftrightarrow 1/a$ which is manifest in the spectrum. This symmetry connects the large volume world with the small volume world and is called the target space duality \cite{KY}. The self dual point of this duality is $a=1$ (in units of $1/M_s$). It is known that in the low energy limit of the closed string theory Einstein gravity emerges \cite{GSW}. However, Einstein gravity does not respect this duality \cite{BV}, which means that its use is not legitimate in this realm. Brandenberger and Vafa considered that Einstein gravity is modified in this realm so as to respect the duality, with the winding modes playing an essential role. Their approximate estimation showed that the winding modes work so as to slow down the expansion of the universe while the momentum modes slow down the contraction. These effects make the universe oscillate around the self dual point for a while \cite{TV}. Their description of how a three dimensional universe is born is as follows. Suppose that a $d$ dimensional space out of nine gets larger than the average by accident. Then the winding modes along these $d$ directions get more massive than the rest, so that they tend to decay more than the average. Consequently the number of these modes is reduced. Since the winding modes slow down the expansion, these $d$ directions become easier to expand than the other directions. Thus the perturbation considered above is unstable. They suggested that in this way the $d$ dimensional space becomes much larger in size than the Planck length while the remaining $9-d$ dimensions are kept at the Planckian scale. Since the Planckian scale is invisible at low energies, this mechanism effectively reduces the dimensionality of space time; a $d+1<10$ dimensional large universe arises out of the small $T^9\times R$. They called this mechanism decompactification. \section{Single state density} \label{sec:singlestate} In this section we investigate the fundamental properties of the single string state density $f(\varepsilon)$. This provides the theoretical foundation of our discussion of the thermal history of the string universe. In fact, as we will see in later sections, the thermal history is deduced from the functional form of the multi state density, and the multi state density is calculated from the single state density. In the high energy range the functional form of $f(\varepsilon)$ has already been estimated analytically. We first review this result and explain how its volume dependence comes about. Later this volume dependence will prove to be of key importance in following the thermal history of the string universe. Next we give our numerical estimation of $f(\varepsilon)$ by direct counting of the single string states in the low energy range. This clarifies the explicit number distribution of the strings. Lastly we remark that an interesting effect on the single state density is induced as the cosmological expansion proceeds.
\subsection{High energy behavior} \label{subsec:high} For a general closed superstring theory in a compact space which is multiply connected, the single state density is written as \cite{Hagedorn,Frau,Sund,MT,AT,Deo,BG,BV} \begin{equation} f(\varepsilon)={CV\over\varepsilon^{\eta+1} } \hbox{e}^{{\beta_H \varepsilon}} \label{eq:single} \end{equation} for large enough energy $\varepsilon$. In this expression $\eta=D/2$ with $D$ being the number of noncompact dimensions, $V$ denotes the $D$-dimensional volume and $1/{\beta_H } $ is a constant called the Hagedorn temperature. The constants $C$ and ${\beta_H } $ depend on the string model. The above form of the single state density is uniquely implied by the single string spectrum (\ref{eq:spec}). {}From (\ref{eq:spec})\ we learn that the energy of the string consists of a kinetic part $r$ and an oscillation part $n_R+n_L$. For the kinetic part $r^2$ we have winding modes $a_i m_i$ in addition to the usual momentum modes, because we are dealing with a closed string theory in a non-simply connected manifold. Before considering the volume dependence of (\ref{eq:single}), let us consider what form of the single state density is implied for a usual relativistic particle. Such a system has neither oscillation modes nor winding modes, and the spectrum (\ref{eq:spec})\ reduces to the simpler form $\varepsilon^2=\sum_{i=1}^d \left( { n_i / a_i} \right)^2 $ on the $d$-dimensional torus. The single state density in this case is proportional to the surface area of the $d$-dimensional ellipsoid with axes $\varepsilon a_1,\cdots, \varepsilon a_d$. Namely we get \begin{equation} f(\varepsilon)\propto {d\over d\varepsilon} \left(\prod a_i\right) \varepsilon^d=V \varepsilon^{d-1} \end{equation} where $V$ is the $d$-dimensional volume. This represents the simple fact that the single state density is an extensive quantity. One of the peculiar phenomena in string thermodynamics is that $f$ is no longer an extensive quantity. In fact the number $D$ appearing in (\ref{eq:single}) is not the total space dimension but the number of noncompact dimensions. This means that $f$ is not extensive. In the extreme case of a totally compact space, $f$ is volume independent. Let us see how this peculiar behavior comes about. Only the kinetic part reflects the structure of the space, so we concentrate on the degeneracy of the kinetic part. Just as in the case of the usual point particles, the degeneracy is obtained as the surface area of the ellipsoid having the axes $a_1 r,\cdots,a_9r,r/a_1,\cdots,r/a_9$, see (\ref{eq:spec}). The state density is therefore proportional to \begin{equation} {d\over d\varepsilon}\left(a_1r\times a_2r\times \cdots\times a_9r\times {r\over a_1}\times{r\over a_2}\times \cdots \times{r\over a_9}\right). \end{equation} This shows that $f(\varepsilon)$ is certainly $a_i$ independent. This property is essentially a consequence of the cancellation between each momentum mode and the corresponding winding mode. As the volume expands, the phase space of the former increases while that of the latter decreases. Now let us see what happens if $D$ dimensions are open. This time $D$ momentum modes miss their cancellation partners, so that an $a_i$ dependence remains. Accordingly we get $f(\varepsilon) \propto a_1\ldots a_D$. This explains the peculiar volume dependence shown in (\ref{eq:single}). \subsection{Numerical analysis in low energy range} \label{subsec:num} We present here the result of our numerical analysis.
Fig.\,\ref{fig1} presents the plot of $f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ in the totally compact case $D=0$. In this situation the kinetic energy is discrete owing to the finiteness of the space, which makes the energy spectrum discrete, as seen in Fig.\,\ref{fig1}. We also plot $\varepsilon f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ in Fig.\,\ref{fig2}. The asymptotic behavior $ f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}\rightarrow 1/\varepsilon$ is clearly seen in Fig.\,\ref{fig2}. This is the first time that this asymptotic behavior is shown to set in already at $\sim 10 M_s$. These two figures in fact have a clear physical meaning. It is shown \cite{Deo} in the microcanonical formalism that $f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ and $\varepsilon f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ represent the number distribution and the energy distribution of the strings, respectively. As we will recognize later, the $D=3$ case is relevant to our discussion. The value of $f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}/V$ versus $\varepsilon$ is presented in Fig.\,\ref{fig3}. The spiky behavior therein represents the opening of various modes. The analytic estimation indicates that this quantity tends to behave as $C/\varepsilon^{5/2}$ for large $\varepsilon$ (see (\ref{eq:single})). The plot of $\varepsilon^{5/2} f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}/V$ is shown in Fig.\,\ref{fig4}, which justifies this asymptotic behavior and tells us where it sets in. \subsection{Change in the single state density} \label{subsec:change} In this subsection we examine what happens to $f(\varepsilon)$ when three accidentally chosen directions are expanding while the remaining dimensions are kept Planckian. Because we restrict ourselves to the case in which the three directions expand at an equal rate, we set $a_1=a_2=a_3=a $ and $a_4=\cdots=a_9=b \sim 1$ (see (\ref{eq:spec})) from now on. In the preceding section we saw that the high energy behavior of the single state density $f(\varepsilon)$ is independent of $a $, since $D=0$. This is the consequence of the cancellation between the momentum modes and the winding modes. However, this cancellation becomes incomplete at low energies for the following reason. As $a$ gets larger, the winding modes along the $a$-directions get heavier. Eventually the winding modes in those directions become too heavy to be excited, especially in the low energy range. This means the winding modes along the expanding three directions are effectively frozen. Then the cancellation between the momentum modes and the winding modes breaks down in the low energy region. As a result the single state density behaves as if $D=3$ instead of $D=0$ at low energies. Namely, $f(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ behaves as $CV/ \varepsilon^{5/2}$ at low energies and as $ 1/\varepsilon$ in the high energy range. We denote by $m_0$ the energy which separates the low and high energy ranges. As $a$ becomes large, strings with higher energies behave as if $D=3$. Namely, the effective $D=3$ range extends as the universe expands. In fact it can be shown, by examining the functional form of the state density, that as $a$ grows $m_0$ grows at the rate $m_0(a)\propto a^2$. This is justified by the numerical analysis of Allega et al.\ and by our own. This phenomenon has also been discussed by other authors, in different ways, by P.~Salomonson et al.\ in \cite{Sund} and by the authors of \cite{Allega,DeoII}.
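The freezing of the winding modes can be made explicit by a direct count of kinetic states. The following minimal Python sketch (ours) does this for a single compact direction of size $a$, with $M_s=1$ and the oscillators as well as the remaining directions omitted.
\begin{verbatim}
def kinetic_count(eps, a):
    """Count pairs (n, m) with (n/a)^2 + (m*a)^2 <= eps^2
    for one compact direction of size a (Ms = 1)."""
    nmax = int(eps * a) + 1
    mmax = int(eps / a) + 1
    count = 0
    for n in range(-nmax, nmax + 1):
        for m in range(-mmax, mmax + 1):
            if (n / a)**2 + (m * a)**2 <= eps**2:
                count += 1
    return count

for a in [1.0, 2.0, 4.0, 8.0]:
    # low energy: windings with m != 0 are frozen, count ~ 2*eps*a
    # high energy: count ~ pi*eps^2, independent of a (cancellation)
    print(a, kinetic_count(2.0, a), kinetic_count(40.0, a))
\end{verbatim}
At low energy, once $a>\varepsilon$ only the $m=0$ states survive and the count grows like $2\varepsilon a$, i.e., in proportion to the volume; at high energy the count approaches the ellipse area $\pi\varepsilon^2$, independent of $a$, reflecting the momentum-winding cancellation.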
Before closing this section we summarize the behavior of the single state density $f$ in an expanding epoch. Below the first excitation energy, $\varepsilon<m_1$, $f$ describes the zero modes, which are regarded as usual point particles. In the range between $m_1$ and $m_0$ the state density behaves as that of a string gas in three open space dimensions. We call a string in this range a non-winding string, because such a string has no winding along the expanding directions. Note that a non-winding string in general has windings along the remaining six directions. Lastly, for energies greater than $m_0(a)$, $f$ behaves as that of a string gas in a totally compact space. We call such a string a winding string. We can express the single state density using the step function $\theta$ as: \begin{eqnarray} f(\varepsilon)=f_z(\varepsilon)&+& {CV\over \varepsilon^{\eta+1}} \hbox{e}^{{\beta_H \varepsilon}} \theta(\varepsilon-m_1)\theta(m_0(a)-\varepsilon)\nonumber\\ &+&{1\over \varepsilon} \hbox{e}^{{\beta_H \varepsilon}}\theta(\varepsilon-m_0(a))\nonumber\\ \end{eqnarray} with $\eta=3/2$, where $f_z$ denotes the state density of the zero modes. Although our interest in this paper is in the $\eta=3/2$ case only, we keep $\eta$ arbitrary in the subsequent expressions for a better understanding of the structure of our treatment. We stress again that $f$ is proportional to $V$ below $m_0(a)$. \section{How big should the total energy be?} \label{sec:big} In this section we give an important remark on the question of how big the total energy $E$ of our microcanonical ensemble should be. Let us consider the string distribution in the initial epoch. The number distribution of strings is displayed in Fig.\,\ref{fig1}; see the second paragraph of Sec.\,\ref{subsec:num}. The first excited state opens at $m_1=\sqrt{8}(\times M_s)$, corresponding to $N=n_R+n_L=2$ instead of $N=1$, since the latter is inconsistent with the level matching condition of the heterotic string theory \cite{GSW}. The quantity $E$ is the total energy of the microcanonical ensemble which we have to introduce at the beginning of the discussion. What we can find from Fig.\,\ref{fig1} is that if we take $E$ of the order of $M_s$, as in the usual dimensional analysis, we have no string modes from the beginning, since the distribution terminates at $\varepsilon=E$. In such a situation our universe is no longer a model of string cosmology. One may say that we only have to take $E$ as large as we like. However, in a cosmology with causality we can only have a finite region in thermal equilibrium, because the speed of light is finite. We are dealing with equilibrium thermodynamics in this paper. Therefore it is implicitly assumed that the spatial region having the energy $E$ must be in thermal equilibrium. Consequently we cannot take $E$ as large as we like. Now we define $E$ to be the maximal energy allowed in thermal equilibrium and discuss how big $E$ can be. Since our present knowledge of string theory does not allow us to determine the value of $E$, we are left with two possibilities. The first is that the region having $E$ (defined as above) is smaller than the whole universe; the second is that the whole torus universe is in thermal equilibrium. First we consider the former case. Because the value of $E$ is considered to be the maximal energy which a single string can occupy, it appears that $E$ must be large enough in order for our universe to be regarded as a model of string cosmology.
But this is not necessarily the case. In fact there is a loophole in this argument. A string extending beyond the causal region can have an energy much greater than $E$. Such an acausal fundamental string may well be produced if the universe itself is born through quantum tunneling or something like that. However, such a situation is beyond the control of our present technology, and we cannot go further in that case in this paper. Next we consider the second case, in which the whole universe is in thermal equilibrium. This time we simply conclude that the value of $E$ must be large for the universe to be full of strings; there is no loophole. Because the loophole in the former possibility is out of our control, we decide to assume that $E$ is large enough in this paper. Of course the alternative case, that our universe has no strings even in the initial epoch, is another possibility. However we will not treat this case, since the purpose of this paper is to explore the possibility of a universe full of strings. As we stated before, our scenario is based on that of Brandenberger and Vafa, in which the string universe is supposed to oscillate around the self dual point for the initial period. This picture fits nicely with the second case mentioned above, since oscillation over many periods tends to thermalize the whole universe. Even if we had started in the loophole case, this oscillation would make the acausal strings causal. Moreover, we point out that this picture has another advantage from the cosmological point of view: it is favorable with regard to the horizon problem. If the whole universe is thermalized during the initial period of oscillation, we go through the history with background radiation of the same temperature in every part of the universe. \section{Framework to follow the thermal history} \label{sec:framework} In this section we intend to give a concrete framework with which to follow the thermal history of the string universe. Many authors have discussed the behavior of the string gas in the microcanonical formalism so far \cite{Hagedorn,Frau,Sund,MT,AT,Deo,BG,BV}. The string gas has been examined in various ways, and the differences between the stringy phase (which is frequently referred to as the high density phase) and the low temperature phase (referred to as the low density phase) have been exposed. However, very little attention has been paid to the transition between these two phases. The problem of how the stringy universe evolves into the radiation dominated universe is still open. In order to treat this transient period we present a concrete framework based on the microcanonical formalism. The multi state density is written in terms of the single state density under the Maxwell-Boltzmann (M-B) approximation as \cite{Deo} \begin{equation} \Omega(E)= \sum_{n=1}^\infty {1\over n!}\int_0^\infty \prod_{j=1}^n d\varepsilon_j f(\varepsilon_j) \delta\left(E-\sum_{j=1}^n \varepsilon_j\right) . \label{eq:multi} \end{equation} We discuss the validity of the M-B approximation in the Appendix. \widetext We recall that the single state density $f$ is expressed as a sum of the state densities of the zero modes, the non-winding strings and the winding strings.
We define the multi state densities associated with these regions by imitating (\ref{eq:multi}) as: \begin{eqnarray} \omega_z(\varepsilon_z)&=& \sum_{n=1}^\infty {1\over n!}\int_0^\infty \prod_{j=1}^n d\varepsilon_j f_z(\varepsilon_j) \delta \left(\varepsilon_z-\sum_{j=1}^n \varepsilon_j\right), \nonumber\\ \omega_N(\varepsilon_N)&=& \sum_{n=1}^\infty {1\over n!}\int_0^\infty \prod_{j=1}^n {CV d\varepsilon_j \over \varepsilon_j^{\eta+1} } \hbox{e}^{{\beta_H }\varepsilon_j} \theta(\varepsilon_j-m_1) \theta\left( m_0(a)-\varepsilon_j \right) \delta\left(\varepsilon_N-\sum_{j=1}^n \varepsilon_j\right), \nonumber\\ \omega_W(\varepsilon_W)&=&\sum_{n=1}^\infty {1\over n!}\int_0^\infty \prod_{j=1}^n { d\varepsilon_j \over \varepsilon_j }e^{{\beta_H }\varepsilon_j} \theta\left(\varepsilon_j-m_0(a)\right) \delta\left(\varepsilon_W-\sum_{j=1}^n \varepsilon_j\right). \nonumber\\ \label{eq:omegas} \end{eqnarray} \narrowtext The similarity between (\ref{eq:multi}) and the Taylor expansion of the exponential function leads us to anticipate that $\Omega$ is expressed as a product of the $\omega$'s. Indeed, a straightforward calculation proves that \begin{eqnarray} \hat\Omega(E)= \int_0^\infty d{\varepsilon_z} &d&{\varepsilon_N} d{\varepsilon_W} \,\, \hat\omega_z({\varepsilon_z}) \hat\omega_N({\varepsilon_N}) \hat\omega_W({\varepsilon_W})\nonumber\\ &\times&\delta(E-{\varepsilon_z}-{\varepsilon_N}-{\varepsilon_W}).\nonumber\\ \label{eq:hmulti} \end{eqnarray} The carets on top of the $\omega$'s denote the addition of a delta function, $\hat\omega(\varepsilon)=\omega(\varepsilon)+\delta(\varepsilon)$; they enable us to express the equation in the simple form above. The necessity of the delta functions is readily understood if we recall that the right hand side of (\ref{eq:hmulti}) counts all the composite states of the three kinds of constituents: the zero modes, the non-winding strings and the winding strings. For example, there are also states having no zero modes, and these must be counted. The term $\delta({\varepsilon_z}) \omega_N({\varepsilon_N})\omega_W({\varepsilon_W})$ ensures that such states are taken into account in the integration. Based on this equation we argue as follows. Because the delta function ensuring energy conservation is included on the right hand side of (\ref{eq:hmulti}), we can perform the integral over $ {\varepsilon_W}$ to obtain a two dimensional integration over $({\varepsilon_z},{\varepsilon_N})$. In many cases of interest the integrand has a sharp peak at a single point in the $({\varepsilon_z},{\varepsilon_N})$ plane, and the contribution from this point dominates the integral. The position of the peak depends on $a$, since the $\omega$'s depend implicitly on $a$. We denote the position of the peak by $(e_z(a),e_N(a))$. Recalling the fundamental principle of equal a priori probability, we conclude that we find the subsystems at the energies $(e_z(a),e_N(a))$ when the size of the universe is $a$, because this state has an overwhelming probability. This is nothing but the essence of the microcanonical formalism. Therefore, once we find the functional forms of $e_z(a)$ and $e_N(a)$, we can follow the thermal history of the universe. This is our strategy for determining the thermal history in the microcanonical formalism. To carry out this program we need to calculate the multi string state densities.
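The strategy just described can be phrased as a few lines of numerics. The sketch below (ours, with toy stand-ins for the logarithms of $G$ and $\hat A$; the actual inputs follow from the next three sections) locates the most probable partition $(e_z,e_N,e_W=E-e_z-e_N)$ on a grid, keeping only points with $\varepsilon_W=0$ or $\varepsilon_W\geq m_0$, after the common exponential factor has been taken out.
\begin{verbatim}
import numpy as np

def peak_position(log_G, log_A, E, m0, num=500):
    """Argmax of the integrand of (eq:hmulti) over (e_z, e_N), with
    e_W = E - e_z - e_N restricted to e_W = 0 or e_W >= m0."""
    ez = np.linspace(0.0, E, num)
    eN = np.linspace(0.0, E, num)
    EZ, EN = np.meshgrid(ez, eN, indexing="ij")
    EW = E - EZ - EN
    allowed = (np.abs(EW) < E / num) | (EW >= m0)
    logp = np.where(allowed, log_G(EZ) + log_A(EN), -np.inf)
    i, j = np.unravel_index(np.argmax(logp), logp.shape)
    return ez[i], eN[j], E - ez[i] - eN[j]

# Toy stand-ins: zero-mode peak near h, non-winding peak near eN_star
h, q, eN_star = 65.0, 5.0, 4.0
print(peak_position(lambda x: -((x - h) / q)**2,
                    lambda x: -((x - eN_star) / 2.0)**2,
                    E=1000.0, m0=50.0))
\end{verbatim}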
In the next three sections we evaluate the multi state densities associated with the non-winding strings, the winding strings and the zero modes, successively. \section{Multi state density of the non-winding strings } \label{sec:non} The evaluation of the multi state densities $\omega_N(\varepsilon)$ and $\omega_W(\varepsilon)$ has been given by several authors \cite{MT,Deo,BG,BV}. The method used there is the Laplace transformation combined with the saddle point approximation. We note that the latter is reliable only for large $\varepsilon$. However, as we will see later, the small $\varepsilon$ behavior of $\omega_N(\varepsilon)$ is needed in the analysis of the thermal history. In the following we employ a completely new method to examine the form of $\omega_N$ and $\omega_W$: the characterization of the $\omega$'s by differential-difference equations. We will show that the $\omega$'s are solutions of certain linear differential-difference equations, and we solve these to find the functional form of the $\omega$'s. \subsection{Evaluation} \label{subsec:nonmulti} \widetext First we remark that we can factor out the exponential part of $\omega_N$ as $\omega_N(\varepsilon)=A(\varepsilon,v) \hbox{e}^{{\beta_H \varepsilon}}$ with \begin{equation} A(\varepsilon,v)= \sum_{n=1}^\infty {1\over n!}\int_0^\infty \prod_{j=1}^n {v\, d\varepsilon_j \over \varepsilon_j^{\eta+1} } \theta(\varepsilon_j-m_1) \theta\left(m_0(a)-\varepsilon_j\right) \delta\left(\varepsilon-\sum_{j=1}^n \varepsilon_j\right), \label{eq:omegabar} \end{equation} where $v=CV$. This factorization is possible since the integration is constrained by the delta function. If we denote \begin{eqnarray} A_n(\varepsilon,v)= \int_0^\infty \prod_{j=1}^n { d\varepsilon_j \over \varepsilon_j^{\eta+1} } &\theta&(\varepsilon_j-m_1) \theta\left(m_0(a)-\varepsilon_j\right)\nonumber\\ \times &\delta&\left(\varepsilon-\sum_{j=1}^n \varepsilon_j\right), \nonumber\\ \label{eq:bndef} \end{eqnarray} then (\ref{eq:omegabar}) is rewritten as \begin{equation} A(\varepsilon,v)=\sum_{n=1}^\infty {v^n\over n!}A_n(\varepsilon,v). \end{equation} We change variables as $\varepsilon_j\rightarrow \varepsilon x_j$ and obtain \begin{equation} A_n(\varepsilon,v)={1\over \varepsilon^{\eta n+1}} \int_0^\infty \prod_{j=1}^n { dx_j \over x_j^{\eta+1} } \theta(x_j-m_1/\varepsilon) \theta\left(m_0(a)/\varepsilon-x_j\right) \delta\left(1-\sum_{j=1}^n x_j\right). \end{equation} By operating with $\varepsilon\partial_\varepsilon$ and $\eta v\partial_v$ on this expression we have \begin{eqnarray} \varepsilon&\partial_\varepsilon& A_n(\varepsilon,v) =-(\eta n+1) A_n(\varepsilon,v)+ {n\over m_1^\eta} A_{n-1}(\varepsilon-m_1,v)- {n\over m_0^\eta} A_{n-1}(\varepsilon-m_0,v),\nonumber\\ \hbox{and}\,\,\, & & \nonumber\\ \eta v &\partial_v& A_n(\varepsilon,v) ={n\over m_0^\eta} A_{n-1}(\varepsilon-m_0,v), \nonumber\\ \end{eqnarray} respectively. Here we made use of the fact that $m_0(a)\propto a^2 \propto v^{1/\eta}$ (see Sec.\,\ref{subsec:change}). \narrowtext {}From these we obtain \begin{eqnarray} (1+&\varepsilon&\partial_\varepsilon +\eta v\partial_v ) A_n(\varepsilon,v)\nonumber\\ &=&-\eta n A_n(\varepsilon,v)+ {n\over m_1^\eta} A_{n-1}(\varepsilon-m_1,v) .\nonumber\\ \label{eq:rec} \end{eqnarray} Summing this equation over all $n$ we get \begin{equation} (1+\varepsilon\partial_\varepsilon +\eta v\partial_v ) A(\varepsilon,v)= {v \over m_1^\eta} A(\varepsilon-m_1,v) . \label{eq:ddeq} \end{equation} This is the equation satisfied exactly by $A$.
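Since (\ref{eq:ddeq}) is exact, its solution can be cross-checked against a direct numerical evaluation of (\ref{eq:omegabar}). The following minimal Python sketch (ours; all numerical values illustrative) builds $A(\varepsilon,v)=\sum_n (v^n/n!)\,g^{*n}(\varepsilon)$ by iterated convolution of $g(\varepsilon)=\theta(\varepsilon-m_1)\theta(m_0-\varepsilon)/\varepsilon^{\eta+1}$ on an energy grid.
\begin{verbatim}
import numpy as np

def A_direct(v, m1, m0, eta=1.5, eps_max=80.0, N=2000, nmax=60):
    """Evaluate A(eps, v) of (eq:omegabar) as sum_n v^n/n! times the
    n-fold convolution of g(e) = theta(e-m1)*theta(m0-e)/e**(eta+1)."""
    de = eps_max / N
    e = (np.arange(N) + 0.5) * de
    g = np.where((e >= m1) & (e <= m0), e**(-(eta + 1.0)), 0.0)
    gn = np.zeros(N)
    gn[0] = 1.0 / de                      # discrete delta function (n = 0)
    A = np.zeros(N)
    coeff = 1.0
    for n in range(1, nmax + 1):
        gn = np.convolve(gn, g)[:N] * de  # g^(*n) from g^(*(n-1))
        coeff *= v / n                    # v^n / n!
        A += coeff * gn
    return e, A

e, A = A_direct(v=50.0, m1=8.0**0.5, m0=40.0)
print(e[np.argmax(A)])                    # peak position of A
\end{verbatim}
The position of the maximum obtained in this way can be compared directly with the analytic estimate of the peak derived below.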
When $\varepsilon $ is large compared with $m_1$, we can use the approximation $A(\varepsilon-m_1,v) =A(\varepsilon,v)-m_1\partial_\varepsilon A(\varepsilon,v)$ to rewrite the equation as \begin{equation} (1+ \varepsilon\partial_\varepsilon+ {v\over m_1^\eta}m_1\partial_\varepsilon +\eta v\partial_v)A(\varepsilon,v)={v\over m_1^\eta}A(\varepsilon,v). \label{eq:aps} \end{equation} In the case that the string gas has high energy density, $\varepsilon/m_1\gg{v \over\eta m_1^\eta}$ (which is the case studied in Refs.\,\cite{MT,Deo,BG}), we can neglect the third term on the left hand side of (\ref{eq:aps}) in comparison with the second term. The equation then reduces to \begin{equation} ( 1+\varepsilon\partial_\varepsilon +\eta v\partial_v)A(\varepsilon,v) ={v\over m_1^\eta}A(\varepsilon,v). \end{equation} This can be solved with ease to obtain \begin{eqnarray} A(\varepsilon,v)&=& {v\over \varepsilon^{\eta+1} } g^\prime\left(-{v\over \eta \varepsilon^\eta} \right)\nonumber\\ &\times&\hbox{exp}\left[ g\left(-{v\over \eta \varepsilon^\eta} \right)+ {v\over \eta m_1^\eta} \right]\theta(\varepsilon-m_1),\nonumber\\ \end{eqnarray} with some analytic function $g(x)$. The last step function expresses the fact that $A(\varepsilon,v)=0$ for $\varepsilon\leq m_1$ by definition. Comparing this with the $\eta=0$ solution, which will be obtained in the next section, we can determine $g$ to first order in $x$ as \begin{equation} g(x)=(1+\eta\, O(\eta))\,x+ \cdots, \end{equation} where $O(\eta)$ represents a function of $\eta$ vanishing at $\eta=0$. Consequently, for a string gas with energy ${\varepsilon_N}$ such that $v/\eta \varepsilon_N^\eta\sim 0$, the multi state density is expressed as \begin{equation} \omega_N(\varepsilon,v)=const\times {CV \over \varepsilon^{\eta+1}} \hbox{exp}\left( {CV\over \eta m_1^\eta}+{\beta_H \varepsilon} \right). \end{equation} This reproduces the known result \cite{MT,Deo,BG}. We have examined the high density region above. However, it will turn out that for our string universe the region of low energy density, $\varepsilon/{m_1}\ll{v/\eta m_1^\eta} $, is the relevant one. Thus we have to evaluate $\omega_N(\varepsilon,v)$ at low energy density. Before carrying this out we estimate the position and the height of the peak of $A$, which are of importance for the thermal history. Setting $\varepsilon=e_N(a)$ in (\ref{eq:aps}), this equation reduces to an ordinary differential equation, \begin{equation} (1+\eta v\partial_v)A(e_N(a),v)= {v \over m_1^\eta} A(e_N(a),v), \end{equation} since $\partial_\varepsilon A$ vanishes at that point. This ordinary differential equation is readily solved and we get the height of the peak: \begin{equation} A(e_N(a),v) ={const\over v^{1/\eta}}\hbox{exp}\left[{v\over \eta {m_1}^\eta}\right]. \label{eq:entn} \end{equation} The logarithm of this is nothing but the entropy of the non-winding strings. The expression thus simply tells us that the entropy produced when a $D$ dimensional volume out of nine expands is proportional to the expanding volume (note that $v=CV$). Next we determine the functional form of $e_N(a)$. {}From the numerical determination of $A(\varepsilon,v) $ which we will give later, we see that $v/\eta {m_1}^\eta $ is much greater than $e_z(a)/{m_1}$. In this case we can neglect the second term instead of the third one in (\ref{eq:aps}). This reduces the equation to \begin{equation} \left(1+{v \over {m_1}^\eta}{m_1} \partial_\varepsilon+\eta v\partial_v\right) A(\varepsilon,v) ={v\over {m_1}^\eta}A(\varepsilon,v).
\end{equation} By solving it we can determine the functional form of $A(\varepsilon,v)$ in this region as \begin{equation} A(\varepsilon,v)={1\over v^{1/ \eta}} \hbox{exp}\left[h\left({\varepsilon\over {m_1}}-{v \over\eta m_1^\eta} \right)+{v \over\eta m_1^\eta} \right], \label{eq:dimadv} \end{equation} where $h$ is some function to be determined by a boundary condition; its explicit form is unnecessary for our present purpose. The function $A(\varepsilon,v)$ is maximal when the function $h(x)$ is maximal. Denoting the position of the peak of $h(x)$ by $x=c$, we can express the position $e_N(a)$ of the peak of $A$ as \begin{equation} e_N(a)/{m_1}={v \over\eta m_1^\eta}+c. \label{eq:upfour} \end{equation} Numerically it can be expressed as $e_N(a)=4.00 a^3+const$. Namely, $e_N(a)$ increases in proportion to the volume for large $a$. We solved (\ref{eq:ddeq}) for small $\varepsilon$ at growing values of $a$. Fig.\,\ref{fig5} shows the plot of $\ln A(\varepsilon,v)$ versus $\varepsilon$. As the energy increases, new modes start to open. This fact is exhibited in the low energy behavior of $A(\varepsilon,v)$ as the emergence of several peaks. We see that the position of the peak moves to higher energy as $a$ grows. In the high energy region the spiky behavior seen at low energies is smeared out, resulting in a smooth curve. Fig.\,\ref{fig6} and Fig.\,\ref{fig7} show the plots of $(e_N(a)/{m_1})/a^3$ and $\ln( A({\varepsilon_N},v) )/({v/\eta m_1^\eta})$ versus $a$, respectively. The behavior in both figures is consistent with the above analytic estimates. In preparation for later use we present here plots of the microcanonical temperature of the non-winding strings. This quantity, defined as \begin{equation} \beta_N(\varepsilon,v)= \partial_\varepsilon\ln \omega_N(\varepsilon,v)= \partial_\varepsilon\ln A(\varepsilon,v)+{\beta_H }, \label{eq:microM} \end{equation} measures the rate of entropy increase with energy. We show the plots of $ \beta_N/{\beta_H }$ versus $\varepsilon$ in Fig.\,\ref{fig8}. The globally decreasing behavior means that the specific heat is globally positive. The point $\varepsilon$ at which $\beta_N={\beta_H }$ is where $\ln A(\varepsilon, v) $ peaks. The local spiky behavior around $\varepsilon\sim m_1$ signals a thermodynamical instability in this region. \section{Multi state densities of the winding strings} \label{sec:windmulti} Next we calculate the multi state density of the winding strings. We only need to repeat the analogue of the previous discussion. Since $\eta=0$ in the present case, we get \begin{equation} (1+\varepsilon\partial_\varepsilon )A(\varepsilon)=A(\varepsilon-m_0) \end{equation} instead of (\ref{eq:ddeq}). Using a similar approximation we obtain $\partial_\varepsilon A(\varepsilon)=0$. Determining the normalization by \begin{equation} A(m_0)= \int_{m_0}^\varepsilon {d\varepsilon_1\over \varepsilon_1} \theta(\varepsilon_1-m_0)\delta(\varepsilon-\varepsilon_1) \big\vert_{\varepsilon=m_0}=1/m_0, \end{equation} we finally obtain \begin{equation} \omega_W(\varepsilon) ={1\over m_0}\hbox{e}^{\beta_H \varepsilon} \theta(\varepsilon-m_0). \label{eq:flat} \end{equation} This reproduces the known result \cite{BV}. One particular feature of this functional form is that $\omega_W(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}$ has no peak. This is not the case for the non-winding strings and the zero modes. \section{Multi state density of the zero modes } \label{sec:zero} In this section we estimate the multi state density of the zero modes.
When we enter into the thermal history of the string universe, knowledge of the position of the peak of the function $\omega_z(\varepsilon,v)\hbox{e}^{-{\beta_H \varepsilon}}$ will be necessary. We will determine this in the last part of this section. The multi state density $\omega_z(\varepsilon,v)$ in (\ref{eq:omegas}) can be rewritten as \begin{eqnarray} \omega_z(\varepsilon,v) &=&\hbox{e}^{\beta \varepsilon}\nonumber\\ &\times&\sum_{k=1}^\infty {1\over k!}\int_0^\infty \prod_{j=1}^k d\varepsilon_j\, f_z(\varepsilon_j)\hbox{e}^{-\beta \varepsilon_j}\, \delta\left(\varepsilon-\sum_{j=1}^k \varepsilon_j\right), \nonumber\\ \label{eq:zero} \end{eqnarray} with an arbitrary positive parameter $\beta$, noting the delta function constraint. In order to estimate the power series above, we note two facts. First, the sum of the exponential function $\sum_k {x^k / k!}$ receives its dominant contribution from the $k=x$ term. In fact it can be verified that the expression ${x^k / k!}$, seen as a function of $k$, has a sharp peak at $k=x$. Second, we can ascertain that the power series for $\omega_z (\varepsilon)$ can be regarded as $\sum_k x^k/k!$. For this to be valid it is sufficient that the important contribution to the integral over the $\varepsilon_j$'s comes from the diagonal region $\varepsilon_1\sim\varepsilon_2\sim\cdots\sim\varepsilon_k$. As we mentioned before, $f_z$ behaves as a positive power of $\varepsilon$; see Sec.~\ref{subsec:high}. This enables us to apply the well-known inequality $\root k \of {\varepsilon_1\varepsilon_2\cdots\varepsilon_k} \leq {1\over k}\sum_j \varepsilon_j$ to conclude that $\prod_j f_z(\varepsilon_j)$ is maximized on the diagonal region, justifying our claim. Because $x$ in $\sum_k x^k/k!$ corresponds to $x_0=\int_0^{\infty} f_z (\varepsilon) e^{-\beta\varepsilon} d\varepsilon$ in (\ref{eq:zero}), we only have to focus on the $k=x_0$ term. Let us examine for which energy $\varepsilon$ this term does not vanish. For the term not to vanish, the argument of the delta function must vanish. So let us see what the typical value of $\sum_{j=1}^k \varepsilon_j$ in the delta function of (\ref{eq:zero}) is. Using $k=x_0$, this is estimated as \begin{eqnarray} \left\langle\sum_{j=1}^k \varepsilon_j \right\rangle = k \langle\varepsilon_j\rangle&=&k { \int_0^{\infty} d\varepsilon\,\varepsilon f_z(\varepsilon)e^{-\beta\varepsilon} \over \int_0^{\infty} d\varepsilon\, f_z(\varepsilon) e^{-\beta\varepsilon} }\nonumber\\ &=&\int_0^{\infty} d\varepsilon\, \varepsilon f_z(\varepsilon)e^{-\beta\varepsilon}.\nonumber\\ \end{eqnarray} We define here $h$ and $n$ as \begin{eqnarray} h(\beta,v) &=&\int^\infty_0 \varepsilon f_z(\varepsilon)e^{-\beta\varepsilon} d\varepsilon ,\nonumber\\ n(\beta,v)&=&\int^\infty_0 f_z(\varepsilon)e^{-\beta\varepsilon} d\varepsilon \nonumber\\ \label{eq:eich} \end{eqnarray} for later convenience. As a result we realize that $\omega_z(\varepsilon,v)e^{-\beta\varepsilon}$ has a sharp peak at $\varepsilon=h(\beta,v)$, with height $\hbox{exp}( n(\beta,v))$ and some width $q(\beta,v)$. Making use of a function $G(x)$ having a peak at $x=0$ with unit width and unit height, we can express $\omega_z(\varepsilon,v)e^{-\beta\varepsilon}$ as \begin{equation} \omega_z(\varepsilon,v)e^{-\beta\varepsilon}= G\left( { \varepsilon-h(\beta,v) \over q(\beta,v)} \right) \hbox{exp}\left( n(\beta,v)\right). \label{eq:omZ} \end{equation} The case $\beta={\beta_H }$ in particular will be relevant in the later application. We write it here with the explicit numerical coefficient \begin{equation} h({\beta_H },v)=65.0 a^3.
\label{eq:rokugo} \end{equation} It is worth noting that this argument does not apply to the evaluation of $\omega_N(\varepsilon,v)$. As we saw in (\ref{eq:single}), the single state density was a product of an exponential part and a negative power of $\varepsilon$. The exponential part is irrelevant since it can be factored out as usual. The fact that the remaining part is a negative power of $\varepsilon$, however, implies the situation opposite to the above case, since $1/(\varepsilon_1\varepsilon_2\cdots\varepsilon_k)$ is maximized in the boundary region such as $\varepsilon_1\sim \varepsilon$, $\varepsilon_2\sim\cdots\sim\varepsilon_k\sim m_1$. This is the essence of what is called the Frautschi--Carlitz picture. \section{Thermal history} \label{sec:thermal} In this section we explicitly follow the thermal history of our system using the multi state densities calculated in the previous sections. The history which we are going to describe below consists of two distinct epochs, which we refer to as epoch I and epoch II, respectively. We insert the $\omega$'s obtained above into (\ref{eq:hmulti}). Among the constituents of $\Omega$, $\omega_N $ and $ \omega_W$ have the same exponential dependence $\hbox{e}^{{\beta_H \varepsilon}}$. As for the zero modes, we can formally factor out the same exponential using the previous formula (\ref{eq:omZ}) with $\beta={\beta_H }$. If we insert these $\omega_z$, $\omega_N$ and $\omega_W$ into (\ref{eq:hmulti}), the exponential factors can be combined to form $\hbox{exp}({\beta_H } E)$ thanks to the delta function $\delta(\varepsilon-\varepsilon_W-\varepsilon_z-\varepsilon_N)$ and then be pulled out of the integral as \begin{eqnarray} \hat \Omega (E,v) =&{1\over m_0}& \hbox{exp}\left[{\beta_H } E+n({\beta_H },v)\right] \int_0^{\infty} d\varepsilon_z d\varepsilon_N d\varepsilon_W\nonumber\\ &\times& G\left( { \varepsilon_z-h(\beta_H,v) \over q(\beta_H,v) }\right) \hat A(\varepsilon_N,v)\nonumber\\ &\times&\left(\theta(\varepsilon_W-m_0)+\delta(\varepsilon_W)\right)\nonumber\\ & \times& \delta(E-\varepsilon_z-\varepsilon_N-\varepsilon_W).\nonumber\\ \label{eq:omegath} \end{eqnarray} \subsection{Oscillating epoch and epoch I} \label{subsec:epochI} As we mentioned before, the energies $\varepsilon_z$, $\varepsilon_N$, and $\varepsilon_W$ of the system are determined as the position of the peak of the integrand. The functions $G$ and $\hat A$ prefer that ${\varepsilon_z}$ and ${\varepsilon_N}$ take their most probable values, respectively. The winding string energy $\varepsilon_W$ is adjusted to meet the requirement imposed by the delta function, since the integrand of (\ref{eq:omegath}) has no other dependence on $\varepsilon_W$. The function for the zero modes strongly favors \begin{equation} \varepsilon_z=e_z(a)=h(\beta_H,v) =\int_0^\infty d\varepsilon\,\varepsilon f_z(\varepsilon)\hbox{e}^{-{\beta_H \varepsilon}}, \label{eq:zerogrow} \end{equation} see (\ref{eq:eich}). The favorable value for the non-winding strings is similarly determined as $ \varepsilon_N=e_N(a)=m_1 v/(\eta m_1^\eta)+const$, which is derived in (\ref{eq:upfour}). Accordingly the most probable value of $\varepsilon_W$ is determined as $\varepsilon_W=e_W(a)=E-e_z(a)-e_N(a)$. Because both $e_z(a)$ and $e_N(a)$ grow as $a^3$ for large $a$, the ratio of $e_N(a)$ to $e_z(a)$ approaches a constant. Numerically, however, we see that $e_N(a)\ll e_z(a)$ from (\ref{eq:upfour}) and (\ref{eq:rokugo}). Namely, the zero modes are always dominant over the non-winding strings.
In view of $e_z(a)$ in (\ref{eq:zerogrow}), we find that the zero modes are distributed according to the canonical distribution with the Hagedorn temperature $1/{\beta_H }$. Namely, the temperature in this period is fixed to be the Hagedorn temperature. Therefore, the total energy of the zero modes grows in proportion to the volume. The reason why this is possible is that the winding strings continuously supply the energy by decay. The supplied energy is also given to the non-winding strings, resulting in the growth of $e_N(a)$ found in (\ref{eq:upfour}). The conversion of the energy into the non-winding strings and the zero modes continues until the winding strings disappear (i.e. $e_W(a)=0$). We call the period before their disappearance epoch I. Here we give an important comment on the change of the total energy $E$. Generally in cosmology the total energy is not a conserved quantity \cite{early}. For example the energy density of the radiation dominated universe scales as \begin{equation} e_z(a)/a^3\propto 1/a^4 \end{equation} implying $e_z(a)\propto 1/a$. This energy loss is attributed to the redshift of the radiation due to the expansion of the universe. If the size of the universe is multiplied by a factor $a$, the wavelength is multiplied by the same factor. The radiation loses its energy by this effect. The energy lost is given to the gravitational field. We call this the normal energy loss. This energy loss is deduced from Einstein gravity. It is true that in the period in question Einstein gravity is not a reliable approximation because of possible string corrections. Even in this period, however, we consider that this energy loss works for the zero mode sector. We therefore adopt the normal energy loss as the assumption on which we base our account of the thermal history of the universe. It is amazing to observe that this normal energy loss fits nicely into our scenario, as we will show below. In the pure radiation case the normal energy loss is described by the differential equation $de_z(a)/da=-e_z(a)/a$. In order to apply this to our case we have to take the existence of the other modes into account. The energy exchange between the other modes and the zero modes is allowed, while the energy loss occurs only through the redshift of the zero modes, $- e_z(a)/a$. Therefore the normal energy loss now means \begin{equation} {d\over da}(e_z(a)+e_N(a)+e_W(a))={d\over da}E(a)=-{1\over a}e_z(a). \label{eq:normal} \end{equation} Adding this equation to the previously given conditions, we can uniquely determine the functional forms of $e_z(a)$, $e_N(a)$, $e_W(a)$ and $E(a)$ throughout the thermal history. In order to reveal what (\ref{eq:normal}) means, we examine the entropy change of the system under this assumption. The entropy of the system is estimated from (\ref{eq:omegath}) as \begin{equation} S={\beta_H } E(a)+n({\beta_H },v)+\ln A(e_N(a),v). \label{eq:totent} \end{equation} The last term comes from the non-winding strings. Let us first examine the case without it. Then the change of $S$ due to the growth of $a$ reads \begin{equation} {d\over da}S=-{\beta_H }{e_z(a)\over a}+{3 n({\beta_H },v)\over a}. \end{equation} Surprisingly, we can show that this vanishes. This is verified by combining the fact that $-\partial_{\beta_H } n({\beta_H },v)$ $=h({\beta_H },v)=e_z(a)$ with $n({\beta_H },v)\propto 1/{\beta_H }^3$ (see (\ref{eq:eich})). These two imply $e_z(a)={3\over {\beta_H }} n({\beta_H },v)$, which implies $dS/da=0$.
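Explicitly, since $n({\beta_H },v)$ is proportional to $v\propto a^3$ and to $1/{\beta_H }^3$, we have $dn({\beta_H },v)/da=3n({\beta_H },v)/a$ and $e_z(a)=h({\beta_H },v)=-\partial_{\beta_H } n({\beta_H },v)={3\over {\beta_H }}n({\beta_H },v)$, so that \begin{equation} {d\over da}S=-{{\beta_H }\over a}\,{3n({\beta_H },v)\over {\beta_H }}+{3n({\beta_H },v)\over a}=0. \end{equation}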
Consequently, we have proved that as far as we neglect the entropy of the non-winding strings, the assumption of the normal energy loss is equivalent to constant entropy. Let us consider below what this fact means. The generation of non-winding strings can be rephrased as the unknotting of the winding strings along the expanding directions. It was the necessary condition for our universe to exit from the oscillation epoch and enter the three dimensionally expanding universe through Brandenberger and Vafa's instability. It is natural to suppose that the universe oscillates around the self-dual point until the condition for the unknotting along some three directions is met. On the other hand, what we proved above is that the process in such an epoch is adiabatic. In other words the process is reversible. Sometimes the winding strings decay by generating the zero modes as the universe expands, and sometimes the winding strings absorb energy from the zero modes as the contraction proceeds. These processes can repeat themselves because they are reversible. This situation is naturally identified with the oscillating epoch of Brandenberger and Vafa's scenario. Once the unknotting in some three directions proceeds far enough, entropy generation occurs, as we have shown above. This time we cannot go back because the entropy has been generated. Namely, the universe is destined to become the three dimensionally expanding universe. This is natural, as we naively expect that the birth of the three dimensional universe is an irreversible process. Consequently, we have recognized that the result of our thermal analysis is perfectly consistent with our cosmological scenario thus far. The epoch I ends when all the winding strings decay away, in other words when all the windings along the three directions unknot. The point $a=a_0$ at which the epoch I ends is determined by solving the equation $E(a)=e_z(a)+e_N(a)$. We can easily determine the functional form of $E(a)$ from (\ref{eq:normal}) as \begin{equation} E(a)=E_0+{1\over 3}(e_z(1)-e_z(a)) \label{eq:Ezero} \end{equation} where $E_0=E(1)$ is the initially given total energy. Using this, (\ref{eq:upfour}) and (\ref{eq:rokugo}), we obtain \begin{equation} a_0= \left( {E_0+{1\over 3}e_z(1) \over {4\over 3}e_z(1)+e_N(1) } \right)^{1/3} \cong \left( {3 E_0 \over 4 e_z(1)}\right)^{1/3}. \end{equation} \subsection{Epoch II} \label{subsec:epochII} We call the period $a\geq a_0$ epoch II. In epoch II there is no energy supply from the winding strings. The non-winding strings and the zero modes now compete for the limited amount of total energy $E(a)$. The three functions $E(a)$, $e_z(a)$ and $e_N(a)$ in this epoch are uniquely determined by the following three conditions: \begin{eqnarray} E(a)&=&e_z(a)+e_N(a),\\ \beta_z(e_z(a),v)&=&\beta_N(e_N(a),v),\\ {d\over da}E(a)&=&-{1\over a}e_z(a) . \end{eqnarray} The functions in the second line are the microcanonical temperatures defined as $\beta_z(\varepsilon,v)=\partial_\varepsilon \ln \omega_z(\varepsilon,v)$ and (\ref{eq:microM}). The second equation is an equi-temperature condition. As we stated before, the most probable values of the energies are determined as the meeting point of the competition between the zero modes and the non-winding strings. The functions $\beta_z$ and $\beta_N$ measure how strongly the respective modes compete for the limited total energy. These equations provide the rule for the competition between the non-winding strings and the zero modes. A schematic numerical integration of this system is sketched below.
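The following is a minimal numerical sketch of this system, not the computation behind the figures: the microcanonical temperatures are replaced by simple monotone placeholder functions, hypothetical stand-ins for curves like those of Fig.~\ref{fig8}, and the system is integrated in $a$ by alternating a root solve of the equi-temperature condition with an Euler step of the energy loss equation.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Placeholder microcanonical temperatures (hypothetical stand-ins for
# the actual beta_z and beta_N; both decrease with energy, in units
# where beta_H = 1).
def beta_z(e, v):
    return (3.0 * v / e) ** 0.25   # radiation-like: e ~ v * T^4

def beta_N(e, v):
    return 1.0 + 0.1 * v / e       # very large specific heat near beta_H

def split_energy(E, v):
    """Find (e_z, e_N) with e_z + e_N = E and beta_z = beta_N."""
    f = lambda e_z: beta_z(e_z, v) - beta_N(E - e_z, v)
    e_z = brentq(f, 1e-9 * E, (1.0 - 1e-9) * E)
    return e_z, E - e_z

# Euler integration of dE/da = -e_z/a from the end of epoch I
# (arbitrary units and initial data, for illustration only).
a, E, da = 1.0, 100.0, 1e-3
for _ in range(5000):
    e_z, e_N = split_energy(E, v=a**3)
    E -= (e_z / a) * da
    a += da
print(a, E, e_z, e_N)
\end{verbatim}
With the actual $\beta_z$ and $\beta_N$ of the previous sections in place of the stand-ins, this scheme determines $E(a)$, $e_z(a)$ and $e_N(a)$ throughout epoch II.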
Now the thermal history in epoch II can be understood qualitatively with the knowledge of $\beta_N$, which is given in Fig.\ref{fig8}. As $a$ grows, the temperature of the zero modes decreases due to the redshift. For the equi-temperature condition to be met, the non-winding strings must cool by the same amount. From Fig.\ref{fig8} we can readily see that the non-winding strings have a very large specific heat: a very small change of the temperature corresponds to a large energy change. Hence the decrease of the temperature signifies a violent loss of their energy. For this reason the string modes surviving in epoch II quickly decay away as the expansion proceeds. After the strings die away the universe settles into the usual radiation dominated universe with the usual redshift $E(a)=e_z(a)\propto 1/a$. \section{Discussion} \label{sec:discussion} In this paper we have performed a thermodynamical analysis of string cosmology focusing on Brandenberger and Vafa's scenario. Our analysis was based on the following assumptions. \begin{itemize} \item Local thermal equilibrium. \item The ideal gas approximation. \item The normal energy loss. \item No unexpected effects due to non-Einstein corrections. \end{itemize} As a result our analysis has presented the following thermal history of the string universe. In the very initial epoch, Brandenberger and Vafa's scenario suggests that the universe oscillates around the self-dual point of the target space duality. Our analysis has shown that the emission of the zero modes from the strings due to the cosmological expansion, and their absorption due to the contraction, in the initial universe are adiabatic processes. These could then be repeated, resulting in the oscillatory behavior. These observations clarify the nature of the oscillating period from the thermodynamical point of view. Once the accidental three dimensional expansion is triggered, non-winding strings are produced with entropy production. In this process the winding strings decay, producing the non-winding strings and the zero modes. During this period the temperature is fixed to be the Hagedorn temperature, realizing an inflation-like situation. This period ends when the winding modes are exhausted. At that time the temperature begins to fall due to the assumed redshift of the zero modes. The surviving string modes quickly die away by this cooling process, and there emerges the usual radiation dominated universe of the standard big bang cosmology. Brandenberger and Vafa also addressed the problem of how our space time dimensionality is determined to be four within their scenario. In the rest of this paper we consider this interesting problem and propose an alternative idea to determine the dimensionality of space time. We first review Brandenberger and Vafa's discussion shortly and then give our reconsideration. In their scenario, an expanding universe begins as a result of the accidental growth of the torus universe along some $d$ directions in nine dimensional space. In order for the accidental perturbations to grow, the strings should collide with each other frequently so as to diminish the number of their winding modes along the $d$ directions, because the winding modes slow down the expansion. If they intersect they will probably unwind. On the other hand, the string is a two dimensional entity if seen in space time.
They argued that in order for two strings to collide with finite probability, $d+1$ must be smaller than or equal to $2+2=4$; otherwise space time is too broad for two world sheets to have an intersection. This is their derivation of the relation $d+1\leq 4$. They said that the universe continues its trials and errors until it learns that only three or fewer spatial directions can become large. When the universe has finished this course, a $d\leq 3$ dimensional large universe will have been born. Thus, while they presented an intriguing idea for understanding why $d\leq 3$, they did not succeed in giving compelling reasons why $d$ should be $3$. Now we reconsider their argument. We cannot fully agree with their discussion to determine the dimensionality, for the following reasons. First, it is true that in the point particle case, especially in $\phi^4$ theory in $R^d$, the correlation functions are known \cite{GJ} to be represented in terms of random walks in $R^d$. From this, it is rigorously proven that the theory is free if $d+1\geq 5$. However, this is not the case for string theory. We have repeated the same analysis as for $\phi^4$ theory in the light cone string field theory \cite{KK}. We have found that the $\Phi^3$ interaction term prevents us from constructing a representation analogous to that of the $\phi^4$ theory. Second, it is true that low energy point particles cannot travel in the compact directions, since the unit momenta are too heavy in such directions. This effectively confines the point particles in the $d$ dimensional space. In this case it is only the $d$ directions that the point particles can utilize to collide. However, we are now concerned with the collision of strings long enough to wind the torus universe many times. One can imagine with ease that these strings need not be confined in the $d$ dimensional space and can move in any direction. Then what aspects are relevant to the string case? Let us recall what occurs to point particles in the expanding universe. The expansion makes the mean separation of particles larger. If the time scale of the expansion gets shorter than that of the interaction, the interaction is effectively frozen. This consideration gives rise to the following interesting possibility. Let us assume that the accidental expansion in the $d(\leq 9)$ directions is always too fast to keep their mutual intersections. If this is true, the expansion in the $d$ directions plays a negative role with respect to the unwinding of the winding strings. To put it another way, the winding strings can only use the remaining $9-d$ dimensions to unwind. Hence a greater value of $9-d$ is favored from the viewpoint of killing the winding modes along the $d$ directions. This implies an inequality opposite in direction to that of Brandenberger and Vafa. One may say that this implies that $d=1$ is preferred. However, the story is not so simple. As we have observed in the expression (\ref{eq:entn}), the entropy produced in the process of the unwinding is proportional to the expanding volume. This means that the universe prefers to expand in as many dimensions as possible. A larger value of $d$ is preferred by the second law of thermodynamics, the entropy increase. This effect will compete with the preceding one. Our idea is that the number of the expanding space dimensions is determined to be three as a result of this competition. This idea is not yet formulated on a rigorous ground at present.
However, we think it is one of the plausible candidates for a mechanism to fix the space time dimension. We are planning to perform a numerical simulation of the strings to obtain some indications for this idea. \acknowledgments The present author thanks all the researchers in the high energy theory group and the group of theoretical astronomy at Tokyo Metropolitan University for their continuous encouragement and valuable comments on this work. The author especially thanks Dr.~H.~Minakata, who from the beginning of this study continuously gave the author valuable advice on many aspects of the theory. This work was in part supported by the Fund for Special Research Projects at Tokyo Metropolitan University. \unletteredappendix{} In this appendix we examine what kind of correction arises when we include the full quantum statistics. A rough estimation has been done in \cite{BG}. We will consider this in our framework. As a result we can show that the correction in our case is very small. It is known \cite{Deo} that the thermodynamical functions in the canonical formalism and the microcanonical formalism are connected through the Laplace transformation. If we denote the single string canonical partition function for the non-winding strings as $\tilde f_N(\beta)$, this can be explicitly written, within the M-B approximation, as \begin{equation} \int_{-i\infty}^{i\infty}\bar d\beta \hbox{exp} \left( \tilde f_N(\beta) \right) \hbox{e}^{\beta \varepsilon}= \hat A(\varepsilon,v)\hbox{e}^{{\beta_H \varepsilon}}, \label{eq:trans} \end{equation} where $\bar d\beta=d\beta/2\pi i$. The multi string state density with all the statistical effects, $\hat\omega_N(\varepsilon)$, is given by a similar integral, but the integrand should be changed to \begin{equation} Z(\beta)=\prod_{r=1,r:odd}^\infty \hbox{exp}\left[{1\over r}\tilde f(r\beta)\right]. \end{equation} Namely, the integrand is a product of the functions $\hbox{exp}[{1\over r} \tilde f(r\beta)]$. The equation (\ref{eq:trans}) readily implies \begin{equation} \int_{-i\infty}^{i\infty}\bar d\beta \hbox{exp}\left({1\over r} \tilde f_N(r\beta)\right)\hbox{e}^{\beta \varepsilon}= {1\over r}\hat A(\varepsilon/r,v/r)\hbox{e}^{{\beta_H \varepsilon}/r}. \end{equation} Here we remind the readers of the well known formula that the inverse Laplace transform of the product of two functions is equal to the convolution of the inverse Laplace transforms of the two functions. Namely, we have \begin{eqnarray} \int_{-i\infty}^{i\infty}&\bar d&\beta \tilde \phi_1(\beta) \tilde \phi_2(\beta)\hbox{e}^{\beta \varepsilon}\nonumber\\ &=&\int_0^\infty d\varepsilon_1d\varepsilon_2 \phi_1(\varepsilon_1)\phi_2(\varepsilon_2) \delta(\varepsilon-\varepsilon_1-\varepsilon_2).\nonumber\\ \end{eqnarray} Now the repeated use of this formula leads us to the multi state density with the full quantum statistical corrections: \begin{eqnarray}\hat \omega_N(\varepsilon,v) =\lim_{k\rightarrow\infty}&\int_0^\infty & d\varepsilon_1d\varepsilon_3\cdots d\varepsilon_k \nonumber\\ &\times &\hat \alpha(\varepsilon_1,v)\hat \alpha(\varepsilon_3,v/3)\cdots \hat \alpha(\varepsilon_k,v/k)\nonumber\\ &\times& \delta\left(\varepsilon-\sum_{r=1}^k r \varepsilon_r\right),\nonumber\\ \label{eq:aas} \end{eqnarray} where we have set $\hat\alpha(\varepsilon,v) =\hat A(\varepsilon,v)\hbox{e}^{{\beta_H \varepsilon}}$. All the summation and product indices in this appendix run over odd integers only, unless otherwise stated. Now we are going to examine the size of the corrections to the M-B approximation using (\ref{eq:aas}).
Because $\hat\alpha(\varepsilon,v)$ is nonvanishing only for $\varepsilon\geq m_1$, the product in (\ref{eq:aas}) is actually a finite product. For the range $k {m_1}\leq \varepsilon <(k+1){m_1}$, with $k$ an integer, we consider the sequences of odd numbers such that $1\leq r_0<r_1<\cdots<r_l$ and $r_0+r_1+\cdots+r_l=k$. Only $r_0$ has the possibility of being unity. All the corrections acquired by $\hat\omega_N(\varepsilon,v)$ in this range are written in the form \begin{eqnarray} L=\int d\varepsilon_{r_0}\cdots d\varepsilon_{r_l}&\hat{\alpha}& (\varepsilon_{r_0},v/r_0) \cdots \hat{\alpha}(\varepsilon_{r_l},v/r_l)\nonumber\\ &\times&\delta(\varepsilon-(r_0\varepsilon_{r_0}+ \cdots+r_l\varepsilon_{r_l})). \nonumber\\ \label{eq:capl} \end{eqnarray} If we set ${v/\eta m_1^\eta}=w$ and $\varepsilon/{m_1}=x$, $\hat{\alpha}$ is rewritten as \begin{equation} \hat{\alpha}={1\over {m_1}}\hbox{exp}[h(x-w)+w+{\beta_H m_1} x]. \end{equation} \widetext Using this, and making the change of variables $\varepsilon_{r_j}={m_1} x_j$ (here $j$ runs over both even and odd integers), enables us to rewrite $L\hbox{e}^{-{\beta_H \varepsilon}}$ as \begin{equation} L\hbox{e}^{-{\beta_H \varepsilon}}={m_1}^l\int dx_0\cdots dx_l \hbox{exp} \left[ \sum_{j=0}^l ( h(x_j-w/r_j)+w/r_j-{\beta_H m_1}(r_j-1)x_j)\right] \delta\left(x-\sum_{j=0}^l r_j x_j\right). \end{equation} This is a typical form of the corrections added to $A(\varepsilon,v) =\hat{\alpha}(\varepsilon,v)\hbox{e}^{-{\beta_H \varepsilon}}$. \narrowtext This function is expressed as an integration of a product of functions of the form \begin{equation} \hbox{exp}\left[h(x_j-w/r_j)+w/r_j-{\beta_H m_1}(r_j-1)x_j \right]. \label{eq:exhw} \end{equation} First we consider the case $r_0=1$. We already know that this peaks at $x_0^{max}=w+c $ with the height ${1\over w^{1/\eta}}\hbox{exp}(w)$, where $c$ is the point such that $h^\prime(c)=0$. Next we consider the general case $r_j\not=1$. In this case the function is strongly damped by the exponential suppression $\hbox{exp}(-{\beta_H m_1}(r_j-1)x_j)$. Actually the position of the peak is now located around $x_j^{max}\sim 1$. This can be verified by examining where the derivative of the exponent of (\ref{eq:exhw}) changes its sign from positive to negative. The derivative in question is written as $ h^\prime(x_j-w/r_j)-{\beta_H m_1} (r_j-1)$. We recall here that we have information on the derivative $h^\prime(x)$, because it is essentially the microcanonical temperature examined before; see (\ref{eq:microM}). We know that \begin{eqnarray} h^\prime(x-w)=\partial_x \ln A&=&{m_1}\left( \beta_N(\varepsilon,v)-{\beta_H }\right)\nonumber\\ &=&{\beta_H m_1}(\beta_N(\varepsilon,v)/{\beta_H }-1).\nonumber\\ \end{eqnarray} {}From our previous numerical analysis (see Fig.\ref{fig8}) we know that $\beta_N/{\beta_H }$ is very close to unity except near $x\sim 1$. We thus observe that the above derivative is negative over the whole range except at the very edge $x_j\sim 1$. This means that the position of the peak is $x_j^{max}\sim 1$, and accordingly its height is negligibly small; it is no longer exponentially large like $\sim \hbox{exp}({v/\eta m_1^\eta})$. This is always true whenever (\ref{eq:capl}) contains a factor with $r_j\not=1$. Consequently we conclude that any corrections to the M-B approximation are very small.
\section{Alternate Pairwise Similarity-Based Ising Model} Consider a cluster numbered $c$, whose binary representation is $ c_l ... c_0 $, where $l = \lfloor\log_2 k_{\pi^*}\rfloor$. For each data point $u \in X$, and for each $i \in \{0, \dots, l\}$, we initially set the binary variable $q_{ui}$ to 0. If $\pi^*$ assigns $u$ to cluster $c$ and $c_i =1$, we set $q_{ui} = 1$. So essentially, for any point $u$, the bit sequence $q_{ul} ... q_{u0}$ gives the binary expansion of the number of the cluster it belongs to; there is no need for a one-hot encoding anymore. The sum of quadratic terms $\sum_{i=0}^{l} (q_{ui} - q_{vi})^2 $ is 0 if and only if $\pi^*$ assigns both $u, v \in X$ to the same cluster; otherwise it equals the Hamming distance between the two cluster codes, which is at least 1. Therefore the value \begin{equation} \sum_{u, v \in X} S_{uv} \sum_{i=0}^{l} (q_{ui} - q_{vi})^2 \label{eq:alt-obj} \end{equation} upper-bounds the between-cluster similarity $\sum_{u, v \in X:\, \pi^*(u) \neq \pi^*(v)} S_{uv}$ resulting from $\pi^*$. In particular, the value $S_{uv}$ is the fraction of the clusterings in $\Pi$ that assign $u$ and $v$ to the same cluster, and it is counted (with a weight of at least one) exactly when $\pi^*$ assigns them to different clusters. Hence, minimizing (\ref{eq:alt-obj}) seeks a consensus clustering $\pi^*$ that avoids separating points that the base clusterings tend to put together. In contrast to the one-hot formulation of Equation (\ref{eq:ising_cluster}), no penalty term is required: every assignment of the binary variables encodes some cluster number, so no feasibility constraints are needed. \section{Introduction} The increasingly challenging task of scaling the traditional Central Processing Unit (CPU) has led to the exploration of new computational platforms such as quantum computers, CMOS annealers, neuromorphic computers, and so on (see~\cite{coffrin2019evaluating} for a detailed exposition). Although their physical implementations differ significantly, adiabatic quantum computers, CMOS annealers, memristive circuits, and optical parametric oscillators all share Ising models as their core mathematical abstraction \cite{coffrin2019evaluating}. This has led to a growing interest in the formulation of computational problems as Ising models and in the empirical evaluation of these models on such novel computational platforms. This body of literature includes clustering and community detection \cite{kumar2018quantum,negre2019detecting,shaydulin2019network}, graph partitioning \cite{ushijima2017graph,ushijima2019multilevel}, and many NP-Complete problems such as covering, packing, and coloring \cite{lucas2014ising,liu2019modeling}. Consensus clustering is the problem of combining multiple `base clusterings' of the same set of data points into a single consolidated clustering \cite{ghosh2011cluster}. Consensus clustering is used to generate robust, stable, and more accurate clustering results compared to a single clustering approach \cite{ghosh2011cluster}.
The problem of consensus clustering has received significant attention over the last two decades \cite{ghosh2011cluster}, and was previously considered under different names (clustering aggregation, cluster ensembles, clustering combination) \cite{gionis2007clustering}. It has applications in different fields, including data mining, pattern recognition, and bioinformatics \cite{gionis2007clustering}, and a number of algorithmic approaches have been used to solve this problem. Consensus clustering is, in essence, a combinatorial optimization problem \cite{wu2014k}, and different instances of the problem have been proven to be NP-hard (e.g., \cite{filkov2004integrating,topchy2005clustering}). In this work, we investigate the use of special purpose hardware to solve the problem of consensus clustering. To this end, we formulate the problem of consensus clustering using Ising models and evaluate our approach on a specialized CMOS annealer. We make the following contributions: \begin{enumerate} \item We present and study two Ising models for consensus clustering that can be solved on a variety of special purpose hardware platforms. \item We demonstrate how our models are embedded on the Fujitsu Digital Annealer (DA), a quantum-inspired specialized CMOS hardware. \item We present an empirical evaluation based on seven benchmark datasets and show our approach outperforms existing techniques for consensus clustering. \end{enumerate} \section{Background} \subsection{Problem Definition}\label{sec:prob_def} Let $X=\{x_1, ..., x_n\}$ be a set of $n$ data points. A \emph{clustering} of $X$ is a process that partitions $X$ into subsets, referred to as \emph{clusters}, that together cover $X$. A clustering is represented by the mapping $\pi: X \to \{1, \dots, k_{\pi}\}$ where $k_{\pi}$ is the number of clusters produced by clustering $\pi$. Given $X$ and a set $\Pi = \{\pi_1, \dots, \pi_m\}$ of $m$ clusterings of the points in $X$, the \emph{Consensus Clustering Problem} is to find a new clustering, $\pi^*$, of the data $X$ that best summarizes the set of clusterings $\Pi$. The new clustering $\pi^*$ is referred to as the \emph{consensus} clustering. Due to the ambiguity in the definition of an optimal consensus clustering, several approaches have been proposed to measure the solution quality of consensus clustering algorithms \cite{ghosh2011cluster}. In this work, we focus on the approach of determining a consensus clustering that agrees the most with the original clusterings. As an objective measure of this agreement, we use the mean Adjusted Rand Index (ARI) metric (Equation \ref{eq:meanARI}). However, we also consider clustering quality, measured by the mean Silhouette Coefficient \cite{rousseeuw1987silhouettes}, and clustering accuracy based on true labels. In Section \ref{sec:emp-evaluation} these evaluation criteria are discussed in more detail. \subsection{Existing Criteria and Methods}\label{sec:existing_approaches} Various criteria or objectives have been proposed for the Consensus Clustering Problem. In this work we mainly focus on two well-studied criteria, one based on the pairwise similarity of the data points, and the other based on the different assignments of the base clusterings. Other well-known criteria and objectives for the Consensus Clustering Problem can be found in the excellent surveys of \cite{ghosh2011cluster,vega2011survey}, with most defining NP-Hard optimization problems.
\paragraph{Pairwise Similarity Approaches:} In this approach, a similarity matrix $S$ is constructed such that each entry in $S$ represents the fraction of clusterings in which two data points belong to the same cluster \cite{nguyen2007consensus}. In particular, \begin{equation} S_{uv} = \frac{1}{m}\sum_{i=1}^m \mathbbm{1}(\pi_i(u) = \pi_i(v)),\label{eq:s_ij} \end{equation} with $ \mathbbm{1}$ being the indicator function. The value $S_{uv}$ lies between 0 and 1, and is equal to 1 if all the base clusterings assign points $u$ and $v$ to the same cluster. Once the pairwise similarity matrix is constructed, one can use any similarity-based clustering algorithm on $S$ to find a consensus clustering with a fixed number of clusters, $K$. For example, \cite{li2010combining} proposed to find a consensus clustering $\pi^*$ with exactly $K$ clusters that minimizes the within-cluster dissimilarity: \begin{equation} \min \sum_{\substack{u, v \in X: \\ \pi^*(u) = \pi^*(v)}} (1 - S_{uv}).\label{eq:sim_based} \end{equation} \paragraph{Partition Difference Approaches: } An alternative formulation is based on the different assignments between clusterings. Consider two data points $u, v \in X$, and two clusterings $\pi_i, \pi_j \in \Pi$. The following binary indicator tests whether $\pi_i$ and $\pi_j$ disagree on the clustering of $u$ and $v$: \begin{equation} d_{u,v}(\pi_i, \pi_j) = \begin{cases} 1,& \text{if } \pi_i(u) = \pi_i(v) \text{ and } \pi_j(u) \neq \pi_j(v)\\ 1,& \text{if } \pi_i(u) \neq \pi_i(v) \text{ and } \pi_j(u) = \pi_j(v)\\ 0,& \text{otherwise}. \end{cases} \end{equation} The distance between two clusterings is then defined based on the number of pairwise disagreements: \begin{equation} d(\pi_i, \pi_j) = \frac{1}{2}\sum_{u, v \in X } d_{u,v}(\pi_i, \pi_j), \end{equation} where the $\frac{1}{2}$ factor takes care of double counting and can be ignored. This measure is defined as the number of pairs of points that are in the same cluster in one clustering and in different clusters in the other, essentially considering the (unadjusted) Rand index \cite{ghosh2011cluster}. Given this measure, a common objective is to find a consensus clustering $\pi^*$ with respect to the following optimization problem: \begin{equation} \min \sum_{i=1}^m d(\pi_i, \pi^*).\label{eq:second_obj} \end{equation} \paragraph{Methods and Algorithms:} The two criteria given above define fundamentally different optimization problems, and thus different algorithms have been proposed. One key difference between the two approaches lies in determining the number of clusters $k_{\pi^*}$ in $\pi^*$. The pairwise similarity approaches (e.g., Equation (\ref{eq:sim_based})) require an input parameter $K$ that fixes the number of clusters in $\pi^*$, whereas the partition difference approaches such as Equation (\ref{eq:second_obj}) do not have this requirement, and determining $k_{\pi^*}$ is part of the objective of the problem. For example, Equation (\ref{eq:sim_based}) attains its minimum value when $k_{\pi^*}=n$, whereas this does not hold for Equation (\ref{eq:second_obj}). The Cluster-based Similarity Partitioning Algorithm (CSPA) is proposed in \cite{strehl2002cluster} for solving the pairwise similarity based approach. The CSPA constructs a similarity-based graph with each edge having a weight proportional to the similarity given by $S$.
Determining the consensus clustering with exactly $K$ clusters is treated as a $K$-way graph partitioning problem, which is solved by methods such as METIS \cite{karypis1998multilevelk}. In \cite{nguyen2007consensus}, the authors experiment with different clustering algorithms, including hierarchical agglomerative clustering (HAC) and iterative techniques that start from an initial partition and iteratively reassign points to clusters based on their pairwise similarities. For the partition difference approach, Li et al. \cite{li2007solving} proposed to solve Equation (\ref{eq:second_obj}) using nonnegative matrix factorization (NMF). Gionis et al. \cite{gionis2007clustering} proposed several algorithms that make use of the connection between Equation (\ref{eq:second_obj}) and the problem of correlation clustering. These three approaches (CSPA, HAC, and NMF) are considered as baselines in our empirical evaluation (Section \ref{sec:emp-evaluation}). \subsection{Ising Models} Ising models are graphical models that include a set of nodes representing spin variables and a set of edges corresponding to the interactions between the spins. The energy level of an Ising model, which we aim to minimize, is given by: \begin{equation} E(\sigma) = \sum_{(i,j) \in \mathcal{E}} J_{i,j} \sigma_i\sigma_j + \sum_{i \in \mathcal{N}} h_i \sigma_i, \end{equation} where the variables $\sigma_i \in \{-1,1\}$ are the spin variables and the couplers, $J_{i,j}$, represent the interaction between the spins. A Quadratic Unconstrained Binary Optimization (QUBO) model includes binary variables $q_i \in \{0,1\}$ and couplers, $c_{i,j}$. The objective to minimize is: \begin{equation} E(\textbf{q}) = \sum_{i = 1} ^ n c_iq_i + \sum_{i<j} c_{i,j}q_{i}q_{j}. \end{equation} QUBO models can be transformed to Ising models by setting $\sigma_i = 2q_i-1$~\cite{bian2010ising}. \section{Ising Approach for Consensus Clustering on Specialized Hardware}\label{sec:approach} In this section, we present our approach for solving consensus clustering on specialized hardware using Ising models. We present two Ising models that correspond to the two approaches in Section \ref{sec:existing_approaches}. We then demonstrate how they can be solved on the Fujitsu Digital Annealer (DA), a specialized CMOS hardware. \subsection{Pairwise Similarity-based Ising Model} For each data point $u \in X$, let $q_{uc} \in \{0, 1\}$ be the binary variable such that $q_{uc} = 1$ if $\pi^*$ assigns $u$ to cluster $c$, and 0 otherwise. Then the constraints \begin{equation} \sum_{c=1}^{K}q_{uc} = 1, \quad \text{for each } u \in X \label{eq:one_hot} \end{equation} ensure that $\pi^*$ assigns each point to exactly one cluster. Subject to the constraints (\ref{eq:one_hot}), the sum of quadratic terms $\sum_{c=1}^{K} q_{uc} q_{vc} $ is 1 if $\pi^*$ assigns both $u, v \in X$ to the same cluster, and 0 if they are assigned to different clusters. Therefore the value \begin{equation} \sum_{\substack{u, v \in X: \\ \pi^*(u) = \pi^*(v)}} (1 - S_{uv}) = \sum_{u, v \in X} (1-S_{uv}) \sum_{c=1}^{K} q_{uc} q_{vc} \label{eq:obj} \end{equation} represents the sum of within-cluster dissimilarities in $\pi^*$: $(1-S_{uv})$ is the fraction of clusterings in $\Pi$ that assign $u$ and $v$ to different clusters while $\pi^*$ assigns them to the same cluster. We therefore reformulate Equation (\ref{eq:sim_based}) as the QUBO \begin{equation} \begin{aligned} \min \sum_{u, v \in X} (1-S_{uv}) \sum_{c=1}^{K} q_{uc} q_{vc} + \sum_{u \in X} A (\sum_{c=1}^{K} q_{uc} -1)^2,
\label{eq:ising_cluster} \end{aligned} \end{equation} where the term $ \sum_{u \in X} A (\sum_{c=1}^{K} q_{uc} -1)^2$ is added to the objective function to ensure that the constraints (\ref{eq:one_hot}) are satisfied. $A$ is a positive constant that penalizes the objective for violations of the constraints (\ref{eq:one_hot}). One can show that if $A \geq n$, the optimal solution of the QUBO in Equation (\ref{eq:ising_cluster}) does not violate the constraints (\ref{eq:one_hot}); the proof is very similar to the proof of Theorem \ref{thm:thm1} and to a similar result in \cite{kumar2018quantum}. A sketch of the construction of this QUBO follows.
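The sketch below is illustrative only (it is not the code used in our experiments): it assembles the similarity matrix $S$ of Equation (\ref{eq:s_ij}) and the QUBO of Equation (\ref{eq:ising_cluster}) as a dense upper-triangular matrix over the $nK$ variables $q_{uc}$; all function and variable names are ours.
\begin{verbatim}
import numpy as np

def similarity_matrix(partitions):
    """S[u, v] = fraction of base clusterings that co-cluster u and v."""
    P = np.asarray(partitions)        # shape (m, n): m clusterings of n points
    m, n = P.shape
    S = np.zeros((n, n))
    for labels in P:
        S += (labels[:, None] == labels[None, :])
    return S / m

def pairwise_qubo(S, K, A):
    """QUBO for Eq. (eq:ising_cluster); variable (u, c) has index u*K + c."""
    n = S.shape[0]
    Q = np.zeros((n * K, n * K))
    for u in range(n):
        for v in range(u + 1, n):
            for c in range(K):        # within-cluster dissimilarity term
                Q[u * K + c, v * K + c] += 1.0 - S[u, v]
    for u in range(n):                # penalty A*(sum_c q_uc - 1)^2,
        for c in range(K):            # dropping the constant A:
            Q[u * K + c, u * K + c] += -A       # A*q^2 - 2*A*q = -A*q
            for cp in range(c + 1, K):
                Q[u * K + c, u * K + cp] += 2.0 * A
    return Q
\end{verbatim}
Any QUBO sampler (or exhaustive search for very small $n$) can then be applied to $Q$; a feasible solution is decoded by reading off, for each $u$, the unique $c$ with $q_{uc}=1$.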
\subsection{Partition Difference Ising Model} The partition difference approach essentially considers the (unadjusted) Rand Index \cite{ghosh2011cluster} and can therefore be expected to perform better. The \emph{Correlation Clustering Problem} is another important problem in data mining. Gionis et al. \cite{gionis2007clustering} showed that Equation (\ref{eq:second_obj}) is a restricted case of the Correlation Clustering Problem, and that Equation (\ref{eq:second_obj}) can be expressed in the following equivalent form of the Correlation Clustering Problem: \begin{equation} \min_{\pi^*} \ \sum_{\substack{u, v \in X: \\ \pi^*(u) = \pi^*(v)}} (1 - S_{uv}) + \sum_{\substack{u, v \in X: \\ \pi^*(u) \neq \pi^*(v)}} S_{uv}.\label{eq:corr_clustering} \end{equation} We take advantage of this equivalence to model Equation (\ref{eq:second_obj}) as a QUBO. In a similar fashion to the QUBO formulated in the preceding subsection, the terms \begin{equation} \sum_{\substack{u, v \in X: \\ \pi^*(u) \neq \pi^*(v)}} S_{uv} = \sum_{u, v \in X} S_{uv} \sum_{1 \leq c \neq l \leq K} q_{uc} q_{vl} \label{eq:obj_corr} \end{equation} measure the similarity between points in \emph{different} clusters, where $K$ represents an \textit{upper bound} on the number of clusters in $\pi^*$. This leads to minimizing the following QUBO: \begin{equation} \begin{aligned} \sum_{u, v \in X} (1-S_{uv}) \sum_{c=1}^{K} q_{uc} q_{vc} + \sum_{u, v \in X} S_{uv} \sum_{1 \leq c \neq l \leq K} q_{uc} q_{vl}+ \sum_{u \in X} B (\sum_{c=1}^{K} q_{uc} -1)^2. \label{eq:ising_correlation} \end{aligned} \end{equation} Intuitively, Equation (\ref{eq:ising_correlation}) measures the disagreement between the consensus clustering and the clusterings in $\Pi$. This disagreement is due to points that are clustered together in the consensus clustering but not in the clusterings in $\Pi$; however, it is also due to points that are assigned to different clusters in the consensus partition but to the same cluster in some of the partitions in $\Pi$. Formally, we can show that Equation (\ref{eq:ising_correlation}) is equivalent to the correlation clustering formulation in Equation (\ref{eq:corr_clustering}) when setting $B \ge n$. Consistent with other methods that optimize Equation (\ref{eq:second_obj}) (e.g., \cite{li2007solving}), our approach takes as an input $K$, an \textit{upper bound} on the number of clusters in $\pi^*$; the obtained solution, however, can use a smaller number of clusters. In our proof, we assume $K$ is large enough to represent the optimal solution, i.e., greater than the number of clusters in optimal solutions to the correlation clustering problem in Equation (\ref{eq:corr_clustering}). Relative to the sketch above, only the coupler construction changes, as illustrated below.
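Again as an illustration only, reusing \texttt{pairwise\_qubo} and the variable indexing from the previous sketch, the additional between-cluster similarity term of Equation (\ref{eq:obj_corr}) is added as follows:
\begin{verbatim}
def partition_difference_qubo(S, K, B):
    """QUBO for Eq. (eq:ising_correlation): pairwise term plus
    between-cluster similarity, with one-hot penalty weight B."""
    n = S.shape[0]
    Q = pairwise_qubo(S, K, B)   # (1 - S) same-cluster term + penalty
    for u in range(n):
        for v in range(u + 1, n):
            for c in range(K):
                for l in range(K):
                    if c != l:   # S_uv counted when u, v are separated
                        Q[u * K + c, v * K + l] += S[u, v]
    return Q
\end{verbatim}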
\begin{theorem} Let $\bar{\textbf{q}}$ be the optimal solution to the QUBO given by Equation (\ref{eq:ising_correlation}). If $B \ge n$, then for a large enough $K \leq n$, an optimal solution to the Correlation Clustering Problem in Equation~(\ref{eq:corr_clustering}), $\bar{\pi}$, can be efficiently evaluated from $\bar{\textbf{q}}$. \label{thm:thm1} \end{theorem} \begin{proof} First we show that the optimal solution to the QUBO in Equation (\ref{eq:ising_correlation}) satisfies the one-hot encoding (${\sum_c q_{uc} = 1}$). This implies that given $\bar{\textbf{q}}$ we can create a valid clustering $\bar{\pi}$. Note that the optimal solution will never have ${\sum_c q_{uc} > 1}$, as this can only increase the cost. The only case in which an optimal solution would have ${\sum_c q_{uc} < 1}$ is when the cost of assigning a point to a cluster is higher than the cost of not assigning it to a cluster (i.e., the penalty $B$). Assigning a point $u$ to a cluster incurs a cost of $(1 - S_{uv})$ for each point $v$ in the same cluster and $S_{uv}$ for each point $v$ that is not in the cluster. As there are $n-1$ additional points in total, and both $(1 - S_{uv})$ and $S_{uv}$ are less than or equal to one (Equation (\ref{eq:s_ij})), setting $B\ge n$ guarantees that the optimal solution satisfies the one-hot encoding. Now assume that $\bar{\pi}$ is not optimal, i.e., there exists an optimal solution $\hat{\pi}$ to Equation (\ref{eq:corr_clustering}) that has a strictly lower cost than $\bar{\pi}$. Let $\hat{\textbf{q}}$ be the corresponding QUBO solution to $\hat{\pi}$, such that $\hat{q}_{uk} = 1$ if and only if $\hat{\pi}(u) = k$. This is possible because $K$ is large enough to accommodate all clusters in $\hat{\pi}$. As both $\bar{\textbf{q}}$ and $\hat{\textbf{q}}$ satisfy the one-hot encoding (the penalty terms are zero), their costs are identical to the costs of $\bar{\pi}$ and $\hat{\pi}$, respectively. Since the cost of $\hat{\pi}$ is strictly lower than that of $\bar{\pi}$, and the cost of $\bar{\textbf{q}}$ is lower than or equal to that of $\hat{\textbf{q}}$, we have a contradiction. \qed \end{proof} The decoding step used in the proof (reading a clustering off a feasible QUBO solution) is sketched below.
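A minimal illustrative sketch of this decoding (names are ours, indexing as in the previous sketches):
\begin{verbatim}
def decode(q, n, K):
    """Map a feasible QUBO solution q (0/1 vector over n*K variables,
    one-hot per point) to a clustering pi*: X -> {0, ..., K-1}."""
    labels = []
    for u in range(n):
        bits = [q[u * K + c] for c in range(K)]
        assert sum(bits) == 1, "solution violates one-hot constraints"
        labels.append(bits.index(1))
    return labels
\end{verbatim}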
\subsection{Solving Consensus Clustering on the Fujitsu Digital Annealer}\label{sec:da} The Fujitsu Digital Annealer (DA) is a recent CMOS hardware for solving combinatorial optimization problems formulated as QUBO \cite{aramon2019physics,daweb}. We use the second generation of the DA, which is capable of representing problems with up to 8192 variables with up to 64 bits of precision. The DA has previously been used to solve problems in areas such as communication \cite{naghsh2019digitally} and signal processing \cite{rahman2019ising}. The DA algorithm \cite{aramon2019physics} is based on simulated annealing (SA) \cite{kirkpatrick1983optimization}, while taking advantage of the massive parallelization provided by the CMOS hardware \cite{aramon2019physics}. It has several key differences compared to SA, most notably a \textit{parallel-trial} scheme, in which each Monte Carlo step considers all possible one-bit flips in parallel, and a \textit{dynamic offset} mechanism, which increases the energy of a state to escape local minima \cite{aramon2019physics}. \subsubsection{Encoding Consensus Clustering on the DA} When embedding our Ising models on the DA, we need to consider the hardware specification and adapt the representation of our models accordingly. Due to the hardware precision limit, we need to embed the couplers and biases on an integer scale with limited granularity. In our experiments, we normalize the pairwise costs $S_{uv}$ to the discrete range $[0, 100]$, $D_{uv} = \left[{S_{uv}\cdot 100}\right]$, and accordingly $(1-S_{uv})$ is replaced by $(100 - D_{uv})$. Note that the theoretical bound $B=n$ is adjusted accordingly to $B=100\cdot n$. The theoretical bound guarantees that all constraints are satisfied if problems are solved to optimality. In practice, the DA does not necessarily solve problems to optimality, and due to the nature of annealing-based algorithms, using very high weights for constraints is likely to create deep local minima and result in solutions that may satisfy the constraints but are often of low quality. This is especially relevant to our pairwise similarity model, where the bound tends to become loose as the number of clusters grows. In our experiments, we use constant, reasonably high weights that were empirically found to perform well across datasets. For the pairwise similarity-based model (Equation (\ref{eq:ising_cluster})) we use $A = 2^{14}$, and for the partition difference model (Equation (\ref{eq:ising_correlation})) we use $B= 2^{15}$. While we would expect to get better performance by tuning the weights per dataset, our goal is to demonstrate the performance of our approach in a general setting. Automatic tuning of the weight values for the DA is a direction for future work. Unlike many of the existing consensus clustering algorithms that run until convergence, our method runs for a given time limit (defined by the number of runs and iterations) and returns the best solution encountered. In our experiments, we arbitrarily choose \emph{three seconds} as a (reasonably short) time limit for solving our Ising models. As with the weights, we employ a single temperature schedule across all datasets, and \emph{do not} tune it per dataset. \section{Empirical Evaluation} \label{sec:emp-evaluation} We perform an extensive empirical evaluation of our approach using a set of seven benchmark datasets. We first describe how we generate the set of clusterings, $\Pi$. Next, we describe the baselines, the evaluation metrics, and the datasets. \subsubsection{Generating Partitions} We follow \cite{fred2005combining} and generate a set of clusterings by randomizing the parameters of the K-Means algorithm, namely the number of clusters $K$ and the initial cluster centers. In this work, we only use labelled datasets for which we know the number of clusters, $\widetilde{K}$, based on the true labels. To generate the base clusterings we run the K-Means algorithm with random cluster centers, choosing $K$ uniformly at random from the range $[2, 3\widetilde{K}]$. For each dataset, we generate 100 clusterings to serve as the clustering set $\Pi$; a sketch of this procedure, together with the mean ARI evaluation of Equation (\ref{eq:meanARI}), is given below.
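The sketch below is illustrative (the exact settings of our experiments are not implied); it relies on scikit-learn's \texttt{KMeans} and \texttt{adjusted\_rand\_score}:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def generate_partitions(X, K_true, m=100, rng=None):
    """m base clusterings from K-Means with random K and random centers."""
    rng = np.random.default_rng(rng)
    partitions = []
    for _ in range(m):
        K = int(rng.integers(2, 3 * K_true + 1))  # K ~ Uniform[2, 3*K_true]
        km = KMeans(n_clusters=K, init="random", n_init=1,
                    random_state=int(rng.integers(2**31)))
        partitions.append(km.fit_predict(X))
    return partitions

def mean_ari(partitions, consensus):
    """Mean ARI between the consensus clustering and the base clusterings."""
    return np.mean([adjusted_rand_score(p, consensus) for p in partitions])
\end{verbatim}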
\subsubsection{Baseline Algorithms} We compare our pairwise similarity-based Ising model, referred to as DA-Sm, and our correlation clustering Ising model, referred to as DA-Cr, to three popular algorithms for consensus clustering: \begin{enumerate} \item The cluster-based similarity partitioning algorithm (CSPA) \cite{strehl2002cluster}, solved as a $K$-way graph partitioning problem using METIS \cite{karypis1998multilevelk}. \item The nonnegative matrix factorization (NMF) formulation in \cite{li2007solving}. \item Hierarchical agglomerative clustering (HAC), which starts with all points in singleton clusters and repeatedly merges the two clusters with the largest average similarity based on $S$, until reaching the desired number of clusters \cite{nguyen2007consensus}. \end{enumerate} \subsubsection{Evaluation} We evaluate the different methods using three measures. Our main concern in this work is the level of agreement between the consensus clustering and the set of input clusterings. To this end, one requires a metric measuring the similarity of two clusterings that can be used to measure how close the consensus clustering $\pi^*$ is to each base clustering $\pi_i \in \Pi$. Two popularly used metrics for measuring the similarity between two clusterings are the Rand Index (RI) and the Adjusted Rand Index (ARI)~\cite{hubert1985comparing}. The Rand Index of two clusterings lies between 0 and 1, obtaining the value 1 when both clusterings perfectly agree. Likewise, the maximum score of the ARI, which is the corrected-for-chance version of the RI, is achieved when both clusterings perfectly agree. $ARI(\pi_i, \pi^*)$ can thus be viewed as a measure of \emph{agreement} between the consensus clustering $\pi^*$ and a base clustering $\pi_i \in \Pi$. We use the mean ARI as the main evaluation criterion: \begin{equation} \label{eq:meanARI} \frac{1}{m}\sum_{i=1}^m ARI(\pi_i, \pi^*) \end{equation} We also evaluate $\pi^*$ based on clustering quality and accuracy. For clustering quality, we use the mean Silhouette Coefficient \cite{rousseeuw1987silhouettes} of all data points (computed using the Euclidean distance between the data points). For clustering accuracy, we compute the ARI between the consensus partition $\pi^*$ and the true labels. \subsubsection{Benchmark Datasets} We run experiments on seven datasets with different characteristics: \textit{Iris, Optdigits, Pendigits, Seeds, Wine} from the UCI repository~\cite{Dua:2019}, as well as \textit{Protein} \cite{xing2003distance} and \textit{MNIST}.\footnote{http://yann.lecun.com/exdb/mnist/} \textit{Optdigits-389} is a randomly sampled subset of Optdigits containing only the digits $\{3,8,9\}$. Similarly, \textit{MNIST-3689} and \textit{Pendigits-149} are subsets of the MNIST and Pendigits datasets. Table \ref{tab:datasets} provides statistics on each dataset, with the coefficient of variation (CV) \cite{degroot2012probability} describing the degree of class imbalance: zero indicates perfectly balanced classes, while higher values indicate a higher degree of class imbalance. \begin{table}[h!] \centering \caption{Datasets}\label{tab:datasets} \begin{tabular}{lcccc} \toprule Dataset & \# Instances & \# Features & \# Clusters & CV \\ \midrule Iris & 150 & 4 & 3 & 0.000 \\ MNIST-3689 & 389 & 784 & 4 & 0.015 \\ Optdigits-389 & 537 & 64 & 3 & 0.021 \\ Pendigits-149 & 532 & 16 & 3 & 0.059 \\ Protein & 116 & 20 & 6 & 0.301 \\ Seeds & 210 & 7 & 3 & 0.000 \\ Wine & 178 & 13 & 3 & 0.158 \\ \bottomrule \end{tabular} \end{table} \subsection{Results} We compare the baseline algorithms to the two Ising models in Section \ref{sec:approach}, solved using the Fujitsu Digital Annealer described in Section \ref{sec:da}. Clustering is typically an unsupervised task, and the number of clusters is unknown. The number of clusters in the true labels, $\widetilde{K}$, is not available in real scenarios. Furthermore, $\widetilde{K}$ is not necessarily the best value for clustering tasks (e.g., in many cases it is better to have smaller clusters that are more pure). We therefore test the algorithms in two configurations: with the number of clusters set to $\widetilde{K}$, as in the true labels, and with the number of clusters set to $2\widetilde{K}$.
\begin{table}[b] \centering \caption{Consensus Performance Measured by Mean ARI Across Partitions}\label{tab:results_consensus} \begin{tabular}{l|ccccc|ccccc} \toprule & \multicolumn{5}{c|}{$\widetilde{K}$ clusters} & \multicolumn{5}{c}{$2\widetilde{K}$ clusters} \\ Dataset & CSPA & NMF & HAC & DA-Sm & DA-Cr & CSPA & NMF & HAC & DA-Sm & DA-Cr \\ \midrule Iris & 0.555 & \textbf{0.618} & \textbf{0.618} & \textbf{0.619} & \textbf{0.621} & 0.536 & 0.614 & 0.627 & 0.608 & \textbf{0.642} \\ MNIST & 0.459 & 0.449 & 0.469 & \textbf{0.474} & \textbf{0.474} & 0.456 & 0.511 & 0.517 & 0.490 & \textbf{0.521} \\ Optdig. & 0.528 & \textbf{0.550} & 0.541 & \textbf{0.550} & \textbf{0.551} & 0.492 & 0.596 & 0.608 & 0.576 & \textbf{0.612} \\ Pendig. & 0.546 & 0.546 & 0.507 & \textbf{0.555} & \textbf{0.555} & 0.531 & 0.629 & \textbf{0.642} & 0.605 & \textbf{0.644} \\ Protein & 0.344 & 0.393 & 0.379 & 0.390 & \textbf{0.405} & 0.324 & 0.419 & \textbf{0.423} & 0.378 & 0.415 \\ Seeds & 0.558 & \textbf{0.577} & 0.534 & \textbf{0.575} & \textbf{0.577} & 0.484 & 0.602 & 0.602 & 0.580 & \textbf{0.612} \\ Wine & 0.481 & \textbf{0.536} & 0.535 & \textbf{0.537} & \textbf{0.538} & 0.502 & \textbf{0.641} & \textbf{0.641} & \textbf{0.641} & \textbf{0.643} \\ \midrule \# Best & 0 & 4 & 1 & 6 & \textbf{7} & 0 & 1 & 3 & 1 & \textbf{6} \\ \bottomrule \end{tabular} \end{table} \subsubsection{Consensus Criteria} Table \ref{tab:results_consensus} shows the mean ARI between $\pi^*$ and the clusterings in $\Pi$. To avoid bias due to very minor differences, we consider all the methods that achieved a mean ARI within a threshold of 0.0025 of the best method to be equivalent, and highlight them in bold. We also summarize the number of times each method was considered best across the different datasets. The results show that DA-Cr is the best performing method for both $\widetilde{K}$ and $2\widetilde{K}$ clusters. The results of DA-Sm are less consistent: DA-Sm and NMF perform well for $\widetilde{K}$ clusters, while HAC performs better for $2\widetilde{K}$ clusters. \subsubsection{Clustering Quality} Table \ref{tab:results_silhouette} reports the mean Silhouette Coefficient of all data points. Again, DA-Cr is the best performing method across datasets, followed by HAC. NMF appears to be equivalent to HAC for $2\widetilde{K}$. \begin{table}[h!] \centering \caption{Clustering Quality Measured by Silhouette}\label{tab:results_silhouette} \begin{tabular}{l|ccccc|ccccc} \toprule & \multicolumn{5}{c|}{$\widetilde{K}$ clusters} & \multicolumn{5}{c}{$2\widetilde{K}$ clusters} \\ Dataset & CSPA & NMF & HAC & DA-Sm & DA-Cr & CSPA & NMF & HAC & DA-Sm & DA-Cr \\ \midrule Iris & 0.519 & \textbf{0.555} & \textbf{0.555} & 0.551 & \textbf{0.553} & 0.289 & 0.366 & \textbf{0.371} & 0.343 & \textbf{0.373} \\ MNIST & 0.075 & 0.072 & \textbf{0.078} & \textbf{0.079} & \textbf{0.078} & 0.069 & \textbf{0.082} & 0.074 & 0.074 & \textbf{0.082} \\ Optdig. & 0.127 & 0.120 & 0.120 & \textbf{0.130} & \textbf{0.130} & 0.088 & \textbf{0.119} & \textbf{0.119} & 0.112 & \textbf{0.121} \\
& 0.307 & 0.307 & \textbf{0.315} & 0.310 & 0.310 & 0.305 & 0.332 & \textbf{0.375} & 0.368 & 0.364 \\ Protein & 0.074 & \textbf{0.106} & 0.095 & 0.094 & \textbf{0.104} & 0.068 & 0.111 & 0.115 & \textbf{0.119} & \textbf{0.118} \\ Seeds & 0.461 & 0.468 & 0.410 & 0.469 & \textbf{0.472} & 0.275 & \textbf{0.343} & 0.304 & \textbf{0.344} & 0.302 \\ Wine & 0.453 & 0.542 & \textbf{0.571} & 0.547 & 0.545 & 0.452 & \textbf{0.543} & \textbf{0.541} & 0.539 & \textbf{0.542} \\ \midrule \# Best & 0 & 2 & 4 & 2 & \textbf{5} & 0 & 4 & 4 & 2 & \textbf{5} \\ \bottomrule \end{tabular} \end{table} \subsubsection{Clustering Accuracy} Table \ref{tab:results_accuracy} shows the clustering accuracy, measured by the ARI between $\pi^*$ and the true labels. For $\widetilde{K}$ clusters, we find DA-Sm to be the best-performing method (followed by DA-Cr). For $2\widetilde{K}$ clusters, DA-Cr outperforms the other methods. Interestingly, there is no clear winner between CSPA, NMF, and HAC. \begin{table}[h!] \centering \caption{Clustering Accuracy Measured by ARI Compared to True Labels}\label{tab:results_accuracy} \begin{tabular}{l|ccccc|ccccc} \toprule & \multicolumn{5}{c|}{$\widetilde{K}$ clusters} & \multicolumn{5}{c}{$2\widetilde{K}$ clusters} \\ Dataset & CSPA & NMF & HAC & DA-Sm & DA-Cr & CSPA & NMF & HAC & DA-Sm & DA-Cr \\ \midrule Iris & \textbf{0.868} & 0.746 & 0.746 & 0.716 & 0.730 & 0.438 & 0.463 & 0.447 & 0.433 & \textbf{0.521} \\ MNIST & 0.684 & 0.518 & 0.704 & \textbf{0.730} & 0.720 & 0.412 & 0.484 & \textbf{0.545} & 0.440 & 0.484 \\ Optdig. & 0.712 & 0.642 & 0.675 & 0.734 & \textbf{0.738} & 0.380 & 0.513 & \textbf{0.630} & 0.481 & 0.623 \\ Pendig. & 0.674 & \textbf{0.679} & 0.499 & 0.668 & 0.668 & 0.398 & 0.614 & 0.625 & 0.490 & \textbf{0.639} \\ Protein & 0.365 & 0.298 & 0.363 & 0.349 & \textbf{0.376} & 0.237 & 0.332 & 0.301 & 0.308 & \textbf{0.345} \\ Seeds & 0.705 & 0.710 & 0.704 & \textbf{0.764} & 0.717 & 0.424 & 0.583 & 0.573 & 0.500 & \textbf{0.619} \\ Wine & 0.324 & 0.395 & 0.371 & \textbf{0.402} & 0.398 & 0.231 & 0.245 & 0.240 & \textbf{0.248} & 0.238 \\ \midrule \# Best & 1 & 1 & 0 & \textbf{3} & 2 & 0 & 0 & 2 & 1 & \textbf{4} \\ \bottomrule \end{tabular} \end{table} \subsubsection{Experiments with higher $K$}\label{sec:larger_k} In partition difference approaches, increasing $K$ does not necessarily lead to a $\pi^*$ with more clusters. Instead, $K$ serves as an upper bound, and additional clusters are used only if they reduce the objective. To demonstrate how the different algorithms handle different values of $K$, Table \ref{tab:iris_k} shows the consensus criteria and the actual number of clusters in $\pi^*$ for different values of $K$ (note that $\widetilde{K}=3$ for Iris). The results show that the performance of the pairwise similarity methods (CSPA, HAC, DA-Sm) degrades as $K$ increases. This is associated with the fact that the actual number of clusters in $\pi^*$ equals $K$, which is significantly higher than in the clusterings in $\Pi$. Methods based on partition difference (NMF and DA-Cr) do not exhibit significant degradation, and the actual number of clusters does not grow beyond 5 for DA-Cr and 6 for NMF. Note that the average number of clusters in $\Pi$ is $5.26$. \begin{table}[h!]
\centering \caption{Results for the Iris dataset with different numbers of clusters}\label{tab:iris_k} \begin{tabular}{c|ccccc|ccccc} \toprule & \multicolumn{5}{c|}{Consensus Criteria} & \multicolumn{5}{c}{\# of clusters in consensus clustering} \\ $K$ & CSPA & NMF & HAC & DA-Sm & DA-Cr & CSPA & NMF & HAC & DA-Sm & DA-Cr \\ \midrule $3$ & 0.555 & \textbf{0.618} & \textbf{0.618} & \textbf{0.619} & \textbf{0.621} & 3 & 3 & 3 & 3 & 3 \\ $6$ & 0.536 & 0.614 & 0.627 & 0.608 & \textbf{0.642} & 6 & 6 & 6 & 6 & 5 \\ $9$ & 0.447 & 0.614 & 0.591 & 0.497 & \textbf{0.642} & 9 & 6 & 9 & 9 & 5 \\ $12$ & 0.370 & 0.614 & 0.507 & 0.414 & \textbf{0.642} & 12 & 6 & 12 & 12 & 5 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} Motivated by the recent emergence of specialized hardware platforms, we present a new approach to the consensus clustering problem that is based on Ising models and solved on the Fujitsu Digital Annealer, specialized CMOS hardware. We perform an extensive empirical evaluation and show that our approach outperforms existing methods on a set of seven datasets. These results show that using specialized hardware in core data mining tasks can be a promising research direction. As future work, we plan to investigate additional problems in data mining that can benefit from the use of specialized optimization hardware, as well as to experiment with different types of specialized hardware platforms.
\section{Introduction} Upcoming laser facilities, for example the Extreme Light Infrastructure (ELI) project, promise to provide peak focal intensities of $10^{23}\ \mathrm{W/cm^2}$ to $10^{25}\ \mathrm{W/cm^2}$. Such laser intensities have the potential to unveil the mysteries of the quantum vacuum \cite{marklund2006, piazza2012} as well as to reach the limits of the attainable intensity of electromagnetic waves \citep{fedotov2010,grismayer2016,zhang2015}. They can conveniently be employed to observe the transition of radiation from the classical to the quantum regime \citep{zhang2015,jillprl2014} and to provide efficient sources of $\gamma$-rays and dense antimatter \citep{burke1997,bamber1999,bell2008,ridgers2013,luo2015,chang2015,zhuxl2015}. In several recent works \citep{bell2008,kirk2009,brady2012,kirk2013,blackburn2014,ridgers2014}, laser-induced quantum electrodynamics (QED) processes have been investigated extensively. Nonlinear Compton scattering, radiation reaction, and pair production by the Breit-Wheeler process have been implemented in particle-in-cell (PIC) codes such as EPOCH \citep{arber2015} and VLPL \citep{jillpop2014} for laser-plasma interaction studies. These new processes not only provide new sources of $\gamma$-rays and antimatter but also induce new effects on classical phenomena, such as the electron phase-space contraction caused by the radiation reaction \citep{lehmann2012,seipt2011}, electron trapping in near-critical-density plasma \citep{jillprl2014}, and so on. On the other hand, for plasmas with a high charge state and/or a large atomic number $Z$, processes involving the atomic nucleus, such as bremsstrahlung ($e + Z \rightarrow e' + \gamma + Z$) or pair production by the Bethe-Heitler process ($\gamma + Z \rightarrow e^- + e^+ + Z$), may be important in the laser-plasma interaction \citep{sarri2014,sarri2015,guo2008,liang2015,pike2014}. Radiation through bremsstrahlung is usually considered in the interaction of electron bunches with high-$Z$ targets made of copper, tungsten, gold, etc. Such high-$Z$ materials are not easily ionized by low-intensity lasers; with ultra-intense, ultra-relativistic lasers, however, high-$Z$ plasmas can easily be obtained. At these laser intensities, bremsstrahlung and nonlinear Compton scattering ($e + n \omega \rightarrow e'+\gamma$) become the main sources of $\gamma$-ray radiation. Most works to date have investigated nonlinear Compton scattering or bremsstrahlung individually; a simultaneous comparative study is still lacking. The goal of the present research is therefore to show the relative importance and photon emission strength of these two mechanisms under different laser intensities. In this paper, we use a PIC code with an implemented Monte Carlo (MC) algorithm to study bremsstrahlung and nonlinear Compton scattering. We show the relative strength of the two mechanisms when an ultra-strong laser with an intensity ranging from $I=10^{21}$ $\mathrm{W/cm^2}$ to $I=10^{24}$ $\mathrm{W/cm^2}$ irradiates a thin Al or Au target. The $\gamma$-ray distributions and some other characteristics of each mechanism are shown in detail. This paper is organized as follows. In Sec. 2, we review the nonlinear Compton scattering and bremsstrahlung processes, discuss the MC bremsstrahlung algorithm implemented in the PIC code, and present the benchmark results.
In Sec. 3, we discuss the radiation strength of nonlinear Compton scattering and bremsstrahlung for the given thin targets and laser intensities. A summary and discussion are given in the final section. \section{Implementation of Bremsstrahlung} \subsection{Comparison of the two radiation processes} Nonlinear Compton scattering is the scattering of an electron with multiple laser photons, $e + n\omega_l \rightarrow e'+\gamma$, which converts several low-energy laser photons into one high-energy $\gamma$ photon. This mechanism, along with Breit-Wheeler pair production ($ \gamma+ \omega_l \rightarrow e^- + e^+ $), was verified experimentally at SLAC \citep{bula1996,burke1997,bamber1999}. In Ref. \citep{kirk2009}, the authors implemented quantum synchrotron radiation \citep{sokolov1968} in a PIC code. The importance of nonlinear Compton scattering depends strongly on the laser intensity via the Lorentz invariant $\eta = \gamma \sqrt{(E_\perp+v \times B)^2+(v \cdot E_\parallel )^2}/ E_{cr}$, where $\gamma$ denotes the relativistic Lorentz factor of the incoming electron in the laser field, $E_{cr}=m^2c^3/e\hbar$ is the Schwinger critical field, and $v$ is the incoming electron velocity. When $\eta$ approaches unity, large numbers of photons are generated, with the most probable energy $\epsilon_\gamma \approx 0.44\gamma\eta mc^2$ \citep{bell2008}. For comparison with other processes, a simple cross-section estimate is adequate. For bremsstrahlung, the cross section depends strongly on the atomic number $Z$ of the target \citep{pike2014,tsai1974}: it is proportional to $\alpha r_e^2 Z^2$, where $\alpha=e^2/\hbar c=1/137$ and $r_e=e^2/mc^2$ are the fine-structure constant and the classical electron radius, respectively. Thus, increasing the target $Z$ increases the cross section in addition to increasing the electron density. This strongly affects the bremsstrahlung emission: for an aluminum target ($Z=13$) one has $\sigma_b\approx 169\alpha r_e^2$, while for a gold target ($Z=79$) one gets $\sigma_b\approx 6241 \alpha r_e^2$. Consequently, at the same laser intensity, increasing the atomic number $Z$ changes the relative photon emission strength of bremsstrahlung and nonlinear Compton scattering.
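To give a sense of the orders of magnitude involved, the short script below evaluates $\eta$ for an electron counter-propagating against the laser. It is a rough estimate only, assuming $\eta \approx 2\gamma E_0/E_{cr}$ with $E_0/E_{cr} = a_0\hbar\omega_0/mc^2$, $a_0=0.86\sqrt{I_{18}}$, and the ponderomotive scaling $\gamma \sim a_0$; it is not part of the PIC code.
\begin{verbatim}
import numpy as np

# Rough estimate of eta for an electron counter-propagating against
# a 1-um laser. Assumptions (illustrative, not from the PIC code):
# eta ~ 2*gamma*E0/E_cr, E0/E_cr = a0*(hbar*omega0)/(m*c^2), gamma ~ a0.
HW_OVER_MC2 = 1.24 / 511.0e3   # (hbar*omega0)/(m c^2) at lambda = 1 um

def a0(I_wcm2):
    """Normalized vector potential a0 = 0.86*sqrt(I_18)."""
    return 0.86 * np.sqrt(I_wcm2 / 1.0e18)

for I in (1e21, 1e22, 1e23, 1e24):
    gamma = a0(I)                              # gamma ~ a0
    eta = 2.0 * gamma * a0(I) * HW_OVER_MC2    # eta ~ 2*gamma*E0/E_cr
    print(f"I = {I:.0e} W/cm^2: a0 = {a0(I):6.1f}, eta ~ {eta:.1e}")
\end{verbatim}
Under these assumptions, $\eta \sim 10^{-3}$ at $I=10^{21}\ \mathrm{W/cm^2}$ and $\eta$ is of order unity at $I=10^{24}\ \mathrm{W/cm^2}$, consistent with the intensity range considered below.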
\subsection{Simulation method} Unlike the methods used in Refs. \citep{jiang2014,meadowcroft2012,hanus2014}, we do not treat bremsstrahlung separately from the laser-plasma interaction. To simulate bremsstrahlung and nonlinear Compton scattering in the laser-plasma interaction, an MC method has been implemented in our 2D PIC code. The nonlinear Compton scattering part is the same as in EPOCH \citep{arber2015} and has been tested against it with very good agreement. For the bremsstrahlung part, we have implemented an MC collision model in the code; it has been benchmarked against the Geant4 code \citep{agostinelli2003}, and the results are given below. In the simulation of bremsstrahlung, one widely used cross-section formula is \citep{tsai1974} \begin{equation} \begin{aligned} \frac{d\sigma_{eZ}}{d\omega}(\omega, y) = & \frac{\alpha r_0^2}{\omega} \lbrace (\frac{4}{3} - \frac{4}{3}y + y^2) \\ & \times [ Z^2(\phi_1 - \frac{4}{3} \ln Z - 4f) + Z(\psi_1 - \frac{8}{3}\ln Z)] \\ & + \frac{2}{3}(1-y)[Z^2(\phi_1 - \phi_2) + Z(\psi_1 - \psi_2)] \rbrace, \end{aligned} \end{equation} where $y = \hbar \omega / E$ is the ratio of the emitted photon energy to the incident electron energy, $\phi_{1, 2}$ and $\psi_{1, 2}$ are functions of the screening potential by atomic electrons, and $f$ is the Coulomb correction term. For high-$Z$ targets ($Z > 5$), we use Eqs. (3.38)-(3.41) from Ref. \cite{tsai1974}. For $Z < 5$, the approximate screening functions are not suitable and need to be modified. Another method, used in the code PENELOPE \cite{salvat2009}, relies on the tabulated data of Ref. \cite{seltzer1986}, in which the ``scaled'' bremsstrahlung differential cross section (DCS) is transformed into the differential cross section via \cite{salvat2009} \begin{equation} \frac{d\sigma_{br}}{d \omega} = \frac{Z^2}{\beta^2} \frac{1}{\omega}\chi(Z, E, y), \end{equation} where $\beta = v/c$ is the normalized electron velocity. By integrating $d \sigma_{br}/d\omega$ over $\omega$, we obtain a tabulated $\sigma_{br}(E, y)$ that can be used in the MC simulation. The DCSs for electrons and positrons are related by \begin{equation} \frac{d \sigma_{br}^{+}}{d\omega} = F_p(Z, E) \frac{d \sigma_{br}^{-}}{d\omega}, \end{equation} and an analytical approximation of the factor $F_p(Z, E)$ can be found in Ref. \cite{salvat2009}; its accuracy is about $0.5\%$ in comparison with Ref. \cite{kim1986}. In our case, the implementation of bremsstrahlung is a direct MC collision. For a given incident electron with energy $E$ and velocity $v$, the probability of triggering a bremsstrahlung event during one time step is \begin{equation} P_{br} = 1 - e^{-n \sigma(E) v \Delta t} = 1 - e^{-\Delta s/ \lambda}, \end{equation} where $n$ denotes the target density, $\Delta t$ is the time interval, $\sigma(E) = \int_{y_{\mathrm{cut}}}^{1} \frac{d \sigma(E, y)}{dy} dy$, $\Delta s = v \Delta t$, and $\lambda = 1 / n\sigma(E)$. We then generate a random number $R_1$ and compare it with this probability; if $R_1 < P_{br}$, a bremsstrahlung event is triggered. The photon energy is chosen in a similar way: another random number $R_2$ is generated and multiplied by $\sigma(E)$ to determine $y$ through $\sigma(y, E) = \sigma(E) R_2$. Finally, a photon with energy $\hbar \omega = Ey$ and momentum direction $\vec{k}/|\vec{k}| = \vec{v} / |\vec{v}|$ is generated. By choosing a minimum energy for the emitted hard photons, we drop the low-energy photons that are not of interest and thus speed up the computation. This calculation of the probability is equivalent to the random-free-path method \cite{salvat2009}. The implementation of Bethe-Heitler pair production is similar to that of bremsstrahlung and is not the topic of this article.
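The sampling procedure just described can be summarized by the following minimal sketch (in Python, for illustration only); \texttt{sigma\_total} and \texttt{invert\_sigma} are hypothetical placeholders for the tabulated $\sigma(E)$ and for the numerical inversion of $\sigma(y, E) = \sigma(E) R_2$, which in practice are interpolated from the tabulated DCS.
\begin{verbatim}
import numpy as np

def sample_bremsstrahlung(E, v, n, dt, sigma_total, invert_sigma, rng):
    """One MC trial for a single incident electron (schematic).

    E            -- electron energy
    v            -- electron speed
    n            -- target number density
    dt           -- time step
    sigma_total  -- callable: sigma(E), integrated from y_cut to 1
    invert_sigma -- callable: returns y solving sigma(y, E) = s
    Returns the photon energy hbar*omega = E*y, or None (no event).
    The photon direction is taken along the electron velocity.
    """
    # Event probability over one step: P_br = 1 - exp(-n*sigma(E)*v*dt)
    P_br = 1.0 - np.exp(-n * sigma_total(E) * v * dt)
    if rng.random() >= P_br:           # R1 >= P_br: no emission this step
        return None
    s = rng.random() * sigma_total(E)  # R2 * sigma(E)
    y = invert_sigma(E, s)             # invert the partial cross section
    return E * y
\end{verbatim}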
The bremsstrahlung module was benchmarked against the Geant4 code, which is capable of simulating a very comprehensive set of processes. We used 1 GeV and 100 MeV electron bunches, each consisting of $10^5$ primaries, colliding with a 5 mm Au target ($Z=79$, $\rho=19.3\ \mathrm{g/cm^3}$) and a 5 mm Al target ($Z=13$, $\rho=2.7\ \mathrm{g/cm^3}$). In the PIC code, we turned off the field updater and the weighting procedure; only the particle pusher and the bremsstrahlung MC module were enabled. The electron and photon spectra are in good agreement with the Geant4 results, except for a slightly higher high-energy tail in the electron spectra. In Fig. \ref{mevbrems} we plot the spectra of electrons and photons from a $100\,\mathrm{MeV}$ electron bunch normally incident onto the aluminum and gold slabs, and in Fig. \ref{gevbrems} those from a $1\,\mathrm{GeV}$ electron bunch normally incident onto the same targets as in Fig. \ref{mevbrems}. In the following, we use this module to investigate the bremsstrahlung emission in the laser-target interaction. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig1.pdf} \caption{(color online). Bremsstrahlung of 100 MeV electrons} \label{mevbrems} \end{figure} \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig2.pdf} \caption{(color online). Bremsstrahlung of $1 \mathrm{GeV}$ electrons} \label{gevbrems} \end{figure} \section{Bremsstrahlung and nonlinear Compton scattering in laser-irradiated solid targets} We used an aluminum target and a gold target to investigate the effects of electron density and atomic number $Z$ on the radiation intensity. Four sets of 2D PIC simulations were performed to study the relative strength of nonlinear Compton scattering and bremsstrahlung for laser intensities ranging from $I=10^{21}$ $\mathrm{W/cm^2}$ to $I=10^{24}$ $\mathrm{W/cm^2}$. In all simulations, the lasers are linearly polarized in the $y$ direction and propagate along the $x$ direction. The temporal profile is constant from 0 to 30 fs, and the spatial profile in the $y$ direction is a Gaussian with a spot size of 1 $\mathrm{\mu m}$. The simulation box covers 6 $\mathrm{\mu m}$ in the $x$ and $y$ directions, with $1000 \times 1000$ cells for the aluminum target and $2000 \times 2000$ cells for the gold target. The plasma target starts at $x = 2$ $\mathrm{\mu m}$ and is 1 $\mathrm{\mu m}$ thick. The number of macro-particles per cell is $80$ for electrons and $20$ for ions. All targets are presumed fully ionized, with $n_e \approx 711 n_c$ for Al and $n_e \approx 4177 n_c$ for Au, since the ponderomotive $\langle \gamma \rangle \approx 27$ already at $I=10^{21}$ $\mathrm{W/cm^2}$, which means that Au can easily be fully ionized in the thin-target case \citep{Beiersdorfer2012,mishra2013}. Thus, a constant density is assumed for both target types in all simulations. Absorbing boundary conditions are used for the laser and the particles. Note that in all simulations only photons with energy $\epsilon_\gamma \geq m_ec^2 \approx$ 0.511 MeV are taken into account; lower-energy photons are also created, but they are dropped to speed up the computation. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig3.pdf} \caption{(color online). Energy absorption rate, where 'b' denotes bremsstrahlung and 'c' denotes nonlinear Compton scattering.} \label{ekabsorption} \end{figure} We now present the simulation results for the energy absorption rates of particles and photons. For simplicity, we characterize the laser intensity by the normalized vector potential $a_0=0.86\sqrt{I_{18}}$ for a laser wavelength of $\lambda=1\,\mathrm{\mu m}$, where $I_{18}$ is the laser intensity in units of $10^{18} \mathrm{W/cm^2}$.
In Fig. \ref{ekabsorption}(a) and (b), we plot the energy partition for both targets with nonlinear Compton scattering at $t =32$ fs. Since we use a thin target with $l =$ 1 $\mathrm{\mu m}$, the final result is not the same as in Ref. \citep{jillpop2014}, in which a very thick target was used. In both cases, the $\gamma$-ray absorption rate increases with the laser intensity, in agreement with Ref. \citep{jillpop2014}. In our case, however, the interaction time is reduced, both because the piston velocity is larger and because the target is very thin. Thus, the $\gamma$-ray absorption rate is smaller than in the thick-target case \citep{jillpop2014}. Moreover, owing to the lower conversion rate into photons, the electron absorption rate $\eta_e$ is higher than in the thick-target case and continues to rise as the laser intensity increases. Because of the higher electron density of the Au target ($n_{e,Au} \approx 6 n_{e,Al}$), the electrons in the Au target always acquire a higher absorption rate than in the Al target. In Fig. \ref{ekabsorption}(c) and (d), we plot the energy partition in the bremsstrahlung case. The electron and ion trends are similar to those in the nonlinear Compton case, except that the electron absorption is slightly higher. Furthermore, bremsstrahlung photons acquire much lower energies than the nonlinear Compton scattering photons, and this difference becomes much more apparent at higher intensities; it is consistent with the difference in the electron absorption rates. This indicates that bremsstrahlung can be ignored at very high intensities, even for a high-$Z$ target. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=12cm]{Fig4.pdf} \caption{(color online). Photon and electron density distribution in log scale, where 'b' denotes bremsstrahlung and 'c' denotes nonlinear Compton scattering.} \label{density} \end{figure} In Fig. \ref{density}, the photon distributions of nonlinear Compton scattering and bremsstrahlung are given at $I=10^{23}$ $\mathrm{W/cm^2}$ and $I=10^{24}$ $\mathrm{W/cm^2}$. The insets show the corresponding electron density. The photon emission from nonlinear Compton scattering is much stronger than that from bremsstrahlung for both the Al and the Au target. Moreover, the photon density distribution distinguishes the two mechanisms. For nonlinear Compton scattering, the photons propagate outward in a spherical manner, following the shape of the laser field; see Fig. \ref{density}(a), (b), (e), and (f). For bremsstrahlung, the photons are concentrated in the laser-plasma interaction zone; see Fig. \ref{density}(c), (d), (g), and (h). Furthermore, owing to the higher electron density and the much larger cross section, the bremsstrahlung photon density is much higher in the Au target than in the Al target. There is a small difference in the target deformation between the bremsstrahlung and nonlinear Compton scattering runs: for each target, the number density of electrons residing at the laser front is slightly higher for bremsstrahlung. This may be because fewer emission events leave the electrons with higher energies, so the laser is unable to expel these electrons quickly. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig5.pdf} \caption{(color online). Electron spectra, where 'b' denotes bremsstrahlung and 'c' denotes nonlinear Compton scattering.} \label{electron_spectra} \end{figure}
In Fig. \ref{electron_spectra} we plot the electron energy spectra of the two emission mechanisms for the different cases. (Note that in this figure and the next, the electron and photon energies are both denoted $E_k$ for convenience.) For the Al target, at $I=10^{22}$ $\mathrm{W/cm^2}$ the electron spectra are almost identical; see Fig. \ref{electron_spectra}(a). At $I=10^{23}$ $\mathrm{W/cm^2}$, the electrons in the bremsstrahlung case acquire a slightly higher energy tail, and this becomes more obvious at $I=10^{24}$ $\mathrm{W/cm^2}$. In addition, the number of low-energy electrons is reduced compared with the lower intensities. Comparing the two targets, the electrons in the Al target acquire a higher maximum energy than in the Au target when $I=10^{24}$ $\mathrm{W/cm^2}$ (see Fig. \ref{electron_spectra}(c)), but the spectra are almost the same at lower intensities. This is caused by the different target deformation (see Fig. \ref{density}), since the piston velocity depends on the target density and laser intensity as $v_{HB}=\Xi/(1+\Xi)$ with $\Xi=I/\rho c^3$ \citep{robinson2009}. Thus, burn-through is much quicker for higher intensities and a low-$Z$ target than for lower intensities and a high-$Z$ target. Once the target is burned through, the electrons oscillate in vacuum with the laser field, without the confinement of the plasma space-charge field. If hole boring has not finished, the longitudinal oscillation of the electrons is confined by the ion attraction, and the maximum energy is lower than in the vacuum case, where it is bounded by $\epsilon_{max} \leq \epsilon_l \approx c \Delta p_{laser}=ce/\omega \int_0^\pi E_0 \sin(\phi)d\phi\approx$ 1.8 GeV for $I=10^{24}$ $\mathrm{W/cm^2}$ and $\epsilon_{max} \leq$ 550 MeV for $I=10^{23}$ $\mathrm{W/cm^2}$. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig6.pdf} \caption{(color online). Photon spectra, where 'b' denotes bremsstrahlung and 'c' denotes nonlinear Compton scattering.} \label{photon_spectra} \end{figure} In Fig. \ref{photon_spectra}, we plot the photon spectra of nonlinear Compton scattering and bremsstrahlung (only photons with $\epsilon_\gamma > 0.511$ MeV are taken into account) for the Al and Au targets at the different laser intensities. In fact, at $I=10^{21}$ $ \mathrm{W/cm^2}$ only the Au target generates photons via bremsstrahlung, and only very few; this case is not shown in the figure. For nonlinear Compton scattering, the cutoff energy depends strongly on the input laser intensity, with 15 MeV at $I=10^{22}$ $\mathrm{W/cm^2}$, 250 MeV at $I=10^{23}$ $\mathrm{W/cm^2}$, and 850 MeV at $I=10^{24}$ $\mathrm{W/cm^2}$, while it is independent of the target type. Moreover, even though the electron density of the Au target is much larger than that of the Al target, the photon spectra are almost the same at each intensity, a direct result of the nearly identical electron spectra. The photon spectra from bremsstrahlung are quite different from those from nonlinear Compton scattering: not only is the number of created photons much smaller, but the cut-off energy is also much lower. Since bremsstrahlung does not depend directly on the laser intensity, the cut-off depends on the electron energy, the target density, and so on.
As expected, the number of bremsstrahlung photons from the Au target is much larger than that from the Al target; however, it is still much smaller than the number produced by nonlinear Compton scattering in the Au target. \begin{figure}[hbtp]\suppressfloats \centering \includegraphics[width=15cm]{Fig7.pdf} \caption{(color online). Photon angular distribution, where 'b' denotes bremsstrahlung and 'c' denotes nonlinear Compton scattering.} \label{photon_angle} \end{figure} In Fig. \ref{photon_angle}, the angular distribution of the photons is shown for the two kinds of radiation from the two targets. The angular distributions of the two radiation mechanisms are quite different. In the nonlinear Compton scattering case, most photons are emitted close to the laser polarization direction, with a small deviation. As the laser intensity increases, the radiation intensity increases, but there are still two peak angles along the polarization direction. In the bremsstrahlung case, owing to the much smaller yield, the angular distribution is very noisy, but a plateau is still visible for $-1<\theta<1$, which is quite different from nonlinear Compton scattering. The difference can be understood from the two different mechanisms. As demonstrated above, the strength of nonlinear Compton scattering depends strongly on the quantum parameter $\eta\approx \gamma E/E_{cr}$. First, the electrons in the high-field region are easily accelerated to high $\eta$, which leads to a larger probability of nonlinear Compton scattering; second, $\eta$ also depends on the polarization. In the bremsstrahlung case, however, the radiation depends on the electron energy and the target density, which are not directly affected by the polarization, so the angular distribution of bremsstrahlung is not very sensitive to the polarization. For the Au target, the electron angular distribution is very similar to the Al case, while the photon angular distribution is much smoother than for the Al target owing to the larger cross section. \section{Summary and discussion} In summary, we have implemented an MC collision method in a PIC code to simulate bremsstrahlung in laser-plasma interactions. By simulating a laser irradiating an Al target and an Au target at different laser intensities, we obtained the relative radiation strength of each mechanism. The comparison shows that for laser intensities $I \leq 10^{22} \mathrm{W/cm^2}$, bremsstrahlung is still very strong compared with nonlinear Compton scattering in the interaction of lasers with high-$Z$ targets, and for $I \leq 10^{21} \mathrm{W/cm^2}$, this photon channel dominates the photon emission over nonlinear Compton scattering for high-$Z$ targets such as Au. This kind of energy conversion may therefore need to be taken into account when seeking accurate simulations and analytical solutions. Our research confirms that bremsstrahlung can be ignored in the interaction of ultra-intense lasers with low-$Z$ plasmas, and that for laser intensities $I \geq 10^{22} \mathrm{W/cm^2}$, nonlinear Compton scattering is the dominant radiation channel even for high-$Z$ targets. Moreover, the photon density distribution could serve as a signature to distinguish the main radiation channel. Since the direct MC collision method is very time consuming for systems with large particle numbers, the so-called null-collision method may be a potential way to reduce the computational cost.
In addition, to reproduce the experimental thick-target case, a $\mathrm{mm}$-scale target should be considered; this is beyond the scope of the present study and is worth investigating in future work. \section{Acknowledgements} The authors are grateful to Prof. H. Wang for helpful discussions on the implementation of the MC algorithm. This work was supported by the National Natural Science Foundation of China (NSFC) under Grants No. 11475026 and No. 11305010. The computation was carried out at the High Performance Scientific Computing Center (HSCC) of Beijing Normal University. The authors are particularly grateful to the CFSA at the University of Warwick for allowing us to use EPOCH. \bibliographystyle{unsrt}
\section{Affiliations} \institute{LESIA, Observatoire de Paris, Universit\'e PSL, CNRS, Univ. Paris Diderot, Sorbonne Paris Cit\'{e}, Sorbonne Universit\'e, 5 Place J. Janssen, 92195 Meudon Principal Cedex, France \email{[email protected]} \and Center for Technical Physics, Institute of Physics, Vietnam Academy of Science and Technology \and Max-Planck-Institut f\"ur Sonnensystemforschung, Justus-von-Liebig-Weg, 3, 37077, G\"ottingen, Germany \and University of Padova, Department of Physics and Astronomy {\it Galileo Galilei}, Via Marzolo 8, 35131 Padova, Italy \and Center of Studies and Activities for Space (CISAS) {\it G. Colombo}, University of Padova, Via Venezia 15, 35131 Padova, Italy \and CNR-IFN UOS Padova LUXOR, Via Trasea, 7, 35131 Padova, Italy \and Laboratoire Atmosph\`eres, Milieux et Observations Spatiales, CNRS \& Universit\'e de Versailles Saint-Quentin-en-Yvelines, 11 boulevard d'Alembert, 78280 Guyancourt, France \and Centro de Astrobiologia, CSIC-INTA, 28850 Torrejon de Ardoz, Madrid, Spain \and International Space Science Institute, Hallerstrasse 6, 3012 Bern, Switzerland \and Scientific Support Office, European Space Research and Technology Centre/ESA, Keplerlaan 1, Postbus 299, 2201 AZ Noordwijk ZH, The Netherlands \and Jet Propulsion Laboratory, M/S 183-401, 4800 Oak Grove Drive, Pasadena, CA 91109, USA \and Department of Astronomy, University of Maryland, College Park, MD 20742-2421, USA \and INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy \and University of Padova, Department of Mechanical Engineering, via Venezia 1, 35131 Padova, Italy \and University of Trento, Faculty of Engineering, Via Mesiano 77, 38100 Trento, Italy \and Dipartimento di Geoscienze, University of Padova, via G. Gradenigo 6, 35131 Padova, Italy \and INAF Astronomical Observatory of Trieste, Via Tiepolo 11, 34014 Trieste, Italy \and Instituto de Astrof\'isica de Andalucia (CSIC), c/ Glorieta de la Astronomia s/n, 18008 Granada, Spain \and National Central University, Graduate Institute of Astronomy, 300 Chung-Da Rd, Chung-Li 32054, Taiwan \and Space Science Institute, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau \and Deutsches Zentrum f\"ur Luft und Raumfahrt (DLR), Institut f\"ur Planetenforschung, Asteroiden und Kometen, Rutherfordstrasse 2, 12489 Berlin, Germany \and Institut f\"ur Geophysik und extraterrestrische Physik (IGEP), Technische Universitat Braunschweig, Mendelssohnstr. 3, 38106 Braunschweig, Germany \and Operations Department, European Space Astronomy Centre/ESA, P.O.Box 78, 28691 Villanueva de la Canada, Madrid, Spain \and MTA CSFK Konkoly Observatory, Budapest, Hungary } \date{Accepted 27 August 2018. Received July 2018} \newpage \abstract{}{The Rosetta space probe accompanied comet 67P/Churyumov-Gerasimenko for more than two years, obtaining an unprecedented amount of unique data of the comet nucleus and inner coma. This has enabled us to study its activity almost continuously from 4 au inbound to 3.6 au outbound, including the perihelion passage at 1.24 au. This work focuses on identifying the source regions of faint jets and outbursts and on studying the spectrophotometric properties of some outbursts. We use observations acquired with the OSIRIS/NAC camera during June-October 2015, that is, close to perihelion.} {We analyzed more than 2000 images from NAC color sequences acquired with 7-11 filters covering the 250-1000 nm wavelength range.
The OSIRIS images were processed with the OSIRIS standard pipeline up to level 3, that is, converted into radiance factor and then corrected for the illumination conditions. For each color sequence, color cubes were produced by stacking registered and illumination-corrected images.} { More than 200 jets of different intensities were identified directly on the nucleus. Some of the more intense outbursts appear spectrally bluer than the comet dark terrain in the visible-to-near-infrared region. We attribute this spectral behavior to icy grains mixed with the ejected dust.\\ Some of the jets have an extremely short lifetime. They appear on the cometary surface during the color sequence observations and vanish within a few minutes of reaching their peak. We also report a resolved dust plume observed in May 2016 at a resolution of 55 cm/pixel, which allowed us to estimate an optical depth of $\sim$0.65 and an ejected mass of $\sim$ 2200 kg, assuming a grain bulk density of $\sim$ 800 kg/m$^3$. We present the results on the location, duration, and colors of the active sources on the nucleus of 67P from the medium-resolution (i.e., 6-10 m/pixel) images acquired close to the perihelion passage. The observed jets are mainly located close to boundaries between different morphological regions. Some of these active areas were observed and investigated at higher resolution (up to a few decimeters per pixel) during the last months of operations of the Rosetta mission.} {These observations allow us to investigate the link between the morphology, composition, and activity of cometary nuclei. Jets depart not only from cliffs, but also from smooth and dust-covered areas, from fractures, pits, or cavities that cast shadows and favor the recondensation of volatiles. This study shows that faint jets or outbursts continuously contribute to the cometary activity close to the perihelion passage, and that these events are triggered by the illumination conditions. Faint jets or outbursts are not associated with a particular terrain type or morphology. } \keywords{Comets: individual: 67P/Churyumov-Gerasimenko, Methods: data analysis, Methods:observational, Techniques: photometric } \titlerunning{Jets and activity on comet 67P } \maketitle \section{Introduction} \begin{table*} \begin{center} \caption{Observing conditions for the NAC images ($\alpha$ is the phase angle, r$_h$ is the heliocentric distance, and $\Delta$ is the distance between comet and spacecraft). Each sequence consists of images acquired with the 11 filters of the NAC camera: F22 (649.2 nm), F23 (535.7 nm), F24 (480.7 nm), F16 (360.0 nm), F27 (701.2 nm), F28 (743.7 nm), F41 (882.1 nm), F51 (805.3 nm), F61 (931.9 nm), F71 (989.3 nm), and F15 (269.3 nm). $^1$: Number of analyzed sequences on the date. The sequences usually cover slightly more than one rotation period of the comet. $^2$: This entry specifically refers to the number of jets that have been successfully located on the surface, not to the total number of jets. Some jets originate from behind the limb, and we cannot precisely locate their source region. $^3$: Average number of detected jets per sequence for a given date.} \label{obs} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \bf{2015 } & \boldmath{N$_{sequences}^{1}$} & \boldmath{N$_{jets}^{2}$} & \boldmath{$\alpha ~(^{\circ})$} & \boldmath{r$_{h}$} {\bf(AU)} & \boldmath{$\Delta$} {\bf(km)} & \bf{Res.
(m/px)} & \bf{Avg.}\boldmath{$^3$} \\ \hline June 27 & 24 &19 &89.3--89.7 &1.365 &182.8--198.3 & 3.4--3.7 &0.8\\ July 26 & 33 &56 &89.9 & 1.262 &167.3--169.2 &3.2 &1.7\\ August 1 & 29 &101& 89.4--89.7 & 1.251& 206.4--215.& 4.0& 3.4\\ August 9 & 31 &19 &89.0--89.2 &1.244 &303.9--310.0 &5.8 &0.5\\ August 12 & 23 &10 &89.4--89.7 &1.243 &327.2--336.2 &6.3--7.0 &0.4\\ August 23 & 16 &31 &87.3--88.5 &1.250 &329.9--336.7 &6.2--6.3 &1.9 \\ August 30 & 24 &126&70.0--70.2 &1.261 &402.5--405.1 &7.6 &5.3\\ September 5 & 15 &26 & 99.7--102.6& 1.276& 393.3--441.5& 7.4--8.3 &1.7\\ October 11& 12 &16 &60.9--61.5 &1.437 &520.1--529.2& 9.7--9.9 &1.3\\ October 21& 12 & 26&64.0--64.4 &1.487 &420.1--422.5& 7.9 &2.2\\ October 31& 13 &42 &61.6--63.0 &1.565 &287.2--305.0& 5.4--5.7& 3.2\\ \hline \end{tabular} \end{center} \end{table*} The Rosetta mission of the European Space Agency was launched on 2 March 2004 to perform the most detailed study ever attempted of a comet. After ten years of interplanetary cruising, Rosetta entered the orbit of its primary target, the short-period comet 67P/Churyumov-Gerasimenko (hereafter 67P), in August 2014 and followed the comet for more than two years, until 30 September 2016, when it landed on the surface of the nucleus.\\ Rosetta carried a broad suite of instruments, including the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS), which acquired more than 75000 images of the comet during the mission. OSIRIS is composed of two cameras: the Narrow Angle Camera (NAC) for nucleus surface and dust studies, and the Wide Angle Camera (WAC) for wide-field coma investigations (see Keller et al., 2007, for further details). OSIRIS enabled extensive studies of the nucleus at high resolution (down to 10 cm/pixel, and even better during the final phase of Rosetta's descent) with several filters in the 250-1000 nm range. OSIRIS also provided high-resolution images of the cometary activity and its evolution from 4 au inbound to 3.6 au outbound. \\ The nucleus of 67P is bilobed, has a low density of 537.8$\pm$0.7 kg/m$^{3}$, and has a high porosity (70-80\%; Sierks et al., 2015; P\"atzold et al., 2016; Jorda et al., 2016; Preusker et al., 2017). The surface is dark, with a geometric albedo of 5.9\% at 535 nm (Fornasier et al., 2015), and it shows a complex morphology that is characterized by consolidated and smooth terrains, depressions, pits, extensive layering, ubiquitous boulders, and dust-covered areas (Thomas et al., 2015; El-Maarry et al., 2015, 2016; Massironi et al., 2015). The coma activity of 67P was monitored by several instruments even before the Rosetta rendezvous maneuver with the comet in August 2014. The OSIRIS images captured an outburst at the end of April 2014, when the comet was at 4 au and was not yet resolved by the cameras (Tubiana et al., 2015a), followed shortly after by the detection of water vapor with the MIRO instrument (Gulkis et al., 2015). During the first resolved observations, most of the activity arose from Hapi, the northern region located between the two lobes of the comet, which is brighter than average and relatively blue (Fornasier et al., 2015); water ice and the first evidence of a diurnal water cycle were reported there (de Sanctis et al., 2015). Important diurnal and seasonal variations were observed in the coma for different outgassing species.
These were related to the complex morphology and the illumination conditions (Bockelee-Morvan et al., 2015, 2016, 2017; Biver et al., 2015; Hansen et al., 2016; Luspay-Kuti et al., 2015; Lin et al., 2015, 2016; Lara et al., 2015). The OSIRIS instrument observed various activity events during the two years of continuous observations of the comet, allowing scientists to retrieve the positions of several jets on the nucleus surface through geometric tracing, to characterize them photometrically, and to study their seasonal evolution (Vincent et al., 2016a, 2016b; Lara et al., 2015; Lin et al., 2015, 2016, 2017; Shi et al., 2016, 2018). In particular, several peculiar events were investigated: Shi et al. (2016) analyzed a cluster of sunset jets from the Ma'at region in late April 2015; Knollenberg et al. (2016) studied an outburst originating from a part of the Imhotep region on 12 March 2015; Vincent et al. (2016a) located and classified 34 outbursts that occurred between July and September 2015; Vincent et al. (2016b) observed that most outbursts were located near collapsed cliffs, which they interpreted as evidence of mass wasting; Pajola et al. (2017) reported the first unambiguous link between an outburst and a cliff collapse, observed at the Aswan site in the Seth region, with direct exposure of the fresh icy interior of the comet; and Agarwal et al. (2017) reported an outburst event in the Imhotep region at 3.32 au outbound that altered an area with a radius of 10 m on the surface and left an icy patch. Most of the results of cometary activity studies are obtained from observing sequences with long exposure times that are devoted to investigating the faint cometary gas and dust emissions. In such observations the nucleus is usually saturated, so the jet sources cannot be identified directly on the nucleus; their location is retrieved by triangulation from different viewing geometries or by projecting the 2D jet coordinates onto synthetic images of the nucleus at the time of a given observation. Moreover, Shi et al. (2018) investigated the relation between jet morphology and terrain and cautioned that the trace-back analysis of jets may be hindered by the observing geometry. This work focuses on the jets observed during the OSIRIS/NAC color sequences that are dedicated to studying the colors and composition of the nucleus. These sequences were taken between June and October 2015, immediately before and after perihelion, which occurred on 13 August 2015. Several jets were observed on the nucleus itself, which allows us to locate them precisely on the surface and to investigate the morphology of the source regions in higher-resolution images acquired later. These events include the perihelion outburst, several faint jets, and short-lived transient events that lasted for a few minutes. \\ In section 2 we summarize the observational sets and the analysis we performed to characterize and localize the jets; in section 3 we present the results of our jet distribution analysis over the southern hemisphere of the nucleus and describe their main properties. In section 4 we discuss the morphology of the jet sources from images with higher resolution that were acquired in 2016. Finally, we discuss the main mechanisms at the origin of the activity and examine the link between the morphology, composition, and activity of 67P.
\section{Observations and data analysis} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Fornasier_fig1.png} \caption{Top: Map of the comet, with the jet sources identified during summer in the southern hemisphere superposed. The locations of some nearby jets are averaged for clarity, and some notable jets are represented with larger symbols. Black circles represent cavities that were found to be active in different data sets (see Table~\ref{all_jets} for details). Cyan points represent events observed in 2016; they are reported here for the dust plume in the Bes region, in Agarwal et al. (2017) for the outburst in the Imhotep region, and in Fornasier et al. (2017) for the jets in the Anhur region. Bottom: Three different views of the southern hemisphere of 67P, with the regional boundaries overlaid. Complete 3D views of the comet nucleus with all the regions superposed are shown in El-Maarry et al. (2015, 2016).} \label{map} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_2.png} \caption{RGB composites (from the images acquired with the filters centered at 882, 649, and 480 nm) of 9 out of the 23 sequences acquired on 30-31 August 2015, two weeks after the perihelion passage. Several faint jets are visible in the different images, as well as local bright patches that are associated with the exposure of volatiles.} \label{30aug2015} \end{figure*} We used color sequences devoted to the spectrophotometric characterization of the nucleus, acquired with several filters of the NAC camera close to the perihelion passage, between June and October 2015 (Table~\ref{obs}). Although not expressly devoted to coma and dust studies, these sequences, acquired at the peak of cometary activity, revealed several jets and outbursts. We thus decided to build a jet catalog based on these observations. The advantage of these sequences is that, in contrast to the long-exposure observations devoted to investigating the cometary activity, the nucleus is not saturated. This permits precisely locating the position of the activity sources on the nucleus and studying the colors of some outbursts. In particular, these observations for the first time highlight short-lived jets with durations shorter than a few minutes. \\ We analyzed more than 2000 images that were obtained over 11 days between June and October 2015. Each observing data set at a given date consists of 12-33 individual sequences of 11 filters (Table~\ref{obs}) and covers the rotational period of the nucleus (12.4 hours; Mottola et al., 2014). The observed nucleus surface is situated mostly in the southern hemisphere and on the comet equator. The identified jets therefore only account for a fraction of the total number of jets departing from the whole nucleus. \\ We used level 3B images from the OSIRIS pipeline, which are corrected for bias, flat field, and geometric distortion, calibrated in absolute flux (in $W~m^{-2}~ nm^{-1}~ sr^{-1}$), and finally converted into radiance factor (called $I/F$, where $I$ is the observed scattered radiance and $F$ is the incoming solar irradiance at the heliocentric distance of the comet, divided by $\pi$), as described in Tubiana et al. (2015b) and Fornasier et al. (2015). \\ All images of an individual sequence were first coregistered using the F22 NAC filter (centered at 649.2 nm) for reference.
For the coregistration, we used a Python script based on the scikit-image library (Van der Walt et al., 2014) and the optical flow algorithm (Farneb\"ack, 2003), as was done previously for the analyses presented in Fornasier et al. (2017) and Hasselmann et al. (2017). \\ Each image was reconstructed for the illumination and observing geometry using the 3D stereo-photoclinometric shape model (Jorda et al., 2016), considering all relevant geometric parameters, such as the camera distortion model, the alignment of the instrument to the Rosetta spacecraft, and the orientation of the spacecraft (with reconstructed orbit position and pointing) with respect to the 67P nucleus and to the Sun. False-color RGB (red, green, and blue) maps were generated with the STIFF code (Bertin, 2012) from coregistered NAC images acquired with the filters centered at 882 nm, 649 nm, and 480 nm. These RGB maps offer the first visual clues about the comet nucleus. In these images, most of the comet nucleus appears gray, and bright spots are displayed as white patches. Transient events, on the other hand, are usually displayed as colored areas, either because they are only captured by some of the filters during a sequence or because their intensity peaks in some filters. \\ Finally, for the spectral analysis, the images were photometrically corrected by applying the Lommel-Seeliger disk law ($D$), which has been proven to correct satisfactorily for dark surfaces (Li et al., 2015): \begin{equation} D(i,e) = \frac{2\mu_{i}}{\mu_{e}+\mu_{i}} ,\end{equation} where $\mu_{i}$ and $\mu_{e}$ are the cosines of the solar incidence ($i$) and emission ($e$) angles, respectively. The reflectance (at the phase angle of a given observation) of selected regions of interest (ROIs) was computed from the photometrically corrected images by integrating the signal in a box of 3$\times$3 pixels, and the relative reflectance was obtained by normalizing the spectrophotometry to the green filter, centered at 535 nm. The spectral slopes were evaluated in the 535-882 nm range, as detailed in Fornasier et al. (2015, 2016). The jets were first identified in the RGB images as colored patches. The Cartesian coordinates (x, y, z) of their sources on the nucleus were obtained through images simulated from the shape model and were converted into longitudes and latitudes as follows: \begin{equation} lon(x,y) = arctan2(y,x) ~ ,~ \hspace{1cm} ~ lat = arctan{\frac{z}{\sqrt{x^2+y^2}}} .\end{equation} We used the Cheops reference frame described in Preusker et al. (2015) to retrieve the coordinates of the jet footprints. The reference of this frame is a boulder called Cheops in the Imhotep region, whose location is defined to be at longitude 142.35$^{\circ}$, latitude -0.28$^{\circ}$, and a radial distance of 1395 m from the center of the nucleus.
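As an illustration of these two per-pixel steps, the following minimal sketch applies the Lommel-Seeliger correction to a radiance-factor value and converts Cartesian shape-model coordinates into longitude and latitude; the function and variable names are illustrative and are not taken from the actual OSIRIS pipeline.
\begin{verbatim}
import numpy as np

def lommel_seeliger(inc_deg, emi_deg):
    """Disk function D(i, e) = 2*mu_i / (mu_e + mu_i)."""
    mu_i = np.cos(np.radians(inc_deg))
    mu_e = np.cos(np.radians(emi_deg))
    return 2.0 * mu_i / (mu_e + mu_i)

def corrected_i_over_f(i_over_f, inc_deg, emi_deg):
    """Radiance factor corrected for the illumination conditions."""
    return i_over_f / lommel_seeliger(inc_deg, emi_deg)

def lon_lat_deg(x, y, z):
    """Longitude and latitude (deg) from Cartesian coordinates."""
    lon = np.degrees(np.arctan2(y, x))
    lat = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return lon, lat
\end{verbatim}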
\section{Location and properties of jets and outbursts} More than 200 activity events were identified in June-October 2015 from the multi-filter images devoted to characterizing the nucleus (Table~\ref{all_jets}). We stress that this list is incomplete: some jets originate from behind the limb, such that we cannot precisely locate their source region, and they are not considered in this study. Moreover, several jets were reported by other studies from long-exposure sequences expressly devoted to investigating the activity; they are not considered in the following analysis either (Vincent et al., 2016a; Knollenberg et al., 2016; Shi et al., 2016; Schmitt et al., 2017; Lin et al., 2016, 2017). Table~\ref{all_jets} reports all the jet locations we identified in the nucleus color sequences analyzed here, together with their type, repeatability, cometary local time, and a short description. The jet types are given on the basis of their shape, following the classification of Vincent et al. (2016a): A is a collimated jet, B is a wide plume, and C is a complex shape (broad and collimated). \\ \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_3.png} \caption{A: Total number of observed jets as a function of the time since sunrise, normalized by the local daytime duration. Panels B-E: Number of jets for four different latitude ranges as a function of the time since sunrise. The vertical black lines represent the approximate sunset time computed for three different dates near the perihelion passage. The high southern latitudes in panel B were always illuminated in this period; in this case, the sunrise time was set to zero. } \label{local_time} \end{figure*} The jet positions are represented in Fig.~\ref{map} on a map of the nucleus showing the different morphological regions (El-Maarry et al., 2015, 2016). Most of the jets are close to the boundaries that separate the different morphological regions, where textural and topographic discontinuities are observed. These boundaries are between Sobek and Hapi, Sobek and Anuket, Wosret and Maftet, Wosret and Bastet, Anhur and Bes, Khepry and Bes, Anhur and Aker, Bes and Geb, Bes and Atum, and Bes and Khonsu. The association between jet locations and the boundaries of the morphological regions was reported by Vincent et al. (2016a), who found a clustering of the activity at the boundaries between Anhur and Aker and between Anuket and Sobek on the large lobe, and at the boundary between Wosret and Maftet on the small lobe. \\ Some examples of jets observed on 30 August 2015 are reported in Fig.~\ref{30aug2015}. These images were acquired from relatively large distances, and the spatial resolution was thus low (about 7.6 m/px). However, the spatial extension of the jet sources was several pixels, which means that it is similar to or larger than that of the dust plume observed closely by the Rosetta instruments on 3 July 2016 from a distance of 8.5 km (see Fig. 2 in Agarwal et al., 2017). \begin{figure*} \centering \includegraphics[width=0.85\textwidth,angle=0]{Fornasier_fig_4.png} \caption{Short-lived jet identified in OSIRIS NAC images at 6h48-6h50 on 30 August 2015 (jet 71 in Table~\ref{all_jets}). This sequence captures the beginning, peak, and end of the transient event, which lasted for approximately 90 seconds.} \label{jet30aug} \end{figure*} Several jets periodically originated from the same locations inside cavities or alcoves (black circles in Fig.~\ref{map}), especially in Wosret and Bes. The walls of these cavities cast shadows that allow the recondensation of volatiles. Evidence of exposed water ice has indeed been found inside them (see, e.g., Figs.~\ref{jet30aug}, ~\ref{jet30aug_anal}, and ~\ref{jet30aug_18h53}). The latitude and longitude positions of the jets departing from cavities are listed in Table~\ref{all_jets}.
The errors indicate the range in longitude and latitude associated with periodic or close-by jets departing from these regions. These structures were active in several sequences, up to 61 times for cavity {\it A} in Wosret, and 29-47 times for cavities {\it A} and {\it B} in Bes. \\ While the perihelion sequence has the fewest observed jets per sequence (Table~\ref{obs}), it includes the most spectacular and brightest event (jet 8 in Table~\ref{all_jets}). It originates from the Anhur region (Fornasier et al., 2017), and its intensity surpasses that of all other jets (Vincent et al., 2016a). In the observations investigated here, the activity peak, defined as the highest number of jets per sequence, occurs on 30 August 2015 (Table~\ref{obs}). This agrees with the results on the entire cometary activity as observed from the ground and from other Rosetta instruments, which reported an activity peak of 67P approximately two weeks after perihelion (Snodgrass et al., 2016). The highest water production rate found by ROSINA occurred 18-22 days after perihelion (Hansen et al., 2016). Bockelee-Morvan et al. (2016) reported an abrupt increase in the water production of 67P six days after perihelion from coma observations with the VIRTIS instrument. The coma observations immediately after perihelion with VIRTIS also revealed an increase by a factor of 2 in the CO$_2$, CH$_4$, and OCS abundances relative to water. Bockelee-Morvan et al. (2016) attributed this post-perihelion activity to the sublimation of volatile-rich layers near the surface. The exposure of volatile-rich layers close to perihelion was also reported by Fornasier et al. (2016) from OSIRIS observations of the nucleus colors and spectrophotometry. \\ The post-perihelion activity peak is due to the thermal lag and the low thermal inertia of the nucleus surface layers (10-30 or 10-50 J~K$^{-1}$~m$^{-2}$~s$^{-0.5}$, according to Schloerb et al. (2015) and Gulkis et al. (2015), respectively). VIRTIS spectrometer data showed that dust-layered areas have a low thermal inertia ($I$), while the rougher consolidated terrain revealed a higher thermal inertia, $I \ge$ 50 J~K$^{-1}$~m$^{-2}$~s$^{-0.5}$ (Leyrat et al., 2015), such as the Abydos landing site, whose thermal inertia is 85$\pm$35 J~K$^{-1}$~m$^{-2}$~s$^{-0.5}$, as determined by in situ measurements with the MUPUS instrument on board the Philae lander (Spohn et al., 2015). These results suggest that the nucleus has a low thermal conductivity and that it is a highly porous body with a subsurface layer of dust and ice that locally has a high compressive strength (Spohn et al., 2015). \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_5.jpg} \caption{Analysis of the short-lived jet identified in OSIRIS NAC images at 6h48-6h50 on 30 August 2015 (jet 71 in Table~\ref{all_jets}). Top left: RGB image (composed from the filters centered at 882, 649, and 480 nm). The red square indicates the zoom into the area presented in Fig.~\ref{jet30aug}. Bottom left: Image acquired with the F22 filter. Five selected ROIs are superposed (the black circle shows the dark terrain, the red star represents the jet, and the blue asterisk, green triangle, and magenta square indicate different bright patches.
Right plot: $I/F$ factor (given at a phase angle of 70$^{\circ}$) and relative reflectance (normalized at 535 nm) of the five selected ROIs.} \label{jet30aug_anal} \end{figure*} Figure~\ref{local_time} shows the distribution of all the observed jets as a function of the comet local time, evaluated on a 24-hour basis. The local time was computed using the Rosetta NAIF-SPICE ESA kernels (Acton et al., 2016), which include all the geometric information about the spacecraft and the positions of the comet and the Sun, assuming a rotational period of the comet of 12.4047 hours (Mottola et al., 2014). The illumination is computed from the local time of a given position, disregarding the topography, that is to say, we did not consider mutual shadowing. At the top of Figure~\ref{local_time} (panel A), we present the distribution of all jets per local time from sunrise, normalized by the length of day (i.e., the time from sunrise to sunset). As the jets cover different latitudes over four months, the length of day strongly depends on latitude and epoch; it was therefore computed for each source, counted once in Figure~\ref{local_time}, for a given time. This normalization was needed to present the active sources in the same cometary time-frame with respect to their illumination time. In this way, sunrise corresponds to zero and sunset to one in panel A of Fig.~\ref{local_time}. In this plot, the majority of the jet sources are active during the cometary afternoon, and few events take place around midnight. This behavior may be explained by the thermal lag needed for the heat to penetrate to the layer of subsurface volatiles and activate sublimation. \\ We also report in Fig.~\ref{local_time} (panels B-E) the distribution of jet sources per latitude range as a function of the time since sunrise. In these plots we did not normalize by the length of day, but we indicate the approximate sunset time for three different dates. For the regions close to the south pole (panel B in Fig.~\ref{local_time}), which are always illuminated during the time-frame we considered (we set the sunrise time to zero in this case), the majority of the sources are active after midday. The medium-to-high southern latitudes (panel C), which mostly correspond to active sources in the Bes, Anhur, and Khepry regions, show a bimodal distribution with two activity peaks, one in the morning and one in the afternoon. The equatorial southern sources (panel D), which are mostly located in the Wosret region on the small lobe, also display a bimodal distribution, but with different peaks: one at night, a few hours before dawn, and one during sunset. Conversely, the equatorial northern sources (panel E) display most of the activity about 5-7 hours after sunrise and show no events at sunset or during the night. \\ The activity peaks in the afternoon, close to sunset, or during the night may be explained by the thermal lag needed to activate the sublimation of subsurface volatiles. Sunset jets have previously been observed, for instance, in the Ma'at region (Shi et al., 2016), as have nighttime events (Knollenberg et al., 2016). Conversely, more than half of the 34 outbursts observed by Vincent et al. (2016a) during the 67P summer occurred at dawn or in the early morning. This was interpreted as a consequence of rapid temperature variations that cause the surface to crack. For the fainter jets we report here, only the sources at medium-to-high southern latitudes (panel C in Fig.~\ref{local_time}) present a maximum close to sunrise and the early morning.
In this plot, the majority of the jet sources are active during the cometary afternoon, and few events take place around midnight. This behavior may be explained by the thermal lag needed to penetrate the layer of subsurface volatiles and activate sublimation. \\ We also report in Fig.~\ref{local_time} (panels B-E) the distribution of jet sources per latitude range as a function of time since sunrise. In these plots we did not normalize by the length of day, but we indicate the approximate sunset time for three different dates. For regions close to the south pole (panel B in Fig.~\ref{local_time}), which are always illuminated during the time-frame we considered (we set the sunrise time to zero in this case), the majority of the sources are active after midday. The medium-to-high southern latitudes (panel C), which mostly correspond to active sources in the Bes, Anhur, and Khepry regions, show a bimodal distribution with two activity peaks, one in the morning and one in the afternoon. The equatorial southern sources (panel D), which are mostly located in the Wosret region on the small lobe, also display a bimodal distribution, but with different peaks: one at night, a few hours before dawn, and one during sunset. Conversely, equatorial northern sources (panel E) display most of their activity about 5-7 hours after sunrise and show no events at sunset or during the night. \\ The activity peaks in the afternoon, close to sunset, or during the night may be explained by the thermal lag needed to activate the sublimation of subsurface volatiles. Sunset jets have previously been observed, for instance, in the Ma'at region (Shi et al., 2016), as have nighttime jets (Knollenberg et al., 2016). Conversely, more than half of the 34 outbursts observed by Vincent et al. (2016a) during the 67P summer occurred at dawn or in the early morning. This was interpreted as a consequence of rapid temperature variations that cause the surface to crack. For the fainter jets we report here, only sources at medium-to-high southern latitudes (panel C in Fig.~\ref{local_time}) present a maximum close to sunrise and early morning. The sublimation of recondensing frost or ice during the short summer night, which is periodically visible on the surface close to perihelion passage (Fornasier et al., 2016), may be an alternative explanation for some of the morning jets observed in the Bes-Anhur regions at these latitudes. \subsection{Short-lived jets} \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_6.png} \caption{Analysis of images obtained on 30 August 2015 at 18h51-18h54. Top left: RGB color image (composed from filters centered at 882, 649, and 480 nm) showing a red jet (jet 123 in Table~\ref{all_jets}) departing from a bright spot, as well as other faint jets that are indicated by the arrows. In particular, the cavity seen to be active at 6h49 on the same day still shows a very faint jet (jet 71 in Table~\ref{all_jets}), indicated by the red arrow. Bottom left: Zoom into the region around the red jet in RGB colors produced with filters centered at 931, 649, and 480 nm; the faint jet departing from the 6h49 source is visualized better here, as is the fainter flux of the red jet compared to the RGB in the top panel (the acquisition order of the filters was 269, 360, 743, 701, 480, 535, 649, 989, 931, 805, and 882 nm). Bottom center: Image with the symbols related to the five ROIs. Right panel: Relative reflectance, normalized at 535 nm, vs. wavelength, and the reflectance at phase = 70$^{\circ}$ of the five selected ROIs (the red circle shows the dark terrain of the comet, the red star represents the red jet highlighted in the zoom, the blue asterisk shows a bright patch in Khonsu at which activity was previously observed, and the green triangle and magenta square indicate two different bright patches).} \label{jet30aug_18h53} \end{figure*} Transient events with short lifetimes (shorter than two minutes) have been detected for the first time thanks to the unprecedented spatial and temporal coverage of the OSIRIS observations. The best example is a faint jet detected in the Bes region in images acquired on 30 August 2015, at UT 6h48-6h50 (Fig.~\ref{jet30aug_anal}, jet number 71 in Table~\ref{all_jets}). In this color sequence, the area hosting the jet (located precisely at longitude -140.6$^{\circ}$ and latitude -81.0$^{\circ}$, and indicated by the yellow rectangle in Fig.~\ref{jet30aug} and by the red rectangle in Fig.~\ref{jet30aug_anal}) is not directly illuminated by the Sun at the time of the observations. This area is inactive in the first two images of the sequence (Fig.~\ref{jet30aug}), which were both acquired with the F15 filter centered at 269 nm. The activity then starts in the third image and reaches its peak about 25 seconds later (in the image acquired with the F27 filter centered at 700 nm), after which the intensity progressively decreases, with almost no activity in the last two images of the sequence. We thus estimate its total duration to be about 95 s. As the flux changed during the sequence (which lasted for about 140 s), the jet spectrophotometry cannot be used to deduce information about the possible composition of the ejected material. The jet is represented by the red star in Fig.~\ref{jet30aug_anal}. At its peak, the jet covers a projected area on the nucleus of about 20 pixels, corresponding to 1150 m$^2$. \\ Close to this jet, we observed a patch (represented by the blue asterisk in Fig.~\ref{jet30aug_anal}) that is 80\% brighter than the dark terrain (DT) of the comet.
This patch is located in the Bes region at longitude -118.3$^{\circ}$ and latitude -81.8$^{\circ}$. Its spectral slope is flatter than that of the comet DT. Previous studies of the nucleus of 67P have proven that the compositions of regions with this spectral behavior (i.e., relatively blue) include some water ice mixed with the comet DT (Pommerol et al., 2015; Fornasier et al., 2015, 2016, 2017; Barucci et al., 2016; Deshapriya et al., 2018; Filacchione et al., 2016a; Oklay et al., 2016, 2017). Two other bright patches are shown in Fig.~\ref{jet30aug_anal}, one in the Bes region (green triangle, longitude -119.2$^{\circ}$ and latitude -69.0$^{\circ}$), and one in the Khonsu region (magenta square, longitude -163.9$^{\circ}$ and latitude -13.2$^{\circ}$). The patch in Khonsu is three times brighter and spectrally flatter than the comet DT, indicating the exposure of some water ice. Its position corresponds to the location where several jets were identified in images obtained on 1 August 2015 (jet 138 in Table~\ref{all_jets}), showing activity for about three hours. This means that the water ice observed here was probably freshly exposed after these activity events. The source area of jet 71 was repeatedly active during the perihelion passage, as faint jets were observed there 12 times between June and October 2015 (see jet 71 in Table~\ref{all_jets} for more details). An example of this periodic activity is shown in Fig.~\ref{jet30aug_18h53}, where a very faint jet, indicated by the red arrow, is observed to depart from the same position as the short-lived event observed 12 hours before. The periodic nature of jets, that is, the same feature being observed from one rotation to the next, has been noted and reported by Vincent et al. (2016a). Some other short-lived jets are observed in this image: a red jet in the Geb region (jet 123 in Table~\ref{all_jets}, indicated by the red star in Fig.~\ref{jet30aug_18h53} and in the zoom into the RGB image in the bottom left panel), whose activity starts in the images acquired with the F71 filter (989 nm) and reaches its maximum in the last image of the sequence, acquired with the F41 filter (it therefore lasted longer than 25 seconds, and its duration is probably comparable to that of the jet at 6h49); and two faint jets, indicated by the blue arrows in Fig.~\ref{jet30aug_18h53}, departing from two cavities in the Wosret region that are seen to be periodically active (cavities A and B, numbered 182 and 183 in Table~\ref{all_jets}). Again, we cannot constrain the potential composition of these jets because of their short duration and flux variability in the different filters. The bright patches near the active regions (magenta square and green triangle in Fig.~\ref{jet30aug_18h53}) look slightly bluer in the near-infrared (NIR) region; this is consistent with the exposure of some water ice. The brightest spot is found in the Khonsu region (blue asterisk in Fig.~\ref{jet30aug_18h53}), at the same position as previously investigated in Fig.~\ref{jet30aug_anal}, showing that water ice exposed at the surface survives during the cometary day. Moreover, several exposures of water ice were later observed in this region, from January 2016, and they survived for several months (Deshapriya et al., 2016; Hasselmann et al., 2018).
\begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_7.png} \caption{Left: Slope of the spectra and RGB images from data obtained on 1 August acquired at 15h43, showing a jet in the shadows that departs from the Sobek region (jet 177 in Table~\ref{all_jets}). Right: Relative reflectance and I/F of the five ROIs. The I/F of three positions on the jet (red star, cyan asterisk, and magenta square) is not corrected for the disk function because incidence and emission angles are not reliable in shadowed regions.} \label{jet1aug} \end{figure*} \\ Other examples of short-lived jets observed on 30 August 2015 are found in the Anhur region: two collimated jets reported in Fornasier et al. (2017; see Fig.~\ref{30aug2015}, panel at 12h21), which were active for about 50-70 s (jets 17 and 18 in Table~\ref{all_jets}); a jet observed at 8h09 that lasted for about 58 seconds (jet 13, Fig.~\ref{30aug2015}); and a jet observed at 22h34 that lasted for about 72 seconds (jet 20, Fig.~\ref{30aug2015}) and departed from an area in shadow. A few others on the same day are observed in Anuket, Bes, and Imhotep, and details are reported in Table~\ref{all_jets} (jets 36, 89, and 152). \\ Other short-lived jets are reported in Table~\ref{all_jets} and were observed on 1 August (jet 74, Bes region), 26 July (jets 185 and 188 in Wosret, jet 4 in Anhur), 27 June (jet 182 in Wosret, jet 43 in Atum), and 11, 20, and 31 October 2015 (jets 98, 107, and 108 in Bes, jet 183 in Wosret). We note that some jets have blue, green, or red apparent colors because the activity was captured with only a few filters of a color sequence, such as the {\it red} jet in Geb described above. \subsection{Notable jets and outbursts} \begin{figure*} \centering \includegraphics[width=0.8\textwidth,angle=0]{Fornasier_fig_8top.png} \includegraphics[width=0.8\textwidth,angle=0]{Fornasier_fig_8bottom.png} \caption{Top: Nine of the 11 images of the sequence acquired on 1 August 2015 starting at 23h55, showing the flux variation over time of a jet with two close sources (jet 178 in Table~\ref{all_jets}). The I/F flux acquired with the F16 filter (centered at 360 nm) has been doubled to be correctly shown with the given intensity scale. Bottom left: RGB images showing the double jet. The inset shows a zoom of the active region (in the F22 filter) with the five selected ROIs along the jet and on the comet nucleus (the white circle on the left side of the jet). Bottom right: Spectrophotometry and I/F of the five ROIs (the nucleus DT is represented by a black circle). The horizontal line approximately in the middle of the RGB image is a residual of the combination of image subunits (an individual coregistration of three subregions of the full field of view was needed to improve the coregistration) and is an artifact.} \label{jet1aug_double} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fornasier_fig_9.png} \caption{Top: So-called perihelion outburst (jet 8 in Table~\ref{all_jets}) from two color sequences acquired on 12 August 2015 starting at 17h20 and 17h50. Bottom right: RGB color image (composed from filters centered at 882, 649, and 480 nm) and image of the selected ROIs. The color difference at the left edge of the outburst in the RGB image arises because the comet rotated between the three exposures.
Bottom left: Spectrophotometry of the nucleus and along the collimated component of the outburst.} \label{perihelio} \end{figure*} \subsubsection{Two outbursts in Sobek} Some of the observed jets or outbursts stand out particularly strongly. Two spectacular events from 1 August 2015 are shown in Figs.~\ref{jet1aug} and ~\ref{jet1aug_double}, at 15h43 and 23h55 (jets 177 and 178 in Table~\ref{all_jets}), and both originate from the Sobek region. The 15h43 outburst has been reported by Vincent et al. (2016a) and displayed a broad plume (type B in their classification of transient events). Its intensity was about 11\% of that of the perihelion outburst of 12 August 2015, which is the brightest event reported in their list. These two outbursts took place some days after the discovery of a resolved boulder of about 0.8 m that orbited the comet and was observed at only 3.5 km from the Rosetta spacecraft (Fulle et al., 2016a, see their Fig. 7). This indicates a progressive increase in the activity, which is able to lift not only submillimeter- to centimeter-sized grains from the surface, but also meter-sized boulders. \\ Here we show for the first time the outburst colors and spectrophotometry (Fig.~\ref{jet1aug}), as well as the spectral slope of the nucleus and of the near coma. In contrast to the short-lived jets discussed previously, these outbursts lasted longer than the observing sequence, and we do not observe a fluctuation of the fluxes with time for the different filters. This means that their spectrophotometric properties are reliable. \\ Figure~\ref{jet1aug} clearly shows that the 15h43 outburst was spectrally bluer than the nucleus beyond 650 nm. Three areas in the very inner coma were monitored along the jet, and all show a negative slope in the 650-1000 nm range. This behavior may be attributed to grains that have an icy composition and/or a small particle size. On the surface of the comet, terrains with a blue color beyond 650 nm have been observed in different regions, for example on Hapi (Fornasier et al., 2015). Infrared spectroscopy acquired with VIRTIS demonstrated this to be cometary dark material enriched in water ice (de Sanctis et al., 2015, Barucci et al., 2016). Icy particles in the ejecta are expected, as the signature of water ice was reported in the July 2016 outburst (Agarwal et al., 2017), and the July 2015 cliff collapse in the Aswan region caused an outburst and exposed fresh icy material on the surface with an albedo above 40\% (Pajola et al., 2017). \\ The spectrally blue color of the outburst may also be attributed to fine particles of micron or sub-micron size. With the OSIRIS data alone, we cannot constrain the grain size. However, we may deduce that the dominant size of the ejected particles is probably not much smaller than the incoming wavelength (i.e., not $\ll$ 0.3-0.6 $\mu$m): otherwise, the grains would act as Rayleigh-type scatterers, and we would have observed a higher flux and a negative spectral slope in the 260-600 nm range. \\ Other outbursts, such as the July 2016 dust plume, were also found to be composed of refractory and icy grains (Steffl et al. 2016; Agarwal et al. 2017), as the signature of icy particles was detected with the ALICE UV spectrometer. In particular, the models of the UV spectra acquired with ALICE made it possible to constrain the grain sizes: the icy component consisted of submicron grains, while refractories are ejected as larger grains with sizes of several hundred microns (Agarwal et al., 2017).
Other outbursts have been observed by VIRTIS, such as the one occurring on 13-14 September 2015 that was reported in Bockelee-Morvan et al. (2017). They measured large bolometric albedos, which indicate bright grains in the ejecta of silicate or icy composition. The authors also measured a negative spectral slope in the IR region for the outburst. A slope like this is associated with high color temperatures (up to 630 K) of the ejected material, which the authors attributed to very fine dust particles (Bockelee-Morvan et al., 2017). We therefore conclude that the outburst we present here may include some icy grains mixed with dust particles, and/or grains of relatively small size, but larger than $\sim$ 0.2 $\mu$m. Another spectacular event originating in the Sobek region is the double-component jet shown in Fig.~\ref{jet1aug_double} (jet 178 in Table~\ref{all_jets}), which was located at longitude 26.5$^{\circ}$ and latitude -14.7$^{\circ}$, at the boundary between the Sobek and Bastet regions. Individual images and the RGB composite image show that the jet has two individual sources: one that is constantly active during the whole sequence (the left side of the jet, indicated by the red star in Fig.~\ref{jet1aug_double}), which lasted for longer than 150 s, and one that became active at the end of the sequence, when images were acquired with the four filters covering the 800-1000 nm range. This second source thus appears red, as underlined by the spectrophotometry of the blue symbol and magenta square in Fig.~\ref{jet1aug_double}. The component of the jet that was continuously active during the sequence is spectrophotometrically similar to the nucleus DT up to 650 nm, with a flatter behavior at longer wavelengths. As for the outburst described above, we interpret this as being due to small icy grains mixed with dust. \subsubsection{Perihelion outburst from Anhur} \begin{figure*} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fornasier_fig_10.jpg} \caption{Left panel: WAC image acquired on 12 May 2016 at UT 03:43 with the F12 filter centered at 630 nm, showing an outburst in the Bes region at the boundary with Imhotep (jets 114 and 115 in Table~\ref{all_jets}). Middle panel: RGB image composed from filters centered at 882, 649, and 480 nm from NAC images acquired on 23 April 2016. Right panel: Orange filter image corresponding to the area inside the white rectangle of the 23 April 2016 observations. The contrast is adapted to show details within the shadowed areas. The red rectangle indicates the source area of the jets observed on 12 May 2016.} \label{jet_desh} \end{figure*} A similar spectral behavior is also observed for the spectacular outburst of 12 August 2015 (jet 8 in Table~\ref{all_jets}, Fig.~\ref{perihelio}). This is called the {\it \textup{perihelion outburst}} as it occurred a few hours before the comet reached perihelion. It was first reported by Vincent et al. (2016a), who estimated a total luminosity of 1.18$\times$10$^{13}$ W/nm at 649.2 nm and an ejected mass on the order of $\sim$ 100 tons, and who deduced that the source lies in the Anhur region. Lin et al. (2017) estimated for this event a mass ejection rate of $>$ 19 kg/s. Here we present the spectrophotometry of the ejecta from the color sequence acquired around UTC 17:21, that is, close to the activity peak. The outburst has a complex shape that includes a narrow collimated jet.
Very faint activity was reported in images acquired at UTC 17:06 and 18:06 with the orange F22 filter alone (Vincent et al. 2016a). The four selected ROIs at the source and along the collimated jet all show a spectrally flat behavior beyond 650 nm, which we attribute to a mixture of dust and icy grains. \subsubsection{Resolved outburst in Bes from May 2016 data} \begin{figure*} \centering \includegraphics[width=0.9\textwidth,angle=0]{Fornasier_fig_11.jpg} \caption{Left panel: Anhur region as seen on 10 February 2016, UT 7h14. The locations of the jets identified close to perihelion passage are superimposed. Resolution: 92 cm/px. The symbols represent the corresponding positions of several jets: the star indicates the perihelion outburst, and the box indicates the uncertainties in the position of the jet sources; the circle shows a transient event on 26 July 2015, 15h10 (jet 4 in Table~\ref{all_jets}); squares show the double type-C outbursts on 30 August 2015 at 12h21 (jets 17 and 18 in Table~\ref{all_jets}), with the large square corresponding to the brighter of the two jets (jet 18); and the diamond indicates a transient event on 30 August 2015 at 8h09 (jet 13 in Table~\ref{all_jets}). Right panel: Geomorphological maps of the Anhur region from Fornasier et al. (2017).} \label{Anhur} \end{figure*} Most of the events observed close to perihelion look faint compared, for instance, to the outbursts reported by Vincent et al. (2016a). As the sequences were devoted to characterizing the nucleus, the exposure time was short, and several of the faint events have a low signal-to-noise ratio. Moreover, for safety reasons, Rosetta was far away from the nucleus, and the spatial resolution was therefore relatively poor (several m/px). During the last months of observations of 67P (i.e., May-September 2016), Rosetta came closer and closer to the nucleus and caught a few outbursts at high resolution. This provides a glimpse of how the faint jets observed at perihelion would have looked had Rosetta been closer to the nucleus. A nice example is the 3 July 2016 outburst, which departed from an area of the Imhotep region and was observed simultaneously with several Rosetta instruments (see Agarwal et al. (2017) for a detailed study of this event). Another resolved outburst, departing from the Bes region at the boundary with Imhotep, appeared in a single WAC image acquired on 12 May 2016 at 03h43 (jets 114 and 115 in Table~\ref{all_jets}, and Fig.~\ref{jet_desh}, left panel). This image was acquired when the comet was at a heliocentric distance of 2.98 au, at a phase angle of 96.7$^{\circ}$, and with Rosetta at an altitude of 8.24 km, resulting in a spatial resolution of 0.82 m/px with the WAC camera. The outburst departs from two closely located sources at longitude [131.32$^{\circ}$,132.83$^{\circ}$] and latitude [-65.62$^{\circ}$,-64.89$^{\circ}$]. Unfortunately, this region was always in shadow during the high-resolution observations acquired with OSIRIS. Using the derived coordinates, we located the source of the jets in an NAC color sequence acquired on 23 April 2016, at UT 15h06, when Rosetta was 28.5 km from the nucleus surface, resulting in a spatial resolution of 0.55 m/px (Fig.~\ref{jet_desh}, middle panel). The geometry of the observations was different, the phase angle was higher (113$^{\circ}$), and the source of the jets was in shadow. However, the high dynamic range of the OSIRIS cameras allows us to study the morphology of the terrain within the shadowed regions by increasing the contrast.
The source of the plume is located close to a scarp about 40 m high (Fig.~\ref{jet_desh}, right panel).\\ The surface brightness of the plume was about 3.5 times higher than that of the surrounding regions of the nucleus, indicating the presence of bright material, likely icy grains. Within the shadow cast by the plume, the impinging light is attenuated by up to 50\%. This implies that the plume is optically thick, with an estimated optical depth of $\sim$ 0.65 (a transmitted fraction $e^{-\tau} \simeq 0.5$ corresponds to $\tau \simeq \ln 2 \approx 0.7$). We calculated the filling factor $f$, that is, the fraction of radiance scattered by an optically thin dust coma, to obtain the instantaneous total dust mass. The average radiance factor $\overline{RADF}$ of the dust cloud was integrated in a circle with a radius of 50 pixels, and the average surface radiance factor, calculated inside an annulus external to the event, was subtracted. The integrated dust filling factor in an aperture of $\pi R_{pixel}^{2}$ is expressed by (Knollenberg et al., 2016) \begin{equation} f=\frac{\overline{RADF}}{w_{\lambda}\,(p(g)/p(0))} ,\end{equation} where $w_{\lambda}$ is the single-scattering albedo from Fornasier et al. (2015), estimated from Hapke modeling (Hapke, 2012) of the surface scattering curve, $p(g)$ is the particle phase function from Bertini et al. (2017), and $p(0)$ is the extrapolation of this phase function to phase $g$ = 0. Assuming a differential grain size distribution $n(r)$ described by a power law $\propto r^{h+1}$, and assuming spherical grains, the expression for the integrated dust mass is \begin{equation} M=2\rho\cdot f\cdot\pi R^{2}\,\frac{\int n(r)\,r^{2}\,dr}{\int n(r)\,r\,dr} ,\end{equation} where $R$ is the aperture radius in meters and $r$ is the grain radius. As the power-law index $h$ is generally unknown, we applied the indices estimated by Agarwal et al. (2017) from 10 $\mu m$ to 1 mm grain size in their multi-instrumental study of the July 2016 outburst. Therefore, we integrated Eq. 4 with $h=-2.54$ from 10 to 150 $\mu m$, with $h=-3.0$ from 150 to 500 $\mu m$, and with $h=-6.9$ from 500 $\mu m$ up to 1 mm, to finally derive the dust mass associated with the observed surface brightness. \\ We thus obtained a filling factor of 0.256, and an ejected dust mass for the given image in the range of 700-2220 kg for a grain bulk density $\rho$ ranging from 250 to 795 $kg/m^{3}$ (Fulle et al., 2016b). For comparison, Agarwal et al. (2017) estimated an equivalent mass of 920$\pm$530 kg (in a given image, and for the same density range) for the July 2016 outburst, and a total ejected mass of 6500-118000 kg for a duration of between 14 and 68 min. \\ Estimating the total mass ejected by the jets reported here is beyond the scope of this paper; moreover, the effective duration of most of the jets is unknown. However, it seems reasonable to assume that, on average, the majority of the jets observed at perihelion, excluding the outbursts, ejected an instantaneous mass on the same order as that of the May 2016 event reported above, that is, about one to a few thousand kilograms.
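To make the piecewise integration concrete, the minimal Python sketch below evaluates the two equations above with the numbers quoted in the text. It is an illustration of the method rather than the pipeline used for this work: the function names are hypothetical, and the matching of the power-law amplitudes at the bin edges and the aperture geometry are our own assumptions (the text does not fully specify them), so the printed masses should be read as order-of-magnitude values only.
\begin{verbatim}
import math

def filling_factor(radf_mean, w_lambda, phase_ratio):
    # f = <RADF> / (w_lambda * p(g)/p(0)), with <RADF> background-subtracted
    return radf_mean / (w_lambda * phase_ratio)

def power_int(a, b, p):
    # Analytic integral of r**p over [a, b]
    if abs(p + 1.0) < 1e-12:
        return math.log(b / a)
    return (b**(p + 1.0) - a**(p + 1.0)) / (p + 1.0)

def size_moment_ratio(bins):
    # int n(r) r^2 dr / int n(r) r dr for piecewise n(r) = A r^(h+1);
    # amplitudes are matched at the bin edges for continuity (our
    # assumption), and the overall scale cancels in the ratio.
    num = den = 0.0
    amp, prev_h = 1.0, None
    for r_min, r_max, h in bins:
        if prev_h is not None:
            amp *= r_min**(prev_h - h)
        num += amp * power_int(r_min, r_max, h + 3.0)   # n(r) * r^2
        den += amp * power_int(r_min, r_max, h + 2.0)   # n(r) * r
        prev_h = h
    return num / den

# Grain radii in meters, power-law indices from Agarwal et al. (2017)
bins = [(10e-6, 150e-6, -2.54), (150e-6, 500e-6, -3.0),
        (500e-6, 1e-3, -6.9)]
f = 0.256            # filling factor derived in the text
R = 50 * 0.82        # aperture radius: 50 px at 0.82 m/px (WAC)
for rho in (250.0, 795.0):   # bulk density range (Fulle et al., 2016b)
    mass = 2.0 * rho * f * math.pi * R**2 * size_moment_ratio(bins)
    print("rho = %4.0f kg/m^3  ->  M ~ %6.0f kg" % (rho, mass))
\end{verbatim}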
\section{Morphology of active areas} In this section we describe the morphology of some of the southern hemisphere regions (Anhur, Sobek, Khonsu, Bes, and Wosret) that have been found to host several sources of the jets reported in this study. \subsection{Anhur} The Anhur region, which is illuminated for a relatively short interval of the cometary orbit, close to the perihelion passage, experiences strong thermal effects that result in a high degree of erosion. Anhur is also highly active, and it is the source of several jets, as reported here and in previous papers (Vincent et al., 2016a, Fornasier et al., 2017). This study identified 26 distinct jets originating from the Anhur region during July-October 2015, as reported in Figs.~\ref{Anhur} and ~\ref{Anhur_alljets} and in Table~\ref{all_jets}. \\ The perihelion outburst source is not as precisely located as the other jets (see Figs.~\ref{Anhur} and ~\ref{Anhur_alljets}, red star and rectangle in a dashed red line), and our best estimate of its position lies inside a canyon-like structure with a pit and fine-particle deposits, where several exposures of water ice have been reported (Fornasier et al., 2017). Boulders and different types of deposits can be seen on the strata within or close to the jet site (Fig.~\ref{Anhur_alljets}); they likely originated from transport from steep slopes such as the nearby cliffs (El-Maarry et al. 2015; Pajola et al. 2015; Lee et al., 2017), or from in situ degradation (Lee et al., 2017) due to sublimation and gravitational falls. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,angle=0]{Fornasier_fig_12.jpg} \caption{Anhur region seen at high spatial resolution (92 cm/px) on 10 February 2016 at 7h14. The locations of the 26 individual activity sources identified close to perihelion passage are superimposed, as well as a few from 2016.} \label{Anhur_alljets} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.75\textwidth,angle=0]{Fornasier_fig_13.png} \caption{RGB images in false colors of the Anhur region acquired on 25 June 2016 at UT 11:50, from a spacecraft altitude of 17.9 km, and with a resolution of 35 cm/px. The color images are produced using the filters centered at 480 nm, 649 nm, and 882 nm. The heliocentric distance of the comet was 3.27 au.} \label{Anhur_frost} \end{figure*} The sites of the 30 August 2015 transient events originating from the Anhur region are located on stratified terrain, at the base of the scarps (Fig.~\ref{Anhur_alljets}). The sources of the transient Anhur events at 12h21 (jets 17 and 18 in Table~\ref{all_jets}, see Figs.~\ref{Anhur} and ~\ref{Anhur_alljets}), as well as that of the 22h34 jet (jet 20 in Table~\ref{all_jets}, see Fig.~\ref{30aug2015}), correspond to a talus formed at the base of a scarp, at the boundary with the Khepry region. However, jets are also found on smooth terrains with fine-particle deposits, such as the 8h09 event (Fig.~\ref{Anhur_alljets}, jet 13 in Table~\ref{all_jets}) or the optically thick plume observed on 27 January 2016 that was reported by Fornasier et al. (2017). Anhur also has an active pit, where a jet was observed on 26 July 2015 (jet 4 in Table~\ref{all_jets}, Fig.~\ref{Anhur_alljets}), as well as in previous observations from June 2015 reported in Fornasier et al. (2017). The Anhur region is characterized by elongated canyon-like depressions in which cliffs expose sequences of strata, indicating pervasive layering. Local degradation and scarp retreats provide different types of deposits that partly cover the flat tops, terraces, and bottoms of these depressions (Fig.~\ref{Anhur}). In this region, two bright patches of about 1500 m$^2$ each were observed on a flat terrace: one in a smooth terrain in Anhur, and the other just nearby, inside the Bes region, at the boundary with Anhur (Fornasier et al., 2016, 2017). These bright patches were observed between 27 April 2015 and 2-7 May 2015, and they lasted for at least ten days.
Spectral modeling indicated a water ice abundance of 20-30\% mixed with the comet DT, which corresponds to a solid ice equivalent thickness of 1.5-27 mm (Fornasier et al., 2016). A few weeks before the detection of these exposed water ice patches, VIRTIS reported on 21-23 March 2015 the first and only detection of CO$_2$ ice, at the location of a patch entirely within the Anhur region (Filacchione et al., 2016b). In addition to these extended bright patches, evidence of exposed volatiles was observed in the Anhur region in several instances (Fornasier et al., 2017). This region is thus characterized by compositional heterogeneities on the scale of tens of meters and by volatile stratification, which give rise to a particularly fragile terrain. A new scarp, 150 m long and 10 m high, formed at the boundary of the Anhur and Bes regions around perihelion passage or shortly after, at the location of an extended bright patch (Fornasier et al., 2017). A close inspection of RGB color images of the Anhur region in 2016 reveals that water ice has been exposed in different locations in the form of tiny patches, or as frost hidden in the shadows cast by the canyon-like structure (Fig.~\ref{Anhur_frost}). Within the elongated depressions of Anhur, the deep strata of the large lobe of 67P, which are presumably enriched in volatiles, are exposed (Lee et al., 2017). An example of volatile frost and of exposure of water ice close to a boulder is shown in Fig.~\ref{Anhur_frost} from 25 June 2016 at UT 11h50. This observation was acquired at 35 cm/px resolution, at a comet heliocentric distance of 3.27 au outbound. This spectacular view clearly indicates frost formation inside a shadowed region in Anhur, as well as exposure of ice, confirming that this region of the nucleus is one of the most highly enriched in volatiles, as is also indicated by the high level of detected activity. \subsection{Sobek} The Sobek region is located at the intersection between the large and small lobes in the southern hemisphere. In contrast to the Hapi region, which is also located between the two lobes but in the northern hemisphere, Sobek does not have widespread fine-particle deposits. However, this region has a low gravitational potential, and it shows an agglomeration of boulders as well as localized fine deposits in its central area (Lee et al., 2017). The two spectacular outbursts observed on 1 August 2015 (jets 177 and 178 in Table~\ref{all_jets}, Figs.~\ref{jet1aug} and ~\ref{jet1aug_double}) are located at the foot of the scarp that separates Sobek from the Bastet and Hapi regions. The source of the 15h43 outburst is located on a gravitational accumulation deposit and is surrounded by two nearly vertical cliff walls on one side and the Anhur cliffs on the opposite side (see Fig.~\ref{Sobek}). The 23h55 event emerges from outcropping consolidated terrain. A total of six jets, including the two outbursts described above, originated from the Sobek region. \begin{figure*} \centering \includegraphics[width=0.9\textwidth,angle=0]{Fornasier_fig_14.png} \caption{Sobek region seen on 27 January 2016 at 18h20 with a spatial resolution of 1.28 m/px (on the left), and on 1 May 2016, 18h11, with a resolution of 0.32 m/px (on the right).
The cyan square and triangle represent the two Sobek outburst locations as identified on 1 August 2015 at 15h43 and 23h55, respectively (jets 177 and 178 in Table~\ref{all_jets}), while the red symbols denote some jets that departed from the nearby Anhur region (see Fig.~\ref{Anhur}).} \label{Sobek} \end{figure*} \subsection{Khonsu} \begin{figure} \centering \includegraphics[width=0.48\textwidth,angle=0]{Fornasier_fig_15.png} \caption{Khonsu region seen at high spatial resolution (1.28 m/px) on 27 January 2016 at 10h53. The identified jets are superimposed in red. The cluster of red points is related to the jet that was periodically active (jet 138 in Table~\ref{all_jets}). A full description of the jet properties and associated morphological changes is reported in Hasselmann et al. (2018).} \label{Khonsu} \end{figure} The Khonsu region, bounded by the Apis mesa, is dominated by outcropping consolidated terrain overlaid by some patches of fine-particle deposits, and it includes large boulders, a peculiar 200 m wide structure with three plate-shaped stacked features called the pancake feature, and evidence of layering (El-Maarry et al., 2016; Lee et al., 2017; Ferrari et al., 2018, see Fig.~\ref{Khonsu}). Several jets reported in Table~\ref{all_jets} originated from Khonsu (jets 137-146, see Fig.~\ref{Khonsu}). Two bright outbursts originated from source 138 (on 1 August at 10h51 and at 21h55-22h55, called event 7 by Vincent et al., 2016a), about one rotation apart. In particular, the source showed activity for about three hours starting at 10h51. Many faint jets were also observed on the same day. The source of these outbursts is found on a rugged slope, at the foot of a cliff (see Figs.~\ref{Khonsu} and ~\ref{khonsu_zoom}), above a flat dust bank where bright patches had been spotted at least six days before the outburst (examples in Fig.~\ref{khonsu_zoom}), suggesting that this area is relatively abundant in water. Shortly after perihelion passage, exposures of water ice were detected, as discussed previously and shown in Figs.~\ref{jet30aug_anal} and ~\ref{jet30aug_18h53}. The other jets reported in Fig.~\ref{Khonsu} originated mostly from fine-particle deposits. \\ Hasselmann et al. (2018) reported several morphological changes that were all connected to this active area: the appearance of three ice patches, the formation of three shallow cavities, the sublimation of two thick dust layers, and the appearance of a 50 m jumping boulder that moved there from a nearby region. Previously, El-Maarry et al. (2017) also reported a 30 m boulder rolling into the southern reach of the same dust bank, while Deshapriya et al. (2016) noted another boulder hosting a spot rich in water ice that lasted for about half a year. All these morphological changes took place during the southern hemisphere summer. \begin{figure*} \centering \includegraphics[width=0.95\textwidth,angle=0]{Fornasier_fig_16.jpg} \caption{Images showing, at different resolutions, bright and relatively blue patches in or near the source of the 1 August 2015 events in Khonsu (jet 138 in Table~\ref{all_jets}): 26 July 2015 at 08h48 (left), 28 January 2016 at 01h48 (middle), and 2 July 2016 at 07h57 (right). The rectangles indicate the source region of these events. } \label{khonsu_zoom} \end{figure*} \subsection{Bes} The Bes region is dominated by outcrops of consolidated terrain covered with deposits of fine materials.
Diamictons and gravitational accumulation deposits are also found in the region (see Fig. 9 of Lee et al., 2017). In the onion-like layering of the nucleus (Massironi et al. 2015, Penasa et al., 2018), Bes is located at a shallower structural level than Anhur; it also has mesas and high-slope ($> 35^{\circ}$) regions, and it is sculpted by staircase terraces. \\ Many transient events (55, see Table~\ref{all_jets}) originated within this region (Fig.~\ref{Bes}), including the short-lived bright plume at 6h49 on 30 August 2015 (jet 71 in Table~\ref{all_jets}) and the resolved plume observed in the May 2016 image (Fig.~\ref{jet_desh}, jets 114 and 115 in Table~\ref{all_jets}). Some of these events arose from active cavities or alcoves located below a cliff, in a terrain with pervasive fractures (see Fig.~\ref{Bes}):\\ $\bullet$ Cavity A: longitude -119.1$\pm$3.9$^{\circ}$, latitude -80.1$\pm$4.1$^{\circ}$, seen active in 29 sequences (jet 111 in Table~\ref{all_jets})\\ $\bullet$ Cavity B: longitude -119.1$\pm$3.9$^{\circ}$, latitude -68.7$\pm$2.5$^{\circ}$, seen active in 47 sequences (jet 112 in Table~\ref{all_jets})\\ $\bullet$ Cavity C: longitude -111.5$\pm$3.3$^{\circ}$, latitude -69.8$\pm$3.5$^{\circ}$, seen active in 10 sequences (jet 113 in Table~\ref{all_jets})\\ These cavities very likely host volatiles because localized bright patches of water ice were observed there in August 2015 (Figs.~\ref{jet30aug_anal} and ~\ref{jet30aug_18h53}). Long, linear fractures are visible across the cliff; they were probably formed by mechanical and/or thermal stresses. Small boulders are scattered on the surface below the cliff, while larger boulders are visible below an arc-shaped scarp (Fig.~\ref{Bes}).\\ In addition to cavities, the jet sources identified on Bes are also found on dust deposits, at the foot of scarps and cliffs, and on consolidated terrains. \begin{figure} \centering \includegraphics[width=0.47\textwidth,angle=0]{Fornasier_fig_17.png} \caption{Part of the Bes region seen at high spatial resolution (89 cm/px) on 10 February 2016 at 15h28. Some of the identified jets are superimposed. The green star represents the $\sim$95-second short-lived jet identified on 30 August 2015, at 6h49 (jet 71 in Table~\ref{all_jets}). The other symbols denote different transient events: the diamond shows the event on 31 October, 15h07, the upward-pointing triangle shows the event on 20 October at 7h23, and the downward-pointing triangle represents the event on 11 October at 22h41 (listed in Table~\ref{all_jets} as jets 108, 107, and 98, respectively).} \label{Bes} \end{figure} \subsection{Wosret} Wosret is a region on the small lobe of 67P, and it is dominated by outcropping consolidated terrain with pervasive fracturing. Fine-particle and gravitational accumulation deposits are observed together with several scarps and terraces arranged in staircase patterns that connect different strata (Lee et al., 2017). In Wosret, 33 sources were active in July-October 2015. In particular, several cavities or alcoves, that is, structures that cast shadows, were found to be repeatedly active close to perihelion, as shown in Fig.~\ref{Wosret}: \begin{itemize} \item Cavity A: located at longitude -28.7$\pm$3.5$^{\circ}$ and latitude -26.6$\pm$1.6$^{\circ}$, seen active in 61 color sequences (jet 182 in Table~\ref{all_jets}).
\item Cluster of B cavities: two or three closely spaced cavities located at longitude -34.8$\pm$3.6$^{\circ}$ and latitude -30.3$\pm$4.1$^{\circ}$, seen active in 33 color sequences (jet 183 in Table~\ref{all_jets}). \item Cavity C: located at longitude -40.7$\pm$1.6$^{\circ}$ and latitude -31.8$\pm$3.6$^{\circ}$, seen active in 16 sequences (jet 184 in Table~\ref{all_jets}). \item Cavity D: located at longitude -14.7$\pm$3.1$^{\circ}$ and latitude -25.2$\pm$1.3$^{\circ}$, seen active in 37 color sequences (jet 185 in Table~\ref{all_jets}). \item Cavity E: located at longitude -15.3$\pm$2.8$^{\circ}$ and latitude -36.4$\pm$0.9$^{\circ}$, less active than the other cavities, as only three activity events were observed (jet 186 in Table~\ref{all_jets}). \end{itemize} The Wosret cavities usually emit faint cometary jets, sometimes with a very peculiar morphology, as highlighted in the insets on the left side of Fig.~\ref{Wosret}. The events shown in the insets were brighter than the jets usually detected in these cavities; they occurred on 31 October at 20h49 (from cavity A, jet 182 in Table~\ref{all_jets}) and at 21h49 (from cavity B, jet 183 in Table~\ref{all_jets}), with a collimated and broad shape. These cavities, except for cavity A, were never observed at high spatial resolution and under good illumination conditions during the mission. Tiny water ice patches were observed in cavity A. \section{Discussion} This study demonstrates that several faint outbursts continuously contribute to the cometary activity at perihelion. They vary in duration and are sometimes extremely short (shorter than a few minutes). \\ Vincent et al. (2016b) reported that jets in the northern hemisphere arise mainly from rough terrains rather than smooth areas, and more specifically, from fractured walls. However, smooth areas also produce jets and jet-like features, as claimed by Shi et al. (2016, 2018). In the southern hemisphere, the jets and outbursts investigated here originated both from consolidated terrains (i.e., from scarps and cavities) and from smooth dust deposits that can sustain large boulders or fill niches or pit bottoms. Several processes are invoked to explain the activity events reported here for comet 67P and, more generally, for cometary nuclei: \begin{figure*} \centering \includegraphics[width=0.9\textwidth,angle=0]{Fornasier_fig_18.png} \caption{Wosret region seen at a spatial resolution of 1.28 m/px on 28 January 2016 at 05h33. The identified jets are superimposed. The symbols represent different locations: the cross indicates cavity D (jet 185 in Table~\ref{all_jets}), the asterisk shows cavity A (jet 182 in Table~\ref{all_jets}), the square represents the B cavities (jet 183 in Table~\ref{all_jets}), and the diamond indicates a transient event seen in images of 26 July 2015 at 14h10 (jet 188 in Table~\ref{all_jets}). } \label{Wosret} \end{figure*} \begin{enumerate} \item The main driver of jets is insolation coupled with local subsurface volatile enrichment and/or direct exposure of water ice at the nucleus surface. Owing to the complex morphology, different locations on the nucleus of 67P have varying diurnal illumination cycles. A study of the sublimation of water ice upon local sunrise is reported in Shi et al. (2018). The sublimation of water ice through the porous mantle, which Belton et al.
(2010) defined as type I jets, is the likely driver of the sources seen to be periodically active on comet 67P (e.g., the cavities listed in Table~\ref{all_jets} (jets 111-113 and 182-186), or the periodic jets reported in Vincent et al., 2016a). \\ The most active areas are those close to regional boundaries, which mostly correspond to cliff walls, or the insides of cavities or alcoves. These structures cast shadows that permit the recondensation of volatiles that rise from the subsurface during the cometary night, and partly of inner coma molecules that are backscattered to the nucleus surface (Davidsson \& Skorov, 2004; Crifo, 1987; Liao et al., 2018). Subsurface thermal lag (Shi et al., 2016), coupled with the low thermal inertia of the comet, results in the recondensation of volatiles at night or in terrains that are often covered by shadows. \\ Moreover, as the comet approached perihelion, its dust mantle became thinner (Fornasier et al., 2016), exposing the underlying layers that are enriched in volatiles and producing seasonal and diurnal color variations. Several instances of diurnal color changes and frost formation close to shadows have been observed; they were attributed to volatile recondensation during the cometary night (Fornasier et al., 2016, De Sanctis et al., 2015). Even far from perihelion, the complex local morphology coupled with the seasonal thermal lag permits local volatile recondensation, as shown in the Anhur region (Fig.~\ref{Anhur_frost}) at 3.3 au outbound. \\ Laboratory experiments on cometary analog mixtures have shown that a considerable fraction of sublimating ice can be redeposited at the surface instead of being released through the dust mantle (Sears et al., 1999). They have also shown how material stratification, separation of types of ices, and the release of trapped gases could occur near the surface of a comet, as has been observed on the nucleus of comet 67P (Fornasier et al., 2016, 2017; Filacchione et al., 2016b; Pajola et al., 2017). \item Episodic and explosive events may be associated with cliff collapse (Vincent et al., 2016b). An example is the July 2015 outburst reported in Pajola et al. (2017), which exposed an inner layer, enriched in volatiles, in which bright and bluer material survived for several months. Several jets presented in this study were found below or close to cliffs or scarps, and some of them may potentially be related to a past cliff collapse. For instance, the sources of most of the Khonsu and Anhur jets were rich in volatiles and contained a number of scattered boulders that may have originated from a past cliff collapse that exposed volatile-rich layers. Unfortunately, most of the southern hemisphere was not observable from Rosetta before March 2015, and we therefore do not have high-resolution images from before perihelion to investigate in detail the morphological changes associated with potential cliff collapses. However, we noted the formation of a new 140 m long scarp near the boundary of Bes and Anhur, as reported in Fornasier et al. (2017) and detailed in section 4.1, where exposure of ices and volatile stratification was reported before and after its formation (Fornasier et al., 2016, 2017; Filacchione et al., 2016b). Several jets have sources close to that scarp, but none of the events we reported here were directly associated with this cliff collapse and scarp formation, which took place sometime between August and December 2015.
\item Thermal stress produced the fractures that are ubiquitous on the surface of 67P. Fractures allow the heat wave to penetrate the underlying volatile-rich strata and may be the sources of jets, as discussed in Belton (2010) and in Bruck Syal et al. (2013). The jets located in an area of Bes characterized by long fractures (Fig.~\ref{Bes}) may be an example of this mechanism. \item Local jets and outbursts such as the May 2016 event (jets 114 and 115 in Table~\ref{all_jets}, Fig.~\ref{jet_desh}) or the July 2016 dust plume (Agarwal et al., 2017) may have been produced by a pressurized reservoir of volatiles below the surface (Knollenberg et al., 2016). The exothermic transition of water ice from the amorphous to the crystalline state following sudden exposure to sunlight can cause the volatile outflow and trigger an activity event. According to Agarwal et al. (2017), this mechanism can take place even near the surface. \item Sinkhole collapse is invoked as the source of active pits (Vincent et al., 2015), which would result in the exposure of fresh volatiles on the cavity wall and interior after the collapse. However, very few pits are observed in the southern hemisphere, and only one is seen to be active, in the Anhur region (see Fornasier et al., 2017, and Fig.~\ref{Anhur}). The paucity of pits in the southern compared to the northern hemisphere is probably related to the higher insolation and erosion rate of the former. \end{enumerate} Activity events on 67P are well localized on the southern hemisphere close to perihelion passage. In a similar manner, other comets observed by space missions showed activity departing from well-defined sources and not from the surface of the entire nucleus: only 10\% of the surface of 1P/Halley was estimated to be active during the Giotto observations (Keller et al., 1986), while the hyperactive comet Hartley 2 showed well-localized jets mostly originating from the ends of its elongated nucleus (A'Hearn et al., 2011) and a plume of icy grains above the smooth waist (Protopapa et al., 2014). Together with solar illumination, local compositional inhomogeneities are related to activity events. Several jet or outburst sources are located in or close to areas that are brighter and have colors that are relatively bluer than the dark terrain of the comet, indicating a local enrichment in volatiles that, once illuminated, sublimate. Comet 67P shows evidence of local heterogeneities in composition at different spatial scales. Three types of terrains, from the spectrally bluer and water-ice-enriched terrains to the redder ones, associated mostly with dusty regions, have been identified by visible spectrophotometry with OSIRIS (Fornasier et al., 2015). The southern hemisphere shows a lack of spectrally red regions compared to the northern hemisphere; this is associated with the absence of widespread smooth or dust-covered terrains (Fornasier et al., 2016). \\ Local color and compositional heterogeneities have been identified in the OSIRIS images down to the decimeter scale (Feller et al., 2016) during the closest comet flyby on 14 February 2015, with bright material, dark boulders, and some striae. During the 10 April 2016 flyby, several bright spots associated with the exposure of water ice mixed with the dark terrain of the comet were reported (Feller et al., 2018; Hasselmann et al., 2017). Bright patches of this kind have been reported in several regions of the comet.
Water ice amounts in these regions varied from a few percent (De Sanctis et al., 2015; Filacchione et al., 2016a; Pommerol et al., 2015; Barucci et al. 2016; Oklay et al., 2016) to $>$ 20\% in localized areas in the Anhur, Bes, Khonsu, and Imhotep regions (Fornasier et al., 2016, 2017; Deshapriya et al., 2016, Oklay et al., 2017), and at the Aswan site (Pajola et al., 2017). As reported in section 4.1, the Anhur region showed local volatile stratification and wide ice patches before the southern spring equinox. \\ Brightness variations on the comet surface at the centimeter and millimeter scale were reported by the CIVA instrument on board the Philae lander (Bibring et al., 2015), which observed a surface globally dominated by dark conglomerate, likely made of organics, with brighter spots that may be linked to mineral grains or point to ice-rich material. \\ Compositional inhomogeneities have also been reported on the surface of comet 9P/Tempel 1 with the detection of dirty water-ice-rich material (Sunshine et al., 2006), and they were associated with extensive subsurface sources of volatile material. Morphological changes were reported on comet Tempel 1 between the Deep Impact and the Stardust flybys, with fronts receding by several meters in a large smooth area (Veverka et al., 2013). Several jets were linked to the rugged surface bordering this smooth area (Farnham et al., 2007). These morphological changes were interpreted as the progressive sublimation and depletion of volatiles and ice-rich material (Meech et al., 2017). The short-period comets 9P/Tempel 1 and 81P/Wild 2, observed by the Deep Impact and Stardust missions, also showed extensive layering and stratification, similar to comet 67P (Belton et al., 2007), and a complex morphology (Thomas et al., 2013). \section{Conclusions} We inspected over 2000 images acquired with the OSIRIS instrument on board Rosetta during the four months around perihelion passage, and we identified and precisely located more than 200 transient events on the nucleus of 67P.\\ Our main findings are listed below. \begin{itemize} \item The source locations of the jets are usually found below cliffs, scarps, or inside cavities or alcoves that cast shadows, but they are also found on smooth terrains. Therefore, these activity events are not related to a specific terrain type or morphology, but are mainly driven by the local insolation. \item This analysis indicates that several transient events observed at perihelion have lifetimes shorter than a few minutes. \item Faint jets are often periodic as a consequence of the local illumination and of the sublimation and recondensation processes of water ice. These processes, in particular, seem to be the source of the periodic jets that depart from cavities or alcoves. \item Several jet sources are bright and spectrally bluer than the dark terrain of the comet, implying a local enrichment in volatiles. \item The ejecta of the three outbursts we investigated have bluer colors in the visible-to-near-infrared range (but not in the near-ultraviolet region), indicating that the ejected material may contain some icy grains mixed with the ejected dust. \item We reported a resolved bright plume observed in May 2016 that was optically thick, with an instantaneous estimated mass loss of $\sim$ 1000-2000 kg. The faint jets observed at perihelion, whose durations are often unconstrained, probably eject a similar amount of material.
\end{itemize} We presented a comprehensive inventory of the source regions and locations of jets observed directly on the surface of 67P during and close to perihelion passage. This database of jets and outbursts can serve as a reference for further studies devoted to cometary activity and, in particular, for future in situ space-probe observations of the activity and evolution of this comet, such as the NASA Caesar mission, if selected. \vspace{0.3truecm} \begin{acknowledgements} OSIRIS was built by a consortium led by the Max-Planck-Institut f\"ur Sonnensystemforschung, Goettingen, Germany, in collaboration with CISAS, University of Padova, Italy, the Laboratoire d'Astrophysique de Marseille, France, the Instituto de Astrof\'isica de Andaluc\'ia, CSIC, Granada, Spain, the Scientific Support Office of the European Space Agency, Noordwijk, The Netherlands, the Instituto Nacional de T\'ecnica Aeroespacial, Madrid, Spain, the Universidad Polit\'ecnica de Madrid, Spain, the Department of Physics and Astronomy of Uppsala University, Sweden, and the Institut f\"ur Datentechnik und Kommunikationsnetze der Technischen Universit\"at Braunschweig, Germany. \\ The support of the national funding agencies of Germany (DLR), France (CNES), Italy (ASI), Spain (MEC), Sweden (SNSB), and the ESA Technical Directorate is gratefully acknowledged. We thank the Rosetta Science Ground Segment at ESAC, the Rosetta Mission Operations Centre at ESOC, and the Rosetta Project at ESTEC for their outstanding work enabling the science return of the Rosetta Mission. SF acknowledges financial support from the French Agence Nationale de la Recherche (programme Classy, ANR-17-CE31-0004). The authors thank H. Campins for his comments, which helped us improve this article. \end{acknowledgements}
\section{Introduction} Priority queues are fundamental data structures with numerous applications across computer science, most prominently in the design of efficient graph algorithms. They support the following operations on $N$ stored \emph{elements} of the type (\emph{key}, \emph{priority}), where ``key'' serves as an identifier and ``priority'' is a value from a total order: \begin{itemize} \item \textsc{Insert}(element $e$): Insert element $e$ into the priority queue. \item \textsc{Delete}(key $k$): Remove all elements with key $k$ from the priority queue. \item element $e=$ \textsc{ExtractMin}(): Remove and return the element $e$ in the priority queue with the smallest priority. \item \textsc{DecreaseKey}(element $(k,p)$): Given that an element with key $k$ and priority $p'$ is stored in the priority queue, if priority $p<p'$, replace the element's priority $p'$ with $p$. \end{itemize} \noindent Operation \textsc{Update}(element $(k,p)$)~is the combination of operations \textsc{Insert}~and \textsc{DecreaseKey}: it calls \textsc{Insert}($(k,p)$) if the priority queue does not contain any element with key $k$, and \textsc{DecreaseKey}($(k,p)$) otherwise.
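As a concrete illustration of these semantics, the following minimal Python sketch models the interface with a plain in-memory dictionary. It is only a reference model of the operations' behavior, not the I/O-efficient structure developed in this paper; for simplicity it keeps a single (smallest) priority per key, which suffices when clients access the queue through \textsc{Update}.
\begin{verbatim}
class PriorityQueue:
    """Reference model of the interface: each key maps to the
    smallest priority seen for it so far."""

    def __init__(self):
        self.prio = {}

    def insert(self, key, p):
        self.prio[key] = p

    def decrease_key(self, key, p):
        # Precondition: an element with this key is stored.
        if p < self.prio[key]:
            self.prio[key] = p

    def update(self, key, p):
        # Update = Insert if the key is absent, else DecreaseKey.
        if key not in self.prio:
            self.insert(key, p)
        else:
            self.decrease_key(key, p)

    def delete(self, key):
        # Removes all elements with this key (here: at most one).
        self.prio.pop(key, None)

    def extract_min(self):
        key = min(self.prio, key=self.prio.get)
        return (key, self.prio.pop(key))
\end{verbatim}
In the single-source shortest paths setting discussed below, for instance, relaxing an edge reduces to a single \textsc{Update}\ call with the tentative distance of the target node, regardless of whether that node is already queued.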
In particular, BRTs support the following operations on a stored multi-set of $N$ (key, value) elements, where ``key'' serves as an identifier and ``value'' is a value from a total order: \begin{itemize} \item \textsc{Insert}(element $e$): Insert element $e$ into the BRT. \item element $e_{i} = $ \textsc{Extract}(key $k$): Remove and return all $K$ elements $e_i$ (for $i\in [1,K]$) in the BRT with key $k$. \end{itemize} \begin{table} \begin{center} \begin{tabular}{l l l l l} & \textsc{Insert} & \textsc{Delete} & \textsc{ExtractMin} & \textsc{DecreaseKey} \\ \cite{ABDHM07} & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $-$ \\ \cite{BFMZ04,CR18} & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ \\ New & $\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}$ & $\lceil \frac{\lambda^{\varepsilon}}{B} \log_{\frac{\lambda}{B}} \frac{N}{B} \rceil \log_{\frac{\lambda}{B}} \frac{N}{B}$ & $\lceil \frac{\lambda^{\varepsilon}}{B} \log_{\frac{\lambda}{B}} \frac{N}{B} \rceil \log_{\frac{\lambda}{B}} \frac{N}{B}$ & $\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}$\\\hline \cite{FJKT99,WY14} & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $-$ \\ \cite{KS96} & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ & $\frac{1}{B}\log_{2}\frac{N}{B}$ \\ \cite{JL19}$^*$ & $\frac{1}{B}\log_{\log N} \frac{N}{B}$ & $\frac{1}{B}\log_{\log N} \frac{N}{B}$ & $\frac{1}{B}\log_{\log N} \frac{N}{B}$ & $\frac{1}{B}\log_{\log N} \frac{N}{B}$ \\ New & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ & $\lceil\frac{M^{\varepsilon}}{B} \log_{\frac{M}{B}}\frac{N}{B} \rceil \log_{\frac{M}{B}}\frac{N}{B}$ & $\lceil\frac{M^{\varepsilon}}{B} \log_{\frac{M}{B}}\frac{N}{B} \rceil \log_{\frac{M}{B}}\frac{N}{B}$ & $\frac{1}{B}\log_{\frac{M}{B}}\frac{N}{B}$ \end{tabular} \end{center} \caption{Asymptotic amortized I/O-bounds of cache-oblivious and cache-aware priority queue operations (respectively, above and below the horizontal line) on $N$ elements, parameter $\lambda \in \left[ 2, N \right]$ and real $\varepsilon \in \left(0,1\right)$. $^*$Expected I/Os.}\label{tab:pq} \end{table} \subsection{Previous work} Designing efficient external memory priority queues able to support operation \textsc{DecreaseKey}\ (or at least operation \textsc{Update}) has been a long-standing open problem \cite{KS96,FJKT99,WY14,ELY17,CR18,JL19}. I/O-efficient adaptations of the standard heap data structure (cache-aware \cite{FJKT99} and cache-oblivious \cite{ABDHM07}) or other cache-aware sorting-based approaches \cite{WY14}, despite achieving optimal base-$(M/B)$ logarithmic amortized I/O-complexity, fail to support operation \textsc{DecreaseKey}. (Nevertheless, we use these priority queues as subroutines in our structure.) On the other hand, cache-aware adaptations of the tournament tree \cite{KS96} and cache-oblivious adaptations of the heap \cite{BFMZ04,CR18} data structures support all operations, albeit in less efficient base-$2$ logarithmic amortized I/Os.
Indeed, in the recent work of Eenberg, Larsen and Yu \cite{ELY17} it is shown that for a sequence of $N$ operations, any external-memory priority queue supporting \textsc{DecreaseKey}\ must spend $\max\{$\textsc{Insert}, \textsc{Delete}, \textsc{ExtractMin}, \textsc{DecreaseKey}$\} = \Om{\frac{1}{B}\log_{\log N}B}$ amortized I/Os. Jiang and Larsen present matching randomized priority queues \cite{JL19}. The cache-aware BRTs introduced by Buchsbaum et al. \cite[Lemma 2.1]{BGVW00} and their cache-oblivious counterparts \cite{ABDHM07} support \textsc{Insert}\ in $\OO{\frac{1}{B}\log_{2} \frac{N}{B}}$ amortized I/Os and \textsc{Extract}\ on $K$ extracted elements in $\OO{\log_{2} \frac{N}{B} + \frac{K}{B}}$ amortized I/Os on a multi-set of $N$ stored elements. \subsection{Our contributions} We present cache-oblivious I/O-efficient priority queues that support, on $N$ stored elements, operation \textsc{Update}\ in optimal $\OO{\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ amortized I/Os and operations \textsc{ExtractMin}\ and \textsc{Delete}\ in $\OO{\lceil \frac{\lambda^{\varepsilon}}{B} \log_{\frac{\lambda}{B}} \frac{N}{B} \rceil \log_{\frac{\lambda}{B}} \frac{N}{B}}$ I/Os, using $\OO{\frac{N}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ blocks, for a user-defined parameter $\lambda \in \left[ 2, N \right]$ and any real $\varepsilon \in \left(0,1\right)$. Our priority queues are the first to support operation \textsc{Update}\ (and thus \textsc{DecreaseKey} and \textsc{Insert}) in a cache-oblivious setting in $o\left(\frac{1}{B} \log_{2} \frac{N}{B} \right)$ I/Os. Setting $\lambda = \OO{M}$ also yields the first I/O-optimal cache-aware \textsc{Update}\ bound. Our bounds improve on previous cache-aware \cite{KS96} and cache-oblivious \cite{BFMZ04,CR18} priority queues supporting \textsc{DecreaseKey}, albeit at the expense of suboptimal I/O-efficiency for \textsc{ExtractMin}\ and \textsc{Delete}\ (respecting the lower bound of \cite{ELY17} for $\lambda =\Om{B\log_2 N}$). See Table \ref{tab:pq} for a comparison with previous external memory priority queues. We also present cache-oblivious I/O-efficient BRTs that support, on a multi-set of $N$ elements, operation \textsc{Insert}\ in $\OO{\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ amortized I/Os and operation \textsc{Extract}\ on $K$ extracted elements in $\OO{\frac{\lambda^{\varepsilon}}{B} \log_{\frac{\lambda}{B}} \frac{N}{B} + \frac{K}{B}}$ amortized I/Os. Our bounds also hold in a cache-aware setting for $\lambda = \OO{M}$. Previous cache-oblivious and cache-aware I/O-bounds are $\OO{\frac{1}{B}\log_2 \frac{N}{B}}$ and $\OO{\log_2 \frac{N}{B} + \frac{K}{B}}$, respectively \cite{ABDHM07,BGVW00}. Combining our BRTs with our priority queues, for cache-oblivious external memory SSSP, DFS and BFS algorithms, we achieve $\OO{\frac{V^{\frac{1}{1+\alpha}} E^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{E}{VB}} \frac{E}{B} + V\log_{\frac{E}{VB}} \frac{E}{B} + \frac{E}{B} \log_{\frac{E}{VB}} \frac{E}{B}}$ I/Os on graphs with $V$ nodes and $E$ directed edges, setting $\lambda = \OO{E/V}$. For cache-aware external memory SSSP, DFS and BFS algorithms, we achieve $\OO{V \frac{M^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{M}{B}} \frac{E}{B} + V\log_{\frac{M}{B}} \frac{E}{B} + \frac{E}{B} \log_{\frac{M}{B}} \frac{E}{B}}$ I/Os, setting $\lambda = \OO{M}$.
This compares to previous cache-oblivious and cache-aware graph algorithms that take $\OO{\left(V+\frac{E}{B}\right)\log_{2} E}$ I/Os for directed SSSP \cite{KS96,V01,CR18} and that take $\OO{\left(V+\frac{E}{B}\right)\log_{2} \frac{V}{B} + \frac{E}{B}\log_{\frac{M}{B}}\frac{E}{B}}$ I/Os for directed DFS and BFS \cite{BGVW00,ABDHM07}. Our cache-oblivious and cache-aware bounds are I/O-optimal for dense graphs with $E/V = \Om{M}$ and with $E = \Om{V^{1+\varepsilon}}$ and $V = \Om{M}$, respectively. \subsection{Our approach} The main component of our priority queues is the $x$-\emph{treap}, a recursive structure inspired by similar cache-oblivious $x$-box \cite{BDFILM10} and cache-aware hashing data structures \cite{IP12} that solve the dynamic dictionary problem in external memory (respectively, under predecessor and membership queries on a dynamic set of keys). To solve the priority queue problem, we adapt this recursive scheme to also handle priorities, inspired by the cache-oblivious priority queues of Brodal et al.~\cite{BFMZ04} and of Chowdhury and Ramachandran \cite{CR04,CR18} that support \textsc{Update}, yet in suboptimal~I/Os. We hope that the discussion below provides the intuition to follow the full details in the sequel. Previous cache-oblivious priority queues~\cite{BFMZ04,CR04,CR18} are based on a simple idea. Their basic structure has a logarithmic number of levels, where level $i$ has two arrays, or buffers, of size roughly $2^i$. These buffers are called the \emph{front} and \emph{rear} buffers. They contain key-priority pairs or a key-delete message (described later). The idea is that the front buffers are sorted, with everything in the $i$-th front buffer having smaller priorities than everything in the $(i+1)$-th front buffer. The items in the rear buffers do not have this rigorous ordering, but instead must be larger than the items in the front buffers at the same and smaller levels. When an \textsc{Update}\ operation occurs, the key-priority pair gets placed in the first rear buffer; when an \textsc{ExtractMin}\ operation occurs, the key-priority pair with the smallest priority is removed from the first front buffer. Every time a level-$i$ buffer gets too full or empty relative to its target size of~$2^i$, this is fixed by moving things up or down as needed, and by moving things from the rear to the front buffer if that respects the ordering of items in the front buffer. This resolution of problems is done efficiently using a scan of the affected and neighbouring levels. Thus, looking in a simplified manner at the lifetime of an \textsc{Update}d item, it will be inserted in the smallest rear buffer, be pushed down to larger rear buffers as they overflow, be moved from a rear buffer to a front buffer once it has gone down to a level where its priority is compatible with those in the corresponding front buffer, then move up from the front buffer to smaller front buffers as they underflow, and finally be removed from the smallest front buffer during an \textsc{ExtractMin}. Thus, during its lifetime, it could be moved from one level to another a total of $\OO{\log_2 \frac{N}{B}}$ times at an I/O-cost of $\OO{\frac{1}{B}}$ per level, for a total cost of $\OO{\frac{1}{B} \log_2 \frac{N}{B}}$ I/Os. One detail is that when an item moves from a rear to a front buffer, we want to make sure that no items in larger levels with the same key and larger priority are ever removed.
This is done through special delete messages, which stay in the rear buffers and percolate down, removing any key-priority pairs with the given key that they encounter in their buffer or the corresponding front buffer. The problem with this approach is that the base-2 logarithm seems unavoidable with the simple idea of a geometrically increasing buffer size. So here instead we use the more complicated recursion introduced with the cache-oblivious $x$-box structure \cite{BDFILM10} and also used in the cache-aware hashing data structures \cite{IP12}. In its simplest form, used for a dictionary, an $x$-box has three buffers: top, middle and bottom (respectively of approximate size $x$, $x^{1.5}$ and $x^2$), as well as $\sqrt{x}$ recursive \emph{upper-level} $\sqrt{x}$-boxes (ordered logically between the top and middle buffers) and $x$ recursive \emph{lower-level} $\sqrt{x}$-boxes (ordered logically between the middle and bottom buffers). Data in each buffer is sorted, and all keys in a given recursive buffer are smaller than all keys in subsequent recursive buffers in the same level (upper or lower). There is no enforced order among keys in different buffers or in a recursive upper- or lower-level $\sqrt{x}$-box. The key feature of this construction is that the top/middle/bottom buffers have the same size as the neighbouring recursive buffers: the top buffer has size $x$, the top buffers of the upper-level recursive $\sqrt{x}$-boxes have total size $x$; the middle buffer, the sum of the bottom buffers of the upper-level, and the sum of the top buffers of the lower-level recursive structures all have size $x^{1.5}$; the sum of the bottom buffers of the lower-level recursive structures and the bottom buffer both have size $x^2$. Therefore, when, for example, a top buffer overflows, it can be fixed by moving excess items to the top buffers of the upper-level recursive substructures. In a simplified view with only insertions, as buffers overflow, an item over its lifetime will percolate from the top buffer to the upper-level substructures, to the middle buffer, to the lower-level substructures, and to the bottom buffer, with each overflow handled only using scans. Assuming a base case of size $M$, there will be $\OO{\log_M N}$ times that an item will move from one buffer to another and an equal number of times that an item will pass through a base case. One major advantage of this recursive approach is that an item will pass through a small base case not just once at the top of the structure, as in the previous paragraph, but many times. We combine these ideas to form the $x$-treap, described at a high level as follows: Everywhere an $x$-box has a buffer, we replace it with a front and a rear buffer storing key-priority pairs. The order used by the $x$-box is imposed on the keys, not the priorities. The order imposed on priorities in the previous cache-oblivious priority queues is carried over and imposed on the priorities in different levels of the $x$-treap; this is aided by the fact that the buffers in the $x$-treap form a DAG, thus the buffers where items with a given key can appear form a natural total order. Hence, this forms a treap-like arrangement where we use the keys for order in one dimension and priorities for order in the other. We invoke a separate trivial base case structure at a size smaller than a fixed value, e.g.
the main memory size in a cache-aware setting; it stores items in no particular order and thus supports fast insertion of items when a neighbouring buffer adds them ($\OO{\frac{1}{B}}$), but slow ($\OO{M^\varepsilon}$ amortized) removal of items with small priorities to fix the underflow of a front buffer above. In its typical hypothetical lifetime, an item will be inserted at the top in the rear buffer, percolate down $\OO{\log_{\frac{M}{B}} \frac{N}{B}}$ levels and base cases at a cost of $\OO{\frac{1}{B}}$ amortized each, move over to a front buffer, then percolate up $\OO{\log_{\frac{M}{B}} \frac{N}{B}}$ levels at a cost of $\OO{\frac{M^\varepsilon}{B}}$ amortized each. Thus, the total amortized cost for an item that is eventually removed by an \textsc{ExtractMin}\ is $\OO{\frac{M^\varepsilon}{B}\log_{\frac{M}{B}} \frac{N}{B}}$. However, we want the amortized cost for an item that is inserted via \textsc{Update}\ to be much smaller than this, i.e. $\OO{\frac{1}{B}\log_{\frac{M}{B}} \frac{N}{B}}$. This requires additional observations and tricks. The first is that, unlike Brodal et al., we do not use delete-type messages that percolate down to eliminate items with larger than minimum priority in order to prevent their removal from \textsc{ExtractMin}. Instead, we adopt a much simpler approach and use a hash table to keep track of all keys that have been removed by an \textsc{ExtractMin}; when an \textsc{ExtractMin}\ returns a key that has been seen before, it is discarded and \textsc{ExtractMin}\ is repeated. The second trick is to simply ensure that each buffer has at most one item with each key (and remove key-priority pairs other than the one with the minimum priority among those with the same key in the buffer). This has the effect that if there are a total of $u$ updates performed on a key before it is removed by an \textsc{ExtractMin}, the total cost will involve up to $\OO{u \log_{\frac{M}{B}} \frac{N}{B}}$ percolations down at a cost of $\OO{\frac{1}{B}}$, but only $\OO{\log^2_{\frac{M}{B}} \frac{N}{B}}$ percolations up at a cost of $\OO{\frac{M^\varepsilon}{B}}$ amortized each. After the \textsc{ExtractMin}, some items may still remain in the structure and will be discarded when removed by \textsc{ExtractMin}. However, due to the no-duplicates-per-level property there will only be $\OO{\log_{\frac{M}{B}} N}$ such items (called \emph{ghosts}), each of which will incur an amortized cost of at most $\OO{\lceil \frac{M^\varepsilon}{B}\log_{\frac{M}{B}} \frac{N}{B}\rceil}$, where the ceiling accounts for accessing the hash table. Thus the total amortized cost for the lifetime of the $u$ \textsc{Update}s and one \textsc{ExtractMin}\ involving a single key is $O\left(\frac{u}{B}\log_{\frac{M}{B}} \frac{N}{B} +\lceil \frac{M^\varepsilon}{B}\log_{\frac{M}{B}} \frac{N}{B}\rceil \log_{\frac{M}{B}} \frac{N}{B} \right).$ This cost can be apportioned in the amortized sense by having the \textsc{ExtractMin}\ cost $\OO{\lceil \frac{M^\varepsilon}{B}\log_{\frac{M}{B}} \frac{N}{B}\rceil \log_{\frac{M}{B}} \frac{N}{B}}$ amortized~I/Os and each update cost $\OO{\frac{1}{B}\log_{\frac{M}{B}} \frac{N}{B} }$ amortized~I/Os, assuming that the treap finishes in an empty state and, more importantly, that no item can be \textsc{Update}d after it has been \textsc{ExtractMin}'d. The details that implement these rough ideas consume the rest of the paper.
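To make the first trick concrete, here is a minimal Python sketch of the ghost-filtering logic in isolation. It is an illustration under simplifying assumptions: the hash table of extracted keys is modeled by a Python set, the whole multi-level structure is abstracted by a single binary heap, and all class and method names are ours, not part of the actual structure.

\begin{verbatim}
import heapq

class GhostFilteringPQ:
    # Sketch of the ghost-filtering trick: duplicates with larger
    # priorities are never eagerly deleted and may survive as ghosts.
    def __init__(self):
        self.heap = []          # stand-in for the x-treap levels
        self.extracted = set()  # hash table of keys seen by ExtractMin

    def update(self, key, priority):
        heapq.heappush(self.heap, (priority, key))

    def extract_min(self):
        while self.heap:
            priority, key = heapq.heappop(self.heap)
            if key not in self.extracted:  # discard ghosts and repeat
                self.extracted.add(key)
                return key, priority
        raise KeyError("priority queue is empty")
\end{verbatim}

\noindent The sketch also reflects the standing assumption that no key is \textsc{Update}d again after it has been \textsc{ExtractMin}'d: once a key enters the set, every later occurrence of it is treated as a ghost.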
One complication that eludes the above discussion is that items don't just percolate down and then up; they could move up and down repeatedly, and this can be handled through an appropriate potential function. The various layers of complexity needed for the $x$-treap recursion combined with the front/rear buffer idea, various types of over/underflows of buffers, a special base case, having the middle and bottom buffers be of size $x^{1+\frac{\alpha}{2}}$ and $x^{1+\alpha}$ for a suitable parameter $\alpha$ rather than $x^{1.5}$ and $x^2$ as described above, and a duplicate-catching hash table, result in a complex structure with an involved potential analysis, but one that follows naturally from the above high-level description. \section{Cache-oblivious $x$-Treap} \label{sec:xtreap} Given real parameter $\alpha \in (0,1]$ and \emph{key range} $\left[k_{\min}, k_{\max}\right)\subseteq \mathbb{R}$, an $x$-\emph{treap} $D$ stores a set of at most $2\left(D.x\right)^{1+\alpha}$ \emph{elements} $\left(\ast,k,p\right)$ associated with a \emph{key} $k\in \left[D.k_{\min}, D.k_{\max}\right)$ and a \emph{priority} $p$ from a totally ordered set. $D$ represents a set $D.rep$ of pairs (key, priority), such that a particular key $k$ contained in $D$ is represented to have the smallest priority $p$ of any element with key $k$ stored in $D$, unless an element with key $k$ and a smaller priority has been removed from the structure. In particular, we call the key and priority \emph{represented} when the pair (key, priority) $\in D.rep$. A \emph{representative} element contains a represented key and its represented priority. Formally, we define: $$ D.rep := \bigcup_{\{k \,|\, \left(k , p \right)\in D\}} \left \{ \left(k, \min \left\{ p \,|\, \left(k, p \right) \in D \right\} \right)\right \} $$ \noindent The proposed representation scheme works under the assumption that a key that is not represented by the structure anymore cannot become represented again. In other words, a key returned by operation \textsc{ExtractMin}\ cannot be \textsc{Insert}ed into the structure again. The following \emph{interface operations} are supported: \begin{itemize} \item \textsc{Batched-Insert} $\left(D, e_1, e_2 , \ldots, e_{b}\right)$: For constant $c\in \left(0,\frac{1}{3}\right]$, insert $b \leq c\cdot D.x$ elements $e_1, e_2 , \ldots, e_{b}$ to $D$, given they are key-sorted with keys $e_i.k \in \left[D.k_{\min}, D.k_{\max}\right), i\in \left[1,b\right]$. \textsc{Batched-Insert}\ adds to $D.rep$ the pairs $\left(e_i.k, e_i.p\right)$ whose key $e_i.k$ is not contained in $D$ already. \textsc{Batched-Insert}\ decreases the priority of a represented key $e_i.k$ to $e_i.p$, if its represented priority is larger than $e_i.p$ before the operation. More formally, let $X_{new}$ contain the inserted pairs $\left(e_i.k, e_i.p\right)$ with $e_i.k\notin D.rep$. Let $X_{old}$ contain the pairs in $D.rep$ with an inserted key, but with larger priority than the inserted one, and let $X_{dec}$ contain these inserted pairs. After \textsc{Batched-Insert}, a new $x$-treap $D'$ is created where $D'.rep = D.rep \cup X_{new} \cup X_{dec} \backslash X_{old}$. \item \textsc{Batched-ExtractMin} $\left(D\right)$: For constant $c\in \left(0,\frac{1}{4}\right]$, remove and return the at most $c\cdot D.x$ elements $\left(k , p \right)$ with the smallest priorities in $D$. \textsc{Batched-ExtractMin}\ removes the pairs $X_{\min}$ from $D.rep$ with the at most $c\cdot D.x$ smallest priorities. Let $X_{key}$ contain the pairs in $D$ with keys in $X_{\min}$.
After \textsc{Batched-ExtractMin}, a new $x$-treap $D'$ is created where $D'.rep = D.rep \backslash X_{\min} \backslash X_{key}$. \end{itemize} \begin{theorem}\label{thm:xtreap} An $x$-treap $D$ supports operation \textsc{Batched-ExtractMin}\ in $\OO{\lambda^{\frac{\alpha}{1+\alpha}}\frac{1+\alpha}{B} \log_{\lambda} D.x}$ amortized I/Os per element and operation \textsc{Batched-Insert}\ in $\OO{\frac{1+\alpha}{B}\log_{\lambda} D.x}$ amortized I/Os per element, using $\OO{\frac{\left(D.x\right)^{1+\alpha}}{B}\log_{\lambda} D.x}$ blocks, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$. \end{theorem} The lemmata in this section prove the above theorem. The structure is recursive. The base case is described separately in Subsection \ref{ssec:base}. The base case structure is used when $D.x \leq c' \lambda^{\frac{1}{1+\alpha}}$ (for an appropriately chosen constant $c'>0$). Thus assuming $D.x> c' \lambda^{\frac{1}{1+\alpha}}$, we define an $x$-treap to contain three \emph{buffers} (which are arrays that store elements) and many $\sqrt{x}$-treaps (called \emph{subtreaps}). Specifically, the \emph{top}, \emph{middle} and \emph{bottom} buffers have \emph{sizes} $D.x, \left(D.x\right)^{1+ \frac{\alpha}{2}}$ and $\left(D.x\right)^{1+\alpha}$, respectively. Each buffer is divided in the middle into a \emph{front} and a \emph{rear} \emph{buffer}. The subtreaps are divided into the \emph{upper} and the \emph{lower level} that contain at most $\frac{1}{4}\left(D.x\right)^{\frac{1}{2}}$ and $\frac{1}{4}\left(D.x\right)^{\frac{1+\alpha}{2}}$ subtreaps, respectively. Let $|b|$ denote the \emph{size} of a buffer $b$. We define the \emph{capacity} of an $x$-treap $D$ to be the maximum number of elements it can contain, which is $D.x + \frac{5}{4}\left(D.x\right)^{1+\frac{\alpha}{2}} + \frac{5}{4}\left(D.x\right)^{1+\alpha} < 2\left(D.x\right)^{1+\alpha}$. We define a partial order ($\preceq$) using the terminology ``above/below'' among the buffers of an $x$-treap and all of the buffers in recursive subtreaps or base case structures. In this order we have top buffer $\preceq$ upper level recursive subtreaps $\preceq$ middle buffer $\preceq$ lower level recursive subtreaps $\preceq$ bottom buffer. Along with all buffers of $D$, we store several additional pieces of bookkeeping information: a counter with the total number of elements stored in $D$ and an index indicating which subtreap is stored in which space in memory. \begin{figure} \begin{center} \includegraphics[scale=0.4]{figs/xtreap.pdf} \end{center} \caption{Overview of an $x$-treap $D$ on ``key'' $\times$ ``partial order'' space. Black/white dots represent elements in the front/rear buffers, respectively. All buffers are resolved. Buffer sizes and a level's maximum number of subtreaps appear on the right-hand side.} \end{figure} \subsection{Invariants} An $x$-treap $D$ maintains the following invariants with respect to every one of its top/middle/bottom buffers $b$. The invariants hold after the execution of each interface operation, but may be violated during the execution. They allow changes to $D$ that do not change $D.rep$. \begin{enumerate} \item \label{inv:type} The front and rear buffers of $b$ store elements sorted by key and left-justified. \item \label{inv:lprio} The front buffer's elements' priorities are smaller than the rear buffer's elements' priorities. \item \label{inv:prio} The front buffer's elements' priorities are smaller than all elements' priorities in buffers below $b$ in $D$. 
\item \label{inv:key} For a top or middle buffer $b$ with key range $\left[ b.k_{\min}, b.k_{\max} \right)$, the $r$ upper or lower subtreaps $D_i, i\in \left[1,r\right]$, respectively, have distinct key ranges $\left[D_i.k_{\min} , D_i.k_{\max}\right)$, such that $b.k_{\min} = D_1.k_{\min}< D_1.k_{\max} = D_2.k_{\min} < \ldots < D_r.k_{\max} = b.k_{\max}$. \item \label{inv:empty} If the middle/bottom buffer $b$ is not empty, then at least one upper/lower subtreap is not empty, respectively. \end{enumerate} \subsection{Auxiliary operations} The operations \textsc{Batched-Insert}\ and \textsc{Batched-ExtractMin}\ make use of the following \emph{auxiliary operations}: \begin{itemize} \item Operation \textsc{Resolve}$\left(D,b\right)$. We say that a buffer $b$ is \emph{resolved} when the front and rear buffers contain elements with pairs (key,priority) $\left(k,p\right)$, such that $k$ is a represented key, and when the front buffer contains those elements with the smallest priorities in the buffer. To resolve $b$, operation \textsc{Resolve}\ assigns to the elements with represented keys the key's minimum priority stored in $b$. Also, it removes any elements with non-represented keys from $b$. \textsc{Resolve}\ restores Invariant \ref{inv:lprio} in $b$ when it is temporarily violated by other (interface or auxiliary) operations that call it. \item Operation \textsc{Initialize}$\left(D, e_1, e_2 , \ldots, e_{b}\right)$ distributes to a new $x$-treap $D$, $\frac{1}{4}\left(D.x\right) \leq b \leq\frac{1}{2} \left(D.x\right)^{1+\alpha}$ elements $e_i, i\in [1,b]$ from a temporary array (divided in the middle into a front and a rear array, respecting Invariants~\ref{inv:type} and \ref{inv:lprio}). \item Operation \textsc{Flush-Up}$\left(D\right)$ ensures that the front top buffer of $D$ contains at least $\frac{1}{4}D.x$ elements (unless all buffers of $D$ contain fewer elements altogether, in which case \textsc{Flush-Up}\ moves them all to the top front buffer of $D$). By Invariants \ref{inv:lprio} and \ref{inv:prio}, these are the elements in $D$ with the smallest priorities. \item Operation \textsc{Flush-Down}$\left(D\right)$ is called by \textsc{Batched-Insert}\ on an $x$-treap $D$ whose bottom buffer contains between $\frac{1}{2}\left(D.x\right)^{1+\alpha}$ and $\left(D.x\right)^{1+\alpha}$ elements. It moves at least $\frac{1}{6}\left(D.x\right)^{1+\alpha}$ and at most $\frac{2}{3}\left(D.x\right)^{1+\alpha}$ elements from the bottom buffer of $D$ to a new temporary array. It ensures that the largest-priority elements are removed from $D$. \item Operation \textsc{Split}$\left(D \right)$ is called by \textsc{Batched-Insert}\ on an $x$-treap $D$ that contains between $\frac{1}{2}\left(D.x\right)^{1+\alpha}$ and $\left(D.x\right)^{1+\alpha}$ elements. It moves to a new temporary (front and rear) array the at most $\frac{1}{3}\left(D.x\right)^{1+\alpha}$ elements with the largest keys in $D$. \end{itemize} \subsubsection{Resolving a buffer} \paragraph{Algorithm.} Auxiliary operation \textsc{Resolve}\ on a buffer $b$ of an $x$-treap $D$ is called by the auxiliary operation \textsc{Flush-Up}\ and by the interface operation \textsc{Batched-Insert}. It makes use of two temporary auxiliary arrays of size~$|b|$. \textsc{Resolve}$\left(D, b\right)$ is implemented as follows: \begin{enumerate} \item \label{res:0} Determine the maximum priority $p_{\max}$ in the front buffer (by a scan). Return if the front buffer is empty.
\item \label{res:1} 2-way merge the elements in the front and rear buffers into a temporary array (by simultaneous scans in increasing key-order). Empty the front and rear buffers. \item \label{res:2} Determine the representative elements in the temporary array (by a scan) and write them to a second temporary array (by another scan): specifically, for each key, write only the element with the smallest priority to the second temporary array. \item \label{res:3} Scan the second temporary array, writing the elements with priority smaller than or equal to $p_{\max}$ to the front buffer, and with priority larger than $p_{\max}$ to the rear buffer. Discard the temporary arrays. \item \label{res:4} Update the counter of $D$. \end{enumerate} \paragraph{Correctness.} After calling \textsc{Resolve}\ on a buffer $b$, elements from $b$ are allowed to be moved to other buffers, since Invariants \ref{inv:type} and \ref{inv:lprio} are maintained. This is because after Steps \ref{res:1}, \ref{res:2} and \ref{res:3}, the front and rear buffers of $b$ contain a representative element for every represented key in $b$, separated by priority $p_{\max}$ (computed in Step \ref{res:0}). Step \ref{res:4} accounts for the elements ignored in Step \ref{res:2}. See Figure \ref{fig:resol} for an illustration of the operation. \begin{figure} \begin{center} \includegraphics[scale=0.5]{figs/res.pdf} \end{center} \caption{A buffer before and after operation \textsc{Resolve}\ (respectively, above and below).} \label{fig:resol} \end{figure} \subsubsection{Initializing an $x$-treap} \paragraph{Algorithm.} Auxiliary operation \textsc{Initialize}\ is called by the auxiliary operation \textsc{Flush-Up}\ and by the interface operation \textsc{Batched-Insert}. It allocates an empty $x$-treap $D$ and distributes the $b\in \left[\frac{1}{4}\left(D.x\right), \frac{1}{2}\left(D.x\right)^{1+\alpha} \right] $ elements $e_i$, $i\in\left[1,b\right]$ from a temporary key-sorted array (divided in the middle into a front and a rear array) to the buffers of $D$. \textsc{Initialize}$\left(D, e_1, \ldots, e_b\right)$ is implemented as follows: \begin{enumerate} \item \label{init:0} Create a new $x$-treap $D$ and move all elements in the temporary rear array to the rear bottom buffer of $D$. \item \label{init:1} Find the $\left(\frac{1}{2} \left(D.x\right)^{1+\alpha}\right)$-th smallest priority in the temporary front array (by an order-statistics algorithm \cite{BFPRT73}) and move all elements in the array with larger priority to the front bottom buffer of $D$. \item \label{init:2} Find the $\left(\frac{1}{2} \left(D.x\right)\right)$-th smallest priority in the temporary front array and move all elements in the array with smaller priority to the front top buffer of $D$. \item \label{init:3} Find the $\left(\frac{1}{2} \left(D.x\right)\right)$-th smallest priority in the temporary front array and until the maximum number of upper level subtreaps has been reached: \textsc{Initialize}\ a new upper subtreap with the $\frac{1}{2}\left(D.x\right)^{\frac{1+\alpha}{2}}$ key-next elements with smaller priority. \item \label{init:4} Find the $\left(\frac{1}{2} \left(D.x\right)^{1+\frac{\alpha}{2}}\right)$-th smallest priority in the temporary front array and move all elements in the array with smaller priority to the front middle buffer of $D$.
\item \label{init:5} Find the $\left(\frac{1}{2} \left(D.x\right)^{1+\frac{\alpha}{2}}\right)$-th smallest priority in the temporary front array and until the maximum number of lower level subtreaps has been reached: \textsc{Initialize}\ a new lower subtreap with the $\frac{1}{2}\left(D.x\right)^{\frac{1+\alpha}{2}}$ key-next elements with smaller priority. \item \label{init:6} Discard the temporary array and update the counters of $D$. \end{enumerate} \paragraph{Correctness.} Operation \textsc{Initialize}\ moves the elements from the temporary array to a new $x$-treap in the following sequence: bottom rear buffer, top front buffer, upper subtreaps' front buffers, middle front buffer and lower subtreaps' front buffers, bottom front buffer. The recursive calls in Steps \ref{init:3} and \ref{init:5} ensure that the temporary array empties. All invariants are maintained. See Figure \ref{fig:init} for an illustration of the operation. \begin{figure} \begin{center} \includegraphics[scale=0.5]{figs/init.pdf} \end{center} \caption{A new $x$-treap after operation \textsc{Initialize}.} \label{fig:init} \end{figure} \subsubsection{Flushing up an $x$-treap} \paragraph{Algorithm.} Auxiliary operation \textsc{Flush-Up}\ on an $x$-treap $D$ is called only by the interface operation \textsc{Batched-ExtractMin}. It is implemented by means of the recursive subroutine \textsc{Flush-Up}$\left(D, b\right)$ that also takes as argument a top or middle buffer $b$ of $D$ and moves to its front buffer the elements with the (at most) $\frac{1}{4}|b|$ smallest represented priorities among the representative elements stored inside and below $b$ in $D$. The operation makes use of a temporary priority queue that supports only operations \textsc{Insert}\ and \textsc{ExtractMin}\ \cite{ABDHM07} (the structure can be easily modified to achieve the bounds required here; see the proof of Lemma \ref{lem:bext}). For a bottom buffer $b$, a non-recursive subroutine \textsc{Flush-Up}$\left(D, b\right)$ simply calls \textsc{Resolve}\ on $b$. \textsc{Flush-Up}$\left(D, b\right)$ is implemented as follows: \begin{enumerate} \item \label{flu:0} \textsc{Resolve}\ $b$ and \textsc{Flush-Up}\ the top buffers of the subtreaps immediately below $b$. \item \label{flu:1} If the front buffer of $b$ contains $k <\frac{1}{4}|b|$ elements: Allocate a temporary array of size $|b|$ and a temporary priority queue $Q$. For every subtreap immediately below $b$: Remove all elements from its front top buffer and \textsc{Insert}\ them to $Q$ (by simultaneous scans). While the temporary array contains no more than $\frac{1}{4}|b| - k$ elements, do: \subitem \ref{flu:1}.1. \textsc{ExtractMin}\ one element $e$ from $Q$ and write it in the temporary array. \subitem \ref{flu:1}.2. If $Q$ contains no more elements from the subtreap $D'$ that contained $e$: \textsc{Flush-Up}\ the top buffer of~$D'$, remove all its elements and \textsc{Insert}\ them to $Q$. \subitem \ref{flu:1}.3. If $Q$ is empty and the temporary buffer contains $k'< \frac{1}{4}|b| - k$ elements: \textsc{Flush-Up}\ the buffer $b'$ immediately below $b$ in $D$, find the $\left(\frac{1}{4}|b|-k-k'\right)$-th smallest priority in the front buffer of $b'$ (by an external memory order-statistics algorithm \cite{BFPRT73}) and move its elements with smaller priority to the temporary array. Left-justify $b'$.
\item \label{flu:3} If $Q$ is not empty: \textsc{ExtractMin}\ all elements from $Q$ into a new temporary array, sort the array by key, move the elements left-justified back to the front top buffers of the subtreaps they were taken from (by simultaneous scans in increasing key-order), update the subtreaps' counters and discard the array. \item \label{flu:4} Discard $Q$, update the counters of $D$ and remove all empty subtreaps immediately below $b$ (i.e. whose counter is $0$). \item \label{flu:5} If there are no subtreaps immediately below $b$ and the front buffer $b'$ immediately below $b$ is not empty: Find the $\left(\frac{1}{4}|b|^{\frac{1+\alpha}{2}}\right)$-th smallest priority in the front buffer of $b'$ (by an external memory order-statistics algorithm \cite{BFPRT73}), move the elements with smaller priority to a new temporary front array (by a scan), left-justify the front buffer of $b$ and \textsc{Initialize}\ a new subtreap with the elements in the array. Discard the temporary array. \end{enumerate} \paragraph{Correctness.} Operation \textsc{Flush-Up}\ allows for accessing the representative elements with smallest represented priorities in $D$ by only accessing its front top buffer. Invariants \ref{inv:lprio} and \ref{inv:prio} imply that the next-larger represented priorities with respect to the front top buffer's maximum represented priority are stored in the upper subtreaps' front top buffers and in turn the next-larger ones are stored in the front middle buffer. Similarly, this holds between the middle buffer with respect to the lower subtreaps and the bottom buffer. The subroutine \textsc{Flush-Up}$\left(D, b\right)$ respects this sequence when it moves elements with minimum represented priorities from front buffers to the front buffer of $b$. Specifically, Step \ref{flu:0} ensures that the representative elements in $b$ with priority smaller than the $\left(\frac{1}{4}|b|\right)$-th largest priority in $b$ are stored in its front buffer. It also ensures this recursively for the front top buffers of the subtreaps immediately below $b$. If the front buffer of $b$ contains less than $\frac{1}{4}|b|$ such elements, Step \ref{flu:1} attempts to move into a temporary array enough elements from below $b$ in $D$. The temporary priority queue is used (at Steps \ref{flu:1} and \ref{flu:1}.1) to ensure that indeed the smallest-priority representative elements are moved, first from the subtreaps (Step \ref{flu:1}.2) and, if not enough elements have been moved, from the buffer immediately below $b$ in $D$ (at Step \ref{flu:1}.3). At most two key-sorted runs are created in the temporary array (one from the subtreaps and one from the buffer immediatelly below $b$) which are merged to the front buffer of $b$ by Step \ref{flu:2}, while maintaining Invariants \ref{inv:type} and \ref{inv:key}. Step \ref{flu:3} revokes the effects of the temporary priority queue, allowing it to be discarded at Step \ref{flu:4}, which also accounts for the moved elements. It also removes any empty subtreaps, maybe violating Invariant \ref{inv:empty} that is restored by Step \ref{flu:5}. See Figure \ref{fig:flu} for an illustration of the operation. 
\begin{figure} \begin{center} \includegraphics[scale=0.5]{figs/flup.pdf} \end{center} \caption{The top/middle buffers and the top upper level buffers, (a) before operation \textsc{Flush-Up}, (b) after Step \ref{flu:4} and (c) Step~\ref{flu:5}.} \label{fig:flu} \end{figure} \subsubsection{Flushing down an $x$-treap} \paragraph{Algorithm.} Auxiliary operation \textsc{Flush-Down}\ on an $x$-treap $D$ is called only by the interface operation \textsc{Batched-Insert}\ and returns a new temporary key-sorted array with at most $\frac{2}{3}\left(D.x\right)^{1+\alpha}$ elements. \textsc{Flush-Down}$\left(D\right)$ is implemented as follows: \begin{enumerate} \item \label{fld:0} Move all elements from the bottom rear buffer of $D$ to the temporary rear array (by a scan). \item \label{fld:1} If Step \ref{fld:0} did not move more than $\left(\frac{1}{6}\left(D.x\right)^{1+\alpha}\right)$ elements: Find the $\left(\frac{1}{3}\left(D.x\right)^{1+\alpha}\right)$-th smallest priority in the bottom front buffer of $D$ (by an external memory order-statistics algorithm \cite{BFPRT73}). Move all elements in the bottom front buffer with larger priority to the temporary front array and left-justify the bottom front buffer (by a scan). \item \label{fld:2} 2-way merge the two runs created by Steps \ref{fld:0} and \ref{fld:1} in the temporary array (by a scan). \item \label{fld:3} Update the counter of $D$. \end{enumerate} \paragraph{Correctness.} Operation \textsc{Flush-Down}\ leaves the bottom rear buffer of $D$ empty and the bottom front buffer with at most $\frac{1}{3}\left(D.x\right)^{1+\alpha}$ elements. By Invariants \ref{inv:lprio} and \ref{inv:prio}, the largest-priority elements of $D$ are in the bottom rear buffer and they are removed at Step \ref{fld:0}. However, if they don't account for a constant fraction of $D$'s size, Step \ref{fld:1} removes such a fraction from the bottom front buffer, which contains the next-smaller elements. Invariant \ref{inv:type} is maintained. Step \ref{fld:3} accounts for the removed elements. See Figure \ref{fig:fld} for an illustration of the operation. \begin{figure} \begin{center} \includegraphics[scale=0.5]{figs/fldown.pdf} \end{center} \caption{The bottom buffer and temporary array before and after operation \textsc{Flush-Down}\ (respectively, above and below).} \label{fig:fld} \end{figure} \subsubsection{Splitting an $x$-treap} \paragraph{Algorithm.} Auxiliary operation \textsc{Split}\ is called only by the interface operation \textsc{Batched-Insert}. It moves to a new temporary key-sorted array (divided in the middle into a front and a rear array) the at most $\frac{1}{4}|b_i|$ elements with the largest keys from every buffer $b_i$ in $D$. \textsc{Split}$\left(D\right)$ is implemented as follows: \begin{enumerate} \item \label{spl:0} Find the $\left(\frac{1}{4}\left(D.x\right)^{1+\alpha}\right)$-th smallest key in the front top/middle/bottom buffer of $D$ (by a scan). (Let this key be $k$ for the front buffer and $k'$ for the middle buffer.) Respectively, move all elements in the front buffers with larger key to three new front auxiliary arrays. \item \label{spl:1} Repeat Step \ref{spl:0} with respect to rear buffers. \item \label{spl:2} Update the counter of $D$. \item \label{spl:3} \textsc{Split}\ the upper subtreap whose key range contains $k$. \textsc{Split}\ the lower subtreap whose key range contains~$k'$. \item \label{spl:4} Merge the elements in all front/rear auxiliary arrays to a new front/rear temporary array, respectively. Discard all auxiliary arrays.
\end{enumerate} \paragraph{Correctness.} Operation \textsc{Split}\ leaves all (front and rear) buffers of $D$ half-full. Since it operates on $x$-treaps that are more than half-full, whose bottom buffers contain a constant fraction of the total number of elements in the $x$-treap, the execution of Steps \ref{spl:0}, \ref{spl:1} and \ref{spl:3} moves at most $\frac{1}{2}\left(D.x\right)^{1+\alpha}$ elements to the temporary array. Step \ref{spl:2} accounts for the removed elements. All invariants are maintained. \subsection{Base case} \label{ssec:base} The $x$-treap is a recursive structure. When the $x$-treap stores few elements, we use simple arrays to support the interface operations and operation \textsc{Flush-Up}. \begin{lemma}\label{lem:array} An $\OO{\lambda^{\frac{1}{1+\alpha}}}$-treap supports operation \textsc{Batched-Insert}\ in $\OO{1/B}$ amortized I/Os per element and operations \textsc{Batched-ExtractMin}\ and \textsc{Flush-Up}\ in $\scan{\lambda^{\frac{\alpha}{1+\alpha}}}$ amortized I/Os per element, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$. \end{lemma} \begin{proof} For a universal constant $c_0>0$ and a constant parameter $c'<c_0^{\frac{1}{\alpha}+1}$, we allocate an array of size $\left(c'\lambda^{\frac{1}{1+\alpha}}\right)^{1+\alpha} \leq c_0 M$ and divide it in the middle into a front and a rear buffer that store elements and maintain only Invariants \ref{inv:type} and \ref{inv:lprio}. To implement \textsc{Batched-Insert}\ on at most $\frac{c'}{2}\lambda^{\frac{1}{1+\alpha}}$ elements, we simply add them to the rear buffer and update the counter. This costs $\OO{\frac{\lambda^{\frac{1}{1+\alpha}}}{B}/\frac{1}{2}\lambda^{\frac{1}{1+\alpha}}} = \OO{\frac{1}{B}}$ I/Os amortized per added element, since we only scan the part of the rear buffer where the elements are being added. To implement \textsc{Batched-ExtractMin}\ on at most $\frac{c'}{2}\lambda^{\frac{1}{1+\alpha}}$ extracted elements, we \textsc{Resolve}\ the array (as implemented for Theorem \ref{thm:xtreap}), remove and return all elements in the front buffer, and update the counter. By Lemma \ref{lem:res} (proven in Subsection \ref{ssec:ana}) this costs $\OO{\frac{\lambda}{B}/\frac{1}{2}\lambda^{\frac{1}{1+\alpha}}} = \OO{\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B}}$ I/Os amortized per extracted element. \textsc{Flush-Up}\ is implemented like \textsc{Batched-ExtractMin}\ with the difference that the returned elements are not removed from the array. \end{proof} \subsection{Interface operations} We proceed to the description of the interface operations supported by an $x$-treap. \subsubsection{Inserting elements to an $x$-treap} \paragraph{Algorithm.} Interface operation \textsc{Batched-Insert}\ on an $x$-treap $D$ is implemented by means of the recursive subroutine \textsc{Batched-Insert}$\left(D,e_1, \ldots, e_{c\cdot |b|}, b\right)$ that also takes as argument a top or middle buffer $b$ of $D$ and inserts $c\cdot |b|$ elements $e_1, \ldots, e_{c\cdot |b|}$ (contained in a temporary array) inside and below $b$ in $D$, for constant $c\in \left(0,\frac{1}{3}\right]$. For a bottom buffer $b$, a non-recursive subroutine \textsc{Batched-Insert}$\left(D, b\right)$ simply executes Step \ref{bi:0} below and discards the temporary array. \textsc{Batched-Insert}$\left(D,e_1, \ldots, e_{c\cdot |b|}, b\right)$ is implemented as follows: \begin{enumerate} \item \label{bi:0} If $D.x > c' \lambda^{\frac{1}{1+\alpha}} + c|b|$: \subitem \ref{bi:0}.1.
2-way merge into the temporary array, the elements in the temporary array and in the rear buffer of~$b$ (by simultaneous scans in increasing key-order). \textsc{Resolve}\ $b$ considering the temporary array as the rear buffer of $b$. \subitem \ref{bi:0}.2. Implicitly partition the front buffer of $b$ and the temporary array by the key ranges of the subtreaps immediately below $b$. Consider the subtreaps in increasing key-order by reading the index of $D$. For every key range (associated with subtreap $D'$) that contains at least $\frac{1}{3}\left(D.x\right)^{\frac{1}{2}}$ elements in either the front buffer of~$b$ or the temporary array: While the key range in the front buffer of $b$ and in the temporary array contains at most $\frac{2}{3}\left(D.x\right)^{\frac{1}{2}}$ elements, do: \subsubitem \ref{bi:0}.2.1. Find the $\left(\frac{2}{3}\left(D.x\right)^{\frac{1}{2}}\right)$-th smallest priority within the key range in the front buffer of $b$ and in the temporary array (by an external memory order-statistics algorithm \cite{BFPRT73}) and move the elements in the key range with larger priority to a new auxiliary array (by simultaneous scans in increasing key-order). \subsubitem \ref{bi:0}.2.2. If the counter of $D'$ plus the auxiliary array's size does not exceed the capacity of $D'$: \textsc{Batched-Insert}\ the elements in the auxiliary array to the top buffer of $D'$. Discard the auxiliary array. \subsubitem \ref{bi:0}.2.3. Else, if there are fewer than the maximum allowed number of subtreaps in the level immediately below $b$: \textsc{Split}\ $D'$. Let $k$ be the smallest key in the array returned by \textsc{Split}\ (determined by a constant number of random accesses to the leftmost elements in the returned front/rear array). Move the elements in the auxiliary array with key smaller than $k$ to a new temporary array (by a scan), \textsc{Batched-Insert}\ these elements to $D'$ and discard this temporary array. 2-way merge the remaining elements in the auxiliary array into the returned rear array and discard the auxiliary array. \textsc{Initialize}\ a new subtreap with the elements in the returned array. Discard the returned array. \subsubitem \ref{bi:0}.2.4. Else, \textsc{Flush-Down}\ all subtreaps immediately below $b$, which writes them to many returned arrays. 2-way merge into a new temporary array, all elements in $b$ and in all returned arrays (by simultaneous scans in increasing key-order). (When the scan on a subtreap's temporary array is over, determine the subtreap with the key-next elements in the level by reading the index of $D$.) \textsc{Batched-Insert}\ the elements in the new temporary array to the buffer $b'$ immediately below $b$. Discard the new temporary array and all returned arrays. \subitem \ref{bi:0}.3. Discard the temporary array and update the counter of $D$. \item \label{bi:1} Else if $D.x \leq c' \lambda^{\frac{1}{1+\alpha}} + c|b|$: \textsc{Batched-Insert}\ the elements into the base case structure. \end{enumerate} \paragraph{Correctness.} Operation \textsc{Batched-Insert}\ accommodates the insertion of at most $\frac{1}{3}|b|$ elements by recursively allocating extra space within $D$. Step \ref{bi:0} considers the recursive structure. Specifically, Step \ref{bi:0}.1 allows for moving representative elements from $b$ and inserted elements by resolving $b$ with respect to the temporary array.
Step \ref{bi:0}.2 identifies the subtreaps immediately below $b$ (repeatedly in increasing key-order) whose associated key range contains too many keys (stored in $b$ and the temporary array, but not in the considered subtreap) and attempts to move the largest-priority elements within this key range into the subtreap. Step \ref{bi:0}.2.1 identifies the at most $\frac{1}{3}|b|^{\frac{1}{2}}$ elements to be moved and Step \ref{bi:0}.2.2 recursively inserts them into the subtreap. However, if the subtreap is full, Step \ref{bi:0}.2.3 splits it into two subtreaps with enough space. If, moreover, the level cannot contain a new subtreap, Step \ref{bi:0}.2.4 essentially moves all elements in $b$ and in all subtreaps immediately below $b$ to the buffer $b'$ immediately below $b$. Step \ref{bi:0}.3 accounts for the number of inserted elements and the changes in the number of subtreaps. Step \ref{bi:1} allows for recursing down to the base case. \subsubsection{Extracting minimum-priority elements from an $x$-treap} \paragraph{Algorithm.} Interface operation \textsc{Batched-ExtractMin}\ on an $x$-treap $D$ is implemented as follows: \begin{enumerate} \item \label{be:0} If $D.x > c' \lambda^{\frac{1}{1+\alpha}}$: \subitem \ref{be:0}.1 If the front top buffer contains fewer than $\frac{1}{4} D.x$ elements: \textsc{Flush-Up}~the top buffer. \subitem \ref{be:0}.2 Remove and return all the elements $\left(e_i.k, e_i.p\right)$ from the front top buffer. \subitem \ref{be:0}.3 Update the counter of $D$. \item \label{be:1} Else if $D.x \leq c' \lambda^{\frac{1}{1+\alpha}}$: \textsc{Batched-ExtractMin}\ the base case structure. \end{enumerate} \paragraph{Correctness.} Operation \textsc{Batched-ExtractMin}\ considers only the top buffer of $D$. Step \ref{be:0}.1 ensures that there are enough minimum-priority representative elements in the front top buffer of $D$ to be extracted by Step \ref{be:0}.2. Step \ref{be:0}.3 accounts for the extracted elements and Step \ref{be:1} for the base case. All invariants are maintained. \subsection{Analysis} \label{ssec:ana} \begin{lemma}\label{lem:card} An $x$-treap $D$ has $\OO{\log_{\lambda} D.x}$ levels and occupies $\OO{\frac{\left(D.x\right)^{1+\alpha}}{B}\log_{\lambda} D.x}$ blocks. \end{lemma} \begin{proof} We number the levels of the structure sequentially, following the defined ``above/below'' order from top to bottom, where a base case structure counts for one level. Hence, the total number of levels is given by $L\left(D.x\right) = 3 + 2 L\left(\left(D.x\right)^{\frac{1}{2}}\right) $ with $L\left(c\lambda^{\frac{1}{1+\alpha}}\right) = 1$, which solves to the stated bound. Since \textsc{Resolve}\ leaves in the operated buffer at most one element with a given key, the space bound follows. \end{proof} \begin{lemma}\label{lem:ra} By the tall-cache assumption, scanning the buffers of an $x$-treap $D$ and randomly accessing $\OO{\left(D.x\right)^{\frac{1+\alpha}{2}}}$ subtreaps takes $\scan{\left(D.x\right)^{1+\alpha}}$ I/Os, for any real $\alpha \in (0,1]$. \end{lemma} \begin{proof} We show that $\OO{\frac{\left(D.x\right)^{1+\alpha}}{B} + \left(D.x\right)^{\frac{1+\alpha}{2}}} = \OO{\frac{\left(D.x\right)^{1+\alpha}}{B}}$. Indeed this holds when $\left(D.x\right)^{\frac{1+\alpha}{2}} = \OO{\frac{\left(D.x\right)^{1+\alpha}}{B}}$.
Otherwise $\left(D.x\right)^{\frac{1+\alpha}{2}} = \Om{\frac{\left(D.x\right)^{1+\alpha}}{B}} \Rightarrow \left(D.x\right)^{1+\alpha} = \OO{B^2} $ and by the tall-cache assumption that $M\geq B^2$, we get that $D.x = \OO{M^{\frac{1}{1+\alpha}}}=\OO{M}$. Hence $D$ fits into main memory and thus randomly accessing its subtreaps incurs no I/Os, meaning that the I/O-complexity is dominated by $\OO{\frac{\left(D.x\right)^{1+\alpha}}{B}}$. \end{proof} \subsubsection{Amortization} A buffer $b_i$ at level $i\leq h := \OO{\log_{\frac{\lambda}{B}} D.x}$ with $b_f$ elements in the front buffer and $b_r$ elements in the rear buffer has potential $\Phi (b_i) := \Phi_{f}(b_i) + \Phi_r(b_i)$, such that (for constants $\varepsilon := \frac{\alpha}{1+\alpha}$ and $c_0\geq 1$): \begin{itemize} \item $\Phi_f (b_i) =\begin{cases} 0, & \text{if $\frac{1}{4}|b_i|\le b_f \le \frac{1}{3}|b_i|$},\\ \frac{c_0}{B} \lambda^{\varepsilon} \cdot \left(\frac{|b_i|}{4} - b_f\right) \cdot \left( h - i \right), & \text{if $b_f < \frac{1}{4}|b_i|$},\\ \frac{c_0}{B} \cdot \left( b_f - \frac{|b_i|}{3} \right) \cdot \left( h - i \right), & \text{if $b_f > \frac{1}{3}|b_i|$},\\ \end{cases}$ \item $\Phi_r (b_i) =\begin{cases} 2 \frac{c_0}{B} \cdot \left(b_r - \frac{|b_i|}{2}\right) \cdot \left( h - i \right), & \hspace{11pt}\text{if $b_r > 0$}. \end{cases}$ \end{itemize} In general, a particular element will be added to a rear buffer and will be moved down the levels of the structure over rear buffers by operation \textsc{Flush-Down}. A \textsc{Resolve}\ operation will move the element from the rear to the front buffer, if it is a representative element. From this point, it will be moved up the levels over front buffers by operation \textsc{Flush-Up}. If it is not representative, it will either get discarded by \textsc{Resolve} (when there is an element with the same key and with smaller priority in the same buffer) or it will keep going down the structure. Since \textsc{Resolve}\ leaves only one element per key at the level it operates, $\OO{\log_{\frac{\lambda}{B}} D.x}$ elements with the same key (i.e. at most one per level) will remain in the structure after the extraction of the representative element for this key. The $\lambda^\varepsilon$-factor accounts for the extra cost of \textsc{Flush-Up}\ and \textsc{Batched-ExtractMin}, the $\left(h-i\right)$-factor allows for moving elements up or down a level by \textsc{Flush-Up}\ and \textsc{Flush-Down}, and the $2$-factor accounts for moving elements from the rear to the front buffer. \begin{lemma}\label{lem:res} \textsc{Resolve}\ on a buffer $b_i$ takes $\scan{|b_i|}+\OO{1}$ amortized I/Os. \end{lemma} \begin{proof} All steps of operation \textsc{Resolve}\ are implemented by a constant number of scans over buffers of size at most $|b_i|$. Since elements can only be added to the front buffer and only be removed from the rear buffer, the maximum difference in potential occurs when $\frac{|b_i|}{2} - \frac{|b_i|}{3} = \frac{|b_i|}{6}$ elements are moved from a full rear buffer to a front buffer that contains $\frac{|b_i|}{3}$ elements. We have that: \[\Delta \Phi (b_i)\leq \Delta \Phi_f(b_i) + \Delta \Phi_r(b_i) \leq \frac{c_0}{B} \cdot \frac{|b_i|}{6} \cdot i - 2 \frac{c_0}{B} \cdot \frac{|b_i|}{6} \cdot i \leq - \frac{c_0}{B} \cdot \frac{|b_i|}{6} \cdot i \leq 0, \] \noindent for $|b_i|,i\geq 0$.
\end{proof} \begin{lemma}\label{lem:bins} \textsc{Batched-Insert}\ on an $x$-treap $D$ takes $\OO{\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} D.x}$ amortized I/Os per element, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$. \end{lemma} \begin{proof} Excluding all recursive calls (to \textsc{Batched-Insert}\ at Steps 1.2.2, 1.2.3 and 1.2.4 and to \textsc{Initialize}\ at Step 1.2.3), the worst-case cost of \textsc{Batched-Insert}\ on a buffer $b_i$ is $\scan{|b_i|^{1+\alpha}}+\OO{1}$ I/Os by Lemmata \ref{lem:ra} and \ref{lem:res}, by \cite{BFPRT73} and because this is also the worst-case I/O-cost of \textsc{Flush-Down}\ and \textsc{Initialize}. The base case (Step 2) charges an extra $\OO{\frac{1}{B}}$ I/Os per element by Lemma \ref{lem:array}. In Step 1.2.2, at most $\frac{1}{3} |b_{i+1}|$ elements are moved from a buffer $b_i$ at level $i$ to a buffer $b_{i+1}$ at level $i+1$, where $|b_{i}| = |b_{i+1}|^{1+\frac{\alpha}{2}}$. The maximum difference in potential occurs when all these elements are moved from a full rear buffer of $b_i$ to a rear buffer of $b_{i+1}$. We have that: \[ \sum_{j=i}^{i+1} \Delta \Phi_r (b_j) \leq - 2 \frac{c_0}{B} \frac{|b_{i+1}|}{3} \left( h-i \right) + 2 \frac{c_0}{B} \frac{|b_{i+1}|}{3} \left( h-i-1 \right) \leq - \frac{2}{3} \frac{c_0}{B} |b_{i+1}| \leq 0 \] \noindent for $|b_{i+1}|\geq 0$. If the same scenario occurs between front buffers, we have that: \[ \sum_{j=i}^{i+1} \Delta \Phi_f (b_j) \leq - \frac{1}{3} \frac{c_0}{B} |b_{i+1}| \leq 0 \] \noindent for $|b_{i+1}|\geq 0$. Hence $\sum_{j=i}^{i+1} \Delta \Phi (b_j)\leq 0$, given that every newly inserted element is charged with an $\OO{h}=\OO{\log_{\frac{\lambda}{B}}D.x}$ initial amount of potential. In Step 1.2.3, operation \textsc{Split}\ removes at most $\frac{1}{2}\left(D.x\right)^{1+\alpha}$ elements from a subtreap $D$, whose bottom buffer $b_i$ we assume to be at level $i$. Operation \textsc{Initialize}\ will add these elements to a new subtreap, whose bottom buffer is also at level $i$. Without loss of generality, we focus only on these bottom buffers, since $D$ is more than half-full when \textsc{Split}\ is called on it and since the bottom buffer's size is a constant fraction of the subtreap's total size. The maximum difference in potential occurs when $\frac{1}{4}|b_{i}|$ elements are removed from a full bottom front/rear buffer of $b_i$ and added to an empty bottom front/rear buffer, respectively. We have that: \[ \Delta \Phi_r (b_i) \leq - 2 \frac{c_0}{B} \frac{|b_{i}|}{2} \left( h-i \right) + 2 \frac{c_0}{B} \frac{|b_{i}|}{2} \left( h-i \right) =0 \] \[ \Delta \Phi_f (b_i) \leq - \frac{c_0}{B} \frac{|b_{i}|}{6} \left( h-i \right) - \frac{c_0}{B}\lambda^\varepsilon \frac{|b_{i}|}{6} \left( h-i \right) \leq 0 \] \noindent for $|b_{i}|\geq 0$. Hence $\Delta \Phi (b_i)\leq 0$, given that subtreaps created by \textsc{Initialize}\ are charged with an $\OO{\left(D.x\right)^{1+\alpha} \cdot h}=\OO{\left(D.x\right)^{1+\alpha}\log_{\frac{\lambda}{B}}D.x}$ initial amount of potential. This extra charge does not exceed by more than a constant factor the initial potential charged to every newly inserted element, and is amortized over the $\OO{\left(D.x\right)^{1+\alpha}}$ elements that are inserted to the created subtreap. In Step 1.2.4, operation \textsc{Flush-Down}\ removes elements from a bottom buffer $b_i$ at level $i$ and inserts them to a middle or bottom buffer $b_{i+1}$ at level $i+1$, where $|b_{i+1}| = |b_{i}|^{1+\frac{\alpha}{2}}$.
The maximum difference in potential occurs when $\frac{1}{2}|b_{i}|$ elements are removed from a full bottom rear buffer of $b_i$ and added to a rear buffer at level $i+1$. We have that:
\[ \sum_{j=i}^{i+1} \Delta \Phi_r (b_j) \leq - 2 \frac{c_0}{B} \frac{|b_{i}|}{2} \left( h-i \right) + 2 \frac{c_0}{B} \frac{|b_{i}|}{2} \left( h-i -1 \right) \leq - \frac{c_0}{B} |b_{i}| \leq 0 \]
\noindent for $|b_{i}|\geq 0$. For the case where Step \ref{fld:1} is executed, in the worst case at most $\frac{1}{6}|b_{i}|$ elements are removed from a full bottom front buffer of $b_i$. By Invariants \ref{inv:lprio} and \ref{inv:prio}, operation \textsc{Resolve}\ at Step \ref{bi:0}.1 of the recursive call to \textsc{Batched-Insert}\ will move these elements to the middle/bottom front buffer at level $i+1$. We have that:
\[ \sum_{j=i}^{i+1} \Delta \Phi_f (b_j) \leq - \frac{1}{6}\frac{c_0}{B} |b_{i}| \leq 0 \]
\noindent for $|b_{i}|\geq 0$. Hence $\sum_{j=i}^{i+1} \Delta \Phi (b_j)\leq 0$.
\end{proof}

\begin{lemma}\label{lem:bext}
\textsc{Batched-ExtractMin}\ on an $x$-treap $D$ takes $\OO{\lambda^{\frac{\alpha}{1+\alpha}}\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} D.x}$ amortized I/Os per element, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$.
\end{lemma}
\begin{proof}
The cost of \textsc{Batched-ExtractMin}\ is dominated by the call to \textsc{Flush-Up}\ on a buffer $b_i$, where $|b_i| = \OO{D.x}$. Steps \ref{be:0}.2 (extraction) and \ref{be:1} (base case) cost an extra $\OO{\lambda^{\frac{\alpha}{1+\alpha}}/B}$ amortized I/Os per element, due to the potential's definition and by Lemma \ref{lem:array}, respectively.

Excluding all recursive calls (to \textsc{Flush-Up}\ at Steps 1, 2.2 and 2.3 and to \textsc{Initialize}\ at Step 6), the worst-case cost of \textsc{Flush-Up}\ on a buffer $b_i$ is $\OO{1 + \frac{|b_i|}{B}\log_{\frac{\lambda}{B}} |b_i|}$ I/Os. Specifically, \textsc{Flush-Up}\ executes $\OO{|b_i|}$ \textsc{Insert} s and \textsc{ExtractMin} s to the temporary priority queue (that does not support \textsc{DecreaseKey}) and a constant number of scans and merges. The priority queue operations \cite{ABDHM07} and Step 3 take $\OO{\frac{|b_i|}{B}\log_{\frac{\lambda}{B}} |b_i|}$ I/Os (the structure can be easily modified to achieve this bound, rather than $\sort{|b_i|}$ I/Os). This dominates the cost of incurred random accesses and of calls to \textsc{Resolve}\ (Lemmata \ref{lem:ra} and \ref{lem:res}, respectively). The I/O-cost is amortized over the $\Theta \left(|b_i|\right)$ elements returned by \textsc{Batched-ExtractMin}, resulting in $\OO{\frac{1}{B}\log_{\frac{\lambda}{B}} |b_i|}$ amortized I/Os per element.

To prove the negative cost of the recursive calls (Steps 1 and 2.3), it suffices to argue that there is a release in potential when buffers in consecutive levels are being processed. Without loss of generality, we assume that elements are moved from the middle front buffer at level $i$ to the top front buffer at level $i-2$ ($|b_{i}| = |b_{i-2}|^{1+\frac{\alpha}{2}}$), where the upper level subtreaps at level $i-1$ are base case structures (hence they do not affect the potential). This assumption charges an extra $\OO{\frac{\lambda^{\varepsilon}}{B}}$ I/Os per element by Lemma \ref{lem:array}. The case between the bottom and middle buffers is analogous.
The maximum difference in potential occurs when $\frac{|b_{i-2}|}{4}$ elements are removed from a front middle buffer at level $i$ with less than $\frac{|b_{i}|}{4}$ elements, and added to an empty front top buffer at level $i-2$. Since the rear buffers do not change, we have that:
\[ \sum_{j=i-2}^{i} \Delta \Phi(b_j) \leq \sum_{j=i-2}^{i} \left(\Delta \Phi_f(b_j) + \Delta \Phi_r(b_j)\right)\leq \Delta \Phi_f(b_{i-2}) + \Delta \Phi_f(b_{i})\leq \]
\[ - \frac{c_0}{B} \lambda^\varepsilon \frac{|b_i|}{4} \left(h-i + 2\right) + \frac{c_0}{B} \lambda^\varepsilon \frac{|b_i|}{4} \left(h-i\right) \leq - \frac{c_0}{B} \lambda^\varepsilon \frac{|b_i|}{2} \leq 0 , \]
\noindent for $|b_i| \geq 0$.
\end{proof}

\section{Cache-oblivious priority queues}
Priority queues support operations \textsc{Update}\ and \textsc{ExtractMin}\ that are defined similarly to \textsc{Batched-Insert}\ and \textsc{Batched-ExtractMin}, respectively, but on a \emph{single} element.

\subsection{Data structure}
To support these operations, we compose a priority queue out of its batched counterpart in Theorem \ref{thm:xtreap}. The data structure on $N$ elements consists of $1+\log_{1+\alpha}\log_2 N $ $x$-treaps of doubly increasing size with the parameter $\alpha$ set to the same value in all of them. Specifically, for $i\in \{0,\ldots,\log_{1+\alpha}\log_2 N\}$, the $i$-th $x$-treap $D_i$ has $D_i.x = 2^{\left(1+\alpha \right)^i}$. We store all keys returned by \textsc{ExtractMin}\ in a hash table $X$ \cite{IP12,CFS18}. (Any I/O-efficient hash table with $\OO{1}$ query and update I/Os that is also cache-oblivious suffices.) For $i\in \{0,\ldots,\log_{1+\alpha}\log_2 N - 1\}$, we define the top buffer of $D_i$ to be ``below'' the bottom buffer of $D_{i-1}$ and the bottom buffer of $D_i$ to be ``above'' the top buffer of $D_{i+1}$. We define the set of represented pairs (key, priority) $rep = \bigcup_{i=0}^{\log_{1+\alpha}\log_2 N} D_i.rep \backslash \{(k,p)|k\in X\}$ and call \emph{represented} the keys and priorities in $rep$. We maintain the invariant that the maximum represented priority in $D_i.rep$ is smaller than the smallest represented priority below.

\subsection{Algorithms}
To implement \textsc{Update}\ on a pair (key,priority) $\in rep$, we \textsc{Batched-Insert}\ the corresponding element to $D_0$. $D_0$ handles single-element batches, since $i=0$ implies $D_0.x=\Theta\left(1\right)$. When $D_i$ reaches capacity (i.e. contains $\left(D_i.x\right)^{1+\alpha}$ elements), we call \textsc{Flush-Down}~on it, \textsc{Batched-Insert}\ the elements in the returned temporary array to $D_{i+1}$ and discard the array. This process terminates at the first $x$-treap that can accommodate these elements without reaching capacity. To implement \textsc{ExtractMin}, we call \textsc{Batched-ExtractMin}\ on the first $x$-treap $D_i$ with a positive counter, add the extracted elements to the (empty) bottom front buffer of $D_{i-1}$ and repeat this process on $D_{i-1}$, until $D_0$ returns at least one element. If the returned key does not belong to $X$, we insert it. Else, we discard the element and repeat \textsc{ExtractMin}. To implement \textsc{Delete}\ of a key, we add the key to $X$.
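To make the composition above concrete, the following Python sketch mirrors the \textsc{Update}, \textsc{ExtractMin}\ and \textsc{Delete}\ cascades between the $x$-treaps $D_0, D_1, \ldots$ Here \texttt{XTreap} is a hypothetical in-memory stand-in for the batched structure of Theorem \ref{thm:xtreap}: it realizes only the interface (not the buffer layout or the I/O bounds), so the sketch illustrates the control flow of the operations rather than their cost.
\begin{verbatim}
import heapq

class XTreap:
    # Hypothetical stand-in for the x-treap of Theorem thm:xtreap:
    # a binary heap that realizes the batched interface, not the I/O bounds.
    def __init__(self, x, alpha):
        self.x, self.alpha = x, alpha
        self.heap = []                       # (priority, key) pairs

    def capacity(self):
        return int(self.x ** (1 + self.alpha))

    def __len__(self):
        return len(self.heap)

    def batched_insert(self, batch):         # batch of (key, priority) pairs
        for key, prio in batch:
            heapq.heappush(self.heap, (prio, key))

    def batched_extractmin(self):            # returns Theta(x) smallest pairs
        k = min(max(1, int(self.x)), len(self.heap))
        return [heapq.heappop(self.heap)[::-1] for _ in range(k)]

    def flush_down(self):                    # hand everything to the next treap
        batch = [(key, prio) for prio, key in self.heap]
        self.heap = []
        return batch

class PriorityQueue:
    def __init__(self, N, alpha=1.0):
        self.levels = []                     # D_i with D_i.x = 2^((1+alpha)^i)
        x = 2.0
        while x <= N:
            self.levels.append(XTreap(x, alpha))
            x = x ** (1 + alpha)
        self.extracted = set()               # hash table X of extracted keys

    def update(self, key, priority):
        batch = [(key, priority)]
        for D in self.levels:
            D.batched_insert(batch)
            if len(D) < D.capacity():        # first treap that can accommodate
                return
            batch = D.flush_down()           # overflow cascades to D_{i+1}
        self.levels[-1].batched_insert(batch)  # sketch: last treap absorbs residue

    def extract_min(self):                   # assumes a non-empty queue
        while True:
            i = next(j for j, D in enumerate(self.levels) if len(D) > 0)
            while i > 0:                     # refill upwards until D_0 has elements
                self.levels[i - 1].batched_insert(self.levels[i].batched_extractmin())
                i -= 1
            batch = self.levels[0].batched_extractmin()
            self.levels[0].batched_insert(batch[1:])  # keep the non-minima in D_0
            key, prio = batch[0]
            if key not in self.extracted:    # skip ghost (deleted/extracted) keys
                self.extracted.add(key)
                return key, prio

    def delete(self, key):
        self.extracted.add(key)
\end{verbatim}
For instance, interleaving \texttt{update} and \texttt{extract\_min} calls reproduces the cascading \textsc{Flush-Down}\ and refilling behavior described above, and \texttt{delete} simply records the key in the hash table $X$.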
\begin{theorem}\label{thm:pq}
There exist cache-oblivious priority queues on $N$ elements that support operation \textsc{Update}\ in $\OO{\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ amortized I/Os per element and operations \textsc{ExtractMin}\ and \textsc{Delete}\ in $\OO{\lceil \frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B} \log_{\frac{\lambda}{B}} \frac{N}{B} \rceil \log_{\frac{\lambda}{B}} \frac{N}{B} }$ amortized I/Os per element, using $\OO{\frac{N}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ blocks, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$.
\end{theorem}
\begin{proof}
To each element in $D_i$ that has been \textsc{Update} d since the last time that $D_i$ has undergone a \textsc{Flush-Down}~operation, we define an \emph{update potential} of:
%
$$ \frac{1+\alpha}{B} \log_{\frac{\lambda}{B}} \frac{2^{(1+\alpha)^i}}{B} = (1+\alpha)^{i+1}\frac{1}{B \log \frac{\lambda}{B}}-\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} B$$
%
When $D_i$ reaches capacity and \textsc{Flush-Down}~is called on it, the number of elements that have been \textsc{Update} d in the priority queue is a constant fraction of the $D_i$'s capacity. This is because the only ways an element could appear in $D_i$ are the following:
\begin{itemize}
\item The elements that were already in the priority queue at $D_j$ for $j<i$.
\item The elements that were newly inserted.
\item The elements inserted from $D_j$ for $j>i$ during the \textsc{ExtractMin}\ process. However, these elements will never bring the number of elements in $D_i$ above a constant fraction of the capacity.
\end{itemize}
\noindent Hence, an \textsc{Update}\ operation, before any needed \textsc{Flush-Down}\ operation, increases the update potential by:
\begin{align*} \sum_{i=0}^{\log_{1+\alpha}\log_2 N} \left( (1+\alpha)^{i+1}\frac{1}{B \log \frac{\lambda}{B}}-\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} B \right) &\leq \sum_{i=0}^{\log_{1+\alpha}\log_2 N} (1+\alpha)^{i+1}\frac{1}{B \log \frac{\lambda}{B}} \\ &= O \left( \frac{\log N}{B \log \frac{\lambda}{B}} \right) \\ &= O \left( \frac{\log_\frac{\lambda}{B} N}{B} \right) \end{align*}
To every key $k$ that has been returned by operation \textsc{ExtractMin}\ and occurs in $d$ elements in $D$, we define a potential of $\Phi(k) = d\cdot \frac{c_d}{B}\lambda^{\varepsilon}\cdot \log_{\frac{\lambda}{B}} D.x $ (for constant $c_d\geq 1$). \textsc{Update}\ does not affect this potential, by the assumption that an extracted key is not reinserted to the structure.

To $D_i$ we give an \emph{extract-min potential} of:
%
$$ \lambda^{\frac{\alpha}{1+\alpha}}\frac{1+\alpha}{B} \log_{\frac{\lambda}{B}} \frac{2^{(1+\alpha)^i}}{B} = (1+\alpha)^{i+1}\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B \log \frac{\lambda}{B}}-\lambda^{\frac{\alpha}{1+\alpha}}\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} B$$
%
for each \textsc{ExtractMin}~operation performed since the last time a \textsc{Batched-ExtractMin}\ operation was performed on $D_i$. When $D_i$ becomes empty and an \textsc{ExtractMin}~needs to refill it, the number of elements \textsc{ExtractMin} 'd from the priority queue since the last time that happened is a constant fraction of $D_i$'s capacity. This is true, since the only way elements could be removed from $D_i$ between \textsc{Batched-ExtractMin} s is because $D_i$ reached its capacity from \textsc{Update} s and thus \textsc{Flush-Down}\ needed to be called on $D_i$. This, however, only reduces the number of elements to a constant fraction of $D_i$'s capacity.
\noindent Hence, an \textsc{ExtractMin}~operation, before any needed \textsc{Flush-Down}, increases the extract-min potential by:
\begin{align*} \sum_{i=0}^{\log_{1+\alpha}\log_2 N} \left( (1+\alpha)^{i+1}\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B \log \frac{\lambda}{B}}-\lambda^{\frac{\alpha}{1+\alpha}}\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} B \right) &\leq \sum_{i=0}^{\log_{1+\alpha}\log_2 N} (1+\alpha)^{i+1}\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B \log \frac{\lambda}{B}} \\ &= O \left( \lambda^{\frac{\alpha}{1+\alpha}}\frac{\log N}{B \log \frac{\lambda}{B}} \right) \\ &= O \left( \lambda^{\frac{\alpha}{1+\alpha}}\frac{\log_{\frac{\lambda}{B}} N}{B} \right) \end{align*}
\noindent Finally, the potential is constructed to exactly pay for the cost of \textsc{Flush-Down}~between $x$-treaps.

The worst-case cost of operation \textsc{Delete}\ is $\OO{1}$ I/Os \cite{IP12}. However, its amortized cost includes the potential necessary to remove all the elements with the deleted key from the structure. We introduce an extra ``ghost potential'' of $\OO{\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ to every key in $X$ that has been \textsc{Delete} d or \textsc{ExtractMin} 'd. The stated I/O-cost for \textsc{Delete}\ follows from the fact that the first time a key is \textsc{Delete} d or \textsc{ExtractMin} 'd from the structure, $\OO{\log_{\frac{\lambda}{B}}\frac{N}{B}}$ elements in the structure get charged by this potential (by Lemma \ref{lem:card}). Whenever an \textsc{ExtractMin}\ returns an element with ghost potential, this potential is released in order to \textsc{ExtractMin}\ the next element at no extra amortized cost.

The priority queue may contain an $N$-treap that occupies $\OO{\left(\frac{N}{B}\right)^{1+\alpha}\log_{\frac{\lambda}{B}}\frac{N}{B}}$ blocks (by Theorem \ref{thm:xtreap}). However, with careful manipulation of the unaddressed empty space, this space usage (and thus that of the whole priority queue) can be reduced back to $\OO{\frac{N}{B}\log_{\frac{\lambda}{B}}\frac{N}{B}}$ blocks.
\end{proof}

\section{Cache-oblivious buffered repository trees}
A \emph{buffered repository tree (BRT)} \cite{BGVW00,ABDHM07,CR18} stores a multi-set of at most $N$ \emph{elements}, each associated with a \emph{key} in the range $\left[1\dots k_{\max}\right]$. It supports the operations \textsc{Insert}~and \textsc{Extract}~that, respectively, insert a new element to the structure and remove and report all elements in the structure with a given key. To implement a BRT, we make use of the $x$-box \cite{BDFILM10}. Given positive real $\alpha \leq 1$ and key range $\left[k_{\min}, k_{\max}\right)\subseteq \Re$, an $x$-\emph{box} $D$ stores a set of at most $\frac{1}{2}\left(D.x\right)^{1+\alpha} $ elements associated with a key $k\in \left[D.k_{\min}, D.k_{\max}\right)$. An $x$-box supports the following operations:
\begin{itemize}
\item \textsc{Batched-Insert} $\left(D, e_1, e_2 , \ldots, e_{b}\right)$: For constant $c\in \left(0,\frac{1}{2}\right]$, insert $b \leq c\cdot D.x$ elements $e_1, e_2 , \ldots, e_{b}$ to $D$, given they are key-sorted with keys $e_i.k \in \left[D.k_{\min}, D.k_{\max}\right), i\in \left[1,b\right]$.
\item \textsc{Search} $\left(D, \kappa \right)$: Return pointers to all elements in $D$ with key $\kappa$, given they exist in $D$ and $\kappa \in \left[D.k_{\min}, D.k_{\max}\right)$.
\end{itemize}
To implement operation \textsc{Extract}$\left(D, \kappa \right)$~that extracts all elements with key $\kappa$ from an $x$-box $D$, we \textsc{Search}$\left(D, \kappa \right)$ and remove from $D$ all returned pointed elements. The BRT on $N$ elements consists of $1+\log_{1+\alpha}\log_{2} N$ $x$-boxes of doubly increasing size with parameter $\alpha$ being set the same in all of them. We obtain the stated bounds by modifying the proof of the $x$-box \cite[Theorem 5.1]{BDFILM10} to account for Lemmata \ref{lem:xbox} and \ref{lem:brtarray}.

\begin{lemma}\label{lem:xbox}
For $D.x= \Omega \left( \lambda^{\frac{1}{1+\alpha}}\right)$, an $x$-box supports operations \textsc{Batched-Insert}\ in $\OO{\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} \frac{D.x}{B}}$ and \textsc{Extract}~on $K$ extracted elements in $\OO{\left(1+\alpha \right)\log_{\frac{\lambda}{B}} \frac{D.x}{B} +\frac{K}{B}}$ amortized I/Os per element, using $\OO{\frac{\left(D.x\right)^{1+\alpha}}{B}}$ blocks, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$.
\end{lemma}
\begin{proof}
Regarding \textsc{Batched-Insert}\ on a cache-aware $x$-box, we obtain $\OO{\frac{1+\alpha}{B}\log_{\frac{\lambda}{B}} \frac{D.x}{B}}$ amortized I/Os by modifying the proof of \textsc{Batched-Insert}\ \cite[Theorem 4.1]{BDFILM10} according to the proof of Lemma \ref{lem:bins}. Specifically, every element is charged $\OO{1/B}$ amortized I/Os, instead of $\OO{1/B^{\frac{1}{1+\alpha}}}$, and the recursion stops when $D.x = \OO{\lambda^{\frac{1}{1+\alpha}}}$, instead of $D.x = \OO{B^{\frac{1}{1+\alpha}}}$. Regarding \textsc{Search} ing for the first occurrence of a key in a cache-aware $x$-box, we obtain $\OO{\log_{\frac{\lambda}{B}} \frac{D.x}{B}}$ amortized I/Os by modifying the proof of \textsc{Search}~\cite[Lemma 4.1]{BDFILM10}, such that the recursion stops when $D.x = \OO{\lambda^{\frac{1}{1+\alpha}}}$, instead of $D.x = \OO{B^{\frac{1}{1+\alpha}}}$. To \textsc{Extract}\ all $K$ occurrences of the searched key, we access them by scanning the $x$-box and by following fractional cascading pointers, which incurs an extra $\OO{\frac{K}{B}}$ I/Os.
\end{proof}

\begin{lemma}\label{lem:brtarray}
An $\OO{\lambda^{\frac{1}{1+\alpha}}}$-box supports operation \textsc{Batched-Insert}\ in $\OO{1/B}$ amortized I/Os per element and operation \textsc{Extract}~on $K$ extracted elements in $\scan{\lambda^{\frac{\alpha}{1+\alpha}}}$ amortized I/Os per element, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$.
\end{lemma}
\begin{proof}
We allocate an array of size $\OO{M}$ and implement \textsc{Batched-Insert}\ by simply appending the inserted element to the array and \textsc{Extract}~by scanning the array and removing and returning all occurrences of the searched key.
\end{proof}

\begin{theorem}\label{thm:brt}
There exist cache-oblivious buffered repository trees on a multi-set of $N$ elements and $K$ extracted elements that support operations \textsc{Insert}\ in $\OO{\frac{1}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}}$ and \textsc{Extract}\ in $\OO{\frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B}\log_{\frac{\lambda}{B}} \frac{N}{B}+\frac{K}{B}}$ amortized I/Os per element, using $\OO{\frac{N}{B}}$ blocks, for some $\lambda \in \left[2, N \right]$ and for any real $\alpha\in (0,1]$.
\end{theorem}

\section{Applications to graph algorithms}
\subsection{Directed single-source shortest paths}
\begin{theorem} \label{thm:dsssp}
Single source shortest paths on a graph with $V$ nodes and $E$ directed edges can be computed cache-obliviously in $\OO{\frac{V^{\frac{1}{1+\alpha}} E^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{E}{VB}} \frac{E}{B} + V\log_{\frac{E}{VB}} \frac{E}{B} + \frac{E}{B} \log_{\frac{E}{VB}} \frac{E}{B}}$ I/Os, for any real $\alpha\in (0,1]$.
\end{theorem}
\begin{proof}
The algorithm of Vitter \cite{V01} (described in detail in \cite[Lemma 4.1]{CR18} for the cache-oblivious model) makes use of a priority queue that supports the \textsc{Update}\ operation and of a BRT on $\OO{E}$ elements. Specifically, it makes $V$ calls to \textsc{ExtractMin}\ and $E$ calls to \textsc{Update}\ on the priority queue and $V$ calls to \textsc{Extract}\ and $E$ calls to \textsc{Insert}\ on the BRT. We obtain $\OO{V \frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{\lambda}{B}} \frac{E}{B} + V\log_{\frac{\lambda}{B}} \frac{E}{B} + \frac{E}{B} \log_{\frac{\lambda}{B}} \frac{E}{B}}$ total I/Os by using Theorems \ref{thm:pq} and \ref{thm:brt} for some $\lambda \in \left[2, N \right]$. We set $\lambda = \OO{E/V}$ to obtain the stated bound.
\end{proof}

\subsection{Directed depth- and breadth-first search}
\begin{theorem} \label{thm:ddbfs}
Depth-first search and breadth-first search numbers can be assigned cache-obliviously to the $V$ nodes of a graph with $E$ directed edges in $\OO{\frac{V^{\frac{1}{1+\alpha}} E^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{E}{VB}} \frac{E}{B} + V\log_{\frac{E}{VB}} \frac{E}{B} + \frac{E}{B} \log_{\frac{E}{VB}} \frac{E}{B}}$ I/Os, for any real $\alpha\in (0,1]$.
\end{theorem}
\begin{proof}
The algorithm of Buchsbaum et al. \cite{BGVW00} makes use of a priority queue and of a BRT on $\OO{E}$ elements. Specifically, it makes $2V$ calls to \textsc{ExtractMin}\ and $E$ calls to \textsc{Insert}\ on the priority queue and $2V$ calls to \textsc{Extract}~and $E$ calls to \textsc{Insert}\ on the BRT \cite[Theorem 3.1]{BGVW00}. We obtain $\OO{V \frac{\lambda^{\frac{\alpha}{1+\alpha}}}{B}\log^2_{\frac{\lambda}{B}} \frac{E}{B} + V\log_{\frac{\lambda}{B}} \frac{E}{B} + \frac{E}{B} \log_{\frac{\lambda}{B}} \frac{E}{B}}$ total I/Os, by using Theorems \ref{thm:pq} and \ref{thm:brt} for some $\lambda \in \left[2, N \right]$. We set $\lambda = \OO{E/V}$ to obtain the stated bound.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
In machine learning (ML), classification algorithms have achieved great success. Through recent advances in convolutional neural networks, classification performance has already surpassed human-level performance in image classification~\cite{He:delving}. However, such algorithms have usually been developed under a \emph{closed-set} assumption, \emph{i.e.}, the class of each test sample is assumed to always belong to one of the pre-defined set of classes. Although this conventional assumption can be easily violated in real-world applications (classifiers can face unknown-class data), traditional classification algorithms are highly likely to force unknown-class samples to be classified into one of the known classes. To tackle this problem, the \emph{open-set recognition (OSR)} problem~\cite{Scheirer:openset} aims to properly classify unknown-class samples as ``unknown'' and known-class samples as one of the known classes.

According to the definition of OSR~\cite{Scheirer:openset}, it is required to properly limit the latent feature space of known-class data. To satisfy the requirement, various OSR methods were developed based on traditional ML models. Previously, Scheirer~\emph{et al.}~\cite{Scheirer:svm} calibrated the decision scores of support vector machines (SVMs). Based on the intuition that a large set of data samples of unknown classes can be rejected if those of known classes are accurately modeled, Jain~\emph{et al.}~\cite{Jain:pisvm} proposed $P_I$-SVM, which utilized the statistical modeling of known-class samples located near the decision boundary of SVMs. Afterwards, attempts were made to solve the OSR problem based on the principle of nearest neighbors~\cite{PRM:specialized}. Taking distribution information of data into account, Rudd~\emph{et al.}~\cite{Rudd:extreme} proposed the extreme value machine which utilizes the concept of margin distributions.

Since deep neural networks (DNNs) achieve robust classification performance by learning high-level representations of data, OSR methods for DNNs have received great attention. Based on the theoretical foundations studied in traditional ML-based OSR methods, Bendale and Boult~\cite{Bendale:openmax} proposed the first OSR strategy for DNNs called Openmax, which calibrates the output logits of pre-trained Softmax classifiers. To improve Openmax, Yoshihashi~\emph{et al.}~\cite{Yoshihashi:crosr} proposed the classification-reconstruction learning to make robust latent feature vectors. Afterwards, Oza and Patel~\cite{Oza:c2ae} proposed to exploit a class-conditioned autoencoder and use its reconstruction error to assess each input sample. Sun~\emph{et al.}~\cite{sun:cgdl} employed several class-conditioned variational auto-encoders for generative modeling.

Although previous methods applied \emph{offline analyses} to pre-trained Softmax classifiers or employed complicated DNN architectures, they have limited performance since the classifiers were trained solely based on known-class data. To mitigate the problem, this paper designs a simple and effective open-set classifier in the \emph{generalized OSR setting}, which uses background-class regularization (BCR) at training time. Despite its effectiveness, BCR has received little attention in OSR, and previous BCR methods~\cite{Dhamija:reducing,Hendrycks:oe,Liu:energy} are insufficient to properly solve the OSR problem.
In this paper, we denote the infinite label space of all classes as $\mathcal{Y}$ and use the following class categories, whose definition is also provided in~\cite{Dhamija:reducing,Geng:survey}.
\begin{enumerate}[nosep]
\item[\tiny$\bullet$] \textbf{Known known classes} (KKCs; $\mathcal{K}= \{1,\cdots,C\} \subset \mathcal{Y}$) include distinctly labeled positive classes, where $\mathcal{U}=\mathcal{Y}\setminus \mathcal{K}$ is the entire unknown classes.
\item[\tiny$\bullet$] \textbf{Known unknown classes} (KUCs; $\mathcal{B}\subset \mathcal{U}$) include background classes, \emph{e.g.}, labeled classes which are not necessarily grouped into a set of KKCs $\mathcal{K}$.
\item[\tiny$\bullet$] \textbf{Unknown unknown classes} (UUCs; $\mathcal{A}=\mathcal{U}\setminus \mathcal{B}$) represent the rest of $\mathcal{U}$, where UUCs are not available at training time, but occur at inference time.
\end{enumerate}
Also, we denote $\mathcal{D}_{t}$ as a training set consisting of multiple pairs of a KKC data sample and the corresponding class label $y \in \{1,\cdots,C\}$. $\mathcal{D}_{test}^{k}$ and $\mathcal{D}_{test}^{u}$ are test sets of KKCs and UUCs, respectively. $\mathcal{D}_{b}$ is a background dataset of KUCs.

\section{Preliminary Studies}
\subsection{The Open-Set Recognition Problem}\label{sec2.1}
The OSR problem addresses a classification setting that can face test samples from classes unseen during training (UUCs). In this setting, open-set classifiers aim to properly classify KKC samples while rejecting UUC ones simultaneously. A similar problem to OSR is out-of-distribution (OoD) detection~\cite{hendrycks:baseline}, which typically aims to reject data items drawn far away from the training data distribution. Conventionally, previous studies such as~\cite{hendrycks:baseline,Liang:odin,Lee:calibration,Lee:maha} assumed that OoD samples are drawn from other datasets or can even be noise data. In this paper, we aim to reject test data whose classes are unknown but related to the training data, which narrows down the scope of conventional OoD detection tasks.

\begin{figure}[t] \centering{% \subfloat[Latent feature space]{\includegraphics[width=0.27\textwidth, angle=0]{fig1-0.png}} \quad \quad \subfloat[Closed-set problem]{\includegraphics[width=0.27\textwidth, angle=0]{fig1-1.png}} \quad \quad \subfloat[Open-set problem]{\includegraphics[width=0.27\textwidth, angle=0]{fig1-2.png}} } \caption{Given (a) a latent feature space, we demonstrate (b) closed-set and (c) open-set problems, where KKCs and UUCs are known and unknown classes, respectively. } \label{fig:comparison} \end{figure}

Previously, Scheirer~\emph{et al.}~\cite{Scheirer:openset} introduced a formal definition of OSR based on the notion of open-space risk $R_{\mathcal{O}}$, which is a relative measure of a positively labeled union of balls $\mathcal{S}_V$ and open space $\mathcal{O}$ located far from $\mathcal{S}_V$. Since labeling any data item in $\mathcal{O}$ incurs open-space risk, it is straightforward that a classifier cannot be a solution for the OSR problem if the classifier accepts data in infinitely wide regions, \emph{i.e.}, its open-space risk is unbounded ($R_{\mathcal{O}} = \infty$). The definition implies that essential requirements to solve the OSR problem are 1) \emph{bounding open-space risk} and 2) \emph{ideally balancing it with empirical risk}. Unlike traditional classifier models, open-set classifiers are required to limit the latent feature space of KKC data to bound their open-space risk.
To ensure that open-space risk is bounded, Scheirer~\emph{et al.}~\cite{Scheirer:svm} introduced compact abating probability (CAP) models. The principle of CAP models is that if the support region of a classifier decays in all directions from the training data, thresholding the region will bound the classifier's open-space risk~\cite{Boult:survey}. As depicted in Figure~\ref{fig:comparison}, which compares traditional closed-set and open-set classification problems~\cite{Geng:survey}, building proper \emph{class-wise} CAP models is an effective strategy for OSR.

\subsection{Post-Classification Analysis for Pre-Trained Softmax Classifier}\label{sec2.2}
This paper aims to solve the OSR problem solely based on a standard DNN-based classifier architecture $f$ as a latent feature extractor. Applying a fully-connected layer to $f$, a conventional Softmax classifier computes the posterior probability of an input ${\bf x}$ belonging to the $c$-th known class by
\begin{equation}\label{eq:ori_prob} P_s(y=c|{\bf x}) = \frac{\exp({\bf w}_c^T f ({\bf x})+b_c)}{\sum_{i=1}^C \exp({\bf w}_i^T f ({\bf x})+b_i)}, \end{equation}
where $c \in \{1,\cdots,C\}$, $f({\bf x}) \in \mathbb{R}^n$ is the latent feature vector of ${\bf x}$, and ${\bf w}_c$ and $b_c$ are the weight and bias for the $c$-th class, respectively. For pre-trained Softmax classifiers, Hendrycks and Gimpel~\cite{hendrycks:baseline} proposed a baseline technique to detect anomalous samples, which imposes a threshold on the predictive confidence of Eq.~(\ref{eq:ori_prob}). When using the baseline approach to solve the OSR problem, one can estimate the class of each KKC sample and recognize UUC data by
\begin{equation}~\label{eq:softmax_inf} \widehat{y} = \begin{cases} \argmax_{c \in \{1,\cdots,C\}} P_s(y=c|{\bf x}), & \text{if } \max_{c \in \{1,\cdots,C\}} P_s(y=c|{\bf x}) \ge \tau, \\ C+1 \textrm{ (unknown class)}, & \text{otherwise}. \end{cases} \end{equation}
However, Eq.~(\ref{eq:softmax_inf}) cannot formally bound open-space risk and formulate class-wise CAP models since it only rejects test data near the decision boundary of classifiers, thus having infinitely wide regions of acceptance~\cite{Boult:survey}. Therefore, \emph{post-classification analysis} methods using an auxiliary measure other than the Softmax probability are necessary to build auxiliary CAP models in the latent feature space of $f$, where \emph{distance measures} have been widely employed in previous studies~\cite{Bendale:openmax,Lee:maha}.

To build class-wise CAP models, Openmax~\cite{Bendale:openmax} defined radial-basis decaying functions $\{s({\bf x},c)\}_{c=1}^C$, where each $s({\bf x},c)$ measures the class-belongingness of ${\bf x}$ for the $c$-th class, in the latent feature space of $f$. For each $s({\bf x},c)$, the authors employed distance measures between $f({\bf x})$ and an empirical class mean vector ${\boldsymbol \mu}_c$, \emph{e.g.}, $s({\bf x},c) = D_E^2(f({\bf x}), {\boldsymbol \mu}_c) = (f({\bf x})-{\boldsymbol \mu}_c)^T(f({\bf x})-{\boldsymbol \mu}_c)$. To formulate more effective CAP models, they statistically analyzed the distribution of $s({\bf x},c)$ based on the extreme value theory (EVT)~\cite{Scheirer:evt}, which provides a theoretical foundation that the Weibull distribution is suitable for modeling KKC samples located far from the class mean vectors (extreme samples).
To be specific, Openmax fits a Weibull distribution on extreme samples of the $c$-th class having the highest $D_E(f({\bf x}), {\boldsymbol \mu}_c)$ values, where its cumulative distribution function (CDF) formulates the \emph{probability of inclusion} $P_I({\bf x},c)$~\cite{Jain:pisvm,Rudd:extreme}, \emph{i.e.}, $P_I({\bf x},c) = 1 - \texttt{WeibullCDF}$, which rapidly decays near the extreme samples. Based on $P_I({\bf x},c)$, the decision rule of Eq.~(\ref{eq:softmax_inf}) can be calibrated to conduct OSR with Softmax classifiers.

\subsection{Background-Class Regularization}\label{sec2.3}
Besides requiring additional inference procedures (\emph{e.g.}, EVT modeling), previous offline analyses may have limited OSR performance, since the classifiers were trained solely based on known-class data. To obtain robust empirical results without complicated analyses, one can use the strategy of BCR at the training phase, which exploits background-class (KUC) samples as surrogates of UUC data. Geng~\emph{et al.}~\cite{Geng:survey} argued that the generalized OSR setting that utilizes KUC samples is still under-explored and an important research direction for robust OSR. Conventionally, a loss function for training classifiers with BCR can be
\begin{equation}\label{eq:previous} \mathcal{L} = \mathcal{L}_{cf} + \lambda \mathcal{L}_{bg} = \mathbb{E}_{({\bf x}^k, y)\sim\mathcal{D}_{t}} \left[- \log P_s(y|{\bf x}^k) + \lambda \mathbb{E}_{{\bf x}^b \sim\mathcal{D}_{b}}\left[f_{reg}\left({\bf x}^k, y, {\bf x}^b \right) \right]\right], \end{equation}
where $\mathcal{L}_{cf}$ and $\mathcal{L}_{bg}$ are the loss terms for closed-set classification and BCR, respectively, and $\lambda$ is a hyperparameter. For $\mathcal{L}_{bg}$, previous studies designed their own $f_{reg}$, where \cite{Dhamija:reducing} proposed the objectosphere loss for OSR, and \cite{Hendrycks:oe} and \cite{Liu:energy} employed the uniformity and the energy losses for OoD detection, respectively. In this paper, we tackle the following limitations of the previous BCR methods.
\begin{enumerate}[nosep]
\item[\tiny$\bullet$] In the previous BCR methods, $\mathcal{L}_{bg}$ was designed to make normal data and anomalies more distinguishable in terms of the corresponding anomaly scores. Since they categorized normal data into a single group (\emph{i.e.}, did not consider the classes) in $\mathcal{L}_{bg}$, the previous methods may have limited performance in rejecting UUC data and maintaining robust closed-set classification results.
\item[\tiny$\bullet$] The previous methods using the decision rule of Eq.~(\ref{eq:softmax_inf}) (\emph{e.g.}, objectosphere~\cite{Dhamija:reducing} and uniformity~\cite{Hendrycks:oe}) cannot bound open-space risk. Although one can use post-classification analyses to bound open-space risk, the trained latent feature space can be inappropriate for applying another metric such as distance measures.
\item[\tiny$\bullet$] To increase the gap between KKC and KUC data in terms of latent feature magnitude and energy in the objectosphere~\cite{Dhamija:reducing} and the energy~\cite{Liu:energy} losses, respectively, it is necessary to find proper margin parameters for each dataset.
\end{enumerate}

\section{Proposed Method}
\subsection{Overview}\label{sec2.4}
Using a standard classifier $f$, this paper aims to design open-set classifiers having simple yet effective inference steps. In the following, we summarize our method.
\begin{enumerate}[nosep]
\item[\tiny$\bullet$] Instead of applying fully-connected layers to feature extractors $f$, we use the principle of linear discriminant analysis (LDA)~\cite{Murphy:machine} to classify images based on a distance measure. By simply imposing a threshold on the distance as in Eq.~(\ref{eq:softmax_inf}), our classifiers can easily build class-wise CAP models. (Section~\ref{sec3.1})
\item[\tiny$\bullet$] Afterwards, we propose a novel BCR strategy suitable for the distance-based classifiers. Following the convention of Eq.~(\ref{eq:previous}), we design our own $\mathcal{L}_{bg}$ called \emph{class-inclusion loss}, where our total loss function is defined by
\begin{equation}~\label{eq:totloss} \mathcal{L} = \mathcal{L}_{cf} + \lambda \mathcal{L}_{bg} = \mathcal{L}_{cf} + \lambda ( \mathcal{L}_{bg,k}+\mathcal{L}_{bg,u}). \end{equation}
The class-inclusion loss first limits the feature space of KKC data by formulating \emph{explicit} class-wise boundaries, and then forces KUC data to be located outside the boundaries at each training iteration. Our loss is designed to increase the distance gaps between KKC and KUC samples while maintaining robust closed-set classification performance. (Sections~\ref{sec3.2} and~\ref{sec3.3})
\end{enumerate}
For a better understanding of the training and inference processes of our method, we provide their detailed algorithm in our supplementary materials.

\subsection{Distance-Based Classification Models}\label{sec3.1}
\subsubsection{Distance-based classifiers.}
To train a robust open-set classifier, we formulate a \emph{distance-based classifier} as an alternative of Eq.~(\ref{eq:ori_prob}):
\begin{equation}~\label{eq:disc} P_d(y=c|{\bf x}) = \frac{P_c\cdot \mathcal{N}(f({\bf x})|{\boldsymbol \mu}_c, {\bf I})}{\sum_{i=1}^C P_i \cdot \mathcal{N}(f({\bf x})|{\boldsymbol \mu}_i, {\bf I})} = \frac{P_c \cdot \exp\left(-D_E^2(f({\bf x}), {\boldsymbol \mu}_c)\right)}{\sum_{i=1}^C P_i \cdot \exp\left(-D_E^2(f({\bf x}), {\boldsymbol \mu}_i)\right)}, \end{equation}
where Eq.~(\ref{eq:disc}) uses the principle of LDA and $\mathcal{L}_{cf} = \mathbb{E}_{({\bf x}^k, y)\sim\mathcal{D}_{t}} [- \log P_d(y|{\bf x}^k)]$. In Eq.~(\ref{eq:disc}), we exploit an identity covariance matrix ${\bf I}$ and $P_c = P(y=c)=C^{-1}$ for every KKC $c$. The classifier estimates the class of each ${\bf x}$ via $D_E^2(f({\bf x}), {\boldsymbol \mu}_c) = (f({\bf x})-{\boldsymbol \mu}_c)^T(f({\bf x})-{\boldsymbol \mu}_c)$, the Euclidean distance between $f({\bf x}) \in \mathbb{R}^n$ and ${\boldsymbol \mu}_c \in \mathbb{R}^n$, where we call ${\boldsymbol \mu}_c$ a \emph{class-wise anchor}. To ensure sufficiently large distance gaps between the pairs of initial class-wise anchors, we randomly sample each $\boldsymbol{\mu}_c$ from the standard Gaussian distribution and then set each $\boldsymbol{\mu}_c$ as a trainable vector. For distance analysis results of such randomly sampled vectors, see~\cite{Izmailov:semi}.

\subsubsection{Decision rule.}
At inference time, each KKC sample ${\bf x}$ can be classified via $\widehat{y} = \argmin_{c \in \{1,\cdots,C\}} D_E^2(f({\bf x}), {\boldsymbol \mu}_c)$.
Furthermore, applying a threshold to $D_E^2(f({\bf x}), {\boldsymbol \mu}_c)$ can bound open-space risk by formulating class-wise CAP models as follows:
\begin{equation}~\label{eq:inference} \widehat{y} = \begin{cases} \argmin_{c \in \{1,\cdots,C\}} D_E^2(f({\bf x}), {\boldsymbol \mu}_c), & \text{if } \max_{c \in \{1,\cdots,C\}} - D_E^2(f({\bf x}), {\boldsymbol \mu}_c) \ge \tau, \\ C+1 \textrm{ (unknown class)}, & \text{otherwise}. \end{cases} \end{equation}
As Eq.~(\ref{eq:inference}) employs the same metric $D_E$ for classification and UUC rejection, our method may support more accurate latent feature space analysis for OSR than the previous OSR methods using post-classification analyses.

The concept of distance-based classification was also employed in prototypical networks~\cite{snell:prototypical}, nearest class mean classifiers~\cite{Mensink:ncm}, and the previous studies of the center loss function~\cite{wen:centerloss} and convolutional prototype classifiers~\cite{yang:convolutionalp}. In addition, polyhedral conic classifiers~\cite{cevikalp:polyhedral} used the idea of returning compact class regions for KKC samples based on distance-based feature analyses. It is noteworthy that our main contribution is a novel BCR method that can effectively utilize KUC samples in a distance-based classification scheme (described in Sections~\ref{sec3.2} and~\ref{sec3.3}), not the distance-based classifier method itself. To the best of our knowledge, we are the first to discuss the necessity of distance-based BCR methods for OSR and propose a reasonable regularization method for distance-based classifiers.

\subsection{Background Class Regularization for Distance-based Classifiers}\label{sec3.2}
\subsubsection{Intuition and hypersphere classifiers.}
To obtain robust OSR performance via Eq.~(\ref{eq:inference}), we aim to design a BCR method suitable for distance-based classifiers, which uses $\mathcal{D}_{t}$ and $\mathcal{D}_b$ as surrogates of $\mathcal{D}_{test}^k$ and $\mathcal{D}_{test}^u$ at training time, respectively. Although it cannot provide any information about $\mathcal{D}_{test}^{u}$, $\mathcal{D}_b$ can be effective in limiting the latent feature space of KKCs while reserving space for UUCs. With $\mathcal{D}_b$, it is intuitive that the primary objective of BCR for Eq.~(\ref{eq:inference}) is to place KUC samples far away from ${\boldsymbol \mu}_i$ for all classes $i \in \{1,\cdots,C\}$.

Before we illustrate our BCR method, we first introduce hypersphere classifiers (HSCs)~\cite{ruff:hsc}. An HSC conducts anomaly detection by using a feature extractor $g$, where its anomaly score for an input ${\bf x}$ is the Euclidean distance between a single center vector $\boldsymbol{\mu}$ and $g({\bf x})$. When training the HSC model, the authors used normal and background data, $\mathcal{D}_{t}$ and $\mathcal{D}_{b}$, respectively, and a loss function
\begin{equation}\label{eq:hscobj} \mathbb{E}_{{\bf x}^k\sim\mathcal{D}_{t}} \left[h\left(D_E^2\left(g({\bf x}^k), {\boldsymbol \mu}\right)\right)\right] - \mathbb{E}_{{\bf x}^b \sim\mathcal{D}_{b}}\left[ \log\left(1-\exp\left(-h\left(D_E^2\left(g({\bf x}^b), {\boldsymbol \mu}\right)\right)\right)\right) \right]. \end{equation}
The loss function is designed to decrease the Euclidean distances between normal samples ${\bf x}^k$ and $\boldsymbol{\mu}$ while increasing the distances for background samples ${\bf x}^b$.
In Eq.~(\ref{eq:hscobj}), $h(x) = \sqrt{x + 1} - 1$, which implies that the Euclidean distance $D_E^2(g({\bf x}), {\boldsymbol \mu})$ is scaled into the range of $(0,1]$ via $\exp(-h(D_E^2(g({\bf x}), {\boldsymbol \mu})))$.

\subsubsection{Background-class regularization strategy.}
It is straightforward that the decision rule of Eq.~(\ref{eq:inference}) employs the principle of HSCs in a class-wise manner. In other words, the class-wise HSC for the $c$-th class determines whether a test sample belongs to the $c$-th class by computing $D_E^2(f({\bf x}), {\boldsymbol \mu}_c)$, where the input is determined as UUC if all of the class-wise HSCs reject the data item. Thus, a proper BCR strategy for distance-based classifiers should force each KUC sample ${\bf x}^b$ to be rejected by all of the class-wise HSCs (increase $D_E^2(f({\bf x}^b), {\boldsymbol \mu}_i)$ for all $i$). Since it is inefficient to consider all of the KKCs to regularize $f$ with ${\bf x}^b$ at each iteration, we approximate the process by only taking the \emph{closest} class-wise HSC into account (increase $\min_{i\in\{1,\cdots,C\}} D_E^2(f({\bf x}^b), {\boldsymbol \mu}_i)$).

Although one can adopt Eq.~(\ref{eq:hscobj}) to formulate $\mathcal{L}_{bg}$ for distance-based classifiers, scaling $D_E^2(f({\bf x}), {\boldsymbol \mu}_c)$ into $(0,1]$ via $\exp(-h(D_E^2(f({\bf x}), {\boldsymbol \mu}_c)))$, which rapidly decays near ${\boldsymbol \mu}_c$, can be insufficient to move KUC data far away from class-wise anchors. Therefore, we design $\mathcal{L}_{bg}$ that can guarantee sufficient spaces for KKC data and simultaneously force KUC samples to be located outside the limited class-wise spaces.

\subsection{Probability of Inclusion and Class-Inclusion Loss}\label{sec3.3}
As we described in Section~\ref{sec2.2}, the probability of inclusion builds effective CAP models, since it is designed to rapidly decay near extreme data, \emph{i.e.}, $P_I({\bf x},c) \approx 1$ in the region that a majority of class-$c$ KKC samples are located. In the following, we introduce a novel regularization method for distance-based classifiers based on the principle of the probability of inclusion, and then design a loss function.

\subsubsection{Probability of inclusion for distance-based classifiers.}
For pre-trained Softmax classifiers, Openmax~\cite{Bendale:openmax} formulated the probability of inclusion via EVT modeling at inference time, where the strategy is to find \emph{implicit} class-wise boundaries that distinguish KKCs from UUCs. However, such EVT-based analysis can be intractable at each training iteration, since it requires computationally-expensive and parameter-sensitive processes. In addition, it is inappropriate to make boundaries by analyzing features which are not properly trained yet. Thus, we build \emph{explicit} class-wise boundaries by formulating $P_I({\bf x},c)$ based on the underlying assumption of LDA, and then use the boundaries for regularization without additional analysis of latent feature distribution.

Under the assumption of LDA that each class-$c$ latent feature vector is drawn from a unimodal Gaussian distribution $\mathcal{N}(f({\bf x})|{\boldsymbol \mu}_c, {\bf I})$, the Euclidean distance $D_E^2(f({\bf x}), {\boldsymbol \mu}_c)$, a simplified version of the Mahalanobis distance, can be assumed to follow the Chi-square distribution with $n$ degrees of freedom.
Then, we have
\begin{equation}~\label{eq:chi2pdf} P\left(D_E^2(f({\bf x}), {\boldsymbol \mu}_c)=t\right) = \frac{t^{\frac{n}{2}-1}}{2^{\frac{n}{2}}\cdot\Gamma(n/2)}\cdot \exp\left(-\frac{t}{2}\right), \end{equation}
where $t \ge 0$, $\Gamma(\cdot)$ is the Gamma function, and $n$ is the dimension of $f({\bf x})$. As previous studies~\cite{Jain:pisvm,Bendale:openmax,Rudd:extreme} formulated the probability of inclusion by computing the CDF of the Weibull distribution, \emph{i.e.}, $P_I({\bf x},c) = 1 - \texttt{WeibullCDF}$, we define our $P_I({\bf x},c)$ by using the CDF of Eq.~(\ref{eq:chi2pdf}) as follows:
\begin{equation}~\label{eq:chi2cdf} P_I({\bf x},c) = 1- \int_0^{D_E^2(f({\bf x}), {\boldsymbol \mu}_c)/2}\frac{t^{n/2-1}}{\Gamma(n/2)}\cdot\exp\left(-t\right) dt = \frac{\Gamma(n/2,D_E^2(f({\bf x}), {\boldsymbol \mu}_c)/2)}{\Gamma(n/2)}, \end{equation}
where $\Gamma(\cdot, \cdot)$ is the upper incomplete Gamma function. It is noteworthy that Eq.~(\ref{eq:chi2cdf}) can be easily computed via the $\texttt{igammac}$ function in PyTorch~\cite{pytorch}.

\subsubsection{Class-inclusion loss function.}
\begin{wrapfigure}[9]{r}{0.35\textwidth} \centering \vspace{-\intextsep} \hspace*{-.75\columnsep} {\includegraphics[width=0.35\textwidth]{./prob.png}\vspace{-0.3\intextsep}} \caption{$P_H$ and $P_I$ (Ours)} \flushbottom \label{fig:probinc} \end{wrapfigure}
Based on $\mathcal{D}_{t}$, $\mathcal{D}_{b}$, and our $P_I({\bf x},c)$ of Eq.~(\ref{eq:chi2cdf}), the primary objective of the proposed BCR strategy, which aims to force each KUC data sample to be located far away from the closest class-wise HSC, can be achieved by employing a loss function $\mathcal{L}_{bg,u} = \mathbb{E}_{{\bf x}^b \sim\mathcal{D}_{b}}[-\log(1 - \max_{i\in\{1,\cdots,C\}} P_I({\bf x}^b,i)) ]$. To compare $P_I({\bf x},c)$ and $P_H({\bf x},c) = \exp(-h(D_E^2(f({\bf x}), {\boldsymbol \mu}_c)))$, which was used in Eq.~(\ref{eq:hscobj}), we plot $P_I({\bf x},c)$ and $P_H({\bf x},c)$ in Figure~\ref{fig:probinc} with respect to $||f({\bf x}) - {\boldsymbol \mu}_c||$ by assuming $n=128$. The figure implies that unlike $P_H({\bf x},c)$, our $P_I({\bf x},c)$ can assign sufficiently large space for KKC data and force KUC samples to be located outside the space. Also, it is noteworthy that our regularization method based on $P_I({\bf x},c)$ does not require any margin parameters dependent on datasets or the dimension of latent features.

At training time, $P_I({\bf x},c) = 0.5$ constructs an auxiliary decision boundary between the $c$-th class KKC data and the other data items, where $\mathcal{L}_{bg,u}$ forces a majority of KUC data to be located outside all of the class-wise boundaries. However, $\mathcal{L}_{bg,u}$ can be insufficient to achieve robust UUC rejection and closed-set classification results, since it does not force correctly classified KKC samples to be located inside the corresponding class-wise boundaries. Therefore, in addition to $\mathcal{L}_{cf} = \mathbb{E}_{({\bf x}^k, y)\sim\mathcal{D}_{t}} [- \log P_d(y|{\bf x}^k)]$, we apply another loss $\mathcal{L}_{bg,k}$ to KKC data to \emph{maintain} high closed-set classification accuracy and enhance the gap between KKC and KUC samples in terms of the Euclidean distance.
By formulating $\mathcal{L}_{bg,k} = \mathbb{E}_{({\bf x}^k,y) \sim\mathcal{D}_{t}}[-\mathbbm{1}(y = \hat{c})\log(P_I({\bf x}^k,\hat{c}))]$, where $\hat{c} = \argmax_{i\in\{1,\cdots,C\}} P_I({\bf x}^k,i)$, we define our $\mathcal{L}_{bg}$ as $\mathcal{L}_{bg,k} + \mathcal{L}_{bg,u}$ and call $\mathcal{L}_{bg}$ the \emph{class-inclusion loss}. In our total loss (Eq.~(\ref{eq:totloss})), $\mathcal{L}_{cf}$ encourages KKC samples to be correctly classified, $\mathcal{L}_{bg,u}$ makes KUC samples located outside the explicit class-wise boundaries, and $\mathcal{L}_{bg,k}$ additionally regularizes correctly classified KKC samples. It is noteworthy that we use an additional loss for KKC samples after they are correctly classified, to avoid obstructing the training of the closed-set classifier at early iterations.

\section{Experiments}\label{sec4}
Through extensive experiments, we compared our class-inclusion loss for distance-based classifiers to the objectosphere~\cite{Dhamija:reducing}, the uniformity (also widely known as OE)~\cite{Hendrycks:oe}, and the energy~\cite{Liu:energy} losses for conventional Softmax classifiers. This section aims to show whether our approach provides competitive UUC rejection results, while keeping high closed-set classification accuracy. Furthermore, we conducted additional experiments and provided the corresponding discussions.

\subsection{Experimental Settings}\label{sec4.1}
For evaluation, we first measured the closed-set classification accuracy. To quantify the accuracy of UUC data rejection, we also measured the area under the receiver operating characteristic curve (AUROC). Also, we used the open-set classification rate (OSCR) as an additional OSR accuracy measure, quantifying the correct closed-set classification rate when the false positive rate for UUC rejection is $10^{-1}$. For in-depth details of OSCR, see~\cite{Dhamija:reducing}. As $\mathcal{D}_b$, we used ImageNet~\cite{Olga:imagenet}, which was also employed in~\cite{Li:background}. To ensure that the classes of $\mathcal{D}_b$ and our test sets are disjoint, we used only the remaining classes of ImageNet, which are not included in the test sets. In our experiments, we considered the following two settings.

\subsubsection{Setting 1.}\label{sec4.1.1}
In Setting 1, a single dataset was split into KKCs and UUCs, where we used the KKCs in the training set as $\mathcal{D}_{t}$, and the KKCs and UUCs in the test set as $\mathcal{D}_{test}^k$ and $\mathcal{D}_{test}^u$, respectively. Following the protocol in~\cite{Neal:counterfactual}, which was also employed in~\cite{Oza:c2ae,sun:cgdl}, we conducted experiments by using the following standard datasets: SVHN~\cite{Netzer:svhn}, CIFAR10 \& CIFAR100~\cite{Krizhevsky:cifar}, and TinyImageNet~\cite{Le:tinyimagenet}. \vspace{0.4\baselineskip}\\
\emph{SVHN, CIFAR10} \quad SVHN and CIFAR10 each consist of images of 10 classes; each dataset was randomly partitioned into 6 KKCs and 4 UUCs. \vspace{0.4\baselineskip}\\
\emph{CIFAR+10, CIFAR+50} \quad For CIFAR+$M$, we employed randomly selected 4 classes of CIFAR10 as KKCs and $M$ classes of CIFAR100 as UUCs. \vspace{0.4\baselineskip}\\
\emph{TinyImageNet} \quad For a larger number of classes, we randomly selected 20 classes of TinyImageNet as KKCs and then used the remaining 180 classes as UUCs.
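Before describing Setting 2, we make the training objective of Eq.~(\ref{eq:totloss}) concrete, since it is the objective optimized in all of the following experiments. The sketch below is a minimal PyTorch-style illustration; the function name and tensor shapes are our own illustrative assumptions, and only the \texttt{igammac} routine of Eq.~(\ref{eq:chi2cdf}) is taken from Section~\ref{sec3.3}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_inclusion_loss(feats_k, labels, feats_b, anchors, lam=1.0):
    # feats_k: (B_k, n) latent features f(x) of a KKC mini-batch
    # labels:  (B_k,)   ground-truth classes in {0, ..., C-1}
    # feats_b: (B_b, n) latent features of a KUC (background) mini-batch
    # anchors: (C, n)   trainable class-wise anchors mu_c,
    #                   e.g. torch.nn.Parameter(torch.randn(C, n))
    n = anchors.size(1)

    d2_k = torch.cdist(feats_k, anchors) ** 2     # (B_k, C) squared distances
    d2_b = torch.cdist(feats_b, anchors) ** 2     # (B_b, C)

    # L_cf, Eq. (5): LDA-style Softmax over negative squared distances.
    loss_cf = F.cross_entropy(-d2_k, labels)

    # P_I, Eq. (9): Chi-square tail probability via the regularized
    # upper incomplete Gamma function.
    half_n = torch.tensor(n / 2.0, device=anchors.device)
    p_in_k = torch.igammac(half_n, d2_k / 2.0)    # (B_k, C)
    p_in_b = torch.igammac(half_n, d2_b / 2.0)    # (B_b, C)

    # L_bg,u: push each KUC sample outside its closest class-wise boundary.
    p_max_b = p_in_b.max(dim=1).values
    loss_bg_u = -torch.log((1.0 - p_max_b).clamp_min(1e-12)).mean()

    # L_bg,k: pull correctly classified KKC samples inside their boundary.
    c_hat = p_in_k.argmax(dim=1)                  # equals argmin_c D_E^2
    correct = (c_hat == labels).float()
    p_y = p_in_k[torch.arange(labels.size(0)), labels]
    loss_bg_k = -(correct * torch.log(p_y.clamp_min(1e-12))).mean()

    return loss_cf + lam * (loss_bg_k + loss_bg_u)
\end{verbatim}
Here, the class-wise anchors can be instantiated as \texttt{torch.nn.Parameter(torch.randn(C, n))}, matching the random initialization of Section~\ref{sec3.1}, and the returned loss is back-propagated through both the KKC and KUC mini-batches at each training iteration.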
\subsubsection{Setting 2.}\label{sec4.1.2}
By using the training and the test sets of a single dataset as $\mathcal{D}_{t}$ and $\mathcal{D}_{test}^k$, respectively, we employed the test set of another dataset relatively close to $\mathcal{D}_{t}$ as $\mathcal{D}_{test}^u$ in Setting 2. Adopting the experiment settings in \cite{Yoshihashi:crosr} and \cite{Liang:odin}, we used all classes of each dataset as KKCs for CIFAR10 \& CIFAR100. For UUC datasets, TinyImageNet, LSUN~\cite{Yu:lsun}, and iSUN~\cite{Xu:isun} were selected. TinyImageNet and LSUN consist of 10,000 test samples each, where the samples in each dataset were resized (R) or cropped (C) to the size $32 \times 32$. The iSUN dataset has 8,925 test samples, and they were also resized to the size of $32 \times 32$. The modified datasets can be obtained from the GitHub repository of~\cite{Liang:odin}.

\subsection{Training Details}\label{sec4.2}
\subsubsection{Network selection.}
For $f$, we employed the Wide-ResNet (WRN)~\cite{Zagoruyko:wrn} and then used its penultimate layer $f({\bf x}) \in \mathbb{R}^n$ for the latent feature vector of each input sample ${\bf x}$. For CIFAR10 and TinyImageNet, we used WRN 40-2 with a dropout rate of $0.3$, where WRN 28-10 was employed for CIFAR100 with the same dropout rate. For SVHN, we used WRN 16-4 with a dropout rate of $0.4$. Such network selection was determined by referring to the experiments in~\cite{Hendrycks:oe,Zagoruyko:wrn}.

\subsubsection{Parameters.}
For all BCR methods, we set the mini-batch sizes of KKC training samples and KUC samples to $128$. We kept $\lambda$ as a constant during training, \emph{i.e.}, each $f$ was trained with the BCR method \emph{from scratch}. To select hyperparameters and margin parameters of the previous regularization methods, we followed the official implementations\footnote{\url{https://github.com/Vastlab/Reducing-Network-Agnostophobia}}$^{,}$\footnote{\url{https://github.com/hendrycks/outlier-exposure}}$^{,}$\footnote{\url{https://github.com/wetliu/energy_ood}}. For SVHN, CIFAR10, CIFAR100, and TinyImageNet, we trained the corresponding classifiers for $80$, $100$, $200$, and $200$ epochs, respectively, where we used the stochastic gradient descent for optimization. For SVHN and the other datasets, we used initial learning rates of 0.01 and 0.1, respectively, and a cosine learning rate decay~\cite{Loshchilov:sgdr}. We also used the learning rate warm-up strategy for the first 5 epochs of each training process.

\subsection{Results}\label{sec:4.3}
The OSR results of our proposed approach and the previous methods are reported in Tables~\ref{tb:table1} and~\ref{tb:table2}. All the reported values were averaged over five randomized trials, by randomly sampling seeds, data splits of KKCs and UUCs, and class-wise anchors. In the tables, $\uparrow$ and $\downarrow$ indicate higher-better and lower-better measures, respectively, where underlined values present the best scores.
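Since the AUROC and OSCR values below are computed from a scalar score per test sample, we also note the inference step explicitly: for our method it is the decision rule of Eq.~(\ref{eq:inference}). The following is a minimal sketch, under the same illustrative assumptions as the training sketch above:
\begin{verbatim}
import torch

def predict_open_set(feats, anchors, tau):
    # feats: (B, n) latent features; anchors: (C, n); tau: rejection threshold
    d2 = torch.cdist(feats, anchors) ** 2    # (B, C) squared distances
    score, y_hat = (-d2).max(dim=1)          # score = max_c -D_E^2(f(x), mu_c)
    C = anchors.size(0)
    y_hat = torch.where(score >= tau, y_hat, torch.full_like(y_hat, C))
    return y_hat, score                      # class index C denotes "unknown"
\end{verbatim}
Sweeping the threshold $\tau$ over the scores of $\mathcal{D}_{test}^{k}$ and $\mathcal{D}_{test}^{u}$ traces the ROC curve underlying the reported AUROC values.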
\begin{table}[!t] \caption{Comparison with the previous BCR methods in the first setting.} \label{tb:table1} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{cC{1.75in}C{1.75in}C{1.75in}} \toprule \multirow{2.5}{*}{Experiments} & Accuracy ($\uparrow$) & AUROC ($\uparrow$) & OSCR ($\uparrow$)\\ \cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4} & \multicolumn{3}{c}{Objectosphere / Uniformity / Energy / Class-inclusion (Ours)}\\ \midrule \midrule SVHN & 0.968 / 0.966 / 0.972 / \underline{0.974} & 0.935 / 0.927 / 0.911 / \underline{0.956} & 0.813 / 0.793 / 0.774 / \underline{0.854}\\ CIFAR10 & 0.964 / 0.964 / 0.956 / \underline{0.973} & 0.942 / 0.923 / 0.933 / \underline{0.948} & 0.851 / 0.814 / 0.807 / \underline{0.870} \\ CIFAR+10 & \multirow{2}{*}{0.958 / 0.969 / 0.949 / \underline{0.976}} & 0.945 / 0.950 / 0.936 / \underline{0.961} & 0.839 / 0.867 / 0.808 / \underline{0.881} \\ CIFAR+50 & & 0.944 / 0.942 / 0.937 / \underline{0.957} & 0.837 / 0.837 / 0.808 / \underline{0.865} \\ TinyImageNet & 0.778 / 0.779 / 0.715 / \underline{0.802} & 0.755 / 0.771 / 0.727 / \underline{0.785} & 0.484 / 0.488 / 0.357 / \underline{0.493} \\ \bottomrule \end{tabular} } \end{table}

\subsubsection{Setting 1.}
For the first setting, Table~\ref{tb:table1} compares our proposed BCR method for distance-based classifiers with the previous approaches designed for Softmax classifiers. The results demonstrate that our proposed method obtained robust UUC rejection results, which were superior to the results of the previous approaches. It is noteworthy that our method achieved higher classification accuracy values, which were critical in acquiring better OSR results in terms of the OSCR measure, than the previous methods. Such results imply that the proposed framework effectively satisfies the two essential requirements described in Section~\ref{sec2.1}, bounding open-space risk and ideally balancing it with empirical risk.

\subsubsection{Setting 2.}
In Table~\ref{tb:table2}, we present our experiment results of the second setting. When using the CIFAR10 and CIFAR100 datasets as KKC data, our approach achieved the highest closed-set classification accuracy, which is consistent with the experiment results of Setting 1. Furthermore, by averaging the AUROC and the OSCR values over the various UUC datasets, the table shows that our model outperformed the previous methods in the second setting.

\subsubsection{Average runtime.}
We conducted all the experiments with PyTorch and two GeForce RTX 3090 GPUs. At each trial in the CIFAR10 experiment of Setting 1, each training epoch took 28 seconds for our method, where its OSR evaluation required approximately 6.5 seconds. We observed that the other methods take similar running times in their training and inference phases.

\begin{table}[!t] \caption{Comparison with the previous methods in the second setting.
The corresponding classification accuracy values are reported in the first column.} \label{tb:table2} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{ccC{1.75in}C{1.75in}} \toprule \multirow{2.5}{*}{$\mathcal{D}_{t}/\mathcal{D}_{test}^{k}$} & \multirow{2.5}{*}{$\mathcal{D}_{test}^{u}$} & AUROC ($\uparrow$) & OSCR ($\uparrow$) \\ \cmidrule{3-4} & & \multicolumn{2}{c}{Objectosphere / Uniformity / Energy / Class-inclusion (Ours)}\\ \midrule \midrule \multirow{6.5}{*}{\shortstack{CIFAR10 \\ \vspace{2\baselineskip} \\ 0.940 / 0.939 / 0.925 / \underline{0.947}}} & ImageNet-C & 0.988 / 0.986 / 0.981 / \underline{0.989} & 0.929 / 0.928 / 0.894 / \underline{0.932}\\ & ImageNet-R & 0.979 / 0.984 / 0.972 / \underline{0.984} & 0.923 / 0.926 / 0.886 / \underline{0.927}\\ & LSUN-C & \underline{0.994} / 0.990 / 0.989 / 0.993 & 0.938 / 0.931 / 0.904 / \underline{0.940}\\ & LSUN-R & 0.985 / 0.988 / 0.984 / \underline{0.990} & 0.928 / 0.931 / 0.897 / \underline{0.935}\\ & iSUN & 0.985 / 0.989 / 0.984 / \underline{0.991} & 0.928 / 0.932 / 0.896 / \underline{0.936}\\ \cmidrule{2-4} & \textbf{Average} & 0.986 / 0.987 / 0.982 / \underline{0.989} & 0.929 / 0.930 / 0.895 / \underline{0.934}\\ \midrule \midrule \multirow{6.5}{*}{\shortstack{CIFAR100 \\ \vspace{2\baselineskip} \\ 0.727 / 0.735 / 0.705 / \underline{0.779}}} & ImageNet-C & 0.886 / 0.929 / 0.925 / \underline{0.930} & 0.641 / 0.686 / 0.652 / \underline{0.696}\\ & ImageNet-R & 0.815 / 0.910 / \underline{0.934} / 0.920 & 0.572 / 0.674 / 0.658 / \underline{0.687}\\ & LSUN-C & \underline{0.967} / 0.931 / 0.901 / 0.965 & 0.685 / 0.680 / 0.643 / \underline{0.751}\\ & LSUN-R & 0.844 / 0.930 / \underline{0.959} / 0.945 & 0.608 / 0.695 / 0.684 / \underline{0.731}\\ & iSUN & 0.842 / 0.923 / 0.954 / \underline{0.955} & 0.603 / 0.689 / 0.680 / \underline{0.734}\\ \cmidrule{2-4} & \textbf{Average} & 0.871 / 0.925 / 0.935 / \underline{0.943} & 0.621 / 0.685 / 0.663 / \underline{0.720}\\ \bottomrule \end{tabular} } \end{table}

\subsection{Additional Experiments and Discussions}\label{sec4.4}
We further analyzed our BCR method by using various $\lambda$ in our loss function. Furthermore, we compared our method against another baseline formulated with the triplet loss~\cite{schroff:triplet}. Using the CIFAR10 and TinyImageNet experiments in Setting 1, we present the corresponding OSR results. We also conducted various additional experiments that can show the effectiveness of our proposed method.

\paragraph{Selecting $\lambda$.}
Conducting additional OSR experiments with $\lambda \in \{0.1, 0.5, 1, 5, 10\}$ in our loss function $\mathcal{L} = \mathcal{L}_{cf} + \lambda \mathcal{L}_{bg}$, Table~\ref{tb:ablation1} shows that our method provides robust OSR accuracy across a \emph{wide range} of $\lambda$, \emph{e.g.}, $\lambda \in [1,5]$, which implies that users can flexibly select $\lambda$ in our method. Although such a range may depend on datasets, users are not required to carefully adjust the $\lambda$ parameter. In additional experiments, $\lambda = 5$ yielded the best OSR results in the SVHN and CIFAR+$M$ experiments of Setting 1 and the CIFAR10 experiments of Setting 2, whereas $\lambda=0.5$ showed the best results in the CIFAR100 experiments. Such empirical results imply that a lower $\lambda$ value can be better when handling more KKCs.

\begin{table}[!t] \caption{OSR results with various $\lambda$ in our class-inclusion and the triplet losses.
In each cell, the results are presented in the form of (Accuracy / AUROC / OSCR).} \label{tb:ablation1} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{cC{1.3in}C{1.3in}C{1.3in}C{1.3in}} \toprule \multirow{2.5}{*}{Parameter $\lambda$} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{TinyImageNet}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} & Class inclusion & Triplet & Class inclusion & Triplet \\ \midrule \midrule 0 (Vanilla) & 0.963 / 0.759 / 0.472 & -- & 0.785 / 0.631 / 0.308 & -- \\ 0.1 & 0.966 / 0.899 / 0.742 & 0.963 / 0.820 / 0.537 & 0.790 / 0.765 / 0.442 & 0.787 / 0.746 / 0.431 \\ 0.5 & 0.967 / 0.928 / 0.810 & 0.968 / 0.842 / 0.572 & 0.794 / 0.775 / 0.464 & 0.785 / 0.729 / 0.426 \\ 1 & 0.973 / 0.936 / 0.840 & 0.966 / 0.860 / 0.628 & 0.802 / 0.785 / 0.493 & 0.793 / 0.714 / 0.423 \\ 5 & 0.973 / 0.948 / 0.870 & 0.965 / 0.872 / 0.628 & 0.798 / 0.783 / 0.480 & 0.785 / 0.707 / 0.360 \\ 10 & 0.968 / 0.947 / 0.863 & 0.958 / 0.856 / 0.597 & 0.787 / 0.701 / 0.361 & 0.738 / 0.637 / 0.220 \\ \bottomrule \end{tabular} } \end{table} \paragraph{Triplet loss.} We propose a distance-based BCR method suitable for the OSR problem, which defines explicit class-wise boundaries and then increases the distance gap between KKC and KUC samples based on these boundaries. Another loss function that can separate KKC and KUC data in terms of such a distance measure is the triplet loss, which has been widely employed to effectively control the distances between latent feature vectors. Therefore, we formulated a baseline distance-based BCR method following the conventional definition of the triplet loss $\mathcal{L}_{tri}$, where we set class-wise anchors, KKC training data, and KUC data as anchors, positive samples, and negative samples, respectively. Since we observed that training classifiers solely based on the triplet loss $\mathcal{L} = \mathcal{L}_{tri}$ yields significantly worse OSR results in comparison with the regularization method $\mathcal{L} = \mathcal{L}_{cf} + \lambda \mathcal{L}_{tri}$, we employed $\mathcal{L}_{tri}$ as a regularization loss function for BCR. In Table~\ref{tb:ablation1}, we report the experimental results obtained by using the triplet loss as $\mathcal{L}_{bg}$. The results show that our proposed method (class-inclusion loss) outperforms the regularization method based on the triplet loss. \paragraph{Vanilla distance-based classifiers.} To show the effectiveness of our method, we assessed the OSR performance of vanilla distance-based classifiers (trained solely based on $\mathcal{L}_{cf}$), where we present the results in the form of (Accuracy / AUROC / OSCR). In the CIFAR10 and TinyImageNet experiments of Setting 1, we obtained (0.962 / 0.757 / 0.470) and (0.785 / 0.629 / 0.315), respectively. In the CIFAR10 and CIFAR100 experiments of Setting 2, the OSR results averaged over the five UUC datasets for vanilla distance-based classifiers were (0.936 / 0.838 / 0.709) and (0.766 / 0.807 / 0.549), respectively. Comparing these results with those in Tables~\ref{tb:table1} and~\ref{tb:table2}, we show that our regularization strategy can significantly improve the OSR performance of distance-based classifiers. \paragraph{Ablation study on loss terms.} Recall our loss function $\mathcal{L}_{cf} + \lambda (\mathcal{L}_{bg,k} + \mathcal{L}_{bg,u})$. As $\mathcal{L}_{bg,u}$ is essential for BCR, forcing KUC samples to be located outside the explicit class-wise boundaries, we conducted an ablation study to investigate the necessity of $\mathcal{L}_{bg,k}$.
In the absence of $\mathcal{L}_{bg,k}$, which additionally regularizes correctly classified KKC data, we obtained the result of (0.963 / 0.821 / 0.509) for the (Accuracy / AUROC / OSCR) measures in the CIFAR10 experiment of Setting 1, which is worse than our original result (0.973 / 0.948 / 0.870). This result implies that $\mathcal{L}_{bg,k}$ is necessary to increase the distance gap between KKC and KUC data. Also, by designing $\mathcal{L}_{bg}$ based on the original HSC loss function (Eq.~(\ref{eq:hscobj})), we obtained the result of (0.950 / 0.634 / 0.338) for (Accuracy / AUROC / OSCR) in the CIFAR10 experiment of Setting 1, which supports our hypothesis. \begin{table}[!t] \caption{Comparison with the previous OSR methods (Macro-averaged F1 score).} \label{tb:table4} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{cC{0.7in}C{0.7in}C{0.7in}C{0.7in}C{0.7in}C{0.7in}C{0.7in}C{0.7in}} \toprule \multirow{2.5}{*}{Experiments} & \multicolumn{4}{c}{Setting 1} & \multicolumn{4}{c}{Setting 2}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} & SVHN & CIFAR10 & CIFAR+10 & CIFAR+50 & ImageNet-C & ImageNet-R & LSUN-C & LSUN-R \\ \midrule \midrule Softmax~\cite{hendrycks:baseline} & 0.725 & 0.600 & 0.701 & 0.637 & 0.639 & 0.653 & 0.642 & 0.647\\ Openmax~\cite{Bendale:openmax} & 0.737 & 0.623 & 0.731 & 0.676 & 0.600 & 0.684 & 0.657 & 0.668\\ CROSR~\cite{Yoshihashi:crosr} & 0.753 & 0.668 & 0.769 & 0.684 & 0.721 & 0.735 & 0.720 & 0.749\\ CGDL~\cite{sun:cgdl} & 0.776 & 0.655 & 0.760 & 0.695 & 0.840 & 0.832 & 0.806 & 0.812\\ AOSR~\cite{fang:learning} & 0.842 & 0.705 & 0.773 & 0.706 & 0.798 & 0.795 & 0.839 & 0.838\\ \midrule \textbf{Ours} & \underline{0.854} & \underline{0.761} & \underline{0.805} & \underline{0.732} & \underline{0.876} & \underline{0.869} & \underline{0.880} & \underline{0.877}\\ \bottomrule \end{tabular} } \end{table} \paragraph{Previous OSR approaches.} We additionally compared our proposed approach to previous OSR methods, whose OSR results are already reported in~\cite{Yoshihashi:crosr,fang:learning}. For a fair comparison, all the methods presented in Table~\ref{tb:table4} (including ours) were implemented with a VGG backbone and tested on the codebase of \url{https://github.com/Anjin-Liu/Openset_Learning_AOSR}. The table, which presents OSR results based on the macro-averaged F1 score measure, shows that our distance-based BCR approach can achieve robust OSR results via a simple inference process in standard classifier architectures. \paragraph{ResNet-18 architecture.} In our main experiments, we used the WRN architectures as feature extractors. To further investigate the effectiveness of our method, we also used another standard classifier architecture, ResNet-18~\cite{he2016deep}. In the first setting, we obtained the quantitative results of {(0.963 / 0.945 / 0.844)}, {(0.966 / 0.950 / 0.845)}, {(0.967 / 0.945 / 0.848)}, and {(0.971 / 0.947 / 0.850)} for the regularization methods using the objectosphere~\cite{Dhamija:reducing}, the uniformity~\cite{Hendrycks:oe}, the energy~\cite{Liu:energy}, and our class-inclusion losses, respectively. In addition, Table~\ref{tb:table5} shows quantitative results in the second setting, which imply that our method can also outperform the previous BCR methods with ResNet-18, as observed in the experiments using the WRN architectures.
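To make the training objective concrete, the following PyTorch-style sketch shows one possible instantiation of $\mathcal{L} = \mathcal{L}_{cf} + \lambda(\mathcal{L}_{bg,k} + \mathcal{L}_{bg,u})$ for a distance-based classifier with learnable class-wise anchors. The distance-based logits, the shared radius, and the hinge forms of the two regularizers are our illustrative assumptions and do not reproduce the exact loss definitions used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def bcr_loss(feat_k, labels, feat_u, centers, radius=1.0, lam=1.0):
    # feat_k:  (B, d) features of known-class (KKC) samples
    # labels:  (B,)   their class indices (long tensor)
    # feat_u:  (M, d) features of background (KUC) samples
    # centers: (C, d) learnable class-wise anchors
    # The concrete forms below are illustrative placeholders.

    # Distance-based classification: logits = negative squared distances.
    d_k = torch.cdist(feat_k, centers) ** 2              # (B, C)
    loss_cf = F.cross_entropy(-d_k, labels)

    # L_bg,k: keep correctly classified KKC features inside a
    # class-wise ball of the given radius around their anchor.
    d_own = d_k.gather(1, labels[:, None]).squeeze(1)
    correct = d_k.argmin(dim=1).eq(labels)
    loss_bg_k = (F.relu(d_own[correct] - radius ** 2).mean()
                 if correct.any() else feat_k.new_zeros(()))

    # L_bg,u: push KUC features outside every class-wise ball.
    d_u = torch.cdist(feat_u, centers) ** 2              # (M, C)
    loss_bg_u = F.relu(radius ** 2 - d_u.min(dim=1).values).mean()

    return loss_cf + lam * (loss_bg_k + loss_bg_u)
\end{verbatim}
In this sketch, rejecting a test sample then amounts to thresholding its distance to the nearest anchor, which matches the simple inference process discussed above.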
\paragraph{Text classification.} To show that our proposed BCR method is applicable in another domain, we compared our class-inclusion loss to the uniformity loss in text classification applications. For text classification, we used 20 Newsgroups and WikiText103 for KKCs and KUCs, respectively, and trained a simple GRU model~\cite{cho:gru} for $f$ as in~\cite{Hendrycks:oe}. As UUC sets, we used Multi30K, WMT16, and IMDB. Since the margin parameters of the objectosphere and the energy losses selected for image classification may not be suitable for text classification tasks, we only tested the uniformity loss for comparison. In Table~\ref{tb:table3}, we present the results, where we additionally report the area under the precision-recall curve (AUPR) and the false-positive rate at $95\%$ true-positive rate (FPR95) measures. As in the image classification tasks, our method showed significantly better OSR accuracy than the uniformity loss in text classification. We provide more training details of our text classification models in our supplementary materials. \begin{table}[!t] \caption{Comparison with the previous methods in the second setting by using ResNet-18. The corresponding classification accuracy values are reported in the first column.} \label{tb:table5} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{ccC{1.75in}C{1.75in}} \toprule \multirow{2.5}{*}{$\mathcal{D}_{t}/\mathcal{D}_{test}^{k}$} & \multirow{2.5}{*}{$\mathcal{D}_{test}^{u}$} & AUROC ($\uparrow$) & OSCR ($\uparrow$) \\ \cmidrule{3-4} & & \multicolumn{2}{c}{Objectosphere / Uniformity / Energy / Class-inclusion (Ours)}\\ \midrule \midrule \multirow{6.5}{*}{\shortstack{CIFAR10 \\ \vspace{2\baselineskip} \\ 0.937 / 0.949 / 0.933 / \underline{0.951}}} & ImageNet-CR & 0.982 / 0.979 / 0.983 / \underline{0.987} & 0.932 / 0.928 / 0.917 / \underline{0.941}\\ & ImageNet-RE & 0.977 / 0.982 / 0.975 / \underline{0.988} & 0.918 / 0.934 / 0.909 / \underline{0.945}\\ & LSUN-CR & 0.991 / 0.984 / 0.987 / \underline{0.993} & 0.934 / 0.935 / 0.919 / \underline{0.946}\\ & LSUN-RE & 0.987 / 0.987 / 0.986 / \underline{0.990} & 0.932 / 0.938 / 0.918 / \underline{0.942}\\ & iSUN & 0.987 / 0.986 / 0.987 / \underline{0.994} & 0.932 / 0.938 / 0.919 / \underline{0.946}\\ \cmidrule{2-4} & \textbf{Average} & 0.985 / 0.984 / 0.984 / \underline{0.991} & 0.930 / 0.935 / 0.916 / \underline{0.944}\\ \bottomrule \end{tabular} } \end{table} \begin{table}[!t] \caption{Comparison with the previous BCR method in text classification experiments.} \label{tb:table3} \centering \resizebox{0.83\textwidth}{!}{ \begin{tabular}{ccC{0.85in}C{0.85in}C{0.85in}C{0.85in}} \toprule \multirow{2.5}{*}{$\mathcal{D}_{t}/\mathcal{D}_{test}^{k}$} & \multirow{2.5}{*}{$\mathcal{D}_{test}^{u}$} & AUROC ($\uparrow$) & AUPR ($\uparrow$) & FPR95 ($\downarrow$) & OSCR ($\uparrow$)\\ \cmidrule{3-6} & & \multicolumn{4}{c}{Uniformity / Class-inclusion (Ours)}\\ \midrule \midrule \multirow{4.5}{*}{\shortstack{20 Newsgroups \\ \vspace{2\baselineskip} \\ 0.719 / \underline{0.749}}} & Multi30k & 0.997 / \underline{0.997} & \underline{0.998} / 0.997 & \underline{0.002} / 0.010& 0.715 / \underline{0.745}\\ & WMT16 & \underline{0.997} / 0.996 & \underline{0.997} / 0.995 & \underline{0.010} / 0.016& 0.715 / \underline{0.742}\\ & IMDB & 0.805 / \underline{0.999} & 0.692 / \underline{0.999} & 0.367 / \underline{0.003}& 0.585 / \underline{0.747}\\ \cmidrule{2-6} & \textbf{Average} & 0.933 / \underline{0.997} & 0.896 / \underline{0.997} & 0.126 / \underline{0.010}& 0.672 /
\underline{0.745}\\ \bottomrule \end{tabular} } \end{table} \section{Concluding Remarks} In this paper, we proposed a novel BCR method to train open-set classifiers that provide robust OSR results with a simple inference process. By employing distance-based classifiers with the principle of LDA, we designed a novel class-inclusion loss based on the principle of probability of inclusion, which effectively limits the feature space of KKC data in a class-wise manner and then regularizes KUC samples to be located far away from the limited class-wise spaces. Through extensive experiments, we showed that our method achieves robust UUC rejection performance while maintaining high closed-set classification accuracy. As this paper aims to improve the reliability of modern DNN-based classifiers, we hope that our work will enhance reliability and robustness in various classification applications by providing a novel methodology for handling UUC samples. \subsubsection{Acknowledgements.} This work was supported by Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)) and the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (No. NRF-2018M3E3A1057305 and No. NRF-2022R1A2B5B02001913). \clearpage \bibliographystyle{splncs04}
\section{Introduction} In 1963, by a drastic truncation of the fluid-dynamics equations governing atmospheric motion, E. Lorenz obtained a system of ODEs which he proposed as a crude yet non-trivial model of thermal convection of the atmosphere \cite{L}. As a matter of fact, the Lorenz model is today understood as a basic toy-model for the evolution of the Earth atmosphere's regimes, such as zonal or blocked circulation or climate regimes (e.g. warm and cold), whose dynamics is described by equilibrium states \cite{CMP}, \cite{Se}. In his work Lorenz showed that such a system exhibits, for a large set of parameter values, a peculiar chaotic behavior, that is, exponential sensitivity to perturbations of the initial conditions and the existence of a global attracting set for the flow, nowadays called a generalized nontrivial hyperbolic attractor. Although there exists an extensive literature on the subject, we refer the reader to \cite{Sp} for a rather comprehensive overview on this problem and to \cite{V} for a recent account of the progress made on the rigorous analysis of the Lorenz '63 ODE\ system and the relationship between this and its more abstract counterpart, the geometric Lorenz model, introduced in the second half of the seventies (\cite{G}, \cite{ABS} and \cite{GW}) to describe the geometrical features a dynamical system should possess in order to exhibit the same asymptotic behavior as the Lorenz one. An affirmative answer to the long-standing question of whether the original Lorenz '63 flow fits the description of the one modeled by the geometric Lorenz model, i.e. whether it supports a robust singular hyperbolic (strange) attractor, was given by W. Tucker in \cite{T} by means of a computer-assisted proof. As a byproduct, Tucker also proved that the Lorenz flow admits a unique SRB measure supported on the strange attractor. Further results about the characterization of the set of geometric Lorenz-like maps are given in \cite{LM}. In 2000 it was emphasized that the Lorenz '63 model and the Kolmogorov one, considered as a low-order approximation of the Navier-Stokes equations, belong to a particular class of dynamical systems, named Kolmogorov-Lorenz systems \cite{PP1}, whose vector field admits a representation as the sum of a Hamiltonian $SO\left( 3\right) $-invariant field, a dissipative linear field and a constant forcing field (see also \cite{PP2} for an extension of this analysis to the Lorenz '84 model). Moreover, it was proved there that the chaotic behavior of these models relies on the interplay between dissipation and forcing. More specifically, and more recently, in \cite{PM} it was shown that the effect of the dissipative and forcing terms appearing in the previously described decomposition of the Lorenz '63 vector field, with the classical set of parameters, is to induce chaotic oscillations in the time evolution of the first integrals of the Hamiltonian system associated with the Lorenz '63 model, namely the Hamiltonian and the Casimir function for the (+) Lie-Poisson brackets associated with the $so\left( 3\right) $ algebra \cite{MR}, which represents the angular momentum of a free rigid body in the Kolmogorov-Lorenz representation of geofluid dynamics introduced in \cite{PP1}.
In particular, it has been shown that two subsequent oscillation peaks in the plot of the Casimir function $C$ as a function of time are related by a map $\Phi$ of the interval similar to the one originally computed by Lorenz in \cite{L}, depicting the functional dependence between two subsequent maximum values assumed by the third coordinate of the flow. We remark that, since $C$ is the square norm of the flow, the similarity of the plots of these two maps is not surprising. In \cite{PM}, the recurrence properties of $\Phi$ are also studied, which allows one to characterize the trajectories of the system through the number of revolutions they perform around the unstable point lying on one side of the plane $x+y=0$ when the initial condition is chosen on the opposite side (\cite{PM}, Figs. 2, 10 and 11). In our paper we clarify what is stated in \cite{PM} by giving an account, in the first section, of the rigid body formulation of the Lorenz '63 model and by constructing, in the next section, a Markov expanding Lorenz-like map $T$ of the interval which is the reduction to $\left[ 0,1\right] $ of $\Phi.$ Both maps are in fact derived through the Poincar\'{e} map associated with the surface in the configuration space of the system corresponding to the set of maxima reached by the Casimir function during its time evolution. Hence, we will study the measure invariant under the dynamics defined by $T,$ characterizing its density and, consequently, the SRB measure of the system. Furthermore, we analyse the recurrence properties of the dynamics induced by $T,$ clarifying more rigorously what is stated in Section IV B of \cite{PM}. We will also perturb the system by adding an extra forcing term which will eventually cause the system to lose its symmetry under the involution $R:\left( x,y,z\right) \rightarrow\left( -x,-y,z\right) $ of $\mathbb{R}^{3}.$ Due to the robustness of the attractor, i.e. its persistence under perturbations of the parameters, proved in \cite{T}, maps analogous to $T$ can be defined and studied, and their statistical properties analysed as in the unperturbed case. Therefore, such a perturbation of the Lorenz '63 field will only have the effect of inducing a change in the statistics of the invariant measure for the system, the SRB measure. We will prove that this change can be detected by looking at the deviation of the invariant density of the perturbed map with respect to the unperturbed one. Such a result would confirm what has been empirically shown in \cite{CMP} about the impact of anthropogenic forcing on the climate dynamics of the northern hemisphere. We believe that this analysis could also be pursued in the case of more general $N$-dimensional models such as those introduced by Zeitlin in \cite{Z} to approximate, in the limit of $N$ tending to infinity, the dynamics of the atmosphere in the absence of dissipation and forcing. We will present elsewhere our contributions in these directions; here we prove the first non-trivial result about the statistical stability of the invariant measure for $T.$ The technique we propose is new and we believe it could be applied as well to other maps with some sort of criticality. We remark that, in particular, the distribution of the return times of a measurable subset of $[0,1],$ which can be derived directly from the invariant measure of $T,$ could be useful in studying the statistics of extreme meteorological events.
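Before turning to the rigorous construction, we note that the map $\Phi$ is easy to approximate numerically. The following Python sketch integrates the (shifted) Lorenz '63 field with a fixed-step RK4 scheme and collects the successive local maxima of $C(t)=\left\Vert u(t)\right\Vert ^{2}$; the step size, integration length, initial condition and the crude three-point peak detection are our own illustrative choices. The resulting pairs $(C_{n},C_{n+1})$ give the empirical graph of $\Phi,$ whose normalization to $[0,1]$ yields the map $T$ studied below.
\begin{verbatim}
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0/3.0

def field(u):
    # Shifted Lorenz '63 field (u3 = x3 - (rho + sigma)).
    u1, u2, u3 = u
    return np.array([sigma*(u2 - u1),
                     -u1*u3 - sigma*u1 - u2,
                     u1*u2 - beta*u3 - beta*(rho + sigma)])

def rk4_step(u, h):
    k1 = field(u); k2 = field(u + 0.5*h*k1)
    k3 = field(u + 0.5*h*k2); k4 = field(u + h*k3)
    return u + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

h, u = 2e-3, np.array([1.0, 1.0, -20.0])
C2 = C1 = np.dot(u, u)
peaks = []
for _ in range(500_000):
    u = rk4_step(u, h)
    C0 = np.dot(u, u)
    if C1 > C2 and C1 > C0:      # discrete local maximum of C(t)
        peaks.append(C1)
    C2, C1 = C1, C0

pairs = np.array(list(zip(peaks[:-1], peaks[1:])))   # graph of Phi
\end{verbatim}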
\section{Return Lorenz-like maps} \subsection{Rigid body formulation of the Lorenz '63 model\label{KLS}} It can be shown (\cite{PP1}) that the Lorenz '63 ODE system \cite{L}, \begin{equation} \left\{ \begin{array} [c]{l}% \dot{x}_{1}=-\sigma x_{1}+\sigma x_{2}\\ \dot{x}_{2}=-x_{1}x_{3}+\rho x_{1}-x_{2}\\ \dot{x}_{3}=x_{1}x_{2}-\beta x_{3}% \end{array} \right. \label{l}% \end{equation} can be mapped, through the change of variables \begin{equation} \left\{ \begin{array} [c]{l}% u_{1}=x_{1}\\ u_{2}=x_{2}\\ u_{3}=x_{3}-\left( \rho+\sigma\right) \end{array} \right. \ , \end{equation} to the ODE system \begin{equation} \left\{ \begin{array} [c]{l}% \dot{u}_{1}=-\sigma u_{1}+\sigma u_{2}\\ \dot{u}_{2}=-u_{1}u_{3}-\sigma u_{1}-u_{2}\\ \dot{u}_{3}=u_{1}u_{2}-\beta u_{3}-\beta\left( \rho+\sigma\right) \end{array} \right. \label{l1}% \end{equation} representing the evolution of a Hamiltonian system whose configuration space is the $SO\left( 3\right) $ group, subject to dissipation and to a constant forcing. That is, denoting by \begin{equation} \{F,G\}:=\omega_{+}^{2}\left( ad_{\nabla F}^{\ast}x,ad_{\nabla G}^{\ast }x\right) =x\cdot\nabla F\times\nabla G \label{LPb}% \end{equation} the Lie-Poisson brackets associated with the symplectic 2-form $\omega_{+}^{2}$ defined on the cotangent bundle of $SO\left( 3\right) $ \cite{MR}, (\ref{l1}) reads \begin{equation} \dot{u}_{i}=\{u_{i},H\}-\left( \Lambda u\right) _{i}+f_{i}\ ,\qquad i=1,2,3, \label{lpp}% \end{equation} where: \begin{itemize} \item \begin{equation} H\left( u\right) :=\frac{1}{2}u\cdot\Omega u+h\cdot u \end{equation} is the Hamiltonian of a rigid body whose kinetic term is given by the matrix $\Omega:=diag\left( 2,1,1\right) ,$ while $h:=\left( 0,0,-\sigma\right) $ is an axial torque; \item $\Lambda:=diag\left( \sigma,1,\beta\right) $ is the dissipation matrix; \item $f:=\left( 0,0,-\beta\left( \rho+\sigma\right) \right) $ is a forcing term. \end{itemize} This representation allows one to study the Lorenz system as a perturbation of the Hamiltonian system \begin{equation} v_{i}\left( u\right) :=\{u_{i},H\},\qquad i=1,2,3, \label{lppi}% \end{equation} admitting, as in the case of a rigid body with a fixed point, two independent first integrals, the Hamiltonian $H$ and the Casimir function $C,$ for the Poisson brackets (\ref{LPb}) \cite{MR}. In fact, once the system is rewritten in this form, it follows straightforwardly that it is non-chaotic for $\sigma=0$ \cite{PP1} while, for $\sigma\neq0,$ the values of $C$ and $H$ undergo chaotic oscillations \cite{PM}. Moreover, when passing to the representation (\ref{lpp}), the symmetries of the system are preserved as well as other features such as the invariance of the $x_{3}\ \left( u_{3}\right) $ axis and the direction of rotation of the trajectories about this axis.
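As a quick sanity check of this representation, the following Python sketch verifies numerically, at a few random points, that the Lie-Poisson field $v(u)=\{u,H\}=\nabla H\times u,$ minus the dissipative term and plus the forcing, reproduces the shifted Lorenz field (\ref{l1}); all variable names are of our choosing.
\begin{verbatim}
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0/3.0
Omega = np.diag([2.0, 1.0, 1.0])          # kinetic term of H
h_vec = np.array([0.0, 0.0, -sigma])      # axial torque
Lam   = np.diag([sigma, 1.0, beta])       # dissipation matrix
f_vec = np.array([0.0, 0.0, -beta*(rho + sigma)])  # forcing

def hamiltonian_field(u):
    # v(u) = {u, H} = grad H x u, with grad H = Omega u + h.
    return np.cross(Omega @ u + h_vec, u)

def lorenz_shifted(u):
    u1, u2, u3 = u
    return np.array([sigma*(u2 - u1),
                     -u1*u3 - sigma*u1 - u2,
                     u1*u2 - beta*u3 - beta*(rho + sigma)])

rng = np.random.default_rng(0)
for _ in range(5):
    u = 10.0 * rng.normal(size=3)
    assert np.allclose(hamiltonian_field(u) - Lam @ u + f_vec,
                       lorenz_shifted(u))
\end{verbatim}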
The critical points of the velocity field of the system are then \begin{equation} c_{1}:=\left( \sqrt{\beta\left( \rho-1\right) },\sqrt{\beta\left( \rho-1\right) },-\left( \sigma+1\right) \right) ,\ c_{2}:=\left( -x_{1}\left( c_{1}\right) ,-x_{2}\left( c_{1}\right) ,x_{3}\left( c_{1}\right) \right) \ , \end{equation} and $c_{0}:=\left( 0,0,-\left( \rho+\sigma\right) \right) .$ We also remark that (\ref{lpp}) can be rewritten in the form \begin{equation} \dot{u}=v-w\ , \label{decu'}% \end{equation} where $v$ is the divergence-free field (\ref{lppi}) and \begin{equation} \mathbb{R}^{3}\ni u\longmapsto w\left( u\right) :=\Lambda u-f=\nabla K\left( u\right) \in\mathbb{R}^{3}% \end{equation} with \begin{equation} K\left( u\right) :=\frac{1}{2}u\cdot\Lambda u-f\cdot u \end{equation} a convex function on $\mathbb{R}^{3}.$ Notice that the fields $v$ and $w$ are orthogonal in $L^{2}\left( rB;\mathbb{R}^{3}\right) ,$ where $B:=\{u\in\mathbb{R}^{3}:\left\Vert u\right\Vert \leq1\}$ is the unit ball in $\mathbb{R}^{3}$ and $rB$ denotes the ball of radius $r.$ The decomposition of the velocity field as the sum of a divergence-free field and a gradient one, together with the appearance in the Hamiltonian description of the flow of the Lie-Poisson brackets (\ref{LPb}) in the space reference frame of the rigid body, i.e. right translation on $SO\left( 3\right) ,$ is standard in fluid dynamics \cite{A}, \cite{MR} and can be seen as another source of analogy between the Lorenz '63 model and the Navier-Stokes equations \cite{PP1}, \cite{FJKT}. \subsection{The return map on the set of maxima of the Casimir function} If $u_{0}$ is any non-stationary point for the field (\ref{l1}) such that $\left\Vert u_{0}\right\Vert \leq\frac{\left\Vert f\right\Vert }{\sqrt {\lambda_{\Lambda}}},$ with $\lambda_{\Lambda}:=\min\{t\in spec\Lambda\},$ and $C\left( t\right) :=\left\Vert u\left( t,u_{0}\right) \right\Vert ^{2},$ let \begin{equation} m:=\inf_{t>0}C\left( t\right) \ ;\ M:=\sup_{t>0}C\left( t\right) \ . \end{equation} Clearly, $m\geq0$ since $C\geq0.$ Moreover, $M<\infty$ since it has been shown in \cite{PP1} that $C\left( t\right) \leq\frac{\left\Vert f\right\Vert }{\sqrt{\lambda_{\Lambda}}}$ where, for our choice of parameters, \begin{equation} \frac{\left\Vert f\right\Vert }{\sqrt{\lambda_{\Lambda}}}=\left\Vert f\right\Vert =\beta\left( \rho+\sigma\right) \ . \end{equation} To construct the function which links two subsequent relative maximum values of $C\left( t\right) $ we proceed as follows: \begin{itemize} \item first we identify the manifold $\Sigma$ in the configuration space of the system corresponding to the relative maxima of $C\left( t\right) ,$ \item then we construct a map of the interval $\left[ 0,1\right] $ into itself as a function of the map of the interval $\left[ m,M\right] $ of the possible values of $C\left( t\right) $ into itself, which can be defined through the Poincar\'{e} map of this manifold. \end{itemize} The existence of the aforementioned Poincar\'{e} map follows from the existence of the return map computed by Tucker in \cite{T}. Throughout the paper we will assume $\sigma=10,\ \rho=28,\ \beta=\frac{8}{3}. $ \subsubsection{Identification of $\Sigma$} By (\ref{decu'}) we get \begin{equation} \dot{C}\left( u\right) =-2\left[ E\left( u\right) -\frac{\beta\left( \rho+\sigma\right) ^{2}}{4}\right] \ , \label{C'-E}% \end{equation} with \begin{equation} E\left( u\right) :=\sigma u_{1}^{2}+u_{2}^{2}+\beta\left( u_{3}% +\frac{\left( \rho+\sigma\right) }{2}\right) ^{2}\ .
\end{equation} Therefore, \begin{equation} \mathcal{E}:=\left\{ u\in\mathbb{R}^{3}:\dot{C}\left( u\right) =0\right\} =\left\{ u\in\mathbb{R}^{3}:E\left( u\right) =\frac{\beta\left( \rho+\sigma\right) ^{2}}{4}\right\} \ , \end{equation} as already noticed in \cite{PM}, is an ellipsoid intersecting the vertical axis ($u_{3}$) at the origin and at $c_{0}.$ This also implies $M=\rho +\sigma.$ Clearly, $c_{1},c_{2}\in\mathcal{E}.$ Moreover, by (\ref{C'-E}) and (\ref{LPb})% \begin{align} \ddot{C}\left( u\right) & =2\nabla E\cdot\left[ u\times\nabla H+\nabla K\right] \left( u\right) \\ & =4\left\{ \sigma^{2}u_{1}^{2}+u_{2}^{2}-\left[ \sigma\left( \sigma-1\right) +\left( \beta-1\right) \left( u_{3}+\frac{\rho+\sigma}% {2}\right) +\frac{\rho+\sigma}{2}\right] u_{1}u_{2}\right. \nonumber\\ & \left. +\beta^{2}\left( u_{3}+\frac{\rho+\sigma}{2}\right) ^{2}% +\beta^{2}\frac{\rho+\sigma}{2}\left( u_{3}+\frac{\rho+\sigma}{2}\right) \right\} \ .\nonumber \end{align} Let us set $z:=u_{3}+\frac{\rho+\sigma}{2},$ then \begin{gather} \mathcal{E}^{\prime}:=\left\{ u\in\mathbb{R}^{3}:\ddot{C}\left( u\right) =0\right\} \\ =\left\{ u\in\mathbb{R}^{3}:\sigma^{2}u_{1}^{2}+u_{2}^{2}-\left[ \sigma\left( \sigma-1\right) +\left( \beta-1\right) z+\frac{\rho+\sigma }{2}\right] u_{1}u_{2}\right. \nonumber\\ \left. +\beta^{2}z\left( z+\frac{\rho+\sigma}{2}\right) =0\right\} \ .\nonumber \end{gather} We remark that, denoting by $R$ the involution \begin{equation} \mathbb{R}^{3}\ni u=\left( u_{1},u_{2},u_{3}\right) \longmapsto Ru:=\left( -u_{1},-u_{2},u_{3}\right) \in\mathbb{R}^{3}\ , \end{equation} leaving the field $\dot{u}$ invariant, we have $R\mathcal{E}=\mathcal{E}$ and $R\mathcal{E}^{\prime}=\mathcal{E}^{\prime}.$ Consider the diffeomorphism \begin{equation} q_{i}=O\left( z\right) u_{i},\ i=1,2\ ;\ q_{3}=z \end{equation} such that for any fixed value of $z,\ O\left( z\right) $ is an orthogonal matrix diagonalizing the symmetric quadratic form \begin{equation} \zeta\cdot A\left( z\right) \zeta:=\sigma^{2}\zeta_{1}^{2}+\zeta_{2}% ^{2}-\left[ \sigma\left( \sigma-1\right) +\left( \beta-1\right) z+\frac{\rho+\sigma}{2}\right] \zeta_{1}\zeta_{2}\;,\qquad\zeta\in \mathbb{R}^{2}\ , \end{equation} namely, setting $A\left( z\right) =O^{t}\left( z\right) diag\left( \lambda_{1}\left( z\right) ,\lambda_{2}\left( z\right) \right) O\left( z\right) ,$% \begin{equation} u\cdot A\left( z\right) u=q\cdot O\left( z\right) A\left( z\right) O^{t}\left( z\right) q=\lambda_{1}\left( z\right) q_{1}^{2}+\lambda _{2}\left( z\right) q_{2}^{2}\ , \end{equation} with \begin{align} \lambda_{1}\left( z\right) & =\frac{\sigma^{2}+1+\sqrt{\left( \sigma ^{2}-1\right) ^{2}+\left[ \frac{\rho+\sigma}{2}+\sigma\left( \sigma -1\right) +\left( \beta-1\right) z\right] ^{2}}}{2}\ ,\\ \lambda_{2}\left( z\right) & =\frac{\sigma^{2}+1-\sqrt{\left( \sigma ^{2}-1\right) ^{2}+\left[ \frac{\rho+\sigma}{2}+\sigma\left( \sigma -1\right) +\left( \beta-1\right) z\right] ^{2}}}{2}\ . \end{align} Under this change of variables \begin{equation} \mathcal{E}^{\prime}=\left\{ q\in\mathbb{R}^{3}:\lambda_{1}\left( q_{3}\right) q_{1}^{2}+\lambda_{2}\left( q_{3}\right) q_{2}^{2}+\beta ^{2}q_{3}\left( q_{3}+\frac{\rho+\sigma}{2}\right) =0\right\} \ .
\end{equation} Since $\lambda_{1}\left( q_{3}\right) $ is positive for any choice of the parameters $\beta,\rho,\sigma$ and $q_{3},$ the equation giving the intersection of $\mathcal{E}^{\prime}$ with the planes parallel to $q_{3}% =0$\linebreak($u_{3}=-\frac{\left( \rho+\sigma\right) }{2}$) can have a solution only if $\lambda_{2}\left( q_{3}\right) $ is negative, that is for \begin{align} q_{3} & >-\frac{\sigma\left( \sigma-3\right) +\frac{\rho+\sigma}{2}}% {\beta-1}\ \Rightarrow\ u_{3}>-\frac{1}{\beta-1}\left[ \sigma\left( \sigma-3\right) +\beta\frac{\left( \rho+\sigma\right) }{2}\right] \ ;\\ q_{3} & <-\frac{\sigma\left( \sigma+1\right) +\frac{\rho+\sigma}{2}}% {\beta-1}\ \Rightarrow\ u_{3}<-\frac{1}{\beta-1}\left[ \sigma\left( \sigma+1\right) +\beta\frac{\left( \rho+\sigma\right) }{2}\right] \ . \end{align} Therefore, for $q_{3}\neq0\ $($u_{3}\neq-\frac{\left( \rho+\sigma\right) }{2}$), these intersections are hyperbolas while, if $q_{3}=0$ or $q_{3}=-\frac{\left( \rho+\sigma\right) }{2}\ $($u_{3}=-\left( \rho +\sigma\right) $), from the definition of $\mathcal{E}^{\prime}$ we get the equations \begin{itemize} \item if $q_{3}=0,$% \begin{equation} \sigma^{2}u_{1}^{2}+u_{2}^{2}-\left[ \frac{\left( \rho+\sigma\right) }% {2}+\sigma\left( \sigma-1\right) \right] u_{1}u_{2}=0\ ; \end{equation} \item if $q_{3}=-\frac{\left( \rho+\sigma\right) }{2},$% \begin{equation} \sigma^{2}u_{1}^{2}+u_{2}^{2}-\left[ \sigma\left( \sigma-1\right) -\left( \beta-2\right) \frac{\rho+\sigma}{2}\right] u_{1}u_{2}=0\ . \end{equation} \end{itemize} Since for our choice of the values of the parameters of the model, \begin{align} \lambda_{2}\left( 0\right) & =\frac{\sigma^{2}+1-\sqrt{\left( \sigma ^{2}-1\right) ^{2}+\left[ \frac{\left( \rho+\sigma\right) }{2}% +\sigma\left( \sigma-1\right) \right] ^{2}}}{2}<0\\ \lambda_{2}\left( -\frac{\left( \rho+\sigma\right) }{2}\right) & =\frac{\sigma^{2}+1-\sqrt{\left( \sigma^{2}-1\right) ^{2}+\left[ \sigma\left( \sigma-1\right) -\left( \beta-2\right) \frac{\left( \rho+\sigma\right) }{2}\right] ^{2}}}{2}<0 \end{align} the intersections of $\mathcal{E}^{\prime}$ with the planes $q_{3}=0$ and $q_{3}=-\frac{\left( \rho+\sigma\right) }{2}$ are straight lines. The manifold in $\mathbb{R}^{3}$ corresponding to the relative maxima of $C\left( t\right) $ is \begin{align} \Sigma & :=\left\{ u\in\mathbb{R}^{3}:\dot{C}\left( u\right) =0\ ,\ \ddot{C}\left( u\right) \leq0\right\} \label{Sigma}\\ & =\left\{ u\in\mathbb{R}^{3}:\left\{ \begin{array} [c]{l}% \sigma u_{1}^{2}+u_{2}^{2}+\beta\left( u_{3}+\frac{\left( \rho +\sigma\right) }{2}\right) ^{2}=\frac{\beta\left( \rho+\sigma\right) ^{2}% }{4}\\ \sigma^{2}u_{1}^{2}+u_{2}^{2}-\left[ \sigma\left( \sigma-1\right) +\left( \beta-1\right) \left( u_{3}+\frac{\rho+\sigma}{2}\right) +\frac{\rho +\sigma}{2}\right] u_{1}u_{2}+\\ +\beta^{2}\left( u_{3}+\frac{\rho+\sigma}{2}\right) \left( u_{3}% +\rho+\sigma\right) \leq0 \end{array} \right.
\right\} \ .\nonumber \end{align} Since for our choice of the parameters \begin{equation} \frac{1}{\beta-1}\left[ \sigma\left( \sigma-3\right) +\beta\frac{\left( \rho+\sigma\right) }{2}\right] >\rho+\sigma\ , \end{equation} $\Sigma$ is composed of two closed surfaces in $\mathbb{R}^{3},$ $\Sigma_{+}$ and $\Sigma_{-},$ such that $R\Sigma_{+}=\Sigma_{-},$ intersecting only at the critical point $c_{0}.$ By definition, $\forall u\in\mathcal{E},$ the vector $\dot{u}\left( u\right) $ is orthogonal to the vector $\nabla C\left( u\right) ,$ hence it belongs to the plane spanned by $\nabla E\left( u\right) -\left( \nabla E\cdot\nabla C\right) \left( u\right) \nabla C\left( u\right) $ and $\left( \nabla C\times\nabla E\right) \left( u\right) ,$ where \begin{equation} \nabla C\times\nabla E=4\left( \begin{array} [c]{c}% u_{2}\left[ \left( \beta-1\right) u_{3}+\beta\frac{\rho+\sigma}{2}\right] \\ -u_{1}\left[ \left( \beta-\sigma\right) u_{3}+\beta\frac{\rho+\sigma}% {2}\right] \\ -\left( \sigma-1\right) u_{1}u_{2}% \end{array} \right) \end{equation} and, since $C$ is a constant of motion for the Hamiltonian field $v,$\linebreak$\forall u\in\mathcal{E},\ \left( w\cdot\nabla C\right) \left( u\right) =0.$ Moreover: \begin{itemize} \item $\left\vert \nabla E\right\vert \upharpoonleft_{\mathcal{E}},\left\vert \nabla C\right\vert \upharpoonleft_{\mathcal{E}}$ and $\left\vert \nabla C\times\nabla E\right\vert \upharpoonleft_{\mathcal{E}}$ are always different from zero; \item from (\ref{C'-E}) and (\ref{Sigma})% \begin{equation} \ddot{C}\left( t\right) =\left( \dot{u}\cdot\nabla\dot{C}\right) \left( t\right) =\left( \dot{u}\cdot\nabla\left[ -2\left( E-\beta\frac{\left( \rho+\sigma\right) ^{2}}{4}\right) \right] \right) \left( t\right) \ , \end{equation} hence, $\forall u\in\partial\Sigma,\ \dot{u}\left( u\right) $ is parallel to $\nabla C\times\nabla E,$ that is tangent to $\Sigma.$ \end{itemize} Therefore, $\dot{u}$ is transverse to $\Sigma\backslash\partial\Sigma$ and since $\forall u\in\Sigma\backslash\partial\Sigma,$% \begin{equation} \ddot{C}\left( u\right) =\left( \dot{u}\cdot\nabla\dot{C}\right) \left( u\right) =-2\left( \dot{u}\cdot\nabla E\right) <0\;\Longrightarrow\;\left( \dot{u}\cdot\nabla E\right) >0\ , \end{equation} the direction of $\dot{u}\left( u\right) $ points outward from the bounded subset of $\mathbb{R}^{3},$% \begin{equation} \left\{ u\in\mathbb{R}^{3}:E\left( u\right) \leq\beta\frac{\left( \rho+\sigma\right) ^{2}}{4}\right\} \ . \end{equation} \subsubsection{Parametrization of $\Sigma$} If $r\in\left( 0,\rho+\sigma\right) ,\ \gamma:=rB\cap\mathcal{E}$ is a regular closed curve. Therefore, we can parametrize $\Sigma_{+}$ choosing an appropriate arc of $\gamma$ as coordinate curve of the parametrization, that is, there exist an open regular subset $\Omega$ of $\mathbb{R}^{2}$ and a map\linebreak$b^{+}\in C^{1}\left( \Omega,\mathbb{R}^{3}\right) \cap C\left( \overline{\Omega},\mathbb{R}^{3}\right) $ such that% \begin{equation} \left\{ \begin{array} [c]{c}% u_{1}=b_{1}^{+}\left( y,z\right) \\ u_{2}=b_{2}^{+}\left( y,z\right) \\ u_{3}=b_{3}^{+}\left( y,z\right) \end{array} \right. \ ,\ \left( y,z\right) \in\Omega\ . \label{b+}% \end{equation} Moreover, we can choose the parametrization such that $z=r^{2},$ so that the coordinate curves $b_{z}^{+}\left( y\right) $ satisfy the equations \begin{equation} \left\{ \begin{array} [c]{c}% C\left( b_{z}^{+}\left( y\right) \right) =z\\ \dot{C}\left( b_{z}^{+}\left( y\right) \right) =0 \end{array} \right. \ .
\end{equation} We remark that the tangent field to $\gamma$ is parallel to $\nabla C\times\nabla E,$ while, if $\zeta$ denotes the coordinate curve $b_{y}% ^{+}\left( z\right) ,$ the tangent field to $\zeta$ is parallel to $\nabla E\times\left( \nabla C\times\nabla E\right) .$ Similar arguments also hold for $\Sigma_{-}.$ Furthermore, \begin{equation} \overline{\Omega}\ni\left( y,z\right) \longmapsto b^{-}\left( y,z\right) :=Rb^{+}\left( y,z\right) \in\mathbb{R}^{3}% \end{equation} is easily seen to be a parametrization of $\Sigma_{-}$ sharing the same properties as $b^{+}.$ \subsubsection{Return maps on $\Sigma$} The evolution of the system maps the ball $\beta\left( \rho+\sigma\right) B$ into itself, $\mathcal{E}\subset\beta\left( \rho+\sigma\right) B,$ and the velocity field is transverse to $\Sigma\backslash\partial\Sigma$ and to $\mathcal{E}\backslash\Sigma.$ For our choice of the parameters $\rho,\beta$ and $\sigma,$ it has been shown in \cite{T} that there exist periodic orbits crossing a two-dimensional compact domain $\Delta$ contained in the plane $\pi:=\left\{ u\in\mathbb{R}^{3}% :u_{3}=1-\left( \rho+\sigma\right) \right\} ,$ which is also intersected by the stable manifold of the system $W_{o}^{s}$ along some curve $\Gamma_{0}.$ Furthermore, the eight shortest periodic orbits have been rigorously found in \cite{GT}. Notice that, by symmetry, if $\varphi\left( t,u\right) ,\ u\in\Sigma_{+},$ is a periodic orbit, $R\varphi\left( t,u\right) $ is also a periodic orbit and either $R\varphi\left( t,u\right) =\varphi\left( t,u\right) $ or $R\varphi\left( t,u\right) =\varphi\left( t,Ru\right) ,$ where $Ru\in\Sigma_{-};$ that is, periodic orbits are either symmetric or appear in pairs whose elements are mapped one into the other by $R,$ as already remarked in \cite{Sp}. Since $\Delta$ is easily seen to be contained in \begin{equation} \left\{ u\in\mathbb{R}^{3}:\dot{C}\left( u\right) \leq0\right\} \cap\pi\ , \end{equation} these periodic orbits then necessarily cross $\Sigma,$ which is also possibly intersected by $W_{o}^{s}$ along some curve $\Gamma$ lying in the half-space \begin{equation} \left\{ u\in\mathbb{R}^{3}:u_{3}\geq1-\left( \rho+\sigma\right) \right\} \ . \end{equation} Therefore, if $u_{0}\in\Sigma\backslash\Gamma$ lies on a periodic orbit of period $t_{0},$ there exists an open neighborhood $N\ni u_{0}$ and a $C^{1}\left( N,\mathbb{R}\right) $ map $\tau$ such that $\tau\left( u_{0}\right) =t_{0}$ and $\varphi_{\tau\left( u\right) }\left( u\right) \in\Sigma$ for any $u\in N.$ Then, \begin{equation} N\cap\Sigma\backslash\Gamma\ni u\longmapsto P_{\Sigma}\left( u\right) :=\varphi_{\tau\left( u\right) }\left( u\right) \in\Sigma\ . \end{equation} Moreover, it has been proved in \cite{T} that $\Delta\backslash\Gamma_{0}$ is forward invariant under the return map on $\pi$ and that on $\Delta$ there exists a forward invariant unstable cone field. These properties are also shared by a compact subset $\Delta^{\prime}\subset\Sigma$ such that any open subset of $\Delta^{\prime}$ is diffeomorphic to an open subset of $\Delta.$ Hence, $P_{\Sigma}$ admits an invariant stable foliation with $C^{1+\iota}$ leaves, for some $\iota\in\left( 0,1\right) .$
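In practice, the return map $P_{\Sigma}$ can be approximated numerically by event location along the flow. The following Python sketch uses \texttt{scipy.integrate.solve\_ivp} to record the crossings where $\dot{C}$ changes sign from positive to negative, i.e. the successive visits to $\Sigma$; the integration tolerances, time horizon and initial condition are our own illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0/3.0

def rhs(t, u):
    u1, u2, u3 = u
    return [sigma*(u2 - u1),
            -u1*u3 - sigma*u1 - u2,
            u1*u2 - beta*u3 - beta*(rho + sigma)]

def Cdot(t, u):
    # dC/dt = -2 [ E(u) - beta (rho + sigma)^2 / 4 ].
    u1, u2, u3 = u
    E = sigma*u1**2 + u2**2 + beta*(u3 + 0.5*(rho + sigma))**2
    return -2.0*(E - 0.25*beta*(rho + sigma)**2)

Cdot.direction = -1   # +/- crossings only: relative maxima of C

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 1.0, -20.0],
                events=Cdot, rtol=1e-9, atol=1e-9)
hits = sol.y_events[0]              # successive points on Sigma
C_max = np.sum(hits**2, axis=1)     # successive maxima of C
\end{verbatim}
Successive entries of \texttt{C\_max} then give the graph of the peak-to-peak map which, after the normalization $X$ introduced below, yields $T.$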
\subsubsection{Construction of the map $T$} Let $P^{\left( \pm\right) }:=P_{\Sigma_{\pm}}.$ By the parametrization previously introduced for $\Sigma_{+},$ there exists an open subset $\Omega^{\prime}\subset\Omega\backslash\Gamma^{\prime},$ with $\Gamma^{\prime }:=\left( b^{+}\right) ^{-1}\left( \Gamma\right) ,$ and a $C^{1}\left( \Omega^{\prime},\mathbb{R}^{2}\right) $ map \begin{equation} \Omega^{\prime}\ni\left( y,z\right) \longmapsto S\left( y,z\right) \in\Omega^{\prime}\ , \end{equation} such that, $\forall\left( y,z\right) \in\Omega^{\prime},$% \begin{equation} \left( b^{+}\circ S\right) \left( y,z\right) :=\left( P^{\left( +\right) }\circ b^{+}\right) \left( y,z\right) \ . \end{equation} Furthermore, \begin{equation} G\left( y,z\right) :=\left( \dot{C}\circ b^{+}\circ S\right) \left( y,z\right) =0\ . \end{equation} Let $S_{1},S_{2}$ be respectively the first and the second component of $S.$ Since $b^{+}$ is a diffeomorphism and the components of $\nabla\dot{C}=-2\nabla E$ are different from zero on $\Sigma_{+},\ \forall\left( y,z\right) \in\Omega^{\prime},\ \partial_{y}G\left( y,z\right) \neq0.$ Thus, by the implicit function theorem, $\forall\left( y_{0},z_{0}\right) \in \Omega^{\prime},$ there exist two open intervals $\left( y_{1},y_{2}\right) ,\ \left( z_{1},z_{2}\right) $ such that $\left( y_{0},z_{0}\right) \in\left( y_{1},y_{2}\right) \times\left( z_{1},z_{2}\right) \subseteq\Omega^{\prime}$ and a unique $C^{1}\left( \left( z_{1}% ,z_{2}\right) \right) $ map \begin{equation} \left( z_{1},z_{2}\right) \ni z\longmapsto y:=U\left( z\right) \in\left( y_{1},y_{2}\right) \end{equation} such that, $\forall z\in\left( z_{1},z_{2}\right) ,\ G\left( U\left( z\right) ,z\right) =0$ and, $\forall\left( y,z\right) \in\left( y_{1},y_{2}\right) \times\left( z_{1},z_{2}\right) $ such that $y\neq U\left( z\right) ,\ G\left( y,z\right) \neq0.$ Therefore, let \begin{equation} \left( z_{1},z_{2}\right) \ni z\longmapsto V\left( z\right) :=S_{2}\left( U\left( z\right) ,z\right) \in\left( z_{1},z_{2}\right) \ . \end{equation} Notice that, since $b^{+}\in C^{1}\left( \Omega\right) ,\ S=\left( b^{+}\right) ^{-1}\circ P^{\left( +\right) }\circ b^{+}$ is $C^{1}\left( \Omega^{\prime}\right) $ if and only if $P^{\left( +\right) }$ is, and so are $U$ and $V.$ Moreover, by symmetry, \begin{equation} b^{-}\circ S=Rb^{+}\circ S=RP^{\left( +\right) }\circ b^{+}=RP^{\left( +\right) }\circ Rb^{-}=P^{\left( -\right) }\circ b^{-}\ . \end{equation} Hence $S=\left( b^{-}\right) ^{-1}\circ P^{\left( -\right) }\circ b^{-}.$ Clearly, $\left[ z_{1},z_{2}\right] \subseteq\left[ m\vee\left( r^{\ast }\right) ^{2},\rho+\sigma\right] ,$ with $r^{\ast}:=\inf\{r>0:rB\cap \Sigma\neq\varnothing\}.$ Let $u_{+}\in\Sigma_{+},\left( y_{+},z_{+}\right) \in\overline{\Omega}$ be such that $P^{\left( +\right) }\left( u_{+}\right) =c_{0}$ and\linebreak% $b^{+}\left( y_{+},z_{+}\right) =u_{+}.$ Setting \begin{equation} \left[ z_{1},z_{2}\right] \ni z\longmapsto X\left( z\right) :=\frac {z-z_{1}}{z_{2}-z_{1}}\in\left[ 0,1\right] \ , \end{equation} we define \begin{equation} \left[ 0,1\right] \ni s\longmapsto T\left( s\right) :=X\circ V\circ X^{-1}\left( s\right) \in\left[ 0,1\right] \ .
\end{equation} Hence, by construction, $T$ is a $C^{1}\left( \left( 0,1\right) \backslash\left\{ x_{0}\right\} \right) $ map, where $x_{0}:=X\left( z_{+}\right) ,$ and, since there exists $\iota\in\left( 0,1\right) $ such that $P_{\Sigma}$ admits an invariant stable foliation with $C^{1+\iota}$ leaves, $T$ is also $C^{1+\iota}\left( \left( 0,1\right) \backslash\left\{ x_{0}\right\} \right) .$ \section{The invariant density for the evolution under $T$} In this section we compute the density of the unique (by ergodicity) absolutely continuous invariant measure for the map $T$ and we prove its statistical stability. For the construction of the density we use the techniques recently introduced in the paper \cite{CHMV} (see also \cite{BH} for results related to similar maps), which dealt with Lorenz maps admitting indifferent fixed points besides points with unbounded derivative. For the statistical stability we will follow the recent article \cite{BV}, but with some new substantial improvements. The techniques used in \cite{CHMV} revolved around Young's towers \cite{LSY} and, substantiated by a careful analysis of the distortion, led to a detailed study of the density of the absolutely continuous invariant measure, of the recurrence properties of the dynamics and of limit theorems for H\"{o}lder continuous observables. This analysis can in particular be carried over to the case when the map has a derivative larger than one at the fixed point, as in the case we are going to treat, but possibly smaller than one at some other point (see below). We recall that whenever the Lorenz map is a Markov expanding map with finite derivative, it can be investigated with the spectral techniques of Keller \cite{K}. Young's towers are useful when the map loses the Markov property, but preserves points with unbounded derivative. This has been analysed in \cite{KDO}; see also \cite{OHL} for the case when there are critical points too. Our main effort will be in investigating the smoothness of the density. We will show that such a density is Lipschitz continuous on the whole unit interval except at one point. The argument we produce is a strong improvement with respect to the result achieved in \cite{CHMV} (and applies to it as well), where we solely proved the Lipschitz continuity on countably many intervals partitioning the unit interval. We stress that, as far as we know, this is the first result where the smoothness of the density for Lorenz-like maps is explicitly exhibited.\newline \textbf{Notations:} With $a_{n}\approx b_{n}$ we mean that there exists a constant $C\geq1$ such that $C^{-1}b_{n}\leq a_{n}\leq Cb_{n}$ for all $n\geq1;$ with $a_{n}\sim b_{n}$ we mean that $\lim_{n\rightarrow\infty} \frac{a_{n}}{b_{n}}=1.$ We will also use the symbols "$O$" and "$o$" in the usual sense. \newline The analysis we perform in this section applies to a large class of Lorenz-like maps which includes in particular those whose behavior is given by the theoretical arguments of the preceding section and by the numerical investigations of the paper \cite{PM}. The map $T$ (Fig.~\ref{fig:1}) has a left and a right convex branch around the point $0<x_{0}<1;$ the left branch is monotonically increasing and uniformly expanding even at the fixed point $0,$ while the right one is monotonically decreasing with derivative bounded from below by a constant less than one; at the cusp, located at $x_{0},$ the left and right derivatives blow up to infinity. Both branches are onto $[0,1]$ and this makes the map Markovian.
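For numerical experimentation it is convenient to have at hand a toy map with exactly these qualitative features. The following Python sketch implements one such map; the cusp location \texttt{x0} and the branch exponents are arbitrary illustrative choices, and the toy map is not claimed to reproduce the precise local expansions listed below.
\begin{verbatim}
x0, Bl, Br = 0.45, 0.55, 0.50   # cusp and branch exponents (arbitrary)

def T(x):
    # Toy cusp map: increasing expanding left branch
    # (min slope Bl/x0 ~ 1.22 > 1), decreasing right branch with
    # |slope| ~ 0.91 < 1 near x = 1, slopes blowing up at the cusp.
    # Both branches are onto [0, 1], so the map is Markov.
    if x < x0:
        return 1.0 - ((x0 - x) / x0) ** Bl
    return 1.0 - ((x - x0) / (1.0 - x0)) ** Br

# Quick sanity checks of the shape.
assert abs(T(0.0)) < 1e-12 and abs(T(1.0)) < 1e-12
assert T(x0 - 1e-9) > 0.99 and T(x0 + 1e-9) > 0.99
\end{verbatim}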
Moreover, we recall that our map is $C^{1}$ on $[0,1]\backslash\{x_{0}\}$ and $C^{1+\iota},$\linebreak$\iota\in\left( 0,1\right) ,$ on $(0,x_{0})\cup(x_{0},1).$ \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{% \includegraphics{fig1.jpg} } \caption{Normalized Lorenz cusp map for the Casimir maxima} \label{fig:1} \end{figure} The local behaviors are ($c$ will denote a positive constant which could take different values from one formula to another): \begin{align} & \left\{ \begin{array} [c]{l}% T(x)=\alpha^{\prime}x+\beta^{\prime}x^{1+\psi}+o(x^{1+\psi});\ x\rightarrow 0^{+}\\ DT(x)=\alpha^{\prime}+cx^{\psi}+o(x^{\psi}),\ \alpha^{\prime}>1;\ \beta ^{\prime}>0;\ \psi>1 \end{array} \right. \ ,\label{T_DT1}\\ & \left\{ \begin{array} [c]{l}% T(x)=\alpha(1-x)+\tilde{\beta}(1-x)^{1+\kappa}+o((1-x)^{1+\kappa });\ x\rightarrow1^{-}\\ DT(x)=-\alpha-c(1-x)^{\kappa}+o((1-x)^{\kappa}),\ 0<\alpha<1,\ \tilde{\beta }>0,\ \kappa>1 \end{array} \right. \ ,\label{T_DT2}\\ & \left\{ \begin{array} [c]{l}% T(x)=1-A^{\prime}(x_{0}-x)^{B^{\prime}}+o((x_{0}-x)^{B^{\prime}}% );\ x\rightarrow x_{0}^{-},\ A^{\prime}>0\\ DT(x)=c(x_{0}-x)^{B^{\prime}-1}+o((x_{0}-x)^{B^{\prime}-1}),\ 0<B^{\prime}<1 \end{array} \right. \ ,\label{T_DT3}\\ & \left\{ \begin{array} [c]{l}% T(x)=1-A(x-x_{0})^{B}+o((x-x_{0})^{B});\ x\rightarrow x_{0}^{+},\ A>0\\ DT(x)=-c(x-x_{0})^{B-1}+o((x-x_{0})^{B-1}),\ 0<B<1 \end{array} \right. \ . \label{T_DT4}% \end{align} We set $B^{\ast}:=\max(B,B^{\prime});$ moreover we denote by $T_{1}$ (resp. $T_{2}$) the restriction of $T$ to $[0,x_{0}]$ (resp. to $[x_{0},1]$). A key role is played by the preimages of $x_{0}$ since they will give the sets where we will induce with the first return map; so we set: $a_{0}:=T_{2}^{-1}x_{0};$% \ $a_{0}^{\prime}:=T_{1}^{-1}x_{0};$\ $a_{p}^{\prime}=T_{1}^{-p}a_{0}^{\prime };$\ $a_{p}=T_{2}^{-1}T_{1}^{-(p-1)}a_{0}^{\prime},$\ $p\geq1.$ We also define the sequences $\{b_{p}\}_{p\geq1}\subset(x_{0},a_{0})$ and $\{b_{p}^{\prime }\}_{p\geq1}\subset(a_{0}^{\prime},x_{0})$ as $Tb_{p}^{\prime}=Tb_{p}% =a_{p-1}.$ The idea is now to induce on some domain $I$ and to replace the action of $T$ on $I$ with that of the first return map $T_{I}$ into $I.$ We will see that the system $(I,T_{I})$ admits an absolutely continuous invariant measure $\mu_{I}$ which is in particular equivalent to the Lebesgue measure, with a density $\rho_{I}$ bounded from below and from above. There will finally be a link between the induced measure $\mu_{I}$ and the absolutely continuous invariant measure $\mu$ on the interval, which will allow us to get some information on the density $\rho$ of $\mu.$ The principal set where we will choose to induce is the open interval $I=(a_{0}^{\prime},a_{0})\backslash\{x_{0}\}.$ The subsets $Z_{p}\subset I$ with first return time $p$ will have the form \begin{align} Z_{1} & =(a_{0}^{\prime},b_{1}^{\prime})\cup(b_{1},a_{0})\label{SI}\\ Z_{p} & =(b_{p-1}^{\prime},b_{p}^{\prime})\cup(b_{p},b_{p-1})\quad p>1\ . \label{SII}% \end{align} We will also induce over the open sets $(a_{n}^{\prime},a_{n-1}^{\prime})$ and $(a_{n},a_{n+1}),n>1,$ simply denoted in the following as the rectangles $I_{n}.$ In order to apply the techniques of \cite{CHMV}, we have to show that the induced maps are aperiodic uniformly expanding Markov maps with bounded distortion on each set with prescribed return time.
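The preimage points $a_{p},a_{p}^{\prime},b_{p},b_{p}^{\prime}$ entering this construction are easy to compute numerically for a concrete map. A Python sketch using the toy map \texttt{T} and cusp \texttt{x0} from the previous snippet (the depth \texttt{P} and the bracketing offsets are arbitrary):
\begin{verbatim}
from scipy.optimize import brentq

def inv_left(y):    # T1^{-1}: preimage of y in [0, x0)
    return brentq(lambda x: T(x) - y, 0.0, x0 - 1e-12)

def inv_right(y):   # T2^{-1}: preimage of y in (x0, 1]
    return brentq(lambda x: T(x) - y, x0 + 1e-12, 1.0)

P = 8
a_prime = [inv_left(x0)]        # a'_0, then a'_p = T1^{-p} a'_0
for _ in range(P):
    a_prime.append(inv_left(a_prime[-1]))

a = [inv_right(x0)]             # a_0; then a_p = T2^{-1} T1^{-(p-1)} a'_0
for p in range(1, P + 1):
    a.append(inv_right(a_prime[p - 1]))

# b_p in (x0, a_0) and b'_p in (a'_0, x0) solve T b = a_{p-1}.
b       = [inv_right(a[p - 1]) for p in range(1, P + 1)]
b_prime = [inv_left(a[p - 1]) for p in range(1, P + 1)]
\end{verbatim}
With these points at hand, the rectangles $I_{n}$ and the sets $Z_{p}$ of (\ref{SI})-(\ref{SII}) can be written down explicitly and, for instance, the expansion assumptions of the Lemma below can be checked numerically.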
On the sets $I_{n}$ the first return map $T_{I_{n}}$ is Bernoulli, while the aperiodicity condition on $I$ follows easily from inspection of the graph of the first return map $T_{I}:I\rightarrow I$ showing that it maps: $(a_{0}^{\prime},b_{1}^{\prime})$ onto $(x_{0},a_{0});$ the intervals $(b_{l}^{\prime},b_{l+1}^{\prime}% ),\ l\geq1,$ onto the interval $(a_{0}^{\prime},x_{0})$ and $(b_{1},a_{0})$ onto $(x_{0},a_{0}).$ Finally, $T_{I}$ sends the intervals $(b_{l+1}% ,b_{l}),\ l\geq1$ onto $(a_{0}^{\prime},x_{0}).$ Bounds on the distortion of the first return map on $I$ and on the $I_{n}$ can be proved exactly in the same way as in Proposition 3 of \cite{CHMV} (we defer to it for the details) provided we show that the first return maps are uniformly expanding\footnote{The Lorenz-like map considered in \cite{CHMV} was $C^{2}$ outside the boundary points and the cusp; our map is instead $C^{1+\iota}.$ This will not change the proof of the distortion in \cite{CHMV} and all the statistical properties which follow from it. As a matter of fact, in the initial formula (5) in \cite{CHMV}, we have to replace the term $\left\vert \frac{D^{2}T(\xi)}{DT(\xi)}\right\vert \left\vert T^{q}(x)-T^{q}(y)\right\vert $ with $\frac{C_{h}}{\left\vert DT(\xi)\right\vert }\left\vert T^{q}% (x)-T^{q}(y)\right\vert ^{\iota},$ where $C_{h}>0$ is the H\"{o}lder constant depending only on $T$ and $\xi$ is a point between the iterates $T^{q}(x),\ T^{q}(y).$ The only delicate point where the $C^{1+\iota }$ assumption could give problems is the summability of the series at point (i) in the statement of Lemma 4 in \cite{CHMV}. The general term of this series will be of the form (we adapt to our case): $(a_{n+1}-a_{n})^{\iota}.$ In \cite{CHMV}, due to the presence of the indifferent fixed point, the term $(a_{n+1}-a_{n})$ decays polynomially like $n^{-\kappa},$ say, where $\kappa>1$ depends on the map. In order to guarantee the aforementioned summability property we therefore have to impose an additional assumption on $\kappa,$ namely $\kappa>\iota^{-1}$. We do not have such a constraint in our case since the length $(a_{n+1}-a_{n})$ decays exponentially fast.}. The proof of this fact is given in the next Lemma and it requires a few assumptions which can be checked numerically with a finite number of steps and by a direct inspection of the graph of $T.$ With abuse of language we will say that the derivative is larger than $1$ if its absolute value is larger than $1.$ \begin{lemma} Let us suppose that in addition to (\ref{T_DT1})-(\ref{T_DT4}) the map $T$ satisfies the assumptions: \begin{itemize} \item[(i)] $d_{(1,0)}:=\inf_{x\in(b_{1},a_{0})}|DT(x)|>1;$ \item[(ii)] $|DT(b_{1})|\geq DT(a_{0}^{\prime});$ \item[(iii)] $|DT(a_{p-1})|DT(a_{p-2}^{\prime})\cdots DT(a_{1}^{\prime })DT(a_{0}^{\prime})>\alpha^{\prime\prime},\ \forall p\leq p^{\ast }:=\left\lfloor 1+\frac{\log(\alpha^{\prime\prime}\alpha^{-1})}{\log \alpha^{\prime}}\right\rfloor ,$ for $1<\alpha^{\prime\prime}\leq d_{(1,0)}\wedge\alpha^{\prime}.$ \end{itemize} Then the first return maps $T_{I}$ and $T_{I_{n}},\ n>1,$ have derivative uniformly bounded from below by $\alpha^{\prime\prime}.$ \end{lemma} \begin{proof} We give the proof for $I$ and then generalize to all the $I_{n}.$ We represent with an arrow $"\rightarrow"$ the evolution under $T$ of a subset $Z_{p}\subset I,\ p\geq1,$ given in (\ref{SI}) and (\ref{SII}).
Consequently $(b_{1},a_{0})\rightarrow(x_{0},a_{0})$ and $(a_{0}^{\prime},b_{1}^{\prime })\rightarrow(x_{0},a_{0}).$ In the latter case the derivative of the map coincides with that of $T$ and is larger than $1,$ since $T_{1}$ has derivative larger than $\alpha^{\prime}>1.$ The former case follows from condition (i). For $p>1$ we have: \begin{equation} \left\{ \begin{array} [c]{l}% \left( b_{2},b_{1}\right) \rightarrow\left( a_{0},a_{1}\right) \rightarrow\left( a_{0}^{\prime},x_{0}\right) , \ p=2\\ (b_{p},b_{p-1})\rightarrow(a_{p-2},a_{p-1})\rightarrow(a_{p-2}^{\prime },a_{p-3}^{\prime})\rightarrow(a_{p-3}^{\prime},a_{p-4}^{\prime}% )\rightarrow\cdots\\ \rightarrow(a_{1}^{\prime},a_{0}^{\prime})\rightarrow(a_{0}^{\prime}% ,x_{0})\quad p\geq3 \end{array} \right. \end{equation} and \begin{equation} \left\{ \begin{array} [c]{l}% \left( b_{2}^{\prime},b_{1}^{\prime}\right) \rightarrow\left( a_{0}% ,a_{1}\right) \rightarrow\left( a_{0}^{\prime},x_{0}\right) , \ p=2\\ (b_{p-1}^{\prime},b_{p}^{\prime})\rightarrow(a_{p-2},a_{p-1})\rightarrow (a_{p-2}^{\prime},a_{p-3}^{\prime})\rightarrow(a_{p-3}^{\prime},a_{p-4}% ^{\prime})\rightarrow\cdots\\ \rightarrow(a_{1}^{\prime},a_{0}^{\prime})\rightarrow(a_{0}^{\prime}% ,x_{0})\quad p\geq3 \end{array} \right. \end{equation} In order to ensure that the derivative of $T^{p}$ is larger than one we need: \begin{itemize} \item in the first case \begin{equation} \left\vert DT(b_{p-1})DT(a_{p-1})\right\vert DT(a_{p-2}^{\prime})\cdots DT(a_{2}^{\prime})DT(a_{1}^{\prime})>1\ ; \label{(*)}% \end{equation} \item in the second case \begin{equation} DT(b_{p-1}^{\prime})\left\vert DT(a_{p-1})\right\vert DT(a_{p-2}^{\prime })\cdots DT(a_{2}^{\prime})DT(a_{1}^{\prime})>1\ . \label{(**)}% \end{equation} \end{itemize} Let us suppose now that we have, for $p>1,$ \begin{equation} \left\vert DT(a_{p-1})\right\vert DT(a_{p-2}^{\prime})\cdots DT(a_{1}^{\prime })DT(a_{0}^{\prime})>\alpha^{\prime\prime}\ . \label{C1}% \end{equation} If this condition holds, then the inequality (\ref{(**)}) follows too with the same uniform (in $p$) bound $\alpha^{\prime\prime},$ since, by monotonicity of the first derivative, $DT(b_{p-1}^{\prime})>DT(a_{0}^{\prime}).$ In order to satisfy the inequality (\ref{(*)}) with a lower bound given by $\alpha ^{\prime\prime},$ by assuming again (\ref{C1}), it will be sufficient to show that $\left\vert DT(b_{p-1})\right\vert \geq DT(a_{0}^{\prime}),$ which, by monotonicity, is implied by $|DT(b_{1})|\ge DT(a^{\prime}_{0})$ and this follows from assumption (ii). Therefore, we are left with the proof of the validity of condition (\ref{C1}).
By condition (iii) this holds true for $p\leq p^{\ast}.$ Moreover, since the points $a_{p-2}^{\prime},a_{p-3}% ^{\prime},\dots,a_{0}^{\prime}$ lie to the left of $x_{0},$ all the $(p-1)$ derivatives in the block $DT(a_{p-2}^{\prime})\cdots DT(a_{1}^{\prime })DT(a_{0}^{\prime})$ are larger than $\alpha^{\prime}.$ On the other hand, the derivative at $a_{p-1}$ is surely larger than $\alpha.$ Hence, (\ref{C1}) holds for all $p$ such that $\alpha\left( \alpha^{\prime}\right) ^{p-1}>\alpha^{\prime\prime}.$\newline Now we return to the rectangles $I_{n}.$ Let us first call \emph{complete path} the graphs given above, starting respectively from $(a_{p-1},a_{p}),\ p>1,$ and from $(a_{p}^{\prime},a_{p-1}^{\prime }),\ p>1,$ and ending in $(a_{0}^{\prime},x_{0}).$ It is easy to check, by looking at the grammar\footnote{Let us give the coding for the map $T$ with the grammar which we invoked above. To use a coherent notation we will redefine $a_{-0}\equiv a_{0}^{\prime};$ $a_{-p}\equiv a_{p}^{\prime}% =T_{1}^{-p}a_{0}^{\prime}\ ,\ p\geq1.$ We associate with each point $x\in\lbrack0,1]\backslash\chi,$ where $\chi=\cup_{i\geq0}T^{-i}\{x_{0}\},$ the unique coding $x=(\omega_{0},\omega_{1},\dots,\omega_{n},\dots ),\ \omega_{l}\in\mathbb{Z},$ where (from now on $n$ will denote a positive integer larger than $1$), $\omega_{l}=n$ iff $T^{l}x\in(a_{n-1},a_{n}% );\ \omega_{l}=-n$ iff $T^{l}x\in(a_{-n},a_{-(n-1)});$ $\omega_{l}=0$ iff $T^{l}x\in I.$ The grammar is the following (the formal symbol $-0$ must be interpreted as $0$) :% \begin{align*} \omega_{i} & =n>0\Rightarrow\omega_{i+1}=-n\ ;\ \omega_{i}=-n\Rightarrow \omega_{i+1}=-(n-1)\\ \newline\omega_{i} & =0\Rightarrow\omega_{i+1}=n\geq0\ (\text{any }n) \end{align*} } given by the arrows, that any subset of the rectangles $I_{n}$ with first return time $q>n$ will contain points whose trajectory follows a complete path, or spends some time in $(x_{0},a_{0}).$ In any case, and by the condition (\ref{C1}) whose validity has been checked above, the derivative $DT^{q}$ will be strictly larger than $\alpha^{\prime\prime}.$ \end{proof} \begin{remark} The assumptions (i)-(iii) in the previous Lemma are easily verified for the map investigated in \cite{PM}. In particular, with the values $\alpha=0.4603$ and $\alpha^{\prime}=1.113$ associated with the map and with $\alpha^{\prime\prime }\sim1.01,$ the inequality (iii) is verified for $p\geq9;$ hence we have only to check (\ref{C1}) for $1<p\leq8$ and this has been done, and confirmed, by a direct numerical inspection. \end{remark} As a consequence of the preceding results, we can apply, as in \cite{CHMV}, the L.-S. Young tower theory and conclude the following: \begin{itemize} \item On the induced set $I,$ the tail of the Lebesgue measure of the set of points with first return time bigger than $n,$ more precisely the quantity $\sum_{k>n}m\{x\in I~;~\tau_{I}(x)\geq k\},$ where $\tau_{I}(x)$ denotes the first return time of the point $x$ into $I,$ decays exponentially fast with $n.$ By using (\ref{SI}) and the asymptotic values for the $b_{n}$ and $b_{n}^{\prime }$ given below, one immediately finds that the previous rate of decay is $O(\left( \alpha^{\prime}\right) ^{-\frac{n}{B^{\ast}}}).$ This implies the existence on the Borel $\sigma$-algebra $\mathcal{B}\left( [0,1]\right) $ of an absolutely continuous invariant measure $\mu$ with exponential decay of correlations for H\"{o}lder observables evolving under $T$ w.r.t.
As a consequence of the preceding results, we could apply, as in \cite{CHMV}, the L.-S. Young tower theory and conclude the following statements:

\begin{itemize}
\item On the induced set $I,$ the tail of the Lebesgue measure of the set of points with first return bigger than $n,$ to be more precise the quantity $\sum_{k>n}m\{x\in I~;~\tau_{I}(x)\geq k\},$ where $\tau_{I}(x)$ denotes the first return of the point $x$ into $I,$ decays exponentially fast with $n.$ By using (\ref{SI}) and the asymptotic values for the $b_{n}$ and $b_{n}^{\prime}$ given below, it is immediate to find that the previous rate of decay is $O(\left( \alpha^{\prime}\right) ^{-\frac{n}{B^{\ast}}}).$ This implies the existence on the Borel $\sigma$-algebra $\mathcal{B}\left( [0,1]\right)$ of an absolutely continuous invariant measure $\mu$ with exponential decay of correlations for H\"{o}lder observables evolving under $T$ w.r.t. $\mu$ (the rate of this decay will be of the type $\hat{\alpha}^{-n},$ where $\hat{\alpha}$ is possibly different from $\alpha^{\prime}$).

\item Since the first return maps $T_{I}$ and $T_{I_{n}}$ are aperiodic uniformly expanding Markov maps, they admit invariant measures $\mu_{I}$ and $\mu_{I_{n}}$ which turn out to be equivalent to Lebesgue on $I$ and $I_{n},$ with densities bounded away from $0$ and $\infty$ \cite{CHMV} and also Lipschitz continuous on the images of the rectangles of their associated Markov partition \cite{AD}\footnote{It is argued in \cite{AD} that if $\alpha$ is a Markov partition of the standard probability metric space $(X,\mathcal{B},m,T)$ with distance $d,$ then $T\alpha\subset\sigma(\alpha),$ where $\sigma(\alpha)$ denotes the sigma-algebra generated by the partition $\alpha,$ and therefore there exists a (possibly countable) partition $\beta$ coarser than $\alpha$ such that $\sigma(T\alpha)=\sigma(\beta).$ Moreover, if the system is Gibbs-Markov, as in our case, then the space $Lip_{\infty,\beta}$ of functions $f:X\rightarrow\mathbb{R},\ f\in L_{m}^{\infty}:=L_{m}^{\infty}\left( X\right),$ which are Lipschitz continuous on each $Z\in\beta,$ is a Banach space with the norm $\left\Vert f\right\Vert_{Lip_{\infty,\beta}}=\left\Vert f\right\Vert_{L_{m}^{\infty}}+D_{\beta}f,$ where $D_{\beta}f=\sup_{Z\in\beta}\sup_{x,y\in Z}\frac{|f(x)-f(y)|}{d(x,y)}.$ The space $Lip_{\infty,\beta}$ is compactly injected into $L_{m}^{1},$ which gives the desired conclusions on the smoothness of the density as a consequence of the Lasota-Yorke inequality. Notice that in our case $m$ is just the Lebesgue measure. We denote by~$B(I)$ the Banach space $Lip_{\infty,\beta}$ defined on $I.$}. In the sequel we will show that such densities coincide, up to a constant, with the restriction to the inducing sets of the density $\rho$ of the invariant measure $\mu$ for the map $T.$ Now, the images of the rectangles of the Markov partitions are the (disjoint) sets $(a_{0}^{\prime},x_{0})$ and $(x_{0},a_{0}),$ when we induce over $I,$ and the whole intervals $(a_{n}^{\prime},a_{n-1}^{\prime})$ and $(a_{n},a_{n+1}),\ n>1,$ when we induce over the rectangles in $I_{n}.$ Therefore we can conclude that the density of the invariant measure $\mu$ is a piecewise Lipschitz continuous function with possible discontinuities at the points $a_{p},a_{p}^{\prime},\ p>1,$ $a_{0},a_{0}^{\prime}$ and $x_{0}.$
\end{itemize}

We now improve this last result by showing that the density is Lipschitz continuous over the unit interval except at the cusp point $x_{0}.$ We stress that this result also improves Proposition 13 in \cite{CHMV}.

\begin{proposition}
The density $\rho$ of the invariant measure $\mu$ is bounded over $[0,1]$ and Lipschitz continuous everywhere except possibly at the cusp point $x_{0},$ where it is still continuous. Moreover,
\begin{equation}
\lim_{x\rightarrow0^{+}}\rho(x)=\lim_{x\rightarrow1^{-}}\rho(x)=0\ .
\end{equation}
\end{proposition}

\begin{proof}
We work on the induced set $I.$ The invariant measure $\mu_{I}$ for the induced map $T_{I}$ is related to the invariant measure $\mu$ over the whole interval thanks to the well-known formula due to Pianigiani:
\begin{equation}
\mu(B)=C_{r}\sum_{i}\sum_{j=0}^{\tau_{i}-1}\mu_{I}(T^{-j}(B)\cap Z_{i}) \label{reldens}
\end{equation}
where $B$ is any Borel set in $[0,1]$ and the first sum runs over the cylinders $Z_{i}$ with prescribed first return time $\tau_{i}$ and whose union gives $I.$ The normalizing constant $C_{r}=\mu(I)$ satisfies $1=C_{r}\sum_{i}\tau_{i}\mu_{I}(Z_{i}).$ This immediately implies that, calling $\hat{\rho}$ the density of $\mu_{I},$ we have $\rho(x)=C_{r}\hat{\rho}(x)$ for $m$-almost every $x\in I,$ and therefore $\rho$ can be extended to a Lipschitz continuous function on $I$ as $\hat{\rho}.$ A straightforward application of formula (\ref{reldens}) gives \cite{CHMV}:
\begin{align}
\mu(a_{n-1},a_{n}) & =C_{r}\mu_{I}(Z_{n+1})\\
\mu(a_{n}^{\prime},a_{n-1}^{\prime}) & =C_{r}\sum_{p=n+2}^{\infty}\mu_{I}(Z_{p})
\end{align}
Let us now take a measurable $B\subset(a_{n}^{\prime},a_{n-1}^{\prime});$ the formula above immediately implies that
\begin{equation}
\mu(B)=C_{r}\sum_{p=n+2}^{\infty}\mu_{I}(T^{-(p-n)}B\cap Z_{p})
\end{equation}
Passing to the densities we have
\begin{equation}
\int_{B}\rho\left( x\right) dx=C_{r}\sum_{p=n+2}^{\infty}\int_{T^{-(p-n)}B\cap Z_{p}}\hat{\rho}\left( x\right) dx
\end{equation}
We now perform a change of variables by observing that the set $B$ is pushed backward $p-n-2$ times by means of $T_{1}^{-1},$ then once by means of $T_{2}^{-1},$ and finally it splits into two parts according to the actions of $T_{1}^{-1}$ and $T_{2}^{-1}.$ Therefore,
\begin{equation}
\sum_{p=n+2}^{\infty}\int_{T^{-(p-n)}B\cap Z_{p}}\hat{\rho}\left( x\right) dx=\sum_{p=n+2}^{\infty}\sum_{l=1,2}\int_{B}\frac{\hat{\rho}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(p-n-2)}y)}{|DT^{p-n}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(p-n-2)}y)|}dy\ .
\end{equation}
Since $B$ is any measurable set in $(a_{n}^{\prime},a_{n-1}^{\prime}),$ we have for $m$-almost every $x\in(a_{n}^{\prime},a_{n-1}^{\prime}),$
\begin{align}
\rho(x) & =C_{r}\sum_{p=n+2}^{\infty}\sum_{l=1,2}\frac{\hat{\rho}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(p-n-2)}x)}{|DT^{p-n}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(p-n-2)}x)|}\nonumber\\
& =C_{r}\sum_{m=2}^{\infty}\sum_{l=1,2}\frac{\hat{\rho}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(m-2)}x)}{|DT^{m}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(m-2)}x)|}\ . \label{FF}
\end{align}
This formula does not depend on the choice of the interval $(a_{n}^{\prime},a_{n-1}^{\prime})$ and therefore it holds for $x\in(0,a_{0}^{\prime}).$ For the cylinders $(a_{n-1},a_{n})$ we get similarly that, for $m$-almost any $x\in(a_{0},1),$
\begin{equation}
\rho(x)=C_{r}\sum_{l=1,2}\frac{\hat{\rho}(T_{l}^{-1}x)}{|DT(T_{l}^{-1}x)|}\ . \label{FF2}
\end{equation}
Since $\hat{\rho}$ is Lipschitz continuous inside $I$ and the inverse branches of $T$ are $C^{1+\iota},$ we conclude that $\rho$ can be chosen Lipschitz continuous over the disjoint open intervals $(0,a_{0}^{\prime})\cup(a_{0},1).$ It is now useful to observe that the right hand sides of (\ref{FF}) and (\ref{FF2}) give exactly the expression of the Perron-Frobenius operator associated to the first return map whenever $x$ is chosen in $I.$ By the existence of the left (resp. right) limit of $\hat{\rho}$ in $a_{0}^{\prime}$ (resp. $a_{0}$) we immediately obtain the continuity of $\rho$ at such points.
We now use this result to prove the continuity of the density in $x_{0}$. We recall that such a density is the fixed point of the Perron-Frobenius operator, so that it satisfies the following equation, for any $x\in[0,1]$:
\begin{equation}\label{PPFF}
\rho(x)=\frac{\rho(T_{1}^{-1}(x))}{|DT\left( T_{1}^{-1}(x)\right) |}+\frac{\rho(T_{2}^{-1}(x))}{|DT\left( T_{2}^{-1}(x)\right) |}\ ,
\end{equation}
which gives, for $x=x_{0},$
\begin{equation}
\rho(x_{0})=\frac{\rho(a_{0}^{\prime})}{|DT\left( a_{0}^{\prime}\right) |}+\frac{\rho(a_{0})}{|DT\left( a_{0}\right) |}\ ,
\end{equation}
and this immediately proves the continuity in $x_{0}$.\newline We now observe that assumptions (\ref{T_DT1})-(\ref{T_DT4}), together with the facts that $T_{2}(a_{p})=a_{p-1}^{\prime},$ $T_{1}(a_{p}^{\prime})=a_{p-1}^{\prime}$ and $T_{1}b_{p}=T_{2}b_{p}=a_{p-1},$ allow us to easily obtain the following asymptotic behaviors (for $p$ large) for the preimages of $x_{0}$ (again $c$ will denote a constant independent of $p$ which could change from one formula to another):
\begin{align}
a_{p}^{\prime} & \sim\frac{c}{\left( \alpha^{\prime}\right) ^{p}};\ (1-a_{p})\sim\frac{c}{\left( \alpha^{\prime}\right) ^{p}}\ ,\\
(x_{0}-b_{p}^{\prime}) & \sim\frac{c}{\left( \alpha^{\prime}\right) ^{\frac{p}{B^{\prime}}}};\ (b_{p}-x_{0})\sim\frac{c}{\left( \alpha^{\prime}\right) ^{\frac{p}{B}}}\ .
\end{align}
These formulas immediately imply that for $x=b_{p}$ (resp. $x=b_{p}^{\prime}$) in a neighborhood of $x_{0}$ and for $p$ large the derivative behaves like $|DT(x)|\sim c\left( \alpha^{\prime}\right) ^{p(\frac{1}{B}-1)}$ (resp. $|DT(x)|\sim c(\alpha^{\prime})^{p(\frac{1}{B^{\prime}}-1)}$). Since $\hat{\rho}$ is bounded away from zero and infinity on $I,$ by the preceding scalings on the growth of the derivative near $x_{0}$ we have that $\rho(x)\approx x^{\frac{1}{B^{\ast}}-1}$ for $x$ close to $0$ and $\rho(x)\approx(1-x)^{\frac{1}{B^{\ast}}-1}$ for $x$ close to $1,$ which means that $\rho$ can be extended by continuity to zero on the right side of $0$ and on the left side of $1.$
\end{proof}

The preceding proposition suggests the following scaling for the density.

\begin{proposition}
\begin{align}
\rho(x) & =c^{\prime}x^{a}+o(x^{a}),\ x\rightarrow0^{+};\ a>0,\\
\rho(x) & =c^{\prime\prime}(1-x)^{b}+o((1-x)^{b});\ x\rightarrow1^{-},\ b>0,
\end{align}
with
\begin{equation}
a=b=\frac{1}{B^{\ast}}-1
\end{equation}
and the constants $c^{\prime}$ and $c^{\prime\prime}$ satisfying
\begin{equation}
\left( \frac{1}{\alpha^{\prime}}\right) ^{\frac{1}{B^{\ast}}}+\left( \frac{1}{\alpha(\frac{c^{\prime}}{c^{\prime\prime}})^{B^{\ast}}}\right) ^{\frac{1}{B^{\ast}}}=1\ .
\end{equation}
\end{proposition}

\begin{proof}
We use again formula (\ref{PPFF}). By using for $T$ and its two inverse branches the asymptotic polynomial behaviors in $0$ and $1$ given in (\ref{T_DT1})-(\ref{T_DT4}), we get, at the lowest order in $x$ in the neighborhood of $0,$
\begin{equation}
c^{\prime}\left( \alpha^{\prime}\right) ^{-a-1}x^{a}+c^{\prime\prime}\alpha^{-b-1}x^{b}=c^{\prime}x^{a}\ .\label{fr}
\end{equation}
Now, suppose $a<b.$ Then $\left( \alpha^{\prime}\right) ^{-a-1}\approx1,$ which implies either $\alpha^{\prime}=1$ or $a=-1,$ and both cases are excluded. On the contrary, if $a>b,$ then the term $c^{\prime\prime}\alpha^{-b-1}x^{b}$ would dominate, forcing $\alpha^{-b-1}=0,$ which is again impossible.
Hence, we necessarily have $a=b.$ We now take the point $x$ in the neighborhood of $1.$ By expressing $T_{1}^{-1}\left( x\right)$ and $T_{2}^{-1}\left( x\right)$ explicitly in terms of $x$ in the neighborhood of $x_{0}$ and substituting into the Perron-Frobenius equation we get, at the lowest order in $1-x,$
\begin{equation}
\frac{O(1)}{\left( 1-x\right) ^{\frac{B-1}{B}}}+\frac{O(1)}{\left( 1-x\right) ^{\frac{B^{\prime}-1}{B^{\prime}}}}=(1-x)^{b}\ ,
\end{equation}
from which we obtain
\begin{equation}
(1-x)^{-\frac{B^{\ast}-1}{B^{\ast}}}\approx(1-x)^{b}\ .
\end{equation}
We finally conclude that $a=b=\frac{1}{B^{\ast}}-1.$ Substituting this common value into equation (\ref{fr}) we finally get the expression relating the constants $c^{\prime}$ and $c^{\prime\prime}.$
\end{proof}

The latter relation is a good check for the validity of the shape of the density at $0$ and $1.$ By assuming the continuity of $\rho$ in $x_{0},$ we can use the value of $a$ given above in terms of the map parameter $B^{\ast}$ to guess a functional expression for $\rho.$ In agreement with the previous considerations, such an expression could be
\begin{equation}
\rho(x)=N\left( \gamma,\delta\right) e^{-\gamma x}x^{\delta}(1-x)^{\delta}\ , \label{rfit}
\end{equation}
where, if $I_{\nu}\left( z\right)$ is the modified Bessel function of the first kind,
\begin{equation}
N\left( \gamma,\delta\right) =\frac{\gamma^{\frac{1}{2}+\delta}e^{\frac{\gamma}{2}}}{\sqrt{\pi}\Gamma\left( 1+\delta\right) I_{\frac{1}{2}+\delta}\left( \frac{\gamma}{2}\right) }\ ,
\end{equation}
with $\delta=a,\ c^{\prime}=N\left( \gamma,\delta\right)$ and $c^{\prime\prime}=N\left( \gamma,\delta\right) e^{-\gamma}.$ Numerical computations performed on about $10^{5}$ values of the Casimir maxima allowed us to estimate the parameters describing the local behavior of the map listed at the beginning of this section:

\begin{itemize}
\item $\alpha^{\prime}\simeq1.113\ ,\ \alpha\simeq0.4603\ ;$
\item $B^{\prime}\simeq0.3095\ ,\ B\simeq0.2856\ .$
\end{itemize}

Therefore, we get $B^{\ast}=B^{\prime}$ and $\delta\simeq2.2258.$ The fit of the empirical stationary distribution function performed with these parameters turns out to be in good agreement with the functional expression for the invariant density (\ref{rfit}), and the estimated value for $\gamma$ is $\gamma\simeq4.26.$

\begin{figure}[htbp]
\centering
\resizebox{0.75\textwidth}{!}{\includegraphics{fig2.jpg}}
\caption{Fit of the invariant density $\rho\left( x\right)$ for the map $T$ with the function given in (\ref{rfit}).}
\label{fig:2}
\end{figure}

An interesting question is to locate the maximum of the density $\rho.$ Numerical investigations suggest that this maximum belongs to $\left[ a_{0}^{\prime},x_{0}\right]$ (see fig. 2), depending on the parameters which define the map $T.$
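The fitted expression (\ref{rfit}) is also easy to evaluate numerically. The following minimal Python sketch (the SciPy routines \texttt{iv}, \texttt{gamma} and \texttt{quad} are assumed to be available; the parameter values are the estimates $\delta\simeq2.2258$ and $\gamma\simeq4.26$ quoted above) builds $N(\gamma,\delta)$ from the modified Bessel function and checks that the resulting density integrates to one:

\begin{verbatim}
import numpy as np
from scipy.special import iv, gamma as Gamma
from scipy.integrate import quad

def N(g, d):
    # normalizing constant N(gamma, delta) defined after (rfit)
    return g ** (0.5 + d) * np.exp(g / 2.0) / (
        np.sqrt(np.pi) * Gamma(1.0 + d) * iv(0.5 + d, g / 2.0))

def rho(x, g=4.26, d=2.2258):
    # fitted invariant density rho(x) = N exp(-g x) x^d (1 - x)^d
    return N(g, d) * np.exp(-g * x) * x ** d * (1.0 - x) ** d

total, _ = quad(rho, 0.0, 1.0)
print(total)                            # ~1.0, as it should be
print(N(4.26, 2.2258))                  # c'
print(N(4.26, 2.2258) * np.exp(-4.26))  # c'' = c' e^{-gamma}
\end{verbatim}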
\subsection{Return times}

In section II and in section IV B of \cite{PM}, the periodic orbits of the system, due to its invariance under $R,$ have been empirically classified by specifying that the initial condition belongs to the half space containing, say, the fixed point $c_{1}$ and the number of rotations they perform around the fixed point $c_{2}$ (cf. \cite{PM}, figs. 2 and 11b). In particular, labeling as $\Sigma_{+}$ the portion of $\Sigma$ lying in the half space containing $c_{1},$ it can be shown by direct inspection that fig. 11b in \cite{PM}, which represents the map of the set of maximum values of $C$ in itself associated to periodic trajectories starting from $\Sigma_{+}$ after $k$ rotations around $c_{2},$ is exactly the graph of the induced map of $T$ in the appropriate scale. Therefore, the distribution of the number of times a trajectory of the system, starting from $\Sigma_{+},$ winds around $c_{2}$ before hitting again $\Sigma_{+}$ or, equivalently, starting from $\Sigma_{-},$ winds around $c_{1}$ before hitting again $\Sigma_{-},$ is the same as that of the random variable $\tau_{\left( x_{0},1\right) }\left( x\right),\ x\in\left( x_{0},1\right),$ namely the return time to $\left( x_{0},1\right)$ starting from $x$ under the dynamics induced by $T.$ In terms of the already constructed invariant measure $\mu,$ this probability is given by
\begin{align}
\mu(\tau_{(x_{0},1)}(x) & \geq n\ ;\ x\in(x_{0},1))=\sum_{l=n}^{\infty}\mu(\tau_{(x_{0},1)}(x)=l\ ;\ x\in(x_{0},1))\\
& =\sum_{l=n}^{\infty}\mu(a_{l-2},a_{l-1})\ .\nonumber
\end{align}
But the sum on the r.h.s. can be computed using the corresponding expression evaluated in (\ref{SL}) and we finally get
\begin{equation}
\mu(\tau_{(x_{0},1)}(x)\geq n\ ;\ x\in(x_{0},1))\approx\left( \alpha^{\prime}\right) ^{-\frac{n}{B^{\ast}}}\ .
\end{equation}
We remark that the distribution of $\tau_{\left( x_{0},1\right) }\left( x\right),\ x\in\left( x_{0},1\right),$ is the $\mu$-a.s. limit of the empirical distribution of the points appearing in fig. 2 of \cite{PM}. We also take this occasion to remark that the average time between two crossings of $\Sigma$ corresponds to the gap between the filled bands of points appearing in fig. 2 of \cite{PM}, which has been estimated to be about $0.66.$ Therefore, the period of the smallest periodic orbit of the Lorenz system is about $2\cdot0.66,$ in complete agreement with what is predicted by the perturbation theory developed in \cite{Lu} and the more rigorous estimate given in \cite{GT}.
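The tail computed above decays quite fast. For a quick numerical feel, the following short Python sketch (using the parameter values estimated in the previous subsection) evaluates $\left( \alpha^{\prime}\right) ^{-n/B^{\ast}}$ for a few values of $n$:

\begin{verbatim}
alpha_prime, B_star = 1.113, 0.3095

# predicted tail mu(tau >= n) ~ alpha_prime**(-n / B_star);
# the per-step decay factor alpha_prime**(1 / B_star) is about 1.41
for n in (2, 5, 10, 20):
    print(n, alpha_prime ** (-n / B_star))
\end{verbatim}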
\subsection{Statistical stability}

A slight change in the forcing term of the Lorenz equation will also change the shape of the associated map $T$ and therefore the invariant density associated to it, which will exist provided the perturbed map still satisfies (\ref{T_DT1}-\ref{T_DT4}). At the end of the section we will give two examples of such a perturbation of the forcing contribution to the Lorenz field, the first preserving the original symmetry of the Lorenz system, and the second breaking it. As already remarked in the introduction, this last type of perturbation has been empirically shown in \cite{CMP} to model the impact of anthropogenic forcing on the climate dynamics of the northern hemisphere, as well as the effect of the sea surface temperature on the Indian summer monsoon rainfall variability \cite{KDC}. Let us denote by $T_{\epsilon}$ the perturbed map. We show in this section that under suitable assumptions the density $\rho_{\epsilon}$ of the perturbed measure will converge to the density $\rho$ of the unperturbed one in the $L_{m}^{1}$ norm. This kind of property is known as \emph{statistical stability}. An earlier paper by Alves and Viana \cite{AV}, see also the subsequent paper by Alves \cite{Al}, addressed the question of statistical stability for a wide class of non-uniformly expanding maps. Their result is based on two assumptions: (i) the perturbed map belongs to an open neighborhood of the unperturbed one in the $C^{k}$ topology with $k\geq2,$ and (ii) the two maps are compared through their first return maps defined on the \emph{same} subset, where the first return maps are uniformly expanding, with bounded distortion and long branches. Moreover, the structural parameters of the perturbed map (especially those bounding the derivative and the distortion) can be chosen uniformly in a $C^{k}$ neighborhood of the unperturbed map. The main result of those papers is that when the perturbed map converges to the unperturbed one in the $C^{k}$ topology, the density of the absolutely continuous invariant perturbed measure converges to the density of the unperturbed measure in the $L_{m}^{1}$ norm. Here we prove the same result but allowing the perturbed map to be close to the unperturbed one in the $C^{0}$ topology only. We will make use of induction but, in order to preserve the Markov structure of the first return map, we will compare the perturbed and the unperturbed first return maps on \emph{different} induction subsets. The difficulty will therefore arise in the comparison of the Perron-Frobenius operators, which will now be defined on different functional spaces. The proof we give is inspired by the recent work \cite{BV}, but it contains the important improvement of changing the domains of induction. Contrary to \cite{AV}, we are not able to establish the continuity of the map $T_{\epsilon}\mapsto\rho_{\epsilon},$ and this is surely due to the fact that we only require the maps to be $C^{0}$ close. On the other hand, discarding regularity allows us to cover a much wider class of examples; we believe in fact that our techniques could be used to prove the statistical stability for general classes of maps with some sort of criticalities and singularities.\newline

\textbf{Assumptions on the perturbed map}

\begin{itemize}
\item[\emph{Assumption A}] $T_{\epsilon}$ is a Markov map of the unit interval which is one-to-one and onto on the intervals $[0,x_{\epsilon,0})$ and $(x_{\epsilon,0},1],$ convex on both sides and of class $C^{1+\iota_{\epsilon}}$ on the open set $(0,x_{\epsilon,0})\cup(x_{\epsilon,0},1).$

\item[\emph{Assumption B}] Let $\left\Vert \cdot\right\Vert _{0}$ denote the $C^{0}$-norm on the unit interval; then
\begin{equation}
\lim_{\epsilon\rightarrow0}\left\Vert T_{\epsilon}-T\right\Vert _{0}=0\ .
\end{equation}
Moreover, $\forall x\in\lbrack0,1],\ x\neq x_{0},$ we can find $\epsilon(x)$ such that, $\forall\epsilon<\epsilon(x),\ DT_{\epsilon}$ exists and is finite and we have
\begin{equation}
\lim_{\epsilon\rightarrow0}DT_{\epsilon}(x)=DT(x)\ .
\end{equation}
Furthermore,
\begin{equation}
\lim_{x\rightarrow x_{0}^{+}}\lim_{\epsilon\rightarrow0}\frac{DT_{\epsilon}(x)}{DT(x)}=\lim_{x\rightarrow x_{0}^{-}}\lim_{\epsilon\rightarrow0}\frac{DT_{\epsilon}(x)}{DT(x)}=1\ .
\end{equation}

\item[\emph{Assumption C}] Let us denote by $C_{h,\epsilon}$ and $\iota_{\epsilon}$ respectively the H\"{o}lder constant and the H\"{o}lder exponent for the derivative of $T_{\epsilon}$ on the open set $(0,x_{\epsilon,0})\cup(x_{\epsilon,0},1);$ namely: $|DT_{\epsilon}(x)-DT_{\epsilon}(y)|\leq C_{h,\epsilon}|x-y|^{\iota_{\epsilon}}$ for any $x,y$ either in $(0,x_{\epsilon,0})$ or in $(x_{\epsilon,0},1).$ We assume $C_{h,\epsilon}$ and $\iota_{\epsilon}$ to converge to the corresponding quantities for $T$ in the limit $\epsilon\rightarrow0.$

\item[\emph{Assumption D}] Let us set $d_{(\epsilon,1,0)}:=\inf_{(b_{\epsilon,1},a_{\epsilon,0})}|DT_{\epsilon}(x)|.$ We assume $d_{(\epsilon,1,0)}>1$ and that there exist a constant $d_{c}$ and $\epsilon_{c}=\epsilon\left( d_{c}\right)$ such that, $\forall\epsilon<\epsilon_{c},\ |d_{(1,0)}-d_{(\epsilon,1,0)}|<d_{c}.$
\end{itemize}

\emph{Remark on the notation.} To simplify the notation we will set $W_{n}^{\prime}=(a_{n}^{\prime},a_{n-1}^{\prime}),\ W_{n}=(a_{n},a_{n+1}),$ and we will denote by $W_{\epsilon,n}^{\prime}=(a_{\epsilon,n}^{\prime},a_{\epsilon,n-1}^{\prime})$ and $W_{\epsilon,n}=(a_{\epsilon,n},a_{\epsilon,n+1})$ the corresponding perturbed intervals, where $a_{\epsilon,n}^{\prime}$ and $a_{\epsilon,n},\ n>1,$ are the preimages of the maximum point $x_{\epsilon,0}.$ We also set
\begin{equation}
Z_{\epsilon,1}=Z_{\epsilon,1}^{1}\cup Z_{\epsilon,1}^{2}\ ;\ Z_{\epsilon,1}^{1}:=(a_{\epsilon,0}^{\prime},b_{\epsilon,1}^{\prime})\ ,\ Z_{\epsilon,1}^{2}:=(b_{\epsilon,1},a_{\epsilon,0})
\end{equation}
and
\begin{equation}
Z_{\epsilon,n}=Z_{\epsilon,n}^{1}\cup Z_{\epsilon,n}^{2}\ ;\ Z_{\epsilon,n}^{1}:=(b_{\epsilon,n-1}^{\prime},b_{\epsilon,n}^{\prime})\ ,\ Z_{\epsilon,n}^{2}:=(b_{\epsilon,n},b_{\epsilon,n-1}),
\end{equation}
where $T_{\epsilon}b_{\epsilon,n}^{\prime}=T_{\epsilon}b_{\epsilon,n}=a_{\epsilon,n-1}.$ The same notation will be used for the corresponding unperturbed intervals. We denote by $I_{\epsilon}:=(a_{\epsilon,0}^{\prime},a_{\epsilon,0})\backslash\left\{ x_{\epsilon,0}\right\}$ the interval where we will induce with the first perturbed return map. From now on, we will denote by $F$ the first return map of $T$ over $I,$ by $F_{\epsilon}$ the first return map of $T_{\epsilon}$ on $I_{\epsilon},$ and by $P$ and $P_{\epsilon}$ the Perron-Frobenius operators associated respectively with $F$ and $F_{\epsilon}.$ If $t\in T^{-n}z,$ where $t=T_{i_{n}}^{-1}\circ T_{i_{n-1}}^{-1}\dots\circ T_{i_{1}}^{-1}z$ with $i_{k}=1$ or $2,$ we will call the sequence $i_{1},\dots,i_{n}$ the \emph{signature} of $t$ relative to $z.$

\begin{remark}\label{FR}
The preceding assumptions imply that the order of tangency of $T_{\epsilon}$ at $0,\ x_{\epsilon,0}$ and $1$ tends, in the limit $\epsilon\rightarrow0,$ to that of $T.$ This is the first requirement to get again Lemma 1 for the perturbed map. The other requirement is expressed by Assumption D, which guarantees condition (i) in Lemma 1. Notice that this condition cannot be deduced from Assumptions (A)-(C). On the other hand, the assumptions (ii) and (iii) of Lemma 1 are still valid for the perturbed map, since we only have to control a finite number of relations among the corresponding derivatives. For instance, by using Assumptions B and C, we have
\begin{equation}
|DT(a_{l}^{\prime})-DT_{\epsilon}(a_{\epsilon,l}^{\prime})|\leq|DT_{\epsilon}(a_{l}^{\prime})-DT_{\epsilon}(a_{\epsilon,l}^{\prime})|+|DT_{\epsilon}(a_{l}^{\prime})-DT(a_{l}^{\prime})|\ .
\end{equation}
The first term on the right hand side of this inequality is controlled by the H\"{o}lder continuity of the derivative of $T_{\epsilon},$ while the second one is controlled by the local convergence of $DT_{\epsilon}$ to $DT$ in the limit $\epsilon\rightarrow0.$ However, we need some more information on the first return maps, which is summarized in the following Lemma.
\end{remark}

\begin{lemma}\label{SL}
\begin{itemize}
\item[(i)] For any $n\geq0,$ let $t_{n}$ and $t_{\epsilon,n}$ be two preimages of order $n$ of $x_{0}$ and $x_{\epsilon,0}$ respectively, with the same signature with respect to these two points. Then $\lim_{\epsilon\rightarrow0}t_{\epsilon,n}=t_{n}.$

\item[(ii)] For any $n>0$ we have:
\begin{equation}
\lim_{\epsilon\rightarrow0}\left\Vert T_{\epsilon}^{n}-T^{n}\right\Vert _{0}=0\,.
\end{equation}

\item[(iii)] For any $x\notin\cup_{k=0}^{\infty}T^{-k}x_{0}$ and $n>0$ there exists $\epsilon(x,n)$ such that, for any $\epsilon<\epsilon(x,n),$ the derivative $DT_{\epsilon}^{n}(x)$ exists and is finite and moreover
\begin{equation}
\lim_{\epsilon\rightarrow0}DT_{\epsilon}^{n}(x)=DT^{n}(x)\,.
\end{equation}

\item[(iv)] For any $n\geq1,$ let $[u_{n},v_{n}],[u_{\epsilon,n},v_{\epsilon,n}]\subset\left[ 0,1\right]$ be such that $u_{\epsilon,n}\rightarrow u_{n},v_{\epsilon,n}\rightarrow v_{n}$ in the limit $\epsilon\rightarrow0$ and $T_{\epsilon}^{n}\upharpoonleft_{\lbrack u_{\epsilon,n},v_{\epsilon,n}]},T^{n}\upharpoonleft_{\lbrack u_{n},v_{n}]}$ are injective on the respective images. Then, setting for any $y\in T_{\epsilon}^{n}([u_{\epsilon,n},v_{\epsilon,n}])\cap T^{n}([u_{n},v_{n}]),$
\begin{equation}
T_{\epsilon}^{-\left( n\right) }:=(T_{\epsilon}^{n}\upharpoonleft_{\left[ u_{\epsilon,n},v_{\epsilon,n}\right] })^{-1},\quad T^{-\left( n\right) }:=(T^{n}\upharpoonleft_{\left[ u_{n},v_{n}\right] })^{-1}\ ,
\end{equation}
we have $T_{\epsilon}^{-\left( n\right) }(y)\rightarrow T^{-\left( n\right) }(y)$ in the limit $\epsilon\rightarrow0.$
\end{itemize}
\end{lemma}

\begin{proof}
\begin{itemize}
\item[(i)] We prove it for $n=0;$ for $n\geq1$ the proof follows by induction. Suppose $x_{\epsilon,0}$ does not converge to $x_{0};$ then, passing to subsequences, by compactness there exist a subsequence $\epsilon_{n}$ and a point $\tilde{x}\neq x_{0}$ such that $x_{\epsilon_{n},0}\rightarrow\tilde{x}$ for $n\rightarrow\infty$. At such a point $T(\tilde{x})<1,$ since $T$ has only one maximum, located at $x_{0}.$ Now, $|T_{\epsilon_{n}}(x_{\epsilon_{n},0})-T(\tilde{x})|=|1-T(\tilde{x})|>0.$ We now fix $\sigma>0$ and choose $n$ large enough, depending on $\sigma,$ in such a way that by uniform convergence we get
\begin{align}
\left\vert T_{\epsilon_{n}}(x_{\epsilon_{n},0})-T(\tilde{x})\right\vert & =\left\vert T_{\epsilon_{n}}(x_{\epsilon_{n},0})-T(x_{\epsilon_{n},0})+T(x_{\epsilon_{n},0})-T_{\epsilon_{n}}(\tilde{x})\right. \\
& \left. +T_{\epsilon_{n}}(\tilde{x})-T(\tilde{x})\right\vert \nonumber\\
& \leq2\left\Vert T_{\epsilon_{n}}-T\right\Vert _{0}+|T(x_{\epsilon_{n},0})-T_{\epsilon_{n}}(\tilde{x})|\nonumber\\
& \leq2\sigma+|T(x_{\epsilon_{n},0})-T_{\epsilon_{n}}(\tilde{x})|\ .\nonumber
\end{align}
In the limit $n\rightarrow\infty$ the second term on the right hand side of the previous inequality goes to zero by Assumption B and by the continuity of $T.$ We finally send $\sigma$ to zero, getting a contradiction with the above strictly positive lower bound.

\item[(ii)] The proof is standard, by induction, and uses the uniform continuity of $T^{n}$ on the closed unit interval.
\item[(iii)] We use induction again. Suppose the limit holds for $n.$ Then we write
\begin{gather}
|DT_{\epsilon}^{n+1}(x)-DT^{n+1}(x)|=\\
\left\vert DT_{\epsilon}(T_{\epsilon}^{n}(x))DT_{\epsilon}^{n}(x)-DT(T^{n}(x))DT^{n}(x)+\right. \nonumber\\
\left. DT_{\epsilon}(T^{n}(x))DT_{\epsilon}^{n}(x)-DT_{\epsilon}(T^{n}(x))DT_{\epsilon}^{n}(x)\right\vert \ .\nonumber
\end{gather}
Now, we know that: (a) $x\notin\cup_{k=0}^{n-1}T^{-k}x_{0}$ by assumption, and also (b) $x\notin\cup_{k=0}^{n-1}T_{\epsilon}^{-k}x_{\epsilon,0},$ since by the induction assumption the derivative $DT_{\epsilon}^{n}(x)$ is well defined at $x\notin\cup_{k=0}^{\infty}T^{-k}x_{0}$. We need to take $\epsilon$ even smaller than a certain $\epsilon(x,n)$ to guarantee that $|DT_{\epsilon}^{n+1}(x)|$ is well defined too. This is easily achieved since the preimages of $x_{0}$ and $x_{\epsilon,0}$ converge to each other according to their signatures, and by choosing $\epsilon$ small enough, depending on $x$ and $n,$ we can guarantee (a) and (b) at the same time. We can now bound the previous expression by:
\begin{align}
& |DT_{\epsilon}^{n}(x)||DT_{\epsilon}(T_{\epsilon}^{n}(x))-DT_{\epsilon}(T^{n}(x))|\\
& +|DT_{\epsilon}(T^{n}(x))DT_{\epsilon}^{n}(x)-DT(T^{n}(x))DT^{n}(x)|\ .\nonumber
\end{align}
The second term converges to zero by the induction assumption. The first term can be bounded making use of the H\"{o}lder continuity assumption on the derivative, namely
\begin{equation}
|DT_{\epsilon}(T_{\epsilon}^{n}(x))-DT_{\epsilon}(T^{n}(x))|\leq C_{h,\epsilon}|T_{\epsilon}^{n}(x)-T^{n}(x)|^{\iota_{\epsilon}}\ ,
\end{equation}
and of Assumption C, which assures that $C_{h,\epsilon}$ and $\iota_{\epsilon}$ converge to the corresponding quantities given for $T.$

\item[(iv)] Let us set $y_{n}:=T^{-\left( n\right) }(y)\in\lbrack u_{n},v_{n}]$ and $y_{\epsilon,n}:=T_{\epsilon}^{-\left( n\right) }(y)\in\lbrack u_{\epsilon,n},v_{\epsilon,n}].$ Suppose $y_{\epsilon,n}$ does not converge to $y_{n}.$ Then, by passing again to subsequences and by compactness, we can find $\tilde{y}\neq y_{n}$ such that $\lim_{k\rightarrow\infty}y_{\epsilon_{k},n}=\tilde{y}.$ But $y=T_{\epsilon_{k}}^{n}(y_{\epsilon_{k},n})=T_{\epsilon_{k}}^{n}(y_{\epsilon_{k},n})-T_{\epsilon_{k}}^{n}(\tilde{y})+T_{\epsilon_{k}}^{n}(\tilde{y}).$ For $k$ going to infinity the last term tends to a value different from $y,$ since $T$ is injective over $[u_{n},v_{n}],$ while the first difference goes to zero by (ii) above.
\end{itemize}
\end{proof}

It is clear that with the previous assumptions the map $T_{\epsilon}$ admits a unique absolutely continuous invariant measure with density $\rho_{\epsilon}.$ This density is related to the invariant density $\hat{\rho}_{\epsilon}$ of the first return map $F_{\epsilon}$ on $I_{\epsilon}$ by the formula (\ref{FF}), with normalizing constant $C_{\epsilon,r}.$ Our next result is the statistical stability of the unperturbed density, namely

\begin{proposition}
\begin{equation}
\lim_{\epsilon\rightarrow0^{+}}\left\Vert \rho-\rho_{\epsilon}\right\Vert _{L_{m}^{1}}=0\,.
\end{equation}
\end{proposition}

\begin{proof}
The proof is divided into two parts.
The second part, which concerns the comparison of the invariant densities outside the regions of induction, will follow closely the proof of an analogous result given in \cite{BV}; in our case, however, the proof will be easier, since the quantities we are going to consider have an exponential tail, contrary to the corresponding ones analysed in \cite{BV}, where the presence of a neutral fixed point forced those quantities to decay polynomially fast. The first part concerns the comparison of the invariant densities inside the regions of induction, and this part is new.

\begin{itemize}
\item[\emph{First part}] Let us suppose, without restriction, that the induction sets $I=(a_{0}^{\prime},a_{0})\backslash\{x_{0}\}$ and $I_{\epsilon}=(a_{\epsilon,0}^{\prime},a_{\epsilon,0})\backslash\{x_{\epsilon,0}\}$ verify $a_{\epsilon,0}^{\prime}<a_{0}^{\prime},\ a_{\epsilon,0}<a_{0}.$ In the following, to ease the notation, we will simply write $dx$ instead of $dm(x)$ for the (normalized) Lebesgue measure on $[0,1]$ and, for any interval $J\subset\left[ 0,1\right],$ we will set $\left\vert J\right\vert :=m\left( J\right).$ We begin by bounding
\begin{equation}
\int_{I\cap I_{\epsilon}}|\hat{\rho}\left( x\right) -\hat{\rho}_{\epsilon}\left( x\right) |dx\ .\label{E1}
\end{equation}
In footnote 3 we defined the Banach spaces $B(I)$ and $B(I_{\epsilon}),$ which are invariant respectively under the action of the Perron-Frobenius operators $P$ and $P_{\epsilon}.$ The densities $\hat{\rho}$ and $\hat{\rho}_{\epsilon}$ belong respectively to these spaces and they are Lipschitz continuous on the open sets $(a_{0}^{\prime},x_{0})\cup(x_{0},a_{0})$ and $(a_{\epsilon,0}^{\prime},x_{\epsilon,0})\cup(x_{\epsilon,0},a_{\epsilon,0}).$ In fact we have to consider the action of the Perron-Frobenius operators on a larger functional space, namely that of functions of bounded variation. It is a standard result that the Perron-Frobenius operator associated to Gibbs-Markov maps with bounded distortion leaves this space invariant and moreover satisfies a Lasota-Yorke inequality for the complete norm given by the sum of the total variation and the $L_{m}^{1}$ norm; see for instance \cite{AB} for an account of these results. We denote by $BV(I)$ and $BV(I_{\epsilon})$ the Banach spaces of functions of bounded variation defined respectively on the induction sets $I$ and $I_{\epsilon},$ and by $\left\Vert \cdot\right\Vert _{BV(I)},\ \left\Vert \cdot\right\Vert _{BV(I_{\epsilon})}$ the respective norms. We remark that the Lebesgue measure associated to these norms should be understood as normalized to the sets $I$ and $I_{\epsilon}.$ Since the Perron-Frobenius operators $P$ and $P_{\epsilon}$ are quasi-compact on $BV(I)$ and $BV(I_{\epsilon})$ respectively, we know that, in the limit $n\rightarrow\infty,$
\begin{align}
\left\Vert P^{n}\mathbf{1}_{I}-\hat{\rho}\right\Vert _{BV(I)} & \rightarrow0\ ,\label{E2}\\
\left\Vert P_{\epsilon}^{n}\mathbf{1}_{I_{\epsilon}}-\hat{\rho}_{\epsilon}\right\Vert _{BV(I_{\epsilon})} & \rightarrow0\ .\label{E3}
\end{align}
It will be important for what follows that the convergence in the two previous limits be uniform with respect to $\epsilon$ in the $L_{m}^{\infty},$ and therefore in the $L_{m}^{1},$ norm. This is guaranteed by the results in \cite{LSV}, in particular Lemmas 4.8 and 4.11. As a matter of fact, our first return Gibbs-Markov maps fit the assumptions of the \emph{covering systems} with countably many branches investigated in \cite{LSV}.
In particular, it can be proven that there exist two constants $C$ and $\Lambda$ such that $\left\Vert P^{n}\mathbf{1}_{I}-\hat{\rho}\right\Vert _{\infty}\leq C\Lambda^{n},$ where the constant $C$ and the rate $\Lambda$ have an explicit and $C^{\infty}$ dependence on some parameters characterizing the map and its expanding properties\footnote{These constants can be explicitly computed using the Hilbert metric approach. In particular $C=(1+a)D_{H}e^{D_{H}\Lambda^{-2N_{0}}}\Lambda^{-2N_{0}},$ and $\Lambda=\left( \tanh\frac{D_{H}}{4}\right) ^{\frac{1}{N_{0}}}$. The integer $N_{0}$ ensures that the hyperbolic diameter of the iterate $P^{N_{0}}$ of a certain cone of bounded variation functions is finite and bounded by $D_{H}.$ In particular, $a,\Delta$ and $D_{H}$ are smooth functions of the quantities $\nu$ and $D$ entering the Lasota-Yorke inequality (see next footnote).}. Therefore, given $\eta>0,$ we can choose $n$ large enough, depending on $\eta,$ such that
\begin{align}
\int_{I\cap I_{\epsilon}}|\hat{\rho}-\hat{\rho}_{\epsilon}|dx & =\int_{I\cap I_{\epsilon}}\left\vert \hat{\rho}-P^{n}\mathbf{1}_{I}+P^{n}\mathbf{1}_{I}+P_{\epsilon}^{n}\mathbf{1}_{I_{\epsilon}}-P_{\epsilon}^{n}\mathbf{1}_{I_{\epsilon}}-\hat{\rho}_{\epsilon}\right\vert dx\nonumber\\
& \leq2\eta+\int_{I\cap I_{\epsilon}}\left\vert P^{n}\mathbf{1}_{I}-P_{\epsilon}^{n}\mathbf{1}_{I_{\epsilon}}\right\vert dx\ .\label{E5}
\end{align}
Let us introduce, for $n\geq2,$
\begin{equation}
\hat{\rho}_{n}:=P^{n-1}\mathbf{1}_{I}\ ;\ \hat{\rho}_{\epsilon,n}:=P_{\epsilon}^{n-1}\mathbf{1}_{I_{\epsilon}}
\end{equation}
and finally $\tilde{\rho}_{n}:=\hat{\rho}_{n}$ on $I\cap I_{\epsilon}$ and $\tilde{\rho}_{n}:=a_{n}$ on $I_{\epsilon}\backslash(I\cap I_{\epsilon}),$ where $a_{n}=\lim_{x\rightarrow a_{0}^{\prime+}}\hat{\rho}_{n}(x).$ Notice that this right limit exists since $\hat{\rho}_{n}$ is Lipschitz continuous on $(a_{0}^{\prime},x_{0}),$ and moreover $\tilde{\rho}_{n}\in BV(I_{\epsilon}),$ as proven in Section 2.
We remark that the need of considering $BV\left( I_{\epsilon}\right)$ follows from the fact that $\tilde{\rho}_{n}$ could be discontinuous in $x_{\epsilon,0}.$ Let us rewrite the second term in (\ref{E5}) as
\begin{gather}
\int_{I\cap I_{\epsilon}}\left\vert P^{n}\mathbf{1}_{I}-P_{\epsilon}^{n}\mathbf{1}_{I_{\epsilon}}\right\vert dx=\int_{I\cap I_{\epsilon}}|P\hat{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx\label{E6}\\
\leq\int_{I\cap I_{\epsilon}}\left\vert P\hat{\rho}_{n}-P_{\epsilon}\tilde{\rho}_{n}\right\vert dx+\int_{I\cap I_{\epsilon}}|P_{\epsilon}\tilde{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx\ .\nonumber
\end{gather}
We now consider the term $\int_{I\cap I_{\epsilon}}|P_{\epsilon}\tilde{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx;$ by the positivity and the contraction in $L_{m}^{1}$ of the Perron-Frobenius operator, we have
\begin{gather}
\int_{I\cap I_{\epsilon}}|P_{\epsilon}\tilde{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx\leq\int_{I_{\epsilon}}|P_{\epsilon}\tilde{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx\leq\int_{I_{\epsilon}}|\tilde{\rho}_{n}-\hat{\rho}_{\epsilon,n}|dx\\
\leq\int_{I_{\epsilon}\cap I}|\hat{\rho}_{n}-\hat{\rho}_{\epsilon,n}|dx+\int_{I_{\epsilon}\backslash(I_{\epsilon}\cap I)}|a_{n}-\hat{\rho}_{\epsilon,n}|dx\nonumber\\
=\int_{I_{\epsilon}\cap I}|P\hat{\rho}_{n-1}-P_{\epsilon}\hat{\rho}_{\epsilon,n-1}|dx+\int_{I_{\epsilon}\backslash(I_{\epsilon}\cap I)}|a_{n}-\hat{\rho}_{\epsilon,n}|dx\nonumber\\
\leq\int_{I_{\epsilon}\cap I}|P\hat{\rho}_{n-1}-P_{\epsilon}\hat{\rho}_{\epsilon,n-1}|dx+m(I_{\epsilon}\backslash(I_{\epsilon}\cap I))(||\hat{\rho}_{n}||_{\infty}+||\hat{\rho}_{\epsilon,n}||_{\infty})\nonumber
\end{gather}
where the $L_{m}^{\infty}$-norm should be understood in terms of the normalized Lebesgue measures respectively on $I$ and $I_{\epsilon}.$ But each of these norms is bounded by the Banach norm and in particular for $\hat{\rho}_{n}$ we have, by the Lasota-Yorke inequality,
\begin{equation}
\left\Vert \hat{\rho}_{n}\right\Vert _{\infty}\leq\left\Vert \hat{\rho}_{n}\right\Vert _{BV(I)}\leq\left\Vert P^{n-1}\mathbf{1}_{I}\right\Vert _{BV(I)}\leq\nu^{n-1}\left\Vert \mathbf{1}_{I}\right\Vert _{BV(I)}+D\ .\label{LY}
\end{equation}
This last quantity, for all $n$ large enough, is less than a constant $C_{2},$ and the same argument also applies to $\left\Vert \hat{\rho}_{\epsilon,n}\right\Vert _{\infty}.$ Moreover, denoting by $C_{2}$ and $C_{\epsilon,2}$ the constants bounding (\ref{LY}) in the unperturbed and perturbed case, for $\epsilon$ sufficiently small we have that the difference $|C_{2}-C_{\epsilon,2}|$ is bounded by a constant independent of $\epsilon$\footnote{The constants $\nu<1$ and $D$ are in fact explicitly determined in terms of the map; we defer to \cite{AB} for the details.
To compare with what is stated in \cite{AB}, we need to show that there exists a power $n_{0}$ of the first return map $F$ having the absolute value of its derivative uniformly larger than $2.$ In the case of interest, this follows easily from the proof of Lemma 1 by combining the Markov structure of $F$ with the lower bound for the absolute value of its derivative, which is uniformly larger than $1$ and which is an explicit function of the parameters describing the local behavior of the map $T,$ in particular $\alpha^{\prime},\alpha$ and $d_{\left( 1,0\right) }.$ The quantities $\nu$ and $D$ are then functions of the lower bound of $\left\vert DF^{n_{0}}\right\vert$ and of the constant, which we denote by $D^{\prime},$ appearing in the Adler condition. This last condition is equivalent to proving that $T$ has bounded distortion and we defer to \cite{CHMV}, where the constant bounding the distortion is explicitly determined as a function of the parameters defining the map. In the present case, a simple inspection of the proof in \cite{CHMV} shows that such a constant is a multiple of $d_{\left( 1,0\right) }.$ Hence, a contribution to $D^{\prime}$ comes from $d_{\left( 1,0\right) },$ while the other one \cite{CHMV} comes from the divergent behavior of the second derivative close to the fixed point. However, in our case, the H\"{o}lder continuity assumption on the first derivative of the map and the exponential decay of the length of the $Z_{i}$ make this second contribution simply bounded by $1.$}. By setting
\begin{equation}
G_{l}:=\int_{I_{\epsilon}\cap I}|P\hat{\rho}_{l}-P_{\epsilon}\tilde{\rho}_{l}|dx
\end{equation}
with $l=1,\cdots,n$ and $\hat{\rho}_{1}:=\mathbf{1}_{I},\ \hat{\rho}_{\epsilon,1}:=\mathbf{1}_{I_{\epsilon}},$ we have
\begin{equation}
\int_{I\cap I_{\epsilon}}|P\hat{\rho}_{n}-P_{\epsilon}\hat{\rho}_{\epsilon,n}|dx\leq\sum_{l=1}^{n}G_{l}+2(n-1)C_{2}\,m(I_{\epsilon}\backslash(I_{\epsilon}\cap I))
\end{equation}
where $m(I_{\epsilon}\backslash(I_{\epsilon}\cap I))=O(\epsilon).$\newline In order to compute the term $G_{l}$ we have to use the explicit structure of the Perron-Frobenius operator. In particular we have
\begin{gather}
\int_{I_{\epsilon}\cap I}|P\hat{\rho}_{n}-P_{\epsilon}\tilde{\rho}_{n}|dx=\int_{I_{\epsilon}\cap I}\left\vert \sum_{i\geq1}\frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF(F_{i}^{-1}x)|}-\sum_{i\geq1}\frac{\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}\right\vert dx\leq\nonumber\\
\int_{I_{\epsilon}\cap I}\sum_{i\geq1}\left\vert \frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF(F_{i}^{-1}x)|}-\frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}+\frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}-\frac{\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}\right\vert dx\,.\label{FC}
\end{gather}
Actually what we want to do is to compare the preimages of the perturbed and of the unperturbed first return maps whose direct images are defined on cylinders with the \emph{same} return times. This can always be done and in particular we will consider points $x$ whose perturbed and unperturbed preimages are both defined.
In this regard, it will be enough to erase from $I_{\epsilon}\cap I$ the open interval with endpoints $x_{\epsilon,0},x_{0},$ whose measure goes to zero in the limit $\epsilon\rightarrow0.$ We will prove that the sum in (\ref{FC}) is bounded uniformly in $\epsilon,$ in order to exchange the sum with the limit $\epsilon\rightarrow0.$ We recall that the perturbed and unperturbed induced first return maps are Gibbs-Markov with bounded distortion and that $\left\Vert \hat{\rho}_{n}\right\Vert _{\infty}<C_{2}.$ Therefore, on each interval $Z_{i}^{j}$ (resp. $Z_{\epsilon,i}^{j}$), $i\geq1,j=1,2,$ where $F$ (resp. $F_{\epsilon}$) is injective we have:

\begin{itemize}
\item For any $i\geq1;j=1,2$ and $\forall x,y\in F(Z_{i}^{j})$, we have $\frac{|DF(F_{i}^{-1}x)|}{|DF(F_{i}^{-1}y)|}\leq D_{1},$ and $\forall x,y\in F(Z_{\epsilon,i}^{j})$ we have $\frac{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}y)|}\leq D_{2}.$

\item For $\epsilon$ small enough, by the argument developed in footnote 4, the difference $|D_{1}-D_{2}|$ is bounded by a constant independent of $\epsilon.$

\item There exists $y\in Z_{i}^{j}$ (resp. $Z_{\epsilon,i}^{j}$) such that $\left\vert DF\left( y\right) \right\vert =\frac{|F(Z_{i}^{j})|}{\left\vert Z_{i}^{j}\right\vert }$ (resp. $\left\vert DF_{\epsilon}\left( y\right) \right\vert =\frac{|F_{\epsilon}(Z_{\epsilon,i}^{j})|}{\left\vert Z_{\epsilon,i}^{j}\right\vert }$).
\end{itemize}

This immediately implies that the first term in (\ref{FC}) is bounded by
\begin{equation}
\int_{I_{\epsilon}\cap I}\sum_{i\geq1}\frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF(F_{i}^{-1}x)|}dx\leq C_{2}D_{1}\sum_{i\geq1}\frac{|Z_{i}|}{|F(Z_{i})|}\ .
\end{equation}
Similar bounds hold also for the other three terms in (\ref{FC}). We recall that the images of the $Z_{i}$ have length $(x_{0}-a_{0}^{\prime})$ and the sum over the $\left\vert Z_{i}\right\vert$'s gives the length of $I.$ We can therefore take the limit $\epsilon\rightarrow0$ in (\ref{FC}). Let us consider the first two terms in (\ref{FC}),
\begin{equation}
\sum_{i\geq1}\left\vert \frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF(F_{i}^{-1}x)|}-\frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}\right\vert =\sum_{i\geq1}\left\vert \frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF(F_{i}^{-1}x)|}\right\vert \left\vert 1-\frac{|DF(F_{i}^{-1}x)|}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}\right\vert \ .
\end{equation}
We can bound this quantity making use of Lemma (\ref{SL}), parts (iii) and (iv), first and then by observing that the point $F_{i}^{-1}x$ does not coincide with $x_{0}.$ Let us set $w:=F_{i}^{-1}x,\ w_{\epsilon}:=F_{\epsilon,i}^{-1}x$ and $F=T^{i}.$ Then,
\begin{equation}
\left\vert \frac{DF(F_{i}^{-1}x)}{DF_{\epsilon}(F_{\epsilon,i}^{-1}x)}\right\vert =\prod_{m=0}^{i-1}\left\vert \frac{DT(T^{m}w)DT_{\epsilon}(T^{m}w)}{DT_{\epsilon}(T_{\epsilon}^{m}w_{\epsilon})DT_{\epsilon}(T^{m}w)}\right\vert \label{DD}
\end{equation}
We notice that $\left\vert T^{m}w-T_{\epsilon}^{m}w_{\epsilon}\right\vert =O\left( \epsilon\right),$ that the intervals with endpoints $T^{i}w$ and $T_{\epsilon}^{i}w_{\epsilon}$ do not contain $x_{\epsilon,0},$ and that their length tends to zero when $\epsilon$ vanishes.
Therefore, (\ref{DD}) is bounded by
\begin{equation}
\prod_{m=0}^{i-1}\left\vert \frac{DT(T^{m}w)}{DT_{\epsilon}(T^{m}w)}\right\vert \exp\left[ \sum_{m=0}^{i-1}\frac{1}{|DT_{\epsilon}(y)|}C_{h,\epsilon}(\left\Vert T^{m}-T_{\epsilon}^{m}\right\Vert _{0}^{\iota_{\epsilon}}+|T^{m}w-T^{m}w_{\epsilon}|)\right] \ ,
\end{equation}
where $y$ is a point between $T^{i}w$ and $T_{\epsilon}^{i}w_{\epsilon}.$ Hence, by Assumption B, this term tends to $1$ in the limit $\epsilon\rightarrow0$\footnote{Actually $y$ depends on $\epsilon,\ y=y_{\epsilon}.$ Setting $y^{\ast}:=\lim_{\epsilon\rightarrow0}y_{\epsilon},$ we get
\[
|DT_{\epsilon}(y_{\epsilon})-DT(y^{\ast})|\leq|DT_{\epsilon}(y_{\epsilon})-DT_{\epsilon}(y^{\ast})|+|DT_{\epsilon}(y^{\ast})-DT(y^{\ast})|\ .
\]
The first term on the r.h.s. can be bounded making use of the H\"{o}lder continuity assumption on $DT_{\epsilon},$ the second making use of Assumption B.}.\newline Moreover, for the other pair of terms in (\ref{FC}) we have
\begin{equation}
\sum_{i\geq1}\left\vert \frac{\hat{\rho}_{n}(F_{i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}-\frac{\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}\right\vert \leq\sum_{i\geq1}\frac{1}{|DF_{\epsilon}(F_{\epsilon,i}^{-1}x)|}|\hat{\rho}_{n}(F_{i}^{-1}x)-\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)|\ .
\end{equation}
We remark that the function $\tilde{\rho}_{n}$ is a continuous extension of $\hat{\rho}_{n}$ to $I_{\epsilon}\backslash(I\cap I_{\epsilon})$ and therefore we can rewrite
\begin{equation}
|\hat{\rho}_{n}(F_{i}^{-1}x)-\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)|=|\tilde{\rho}_{n}(F_{i}^{-1}x)-\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)|\ ,
\end{equation}
where $\tilde{\rho}_{n}$ is now defined on $I\cup I_{\epsilon}.$ This function is continuous on $I\cup I_{\epsilon}\backslash\{x_{0}\}$ and, by part (iv) of Lemma (\ref{SL}), $\lim_{\epsilon\rightarrow0^{+}}|\tilde{\rho}_{n}(F_{i}^{-1}x)-\tilde{\rho}_{n}(F_{\epsilon,i}^{-1}x)|=0.$\newline To summarize: for $n$ larger than a certain $n(\eta),$
\begin{equation}
(\ref{E5})\leq2\eta+\sum_{l=1}^{n}G_{l}+(n-1)O(\epsilon)\ ;
\end{equation}
each $G_{l}$ is bounded uniformly w.r.t. $\epsilon$ and tends to zero as $\epsilon$ tends to zero. Therefore we can pass to the limit $\epsilon\rightarrow0$ and then $\eta\rightarrow0.$

\item[\emph{Second part}] According to the assumptions made at the beginning of the first part and without loss of generality, we will assume that all $W_{\epsilon,n}$ lie to the left of the corresponding $W_{n}.$ Therefore we have
\begin{gather}
\int_{\lbrack0,1]}|\rho-\rho_{\epsilon}|dx=\int_{I\cap I_{\epsilon}}|\rho-\rho_{\epsilon}|dx+\int_{I\cap W_{\epsilon,1}}|\rho-\rho_{\epsilon}|dx\nonumber\\
+\int_{I_{\epsilon}\cap W_{1}^{\prime}}|\rho-\rho_{\epsilon}|dx\nonumber\\
+\sum_{l=1}^{\infty}\left\{ \int_{W_{l}\cap W_{\epsilon,l}}|\rho-\rho_{\epsilon}|dx+\int_{W_{l}\backslash(W_{l}\cap W_{\epsilon,l})}|\rho-\rho_{\epsilon}|dx\right\} \nonumber\\
+\sum_{l=1}^{\infty}\left\{ \int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}|\rho-\rho_{\epsilon}|dx+\int_{W_{l}^{\prime}\backslash(W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime})}|\rho-\rho_{\epsilon}|dx\right\} \ .\label{maino}
\end{gather}
The densities are given in terms of the corresponding densities on the induced subsets and of the multiplicative constants $C_{r}$ and $C_{\epsilon,r}.$ Hence, we should first compare the latter.
Since they are surely smaller than $1,$ we have
\begin{equation}
|C_{r}-C_{\epsilon,r}|\leq\sum_{i=1}^{\infty}i\left\vert \int_{Z_{i}^{1}}\hat{\rho}\frac{dx}{m(I)}-\int_{Z_{\epsilon,i}^{1}}\hat{\rho}_{\epsilon}\frac{dx}{m(I_{\epsilon})}\right\vert \ .
\end{equation}
The same bound holds also choosing $Z_{i}^{2}$ ($Z_{\epsilon,i}^{2}$) instead of $Z_{i}^{1}$ ($Z_{\epsilon,i}^{1}$). The sum converges uniformly as a function of $\epsilon$ since the $L_{m}^{\infty}$ norms of $\hat{\rho}$ and $\hat{\rho}_{\epsilon}$ are bounded by $C_{2}$ and the lengths of the $Z_{i}^{1}$ and $Z_{\epsilon,i}^{1}$ decay exponentially fast. We now show that, passing to the limit $\epsilon\rightarrow0$ inside the sum, this bound vanishes. In this regard we rewrite the previous bound as
\begin{align}
& \sum_{i=1}^{\infty}i\left\vert \int_{Z_{i}^{1}\cap Z_{\epsilon,i}^{1}}\hat{\rho}\frac{dx}{m(I)}+\int_{Z_{i}^{1}\backslash(Z_{i}^{1}\cap Z_{\epsilon,i}^{1})}\hat{\rho}\frac{dx}{m(I)}\right. \\
& \left. -\int_{Z_{\epsilon,i}^{1}\cap Z_{i}^{1}}\hat{\rho}_{\epsilon}\frac{dx}{m(I_{\epsilon})}-\int_{Z_{\epsilon,i}^{1}\backslash(Z_{i}^{1}\cap Z_{\epsilon,i}^{1})}\hat{\rho}_{\epsilon}\frac{dx}{m(I_{\epsilon})}\right\vert \nonumber\\
& \leq\sum_{i=1}^{\infty}i\left[ 2C_{2}m(Z_{i}^{1}\Delta Z_{\epsilon,i}^{1})+C_{2}\left\vert \frac{1}{m(I)}-\frac{1}{m(I_{\epsilon})}\right\vert +\int_{Z_{i}^{1}\cap Z_{\epsilon,i}^{1}}|\hat{\rho}-\hat{\rho}_{\epsilon}|\frac{dx}{m(I_{\epsilon})}\right] \ .\nonumber
\end{align}
Each term in the last sum vanishes in the limit $\epsilon\rightarrow0;$ in particular, the third term tends to zero by what was stated in the first part of the proof.\newline Moreover, by Lemma 1 and by the fact that the derivatives of the maps $T$ and $T_{\epsilon}$ are strictly expanding in the neighborhood of $x_{0},$ for $x\in(0,a_{0}^{\prime})$ we get
\begin{equation}
\sum_{m=2}^{\infty}\sum_{l=1,2}\frac{1}{|DT^{m}(T_{l}^{-1}T_{2}^{-1}T_{1}^{-(m-2)}x)|}\leq C_{3}\frac{1}{\left( \alpha\alpha^{\prime}\log\alpha^{\prime}\right) }:=C_{4}\ .
\end{equation}
Furthermore, for $x\in(a_{0},1),$
\begin{equation}
\sum_{l=1,2}\frac{1}{|DT(T_{l}^{-1}x)|}\leq\left( \min_{(b_{2},b_{1})\cup(b_{1}^{\prime},b_{2}^{\prime})}|DT|\right) ^{-1}:=C_{5}\ .
\end{equation}
Analogous bounds hold also for the perturbed map, so we can choose the constants $C_{4},C_{5}$ independent of $\epsilon.$ Let us call $\rho_{s}$ (resp. $\rho_{r}$) the representation of the invariant density on $(0,x_{0})$ (resp. $(x_{0},1)$) without the normalizing factor $C_{r}.$ By the previous bounds on the derivatives of $T$ and the boundedness of the densities on the induced spaces, it follows immediately that there exists a constant $C_{6}$ such that the $L_{m}^{\infty}$ norms of $\rho_{s}$ and $\rho_{r}$ are bounded by $C_{6}.$ The same argument also holds for $\rho_{\epsilon,s}$ and $\rho_{\epsilon,r}$ and, since $C_{6}$ can be chosen independent of $\epsilon,$ $\left\Vert \rho_{\epsilon,s}\right\Vert _{\infty},\left\Vert \rho_{\epsilon,r}\right\Vert _{\infty}\leq C_{6}.$\newline We can now proceed to bound each term in (\ref{maino}). For the first one we get
\begin{equation}
\int_{I\cap I_{\epsilon}}|\rho-\rho_{\epsilon}|dx\leq|C_{r}-C_{\epsilon,r}|\int_{I\cap I_{\epsilon}}\hat{\rho}dx+C_{\epsilon,r}\int_{I\cap I_{\epsilon}}|\hat{\rho}-\hat{\rho}_{\epsilon}|dx\ ,
\end{equation}
which can be bounded uniformly in $\epsilon$ by arguing as in the previous computations.
For the second term (the third one can be bounded in the same way) we have
\begin{equation}
\int_{I\cap W_{\epsilon,1}}|\rho-\rho_{\epsilon}|dx\leq|C_{r}-C_{\epsilon,r}|\int_{I\cap W_{\epsilon,1}}\hat{\rho}dx+C_{\epsilon,r}\int_{I\cap W_{\epsilon,1}}|\hat{\rho}-\rho_{\epsilon,s}|dx\ .
\end{equation}
The right hand side is uniformly bounded in $\epsilon;$ in particular
\begin{equation}
C_{\epsilon,r}\int_{I\cap W_{\epsilon,1}}|\hat{\rho}-\rho_{\epsilon,s}|dx\leq(C_{2}+C_{6})m(I\cap W_{\epsilon,1})\ ,
\end{equation}
where $m(I\cap W_{\epsilon,1})$ vanishes in the limit $\epsilon\rightarrow0.$\newline We now consider the last sum in (\ref{maino}); similar arguments allow us to bound the remaining sum, which is even easier to handle. We first have
\begin{gather}
\sum_{l=1}^{\infty}\int_{W_{l}^{\prime}\backslash(W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime})}|\rho-\rho_{\epsilon}|dx\leq\sum_{l=1}^{\infty}\left[ |C_{r}-C_{\epsilon,r}|\int_{W_{l}^{\prime}\backslash(W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime})}\rho_{s}dx\right. \\
\left. +C_{\epsilon,r}\int_{W_{l}^{\prime}\backslash(W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime})}|\rho_{s}-\rho_{\epsilon,s}|dx\right] \ .\nonumber
\end{gather}
The sum is uniformly convergent as a function of $\epsilon$ since $W_{l}^{\prime}\backslash(W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime})\subset W_{l}^{\prime}$ and the length of such an interval decays exponentially fast with rate independent of $\epsilon.$ Finally, the previous considerations imply that each term in the sum goes to zero in the limit $\epsilon\rightarrow0.$\newline Finally we have
\begin{gather}
\sum_{l=1}^{\infty}\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}|\rho-\rho_{\epsilon}|dx\leq\\
\sum_{l=1}^{\infty}\left[ |C_{r}-C_{\epsilon,r}|\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\rho_{s}dx+C_{\epsilon,r}\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}|\rho_{s}-\rho_{\epsilon,s}|dx\right] \ .\nonumber
\end{gather}
The preceding considerations also apply to the first sum in this formula, proving that it vanishes in the limit $\epsilon\rightarrow0.$ For the second sum we make use of the representations of $\rho_{s}$ and $\rho_{\epsilon,s}$ in terms of the density on the induced space.
Thus we have
\begin{gather}
\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}|\rho_{s}-\rho_{\epsilon,s}|dx\leq\\
\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\sum_{p=l+2}^{\infty}\sum_{k=1,2}\left\vert \frac{\hat{\rho}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)}{|DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)|}-\frac{\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)}{|DT_{\epsilon}^{p-l}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)|}\right\vert dx\leq\nonumber\\
\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\sum_{p=l+2}^{\infty}\sum_{k=1,2}\left\vert \frac{\hat{\rho}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)}{|DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)|}-\frac{\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)}{|DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)|}\right\vert dx+\nonumber\\
\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\sum_{p=l+2}^{\infty}\sum_{k=1,2}\left\vert \frac{\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)}{|DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)|}-\frac{\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)}{|DT_{\epsilon}^{p-l}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)|}\right\vert dx=\nonumber\\
Q_{1,l}+Q_{2,l}\ .\nonumber
\end{gather}
We further decompose $Q_{1,l}$ as
\begin{align}
Q_{1,l} & =\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\sum_{p=l+2}^{\infty}\sum_{k=1,2}\frac{1}{|DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)|}\left\vert \hat{\rho}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)-\right. \\
& \left. \hat{\rho}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)+\hat{\rho}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)-\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)\right\vert \ .\nonumber
\end{align}
Changing variables, setting $y_{k}:=T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x$ and $y_{\epsilon,k}:=y_{\epsilon}(y_{k})=T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}(T^{p-l}y_{k}),$ since $y_{k}$ and $y_{\epsilon,k}$ belong to $Z_{p}^{k}\cup Z_{\epsilon,p}^{k},$ we get
\begin{equation}
Q_{1,l}\leq\sum_{k=1,2}\sum_{p=l+2}^{\infty}\int_{Z_{p}^{k}}|\hat{\rho}(y_{k})-\hat{\rho}(y_{\epsilon,k})+\hat{\rho}(y_{\epsilon,k})-\hat{\rho}_{\epsilon}(y_{\epsilon,k})|dy_{k}\ .
\end{equation}
But
\begin{equation}
\sum_{l=1}^{\infty}Q_{1,l}\leq4C_{2}\sum_{l=1}^{\infty}\sum_{p=l+2}^{\infty}m(Z_{p})\ ,
\end{equation}
which is clearly convergent because the measure of $Z_{p}$ is exponentially decreasing. Moreover, by what has been shown in the first part of the proof,
\begin{equation}
\lim_{\epsilon\rightarrow0}\int_{Z_{p}^{k}}|\hat{\rho}(y_{\epsilon,k})-\hat{\rho}_{\epsilon}(y_{\epsilon,k})|dy_{k}=0\ .
\end{equation}
On the other hand, we first take $\epsilon$ small enough to get $y_{\epsilon,k}$ on the same side of $x_{0}$ as $y_{k},$ and then we use the Lipschitz continuity property of $\hat{\rho}$ to conclude, by observing that $y_{\epsilon,k}$ tends to $y_{k}$ when $\epsilon$ tends to zero, that also
\begin{equation}
\lim_{\epsilon\rightarrow0}\int_{Z_{p}^{k}}|\hat{\rho}(y_{k})-\hat{\rho}(y_{\epsilon,k})|dy_{k}=0\ .
\end{equation} We now consider $Q_{2,l}$ and show that it is uniformly bounded in $\epsilon.$ As a matter of fact, \begin{align} Q_{2,l} & =\int_{W_{l}^{\prime}\cap W_{\epsilon,l}^{\prime}}\sum_{p=l+2}^{\infty}\sum_{k=1,2}\left\vert \frac{\hat{\rho}_{\epsilon}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)}{\left\vert DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)\right\vert }\right\vert \times\\ & \times\left\vert 1-\frac{\left\vert DT^{p-l}(T_{k}^{-1}T_{2}^{-1}T_{1}^{-(p-l-2)}x)\right\vert }{\left\vert DT_{\epsilon}^{p-l}(T_{\epsilon,k}^{-1}T_{\epsilon,2}^{-1}T_{\epsilon,1}^{-(p-l-2)}x)\right\vert }\right\vert dx\ .\nonumber \end{align} We bound it as follows: the density has bounded infinity norm; the sums in $p$ over the inverses of the derivatives are bounded by a constant, since the derivatives decay exponentially fast; and the sums over $l$ are controlled by the measure of $W_{l}^{\prime}.$ Finally, the same arguments that led to bound (\ref{DD}) apply also to the second factor in the previous expression, proving that it tends to zero in the limit $\epsilon\rightarrow0.$ \end{itemize} This concludes the proof. \end{proof} \bigskip We end our analysis by considering two examples of the perturbed Lorenz system giving rise to perturbed versions of the map $T$ of the kind discussed in this section. \begin{example} Let us consider a perturbation of the Lorenz field (\ref{l1}) obtained by adding the constant forcing field $\left( 0,0,-\epsilon\beta\left( \rho+\sigma\right) \right) ,\ \epsilon>0.$ The perturbation is easily seen to preserve the symmetry under the involution $R$ of the unperturbed field. Arguing as in the first section, for $\epsilon$ sufficiently small, the perturbed system keeps the same features as the unperturbed one; hence the map $T_{\epsilon}$ is easily seen to satisfy (\ref{T_DT1}-\ref{T_DT4}) as well as Assumptions A-D. Below we show the plot of $T_{\epsilon}$ for $\epsilon=0.5$, together with the plot of the fit of the invariant density $\rho_{R}$ for the evolution under the maps $T$ and $T_{\epsilon}$, corresponding respectively to the choice of the Poincar\'{e} surfaces $\Sigma_{+},\Sigma_{+}^{\epsilon}$, the last one being constructed as in the unperturbed case. \end{example} \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{\includegraphics{fig3.jpg}} \caption{$T_{\epsilon}$ (thick line), $T$ (thin line).} \label{fig:3} \end{figure} \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{\includegraphics{fig4.jpg}} \caption{Fit of the invariant density for the evolution under $T$ (solid line) and under $T_{\epsilon},\epsilon=0.5$, (dashed line).} \label{fig:4} \end{figure} \newpage \begin{example} We now consider the following perturbation of the Lorenz field (\ref{l1}), realized by adding the field $\left( \epsilon\cos\theta,\epsilon\sin\theta,0\right) $ where $\epsilon>0$ and $\theta\in\lbrack0,2\pi).$ The perturbed system is no longer $R$-invariant; nevertheless, for $\epsilon$ sufficiently small, the system still has a saddle fixed point $c_{0}^{\epsilon}$ and two unstable fixed points $c_{1}^{\epsilon},c_{2}^{\epsilon}.$ Hence, for any $\theta\in\lbrack0,2\pi),$ we have two different $T_{\epsilon},$ namely $T_{\epsilon}^{+},T_{\epsilon}^{-},$ both satisfying (\ref{T_DT1}-\ref{T_DT4}) as well as Assumptions A-D, corresponding respectively to the choice of the Poincar\'{e} surfaces $\Sigma_{+}^{\epsilon},\Sigma_{-}^{\epsilon},$ which can be constructed as in the unperturbed case.
To obtain meaningful plots of the deviation of the perturbed maps from $T$, as well as of the deviation of the associated invariant densities from the unperturbed one, $\epsilon$ has been set equal to $2.5$ and $\theta$ to $70^{\circ}$ as in \cite{CMP}. \end{example} \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{\includegraphics{fig5.jpg}} \caption{$T$ (thin line), $T_{\epsilon}^{+}$ (thick line to the left of $T$), $T_{\epsilon}^{-}$ (thick line to the right of $T$).} \label{fig:5} \end{figure} \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{\includegraphics{fig6.jpg}} \caption{Fit of the invariant densities for the evolution under $T$ (solid line) and under $T_{\epsilon}^{+}$ (dashed line).} \label{fig:6} \end{figure} \begin{figure}[htbp] \centering \resizebox{0.75\textwidth}{!}{\includegraphics{fig7.jpg}} \caption{Fit of the invariant densities for the evolution under $T$ (solid line) and under $T_{\epsilon}^{-}$ (dashed line).} \label{fig:7} \end{figure} \newpage
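For readers who wish to experiment, here is a minimal numerical sketch of the first example; it is not the code used for the figures. It integrates the Lorenz field with the constant forcing $\left( 0,0,-\epsilon\beta\left( \rho+\sigma\right)\right)$ using a hand-rolled Runge-Kutta step, assumes the classical parameter values $\sigma=10$, $\rho=28$, $\beta=8/3$, and uses a crude planar section $z=\rho-1$ as a stand-in for the Poincar\'{e} surfaces of the text. \begin{verbatim}
# Sketch (assumed parameters) of the perturbed Lorenz field of Example 1
# and a crude histogram estimate of the density on a planar section.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # assumed classical values

def field(p, eps):
    x, y, z = p
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z - eps * BETA * (RHO + SIGMA)])

def rk4_step(p, eps, h=1e-3):
    k1 = field(p, eps)
    k2 = field(p + 0.5 * h * k1, eps)
    k3 = field(p + 0.5 * h * k2, eps)
    k4 = field(p + h * k3, eps)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def section_hits(eps, n_steps=500_000, z_star=RHO - 1.0):
    """Collect x-coordinates of upward crossings of the plane z = z_star."""
    p = np.array([1.0, 1.0, z_star])
    hits = []
    for _ in range(n_steps):
        q = rk4_step(p, eps)
        if p[2] < z_star <= q[2]:                  # upward crossing
            s = (z_star - p[2]) / (q[2] - p[2])    # linear interpolation
            hits.append(p[0] + s * (q[0] - p[0]))
        p = q
    return np.array(hits)

# Compare unperturbed (eps = 0) and perturbed (eps = 0.5) systems.
for eps in (0.0, 0.5):
    xs = section_hits(eps)
    dens, edges = np.histogram(xs, bins=50, density=True)
    print(f"eps = {eps}: {len(xs)} section hits")
\end{verbatim}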
\section{Introduction} Actions, Intent, Behavior and Outcomes: all four present highly correlated characteristics of a user on an interactive system. Whether it is interaction with a search website or a puzzle game, these four attributes create a complex relationship of interdependence. Starting with an implicit \emph{behavior}, a user begins interacting with an interface, and usually has an initial \emph{intent}. In such systems, even the lack of a specific intent can be considered an \emph{`undecided'} intent category. While the behavior of a user has limited dependence on the system, the intent at a particular moment is still locally affected by the \emph{actions} performed during a specific period of interaction. Therefore, these actions contribute to the evolution of intent, creating a cyclic series of modified actions. The combined relation of these actions, the intent, and the initial behavior leads to an \emph{outcome}, which can itself be either intermediate or final. An intermediate outcome further directs actions, leading to a similar sequence again, ultimately terminating at a final outcome. On systems where users interact with web and mobile interfaces, these four attributes can both be observed and quantified up to a certain extent. For instance, on a movie review website, we can keep a record of each movie that a user has clicked on and the time spent on the corresponding page. At a finer level, we can even capture actions like screen time and scroll rate on every page and each individual review. This enables us to capture data about several actions of a user with exact timestamps, therefore providing a sequential data stream of actions. Depending on the kind of system, these action sequences can lead to a number of outcomes. For instance, on a marketplace website this can refer to a purchase or a cart addition; on a reviewing platform, it can be a new review, a comment, or a `like'. The fact that these outcomes can possibly have dependence on the actions preceding them is the essential factor that we can capture through these sequences. The ability to transform these actions into ordered sequences with available outcomes lets us use the field of supervised sequence learning in an attempt to learn models of user interaction. Since the action sequences during a session are inspired by behavior and intent, being able to learn these sequences helps us gain insight into the underlying models that might drive these actions. Supervised sequence learning is a branch of machine learning which identifies the ordered relatedness of different data points and uses them together as an inter-linked sequential input instead of using them as independent events. Recurrent neural network models like Long Short-Term Memory (LSTM)~\cite{Hochreiter1997,gers_lstm} are powerful models that efficiently learn sequences and derive embeddings representing the implicit relationship between sequence elements. We propose using these sequential learning models to learn from action sequences on interactive systems. Such models can be trained to learn user patterns on the system corresponding to several outcomes. For example, using scroll rate and screen time along instructional videos on MOOC websites can provide a quantifiable measure of a user's attentiveness towards the video. This can be further linked with potential quizzes that depend on these videos.
A sequence learning model can learn the impact of these screen scrolling actions on the achieved quiz score by learning scroll action sequences with the scores as a target. While these models may not be true measures of causation, they can at least learn the presence of any strong correlation between the evolving sequence patterns and outcomes. In this paper, we present methods of deriving behavior and intent insights on web and mobile interfaces, guided by tracking actions throughout the usage. We describe the processes for gathering potentially relevant features from actions, and representing them in the form of usable sequences. This process is followed by sequence learning on the actions to train models that correlate actions to outcomes. These trained models are then used by the system to understand potential user intent in several situations and compared against actual outcomes. Studying the predictions from these models alongside the real evolution of a user session helps detect key areas that affect change in predicted and real outcome, hence giving us a hint of actual intent. We also propose aggregation of this comparative information to identify spaces of improvement in systems targeted at a desired user outcome. The paper first describes components of our model in detail and explains the process from obtaining data to deriving inference. We then discuss applicability of our model in different scenarios. This is followed by experimental analysis on an online marketplace with an objective of predicting conversions. We then briefly discuss related work in this space of intent recognition and behavioral analysis. We conclude the paper with a discussion of our contributions to the field of marketing science. \section{The Model} User actions on an interactive system are often directed by a certain intent. Corresponding to different behaviors among users, these intents can present certain differences, but due to the limitations of the interface, these variations are often reasonably limited. Learning the behavior of a user is a personalization property and is generally harder to learn, but intent recognition can be generalized over users and be tied instead to the interface. With the availability of usage quantification across several parameters, personalization methods have achieved high success rates in several domains. But due to the extremely large number of users and sparseness in data across parameters, learning behavior for each user still remains a challenging task. Intents, instead, are more general as they have certain limitations depending on the scope of a system. While users may have many different navigation styles, the design of a website or an app can only provide a limited number of options that can be performed and therefore be tied to the intent. Therefore, this paper tries to learn user intent for a session and not specific user behavior. We expect that adding more sophisticated personalization models on our system can provide an even better understanding of user intent, but that is beyond the scope of this paper. This paper, as described earlier, focuses on using user actions to learn intent. Actions and outcomes are the most easily attainable interactions between a user and a system. User sessions on any of the digital platforms get some kind of input from the user in the form of clicks, taps, scrolls, or more complicated inputs. Any such input can be tied with a timestamp in order to make an ordered sequence of these input events; a minimal sketch of this assembly step is shown below.
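As a rough illustration (with hypothetical field names rather than the exact schema of any particular platform), assembling raw interaction events into per-session, timestamp-ordered action sequences could look as follows: \begin{verbatim}
# Sketch: group raw events by session and order them by timestamp.
# Field names ('session_id', 'timestamp', ...) are illustrative assumptions.
from collections import defaultdict

def build_action_sequences(events):
    sessions = defaultdict(list)
    for e in events:
        sessions[e["session_id"]].append(e)
    # Ordering each session's events by time yields the action sequence.
    return {sid: sorted(evs, key=lambda e: e["timestamp"])
            for sid, evs in sessions.items()}

raw_events = [
    {"session_id": "s1", "timestamp": 3, "action": "click", "page": "item"},
    {"session_id": "s1", "timestamp": 1, "action": "scroll", "page": "home"},
    {"session_id": "s2", "timestamp": 2, "action": "tap", "page": "cart"},
]
print(build_action_sequences(raw_events)["s1"])
\end{verbatim}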
These sequences can then be represented as a function of the intent with which a user starts the concerned session. Formally, for a user $u$ during session $S$, we define the relation between actions $\alpha_u$, intent $\iota_u$, and implicit behavior $\beta_u$, as: \begin{equation} \label{eq_intent_action} \alpha_u(S) = f(\iota_u(S), \beta_u, S) \end{equation} The dependence of actions on intent is not independent of behavior, but for learning correlations, the variation in behavior profiles might be large enough to be ignored by a learning model. With this assumption, we cluster the intents into reasonably sized groups which are much smaller than the number of behaviors observed on the system overall. Our concerned unknown in Equation~\ref{eq_intent_action} is the intent $\iota_u$. Obtaining inference on the intent directly from action sequences is not easy to achieve. The advantage of our model is the ability of our system to use action sequences as a medium to correlate intent and outcomes. Observable action sequences act as known variables on the system. We use the other observable quantities on the system, \emph{outcomes}, as a target for learning at the end of these action sequences. Depending on a scenario, we can measure several outcomes like purchase events or test results, and use the action sequences to learn them. The outcome $\omega_u$ for a user session $S$ can be formally represented as: \begin{equation} \label{eq_action_outcome} \omega_u(S) = g(\alpha_u(S), S) \end{equation} Both the outcome $\omega_u$ and the actions $\alpha_u$ are measurable quantities on the system and can be collected using different tracking measures. Our method first collects this dataset and then uses it to train the first-stage model that depends on sequences. \subsection{Data Generation} Depending on the scenario under concern, an important phase of our model is to generate structured data from the raw usage datasets for websites or apps. This data is often available in the form of raw disconnected data points. The first phase of this process, therefore, is to assemble events for a session. These can be either homogeneous (e.g., clicks only) or heterogeneous (e.g., scrolls, clicks, taps) in nature. Session events are then filtered and ordered by their timestamps to form the action sequence. For the purpose of learning, we need to represent actions with a set of features defining them. The process of feature extraction is also specific to a system and a scenario. We perform feature extraction at this stage and then standardize and normalize the features in the homogeneous and heterogeneous cases, respectively. \subsection{Sequence Learning} The second phase of our model performs sequence learning on the action sequences in order to learn their representation corresponding to a specific target. We use Long Short-Term Memory (LSTM) neural networks for deriving embeddings corresponding to the complete action sequences. These are then sent to a sigmoid layer for deriving the final probability corresponding to the outcome. Because of the sequential nature of the data, the LSTM learns to predict the next event in the sequence, and is then trained to predict a specific outcome $\omega_u$. While the broader objective of our model is to use this trained sequential learning model for generating analysis data, the model can also be used at this stage as a prediction model.
For any system, we can train multiple sequence learners for different target labels using the same set of action sequences, and use them as individual prediction models. Combined together, we can even build deeper models where the LSTMs at the first layer are responsible for constructing embeddings for the action sequences and then higher layers perform predictions across different kinds of objectives. \begin{equation} y = \textsc{LSTM}(\alpha_u;\theta) \end{equation} \begin{equation} z = \sigma(\mathbf{W}y + \mathbf{b}) \end{equation} \begin{equation} loss = \mathcal{L}(z, \omega_u) \end{equation} where \textsc{LSTM} denotes an LSTM layer whose output $y$ is the embedding of the last activation, $\mathbf{W}$ denotes the weight matrix for the sigmoid layer, and $\mathbf{b}$ is the bias associated with the sigmoid layer. The $loss$ is measured using binary cross-entropy and is used to train the model using backpropagation through time. We will denote a trained LSTM from this phase as $\textsc{LSTM}_T$. For the next phase in the model, this pre-trained $\textsc{LSTM}_T$ now acts as a function, which takes in action sequences from a future dataset and generates predictions on them. \subsection{Objective Maximization} With the help of $\textsc{LSTM}_T$, our model now evaluates action sequences that were not part of the training dataset and contain true labels. We create this second dataset in order to relate the sequences with intents. Given a sequence $\alpha_u$, a real outcome $\omega_u$ and a predicted outcome $z_u$, we build a confusion matrix for the predicted intents. This provides us with sets of sequences where predicted and real outcomes are the same, and ones where the outcomes are different. For the ones with matching outcome and prediction, we do not perform any further analysis. For the sequences where outcomes vary from the prediction, we pass them through a clustering model. A specific advantage of performing clustering at this stage, as opposed to clustering sequences initially, is the ability to filter out significant chunks of data that can be learned using neural networks. Since our model is aimed at improving the system for maximizing an objective and not simply at predicting outcomes of sessions, we use this clustering stage as an understanding of user intent, where action sequences falling within a specific cluster are assumed as belonging to a similar intent group. Each cluster is then used to analyze intent-specific sessions in a detailed sequence analysis phase. \subsection{Sequence Analysis} After obtaining the reduced-size intent clusters, we perform a detailed analysis on them that provides us with the final system-specific improvement factors. Our action sequences are structured representations of user actions on the system. While $\textsc{LSTM}_T$ learned through the entire sequences, this stage evaluates each individual timestep in the sequence and observes the change in prediction at that stage. For each action sequence, we generate a series of predictions $P_u$. For a sequence of length $T$ timesteps, we represent $P_u$ as: \begin{equation} P_u = \{\textsc{LSTM}_T(\alpha_u(t)): \forall t \in [1,T)\} \end{equation} This relation can be seen as an event-wise prediction of the outcome by our sequence learner. For example, in the case of a purchase outcome with events represented by clicks, we can read this set as the likelihood of purchase at each click on the website. The first step in this analysis is to measure the distance between predictions at consecutive timesteps; a code sketch of this per-timestep scoring is given below.
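The following is a minimal sketch of the sequence learner of the equations above and of the event-wise predictions $P_u$, interpreting $\textsc{LSTM}_T(\alpha_u(t))$ as the model's output on the prefix of the sequence up to event $t$. The feature dimension, layer size, and toy data are illustrative assumptions, not the exact configuration used in our experiments. \begin{verbatim}
# Sketch: LSTM embedding -> sigmoid outcome probability, trained with
# binary cross-entropy, then scored on every prefix of a sequence.
import numpy as np
from tensorflow import keras

MAX_LEN, N_FEATURES = 50, 8  # assumed sequence length / per-event features

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(None, N_FEATURES)),  # y = LSTM(a_u)
    keras.layers.Dense(1, activation="sigmoid"),  # z = sigmoid(W y + b)
])
model.compile(loss="binary_crossentropy", optimizer="adadelta")

# Toy data: action sequences alpha_u and session outcomes omega_u.
X = np.random.rand(32, MAX_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=1, verbose=0)

# Event-wise predictions P_u: score every prefix alpha_u(1..t), t in [1,T).
seq = X[0]
P_u = [float(model.predict(seq[np.newaxis, :t], verbose=0)[0, 0])
       for t in range(1, MAX_LEN)]
\end{verbatim}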
We obtain the distance set $D_u$ as: \begin{equation} D_u = \{dist(P_u(t-1), P_u(t)) : \forall t \in (1,T)\} \end{equation} $D_u$ is then sorted by the distance value. Depending on the variance across $D_u$ for different scenarios, we set a threshold value for the distances to be considered for further evaluation. By this stage we have gathered featured events $\alpha_u$ with their impact towards an outcome, along with a measured intensity of the impact using $D_u$. We then perform final semi-automated contrasting between feature vectors, sequences and predictions in order to explore interface events that create a higher distance between prediction and reality. \subsection{Semi-Automated Contrasting} This is the final phase of our model, which is currently performed semi-automatically by an expert of the system. The sorted impact sequences using $D_u$ provide us with action events causing drastic changes in prediction. We group sequences with such features together and observe the overall impact they cause on the predictions. In cases of strong significance, we are able to identify features of the system that can be potential causes of the change. A set of such evaluated features is then used to improve the system for maximizing specific objectives. \begin{figure} \includegraphics[height=1.5in, width=3in]{fig_analysis_dashboard} \caption{Model structure for implementing the stages up to semi-automated contrasting} \label{fig_analysis_dashboard} \end{figure} \begin{figure} \includegraphics[height=1.5in, width=3in]{fig_model_as_service} \caption{Model as a service for predicting outcome, to be used differently} \label{fig_model_as_service} \end{figure} Our complete system is a combination of these modules that allow for condition-specific learning in any interactive framework. Combined together, these modules can be used to explore usage across a specific objective. We also propose the usage of this model in active and passive form. As a passive system, it can be used to analyze sessions through a dashboard similar to the architecture shown in Figure~\ref{fig_analysis_dashboard}. A complete analysis model consists of the sequence learning, analysis, and contrasting modules. The modularity of our system also allows for active usage as a prediction service during a session. Since the sequence learning model can provide predictions at each timestep, it can be used in real time with any client for providing intent prediction. Figure~\ref{fig_model_as_service} depicts the architecture for using this system as a service with multiple clients. \section{Use Case Scenarios} In the previous section, we described how our model operates at each stage. In this section we discuss practical scenarios and systems where this model can be used. We also discuss some approaches to be followed after the semi-automated contrasting using our model to maximize concerned objectives. In general, our model can be used in any interactive system where a user provides connected inputs at different points in time during a session on the platform. Since our model only requires the two observable quantities, action sequences and outcomes, we focus mainly on systems which can provide action sequences with certain features. We also assume that each system has at least one objective function which is relevant to it in some way and whose maximization can benefit the system. \subsection{Online Marketplaces} Our first scenario is for online marketplaces in the form of websites and apps.
Online marketplaces cover the wide range of websites where some form of purchase can be made from a larger set of items. These can include shopping websites, event ticketing websites, and other such platforms where a user purchase is a desired outcome. Conversion rate is one important metric of such systems, and can therefore be a significant objective for our system to maximize. Several more outcomes, like adding items to cart, returning a purchase, selling an item, etc., can also be studied in such systems. Actions on such platforms comprise clicks across different components like items, pages, categories, and filters, among others. Actions can also include scrolling events, viewing, zooming, and more interaction functions provided by the application. Using our model, we first convert user interactions with the marketplace into ordered sequences. We then derive features along each event. This can correspond to features like the time distance between clicks, the category corresponding to the clicked item, the type of page where the click was performed, and more specific details. All these features can potentially add relevant information to the overall model. We then train the Sequence Learning module to learn a model on the action sequences for predicting the outcome. This process is then followed by the remaining phases of our model to correlate events causing major changes in the prediction between timesteps. \subsection{Online Coursewares} Online coursewares are another significant form of interactive systems where users watch videos, read content, and take quizzes, among other activities. A general objective of coursewares is to ensure learning among the users and to be able to distribute content in the best possible way. The objective function, in these cases, therefore tries to maximize the learning outcome. This can be measured using scores on the quizzes, often provided at the end of video or interactive learning sessions. Sessions in this scenario can capture the screen time spent by the user on videos, the scroll rate while reading content or watching the video, clicks or highlights in the reading content, the amount of answer switching on quizzes, and more specific practices. Altogether, these sessions can provide a time series of actions with rich metadata, along with a wide range of evaluated targets to learn. Targets can be scores on final quizzes, responses on course surveys, or some system-specific measures. Using our model, similar to marketplaces, we capture an ordered sequence of the input actions and extract features for each action event. We then perform sequence learning on these actions with the objective of predicting a measure of the user's performance in the course. \subsection{Other scenarios} While we discussed two specific cases, our proposed approach is applicable in a much broader variety of scenarios. Capturing metadata along user sessions in the form of timed sequences, together with system-specific targets, can generally be used to improve a system. For instance, on specific interest-based websites like cooking, biking, or arts, objectives are around improving readership and promoting discussions on posts. Similarly, forums provide discussion channels that can be targeted at increasing responses or answers for emerging questions. Our model can be similarly applied to such systems, by using metadata across usage and learning the patterns of usage directed at maximizing the desired response.
We do not go into details of these scenarios as the breadth in their range is wide and the paper is focused on the structured method for learning user intent, and not necessarily on the use cases of learned intent. \section{Model Evaluation} \begin{figure*} \includegraphics[width=0.33\linewidth]{graph_1} \includegraphics[width=0.33\linewidth]{graph_2} \includegraphics[width=0.33\linewidth]{graph_3} \caption{Plots of samples from click sessions with prediction for \emph{`purchase'} along the session with events at each click} \label{fig_graph_result} \end{figure*} We experimented and evaluated our model for the scenario of online marketplaces. Our data was collected from a ticketing website where users can \emph{sell} or \emph{purchase} tickets. Data was completely anonymized and each session was independent of any user-specific parameters. We considered each user session as a unique entity and captured the action events for each click on the website during that session. This provided us with a time series of clicks for each session, where properties of these clicks were used to derive features within the sequence. For generating our training data, we sampled sessions from each hour of the day over three months in late 2016. Our neural model, consisting of LSTM and feedforward neural network layers, was built on Keras~\cite{keras} with a Tensorflow~\cite{Tensorflow} backend. The target label for our dataset was the presence or absence of a \emph{conversion} in the session. We trained the model using binary cross-entropy as our loss function, and used the Adadelta~\cite{adadelta} optimizer. We evaluated the performance of the trained model on sessions from both past and future months outside the training window. Our analysis data consisted of sessions in months from early 2016 and early 2017. We evaluated the output of the neural model, signifying the probability of a \emph{conversion within a session}, against real labels from the data. This system, when evaluated one click before the final outcome, achieved an average accuracy of $0.89$ and an average recall of $0.85$. We also evaluated the neural model against a varying number of steps, $k$, before the outcome. We observed a significant monotonic improvement in the prediction as we got closer to the end point. The average recall recorded by the model on test data at $k=4$ was $0.70$, whereas at $k=2$ it was $0.81$. At $k=1$, the penultimate step, the recall rose to an average of $0.85$. This consistent improvement in the ability of our model to predict conversions also shows its capability to use additional local action context and improve its outcome predictions. While a high accuracy strengthens the reliability of model alignment with user intent prediction, another attribute of our system is to use this knowledge for improving the system. This was performed by the sequential analysis process, where we used the trained learning model to visualize variations in predicted intent along with the actions of the user. Any significant change in prediction over subsequent clicks was then used to evaluate feature change during those clicks. We derived graphical representations of clicks across some of the page actions like `Checkout', `Error', etc. Plots showing the change in prediction with clicks over time are presented in Figure~\ref{fig_graph_result}. \section{Related Work} In this paper we presented models for intent recognition on interactive systems.
The space of web interactions has been studied in several fields, including Computer Science, Psychology, and Economics, for behavioral analysis and intent recognition. \cite{Radinsky:2012:MPB:2187836.2187918} explores behavior on web systems and makes use of this information in order to predict system parameters. \cite{Benevenuto:2009:CUB:1644893.1644900} makes use of the actions performed on a web system by collecting clickstream data and tries to derive inference based on those topics. \cite{Obendorf2007WebPR} performed a study on browser usage by using click-streams, which are similar to our action sequences. This study made use of similar sequences across different websites in order to understand parameters for the browser. We, in contrast, perform evaluation on the application under concern and treat the web browser as an independent platform. \cite{Wu:2009:PCP:1645953.1646127} used linear models for predicting conversion likelihood among users with the help of usage features derived over time. Recently,~\cite{Sun:2017:UCC:3063955.3063971} presented an analysis of courseware clickstream data in order to improve systems. \cite{eye_shopping} use eye-tracking methods for identifying shopping intent, studying how eye actions relate to the system's outcome. Another common approach to modeling user behavior on e-commerce websites is building neural models, as in~\cite{Borisov2016ANC} and~\cite{Wu2015NeuralMO}. While these works present different ways to analyze usage data directed at certain objectives, our work provides a more concrete model relating action sequences to intent recognition through sequence learning. Sequence learning as a field has gained enormous success over the past few years, with LSTMs in particular being used in several cases. \cite{Graves2008} detailed the use of LSTM in speech recognition, followed by variants like bi-directional LSTMs~\cite{Graves2013} and Grid LSTMs~\cite{Kalchbrenner2015}. \section{Conclusions} In this paper, we presented a structured model with multiple modules aimed at improving interactive systems on web and mobile platforms. The paper presented strong relations between \emph{intent} and \emph{actions} and formalized the process of intent recognition. We presented a novel method of relating the sequences derived from different action events on a system, and learning their representations for further inference. By using session outcomes as a label for supervised sequence learning, we presented models that can efficiently predict future actions, giving hints on the intent of the user. Our model also described active and passive learning systems, where an active system can use these predictions in real time, and a passive system can use the learning model to analyze usage on the system. We also presented models for analysis and contrasting that can help detect system characteristics that affect a user's actions during a session. We have successfully deployed this system on an online ticketing marketplace and have discussed its potential use cases where this exact model can be used to optimize other system objectives. The model, in general, is independent of the nature of the system and can be deployed in any setting which captures user actions. The time-series nature of this model makes it a great architecture for exploration with different sequence learning methods depending on the nature of the data. Beyond intent recognition, the paper also identifies the relation between user behavior and actions.
While we use the presented models to obtain inference only on the intent and not on the behavior, we believe our models can benefit future studies in the space of behavior modeling as well. With the help of observed actions and outcomes, we can use a trained model to capture intent along different sessions of different users. We can further identify more specific attributes of users and sessions in order to relate intent with the underlying behavior. The modular structure of our model also allows for additional information to be used at any stage of the model. For instance, the sequential learning module provides us with a representation of the next predicted action. If a system obtains more useful features after certain clicks, this representation can be used along with the additional features in order to draw the final prediction. Through the presented models, we hope to provide assistance in various application spaces, and expect research in the space of marketing science to improve with a clearer understanding of user intent. \bibliographystyle{aaai}
\section{Introduction and background} The foundation of the theory of electronic structure of many-body systems is the nonrelativistic Schr\"odinger equation for the many-variable wave function. In the knowledge of the Hamilton operator, by an approximate wave-function method one can determine the ground-state energy as a variational expectation value which approaches the exact value from above. We term such approximations, a well-known prototype of which is the Hartree-Fock approximation, wave-function-optimal ones. Another class of approximations is named the density-optimal one, where we work with auxiliary orbitals of a Schr\"odinger-like equation written by using an effective single-particle potential. This potential is a functional of the one-particle density, the basic quantity of density-functional theory \cite{Kohn99}. Between the above limiting cases there is the approximation which is based on a two-variable function, the reduced one-particle density matrix. In density-matrix-functional theory this one-matrix is the basic quantity \cite{Davidson76}. With an exact one-matrix, not only the energy term related to the external potential is exact but the kinetic energy term as well. Unfortunately, we do not know the explicit dependence of the interparticle interaction energy on the one-matrix. Owing to this difficulty, it has been the tendency in approximate methods to replace the two-particle density, the diagonal of the second-order density matrix, by ansatz kernels constructed from the one-matrix and its diagonal, the one-particle density. This Rapid Communication is devoted to a comparative study using an ansatz kernel with parametric point-wise representations for the input density and one-matrix. These are calculated for an exactly solvable one-dimensional model atom with two harmonically interacting, externally confined particles. The model chosen constitutes a cornerstone in a large variety of fields in physics. It was first introduced by Heisenberg in order to discuss the He atom problem \cite{Heisenberg26}. Its application to nuclear physics dates back to Moshinsky \cite{Moshinsky68}. Below we summarize the necessary background needed for our present variational study. Using atomic units, the time-independent Schr\"odinger equation is solved for the singlet ground state with the following \cite{Heisenberg26,Moshinsky68} one-dimensional Hamilton operator \begin{equation} \hat{H}\, =\, -\, \frac{1}{2}\left(\frac{d^2}{dx_1^2}\, + \frac{d^2}{dx_2^2}\right) +\frac{1}{2}\omega_0^2({x}_1^2+{x}_2^2) -\frac{1}{2}\Lambda\omega_0^2({x}_1-{x}_2)^2, \end{equation} which models, with coupling constant $\Lambda>0$, the repulsive interaction of two quantum particles in a confining external field characterized by $\omega_0$. The analytic solution is based on standard canonical transformations, $X_{+}=(x_1+x_2)/\sqrt{2}$ and $X_{-}=(x_1-x_2)/\sqrt{2}$, of space variables. For the ground-state wave function one gets in the original variables \begin{equation} \psi(x_1,x_2)=\left(\frac{\omega_{1}\, \omega_{2}}{\pi^2}\right)^{1/4} \exp\left[-\frac{1}{4}(x_1^2+x_2^2)(\omega_{1}+\omega_{2})\right]\, \exp\left[-\frac{1}{2}x_1x_2(\omega_{1}-\omega_{2})\right], \end{equation} where $\omega_{1}\equiv{\omega_0}$ and $\omega_{2}\equiv{\omega_0\sqrt{1-2\Lambda}}$. The inseparability, except at $\Lambda=0$, into a simple product form is transparent, and we have the $\Lambda\in{[0,0.5)}$ range for stability.
The exact ground-state energy, denoted by $E_{ex}$, of the state $\psi(x_1,x_2)$ with $\hat{H}$ is \begin{equation} E_{ex}\, =\, \frac{1}{4}(\omega_1+\omega_2) + \frac{1}{2}\frac{\omega_0^2}{\omega_s} -\frac{1}{2}\, {\Lambda}\, \frac{\omega_0^2}{\omega_s}\, \left(2 -\frac{\omega_s}{\omega_1}\right), \end{equation} where $\omega_s\equiv{2\omega_1\omega_2/(\omega_1+\omega_2)}$. This equation shows that the virial theorem for limited motion is satisfied. The exact total energy $E_{ex}=(1/2)(\omega_1+\omega_2)$ and single-particle (see below) density $n_1(x)$ were already \cite{Neal98} applied as constraining inputs in order to demonstrate the complete Hohenberg-Kohn-Sham path with density-functionals \cite{Kohn99,Dreizler90}. The unique effective single-particle potential $V_s(x)$, to a Schr\"odinger-like equation, was determined as well \begin{equation} V_s(x)\, =\, \frac{1}{2}\, \omega_s^2\, x^2 +\left(\mu -\frac{1}{2}\, \omega_s\right), \nonumber \end{equation} where $\mu=(\omega_1+\omega_2)^2/4\omega_2$ is the Lagrange multiplier (chemical potential) introduced along the path of a constrained (fixed number of particles) minimization with functionals. The local many-body potential $[V_s(x)-(1/2)\omega_0^2\, x^2]$ is model-specific, i.e., it is not universal. With $\psi(x_1,x_2)$ the reduced one-particle density matrix can be calculated from \begin{equation} \gamma(x,x')\, =\, \int_{-\infty}^{\infty}\, d\xi\, \psi(x,\xi)\, \psi^{*}(x',\xi), \end{equation} and its diagonal, $x=x'$, gives the normalized single-particle probability density $n_1(x)$ as \begin{equation} n_1(x)\, =\, \left(\frac{\omega_s}{\pi}\right)^{1/2}\, \exp\left(- \omega_s\, x^2\right). \end{equation} Mathematically, the mapping between $\gamma(x,x')$ and $n_1(x)$ is a linear one. The auxiliary orbital to a density-based product form $\psi_s(x_1,x_2)=\phi_s(x_1)\phi_s(x_2)$ is simply $\phi_s(x)=\sqrt{n_1(x)}$. Notice that in the direct path from the correlated wave function to the single-particle probability density we may lose physical information (see below for a concrete example) since the mapping between the exact wave function and the one-matrix is a nonlinear one. Precisely, it is this nonlinearity which makes the challenging inverse path from a given density $n_1(x)$ back to the wave function $\psi(x_1,x_2)$ a highly nontrivial (at $\Lambda\neq{0}$) problem \cite{Dreizler90}. \eject \section{Parametric modeling and results} In order to show more important details of the above-outlined direct path, and thus establish our variational idea which will be based on an inverse path, we decompose $\psi(x_1,x_2)$ by applying to it Mehler's \cite{Erdelyi53,Glasser13} formula. After simple calculations, we get \begin{equation} \psi(x_1,x_2)=\sum_{n=0}^{\infty}z^n (1-z^2)^{1/2} \left[\left(\frac{\bar{\omega}}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n\, n!}}\right]^2 e^{-\frac{1}{2}\bar{\omega}(x_1^2+x_2^2)}\, H_n(\sqrt{\bar{\omega}}x_1)H_n(\sqrt{\bar{\omega}}x_2) \end{equation} in terms of Hermite polynomials, where $z\equiv{-(\sqrt{\omega_{1}}-\sqrt{\omega_{2}})/(\sqrt{\omega_{1}}+\sqrt{\omega_{2}})}$ and $\bar{\omega}\equiv{\sqrt{\omega_{1}\omega_{2}}}$. We stress that in the above Schmidt decomposition \cite{Riesz55} of $\psi(x_1,x_2)$ the sign of $z$ depends on the sign of the interparticle coupling $\Lambda$. This information on repulsion or attraction ($\Lambda<0$), encoded of course in the second exponential of Eq. (2), is lost when one calculates the one-matrix from Eq. (6) by its definition given in Eq. (4).
We obtain \begin{equation} \gamma(x,x')=\sum_{n=0}^{\infty} P_n\, \left[\left(\frac{\bar{\omega}}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n\, n!}}\right]^2\, e^{-\frac{1}{2}\bar{\omega}(x^2+{x'}^2)}\, H_n(\sqrt{\bar{\omega}}x)H_n(\sqrt{\bar{\omega}}x'), \end{equation} where $P_n(\Lambda)\, =\, (1-\xi)\, \xi^n$, in terms \cite{Srednicki93} of $\xi\equiv{z^2}$, and $\sum_{n=0}^{\infty}P_n=1$. For the dimensionless parameter $\xi(\Lambda)$ we have the range of $\xi\in{[0,1]}$, since \begin{equation} \xi(\Lambda)\, =\, \left[\frac{1-(1-2\Lambda)^{1/4}}{1+(1-2\Lambda)^{1/4}}\right]^2. \end{equation} Due to the sign-insensitivity of $P_n$ to the interparticle coupling, there is a duality property \cite{Glasser13,Schilling13} of information-theoretic entropies based on such occupation numbers. This duality means that to any allowed repulsive coupling there exists a corresponding attractive one for which the calculated entropies should be equal. Along the above direct path the unique decomposition of the probability density $n_1(x)$ becomes \begin{equation} n_1(x)\, =\, \sum_{n=0}^{\infty} P_n\, \left[\left(\frac{\bar{\omega}}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n\, n!}}\, e^{-\frac{1}{2}\bar{\omega}x^2}\, H_n(\sqrt{\bar{\omega}}x)\right]^2. \end{equation} Based on careful pioneering works \cite{Muller84,Lieb07}, we already used \cite{Nagy11,Carlos12}, instead of the exact pair-density, a parametric [with the $(q+r)=1$ condition] ansatz kernel \begin{equation} K(q,r,x_1,x_2)\, =\, 2\, n_1(x_1)\, n_1(x_2) - \gamma^q(x_1,x_2)\, \gamma^{r}(x_1,x_2), \end{equation} with the above inputs taken at given $\xi(\Lambda)$, in order to determine a parametric interparticle interaction energy (the last term) in the corresponding total-energy expression \begin{equation} E_{q,r}\, =\, \frac{1}{4}(\omega_1+\omega_2) + \frac{1}{2}\frac{\omega_0^2}{\omega_s} -\frac{1}{2}\, \Lambda\, \frac{\omega_0^2}{\omega_s}\, \left[2-\frac{(1-\xi^q)(1-\xi^{1-q})}{1+\xi}\right]. \end{equation} By a direct comparison of $E_{ex}$ and $E_{q,r}$, we obtained equality if and only if $q=r=0.5$. Of course, in such a symmetric case for the operator powers, the parametric normalization of the kernel $K(q,r)$, which is given by $(1-\xi)^{q+r}/(1-\xi^{q+r})$, is satisfied as well. In other words, we have a proper global normalization for the exchange-correlation hole described otherwise by the physical \cite{Kohn99} pair correlation function of the many-body system. In recent attempts taking $q=r\neq{0.5}$, this fundamental rule is violated, as was mentioned \cite{Sharma08,Lathiotakis09} explicitly. We will return to such parametrization in our last section. In the light of the above, we arrived at the point where we can clearly state our idea on a parametrization based on the one-variable form of the exact density in Eq. (5). The idea rests on application of Mehler's formula \cite{Erdelyi53} directly to a given $n_1(x)$. We obtain \begin{equation} n_1(x)=\sum_{n=0}^{\infty} \mathcal{P}_n\, \left[\left(\frac{\omega_p}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n\, n!}}\, e^{-\frac{1}{2}{\omega_p}x^2}\, H_n(\sqrt{\omega_p}x)\right]^2, \end{equation} where $\mathcal{P}_n=(1-\xi_p)(\xi_p)^n$ with, of course, $\sum_{n=0}^{\infty}\mathcal{P}_n=1$. Mathematically, this is also a point-wise [now, parametric ($p$)] decomposition under the mild ($\omega_p\geq{\omega_s}$) constraint \begin{equation} \omega_s\, =\, \omega_p\frac{1-\xi_p}{1+\xi_p}.
\end{equation} Notice at this important point that such a direct decomposition of the basic variable of density-functional theory may form the background to the recently proposed \cite{Tellgren14} extended Kohn-Sham-like approach, in which the fractional occupation numbers could provide enough flexibility beyond the conventional attempt where $P_0=1$ since $\omega_p=\omega_s$. Our idea is similar in spirit to the proposal made earlier \cite{muller84} within an extended Thomas-Fermi scheme to go beyond the Fermi-Dirac step-function, i.e., the ideal momentum distribution. Next, following Harriman's \cite{harriman81} enlightening pioneering work, we write the one-matrix by using $\xi_p$ and $\omega_p$, subject to the above constraint, in the following parametric form \begin{equation} \gamma_p(x,x')\, =\, \sum_{n=0}^{\infty} \mathcal{P}_n\, \left[\left(\frac{\omega_p}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n\, n!}}\right]^2\, e^{-\frac{1}{2}{\omega_p}(x^2+x'^2)}\, H_n(\sqrt{\omega_p}x)\, H_n(\sqrt{\omega_p}x'). \end{equation} This new form is applied when we calculate the parametric ($\xi_p$) kinetic energy. The re-parametrized kernel, denoted by $K_p(q,x_1,x_2)$, to be applied to determine the interparticle interaction energy takes the form of \begin{equation} K_p(q,r,x_1,x_2)\, =\, 2\, n_1(x_1)\, n_1(x_2) - \gamma_p^q(x_1,x_2)\, \gamma_p^{r}(x_1,x_2). \end{equation} The required operator power ($q$) is obtained, as before with $P_n$ and $\xi(\Lambda)$ in $\gamma(x,x')$, by the simple change $\mathcal{P}_n\Rightarrow{(\mathcal{P}_n)^q}$ in the one-matrix $\gamma_p(x,x')$. \eject The freedom via $\xi_p$ in Eq. (14) allows us to write, instead of the auxiliary lower bound $T_s=(1/2)\omega_s$ of conventional density-functional theory, a parametric kinetic energy as \begin{equation} T_p(\Lambda,\xi_p)\, =\, \frac{1}{2}\, \omega_s\, \left(\frac{1+\xi_p}{1-\xi_p}\right)^2, \end{equation} which shows that we can tune the kinetic energy in the proper direction when $\xi_p\neq{0}$, i.e., when we have noninteger occupation numbers for parametric orbitals. Unfortunately, this desired opportunity could give only a monotonic change since, as is well known, there is no upper bound for the kinetic energy. Furthermore, with the exact density as input, the potential energy in the external field [the second term in Eq. (3)] is parameter-free. However, using the parametric M\"uller-type approximation prescribed by Eq. (15), we can define a kind of constrained search \cite{Levy79,Lieb83} within a framework fixed only by the exact density and the associated (parametric) one-particle density matrix as inputs. After a straightforward calculation we get for the approximate ground-state energy \begin{equation} E_p(q,\Lambda,\xi_p)\, =\, \frac{1}{2}\, \omega_s\, \left(\frac{1+\xi_p}{1-\xi_p}\right)^2 + \frac{1}{2}\,\frac{\omega_0^2}{\omega_s} -\frac{1}{2}\, \Lambda\, \frac{\omega_0^2}{\omega_s}\, \left[2-\frac{(1-\xi_p^q)(1-\xi_p^{1-q})}{1+\xi_p}\right]. \end{equation} This form shows transparently the $\xi_p$-dependent terms of kinetic and interparticle origin. When $\xi_p\equiv{0}$ and we treat $\omega_s$ as a variational parameter instead of fixing it to the Kohn-Sham (KS) value, we recover the well-known \cite{Moshinsky68} Hartree-Fock result where $\omega_{HF}=\omega_0\sqrt{1-\Lambda}$, thus $E_{HF}=2\omega_0\sqrt{1-\Lambda}$, and we have a product-form ground state as (with $\omega_s$) in KS. Making differentiations in Eq.
(17), firstly \cite{Lieb07} at $q=r=0.5$, we obtain as condition \begin{equation} \frac{\sqrt{\xi_p}}{1-\xi_p}\, \left(\frac{1+\xi_p}{1-\xi_p}\right)^3\, =\, \Lambda\left(\frac{\omega_0}{2\omega_s}\right)^2 \, \equiv{\frac{\sqrt{\xi}}{1-\xi}\, \left(\frac{1+\xi}{1-\xi}\right)^3} \end{equation} at the exact input density, i.e., at fixed $\omega_s$ in the Kohn-Sham auxiliary orbital. To get the unique right-hand side, the shorthands introduced earlier in Eqs. (2)-(3) are employed. The solution is $\xi_p(\Lambda)=\xi(\Lambda)$. Therefore, starting from the point-wise decomposition of the exact density, we arrive, in our method with variable occupation numbers and fixed $q$, at the exact ground-state energy $E_p[q=0.5,\Lambda,\xi_p(\Lambda)]=E_{ex}$ of the correlated model. Our isospectral deformation (a quantum analog of the isoperimetric problem of Queen Dido of Carthage) of a real input $n_1(x)$ seems to be useful to treat a correlated two-body system. This conclusion on a prototype model is similar to the one based on the single-particle Green's function of many-body theory \cite{Migdal58} on a degenerate Fermion system. There, the ground-state energy and the momentum distribution are completely determined by that function. The quasiparticle weight \cite{Ziesche97}, in our case, could be $(\mathcal{P}_0-\mathcal{P}_1)=[1-\xi_p(\Lambda)]^2$. \eject At small interparticle coupling we get $\xi_p(\Lambda,q=0.5)\sim{\Lambda^2}$, i.e., a similar scaling as in Wigner's correlation energy defined by $(E_{ex}-E_{HF})=(1/2)(\omega_1+\omega_2-2\, \omega_{HF})\sim{(-\Lambda^2)}$. Such a traditional definition differs from the one used in modern density-functional theory, where one introduces $(1/2)(\omega_1+\omega_2-2\, \omega_{s})$ as the exchange-correlation energy. Within the physically restricted class of approximate pair-densities with $(q+r)=1$, we now analyze, without loss of generality, the $q=0.5+\delta$ case where the fixed (say, $|\delta|<0.2$) deviation measures a slight departure from the successful symmetric \cite{Lieb07} case investigated above. The corresponding variational constraint on the energy results in \begin{equation} \frac{\xi_p^q}{q\, (\xi_p^{2q-1}-\xi_p)+(1-q)(1-\xi_p^{2q})} \left(\frac{1+\xi_p}{1-\xi_p}\right)^3\, =\, \Lambda\left(\frac{\omega_0}{2\omega_s}\right)^2. \end{equation} The solution of this constraining equation becomes $\xi_p(q=0.5+\delta,\Lambda\rightarrow{0})\sim{\Lambda^{2/(1+2|\delta|)}}$. In Fig. \ref{figure1} we plot an informative ratio $R(\Lambda)\equiv{\xi_p(\Lambda)/\xi(\Lambda)}$, at two illustrative values $q=0.4$ (dashed curve) and $q=0.3$ (dash-dotted curve). In general, $\xi_p(q=0.5+\delta,\Lambda\rightarrow{0})\geq{\xi(\Lambda\rightarrow{0})}$, but the situation changes as soon as $\Lambda$ grows. For large values of $\Lambda$, we have the opposite behavior, $\xi_p(q=0.5+\delta,\Lambda\rightarrow{0.5})\leq{\xi(\Lambda\rightarrow{0.5})}$. For each value of $q$, there exists a value ($\Lambda_0$) of the coupling for which $R(\Lambda_0)=1$. \begin{figure} \centering \includegraphics[width=10cm]{FigLambda.pdf} \caption{The ratio $R(\Lambda)\equiv{\xi_p(\Lambda)/\xi(\Lambda)}$ as a function of $\Lambda$. Dashed and dash-dotted curves refer, respectively, to $q=0.4$ and $q=0.3$.} \label{figure1} \end{figure} In the light of Fig. \ref{figure1}, and after a short numerical sketch of the ratio $R(\Lambda)$ given below, we turn to an information-theoretic investigation of the above case.
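As a purely numerical cross-check (not part of the original derivation), the quantities behind Fig. \ref{figure1} can be reproduced by evaluating $\xi(\Lambda)$ from Eq. (8) and solving Eq. (19) for $\xi_p$ at given $\Lambda$ and $q$; the sketch below assumes SciPy's \texttt{brentq} root finder and illustrative values of $\Lambda$. \begin{verbatim}
# Sketch: xi(Lambda) from Eq. (8), root xi_p of Eq. (19), ratio R(Lambda).
import numpy as np
from scipy.optimize import brentq

def xi_exact(lam):
    t = (1.0 - 2.0 * lam) ** 0.25
    return ((1.0 - t) / (1.0 + t)) ** 2                 # Eq. (8)

def lhs(xi, q):
    den = q * (xi ** (2 * q - 1) - xi) + (1 - q) * (1 - xi ** (2 * q))
    return xi ** q / den * ((1 + xi) / (1 - xi)) ** 3   # left side of (19)

def xi_p(lam, q, w0=1.0):
    w1, w2 = w0, w0 * np.sqrt(1.0 - 2.0 * lam)
    ws = 2.0 * w1 * w2 / (w1 + w2)
    rhs = lam * (w0 / (2.0 * ws)) ** 2                  # right side of (19)
    return brentq(lambda x: lhs(x, q) - rhs, 1e-12, 1.0 - 1e-9)

for lam in (0.05, 0.2, 0.4):
    for q in (0.4, 0.3):
        print(f"Lambda={lam:.2f}, q={q}:",
              f"R={xi_p(lam, q) / xi_exact(lam):.3f}")
\end{verbatim} At $q=0.5$ the denominator reduces to $1-\xi_p$, so the same routine also covers the symmetric condition of Eq. (18).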
Here we restrict ourselves to the purity $\Pi$ and the associated linear entropy $L=1-\Pi$, as in a recent calculation \cite{Bondia12} on the model system with triplet configuration. The purity is the inverse of the degree-of-correlation \cite{eberly94} and is defined by \begin{equation} \Pi_p(\xi_p)\, =\, \sum_{n=0}^{\infty}\, [\mathcal{P}_n(\xi_p)]^2. \nonumber \end{equation} The summation, performed with our occupation numbers $\mathcal{P}_n=(1-\xi_p)(\xi_p)^n$, results in \begin{equation} \Pi_p(\xi_p)\, =\, \frac{1-\xi_p(q,\Lambda)}{1+\xi_p(q,\Lambda)}, \end{equation} showing a deviation from idempotency at a finite value of the interparticle coupling. The inequality obtained above at small enough $\Lambda$ shows that $L_p[\xi_p(q\neq{0.5},\Lambda)]\geq{L[\xi(\Lambda)]}$. However, we got $L_p[\xi_p(q\neq{0.5},\Lambda)]\leq{L[\xi(\Lambda)]}$ at high enough $\Lambda$. Therefore, our consideration of an information-theoretic measure \cite{Parr00,Mauser13} of the minimum entropy deficiency principle, i.e., the minimum missing information principle, demonstrates that such a quantity alone is not applicable as a good measure of how correlated a Hamiltonian is, in complete agreement with the forecast \cite{Mauser13}. We stress that our conclusion is based on the simultaneous consideration of the kinetic and potential energy components of an expectation value. A consideration based only on the kinetic component would orient us in the wrong (maximum entropy) direction, since in that case only a monotonic change would be allowed in the entropy. \section{Summary and comments} The point-wise-decomposed forms for the exact single-particle probability density and the associated one-matrix of an exactly solvable interacting model atom are used in order to analyze energies obtained by a M\"uller-type approximation for the pair-density. From the analysis we found that the flexibility of the parametric method developed is robust, and thus it can be a practically useful one among approaches which rest on restricted information in the absence of the exact wave function for a given Schr\"odinger Hamiltonian. In fact, our method can be considered as the extended \cite{Tellgren14} Kohn-Sham approach suggested recently. Its future extension, by using time-dependent occupation numbers \cite{Pernal07,Appel10,Nagy12}, to the time domain could be equally important \cite{Dreizler86} since time-dependent density functional theory is based on mapping \cite{Ullrich12} and not on a variational constraint with a Schr\"odinger Hamiltonian. Of course, the knowledge of a precise single-particle density is important in the method investigated. As is well known \cite{Burke13} from practical density-functional theory, approximate densities could make dominating errors in many situations, beyond those errors which are due to an approximate functional. Our method could allow a desired future investigation of this challenging problem by changing slightly, for instance, the frequency in the Kohn-Sham orbital. Furthermore, the $(q=r)\neq{0.5}$ approximation, in which one violates a normalization condition \cite{Kohn99}, could be a practically useful one according to numerical tests on different systems \cite{Sharma08,Lathiotakis09}. In such treatments values of about $(q=r)\in{[0.525,0.65]}$ are suggested. In our isospectral deformation method the $(q=r)\neq{0.5}$ case would require a simple change in the last term of Eq.
(17) to \begin{equation} -\frac{1}{2}\, \Lambda\, \frac{\omega_0^2}{\omega_s}\, \left[2-\frac{(1-\xi_p^q)(1-\xi_p^{r})(1-\xi_p)^{q+r+1}} {(1+\xi_p)(1-\xi_p^{q+r})^2}\right]. \nonumber \end{equation} Finally, beyond the correlated two-particle case, an extension to the many-particle case could start with the so-called radical \cite{Schirmer07} Kohn-Sham framework where one represents the spherical \cite{Lieb07} ground-state density in terms of one orbital. \eject \begin{acknowledgments} I.N. thanks Professor P. M. Echenique for the warm hospitality at the DIPC. We are grateful to Professors J. Gracia-Bond\'{i}a, E. H. Lieb, and A. Rubio for discussions. The work of C.L.B.-R. was supported by a Francisco Jos\' e de Caldas scholarship, funded by Colciencias. \end{acknowledgments}
\section*{Acknowledgements} We would like to thank the members of the Dialogue Modelling Group of the University of Amsterdam for their useful discussions, and our anonymous reviewers for their insightful comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.\ 819455). \section{Possible Criteria to Distinguish Constructions} \label{sec:app-criteria} Lexicalised constructions can be classified according to multiple criteria \cite{titone1994descriptive,wray2002formulaic,columbus2013support}, including those listed below. \begin{itemize} \item \textbf{Compositionality} This criterion is typically used to separate idioms from other formulaic expressions, although it is sometimes referred to as \textit{transparency} to underline its graded, rather than binary, nature. There is no evidence, however, that the processing advantage of idioms differs from that of compositional phrases \cite{tabossi2009idioms,jolsvai2013meaning,carrol2020all}. \textit{Therefore we ignore this criterion in the current study.} \item \textbf{Literal plausibility} This criterion is typically used to discriminate among different types of idioms \cite{titone1994descriptive,titone2014time}---as compositional phrases are literally plausible by definition. \textit{Because we ignore distinctions made on the basis of compositionality, we do not use this criterion.} \item \textbf{Meaningfulness} Meaningful expressions are idioms and compositional phrases (e.g., \textit{`on my mind'}, \textit{`had a dream'}), whereas sentence fragments that break constituency boundaries (e.g., \textit{`of a heavy'}, \textit{`by the postal'}) are considered less meaningful \cite[as measured in norming studies, e.g., by][]{jolsvai2013meaning}. There is some evidence that the meaningfulness of multi-word expressions correlates with their processing advantage even more than their frequency \cite{jolsvai2013meaning}; yet, when expressions are particularly frequent, they present processing advantages even if they break regular phrasal structures \cite{bybee1999effect,tremblay2011processing}. Moreover, utterances that break regular constituency rules are particularly frequent in spoken dialogue data (e.g., \textit{`if you could search for job and that's not'}, \textit{`you don't wanna damage your relationship with'}). \textit{For these reasons, we do not exclude constructions that span multiple constituents from our analysis.} \item \textbf{Schematicity} This criterion distinguishes expressions where all the lexical elements are fixed from expressions ``with slots'' that can be filled by varying lexical elements. \textit{In this study, we focus on fully lexicalised constructions.} \item \textbf{Familiarity} This is a subjective criterion that strongly correlates with objective frequency \cite{carrol2020all}. Human experiments would be required to obtain familiarity norms for our target data, and the resulting norms would only be an approximation of the familiarity judgements of the true speakers we analyse the language of. \textit{Therefore, we ignore this criterion in the current study.} \item \textbf{Communicative function} Formulaic expressions can fulfil a variety of discourse and communicative functions.
\citet{biber2004if}, e.g., distinguish between stance expressions (attitude, certainty with respect to a proposition), discourse organisers (connecting prior and forthcoming discourse), and referential expressions; and for each of these three primary discourse functions, more specific subcategories are defined. This type of classification is typically done a posteriori---i.e., after a manual analysis of the expressions retrieved from a corpus according to other criteria \cite{biber2007lexical}. In the BNC, for example, we find epistemic lexical bundles (\textit{`I don't know'}, \textit{`I don't think'}), desire bundles (\textit{`do you want to'}, \textit{`I don't want to'}), obligation/directive bundles (\textit{`you don't have to'}), and intention/prediction bundles (\textit{`I'm going to'}, \textit{`it's gonna be'}). \textit{We do not use this criterion to avoid an a priori selection of the constructions.} \end{itemize} \section{Extraction of Repeated Constructions} \label{sec:app-extraction} \begin{table*}[!h] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{cccclccc} \toprule \textbf{Speaker} & \textbf{RI} & \textbf{RI~Utt} & \textbf{Dist} & \textbf{Turn} & $\boldsymbol{H(u)}$ & $\boldsymbol{H(c)}$ & $\boldsymbol{FE(c;u)}$ \\ \midrule A & 0 & 0 & - & [...] I think that everyone should have the same opportunities & & & \\ & & & & \ \ and \textbf{I don't think you should be} proud or ashamed of what & 4.24 & 1.90 & 1.21 \\ & & & & \ \ your you know what your situation is whether you what your & & & \\ & & & & \ \ what your race is whether you're a woman or a man whether & & & \\ & & & & \ \ you live from this pl whether you're in this place [...] & & & \\ \midrule A & 1 & 0 & 80 & I well I th I don't think it should \textbf{I don't think you should be} & 3.40 & 1.73 & 1.40 \\ \midrule A & 2 & 0 & 19 & Well yes perhaps but \textbf{I don't think you should be} like um & 3.95 & 1.06 & 2.25 \\ & & & & \ \ embarrassed about it or I think I think you should just sort of & & & \\ \bottomrule \end{tabular} } \caption{Repetition chain for the construction \textit{`I don't think you should be'} in dialogue S2AX of the Spoken BNC, annotated with repetition index (RI), repetition index in utterance (RI~Utt), and distance from previous mention (Dist; number of tokens). $H(u)$ is the utterance information content, $H(c)$ and $FE(c;u)$ are the construction's information content and facilitating effect.} \label{tab:chain-len7} \end{table*} We define a limited specific vocabulary of generic nouns that should not be considered referential. The vocabulary includes: \textit{bit}, \textit{bunch}, \textit{day}, \textit{days}, \textit{fact}, \textit{god}, \textit{idea}, \textit{ideas}, \textit{kind}, \textit{kinds}, \textit{loads}, \textit{lot}, \textit{lots}, \textit{middle}, \textit{ones}, \textit{part}, \textit{problem}, \textit{problems}, \textit{reason}, \textit{reasons}, \textit{rest}, \textit{side}, \textit{sort}, \textit{sorts}, \textit{stuff}, \textit{thanks}, \textit{thing}, \textit{things}, \textit{time}, \textit{times}, \textit{way}, \textit{ways}, \textit{week}, \textit{weeks}, \textit{year}, \textit{years}. We also identify all the filled pauses and exclude word sequences that consist of more than 50\% filled pauses. Filled pauses in the Spoken BNC are transcribed as: \textit{huh}, \textit{uh}, \textit{erm}, \textit{hm}, \textit{mm}, \textit{er}. 
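To make the filled-pause criterion concrete, the sketch below shows one possible implementation; the function name and the whitespace tokenisation are our own simplifying assumptions, not the exact code used in our pipeline.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the filled-pause filter (hypothetical implementation)}]
# Filled pauses as transcribed in the Spoken BNC.
FILLED_PAUSES = {"huh", "uh", "erm", "hm", "mm", "er"}

def mostly_filled_pauses(sequence, threshold=0.5):
    """True if more than `threshold` of the tokens are filled pauses."""
    # Simplifying assumption: whitespace tokenisation.
    tokens = sequence.lower().split()
    return sum(t in FILLED_PAUSES for t in tokens) / len(tokens) > threshold

candidates = ["erm er mm you", "I don't think you should be"]
kept = [c for c in candidates if not mostly_filled_pauses(c)]
# kept == ["I don't think you should be"]
\end{lstlisting}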
Figure~\ref{fig:proportions} shows the proportion of tokens in an utterance belonging to constructions (referential and non-referential) and to non-construction sequences. Table~\ref{tab:chain-len7} shows a whole construction chain (from the first mention to the last repetition) for a construction of length 6. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/proportions.png} \caption{Proportion of tokens in an utterance that belong to referential constructions, non-referential constructions, and to non-construction sequences. The \textit{x} axis shows percentages indicating utterance positions in the dialogue relative to the dialogue length.} \label{fig:proportions} \end{figure} \section{Language Model} \subsection{Finetuning} \label{sec:app-finetuning} We finetune the \textit{`small' variant} of GPT-2 \cite{radford2019language} and DialoGPT \cite{zhang2019dialogpt} on our finetuning split of the Spoken BNC (see Section~\ref{sec:data}) using HuggingFace's implementation of the models with default tokenizers and parameters \cite{wolf2020transformers}. Dialogue turns are simply concatenated; we have experimented with labelling the dialogue turns (i.e., \textit{A: utterance 1, B: utterance 2}) and found that this leads to higher perplexity. The finetuning results for both models are presented in Table~\ref{tab:finetuning}. We finetune the models and measure their perplexity using HuggingFace's finetuning script. We use early stopping over 5 epochs.\footnote{The number of epochs (5) has been selected in preliminary experiments together with the learning rate (\num{1e-4}). In these experiments---which we ran for 40 epochs---we noticed that the \num{1e-4} learning rate offers the best tradeoff between training time and perplexity out of four possible values: \num{1e-2}, \num{1e-3}, \num{1e-4}, \num{1e-5}. We obtained insignificantly lower perplexity values with a learning rate of \num{1e-5}, with significantly longer training time: 20 epochs for GPT-2 and 28 epochs for DialoGPT.} Sequence length and batch size vary together because they jointly determine the amount of memory required; more expensive combinations (e.g., 256 tokens with batch size 16) require an exceedingly high amount of GPU memory. Reducing the maximum sequence length has limited impact: 99.90\% of dialogue turns have at most 128 words. DialoGPT starts from extremely high perplexity values but catches up quickly with finetuning. GPT-2 starts from much lower perplexity values and reaches virtually the same perplexity as DialoGPT after finetuning. For the pre-trained DialoGPT, perplexity is extremely high, and the perplexity trend against maximum sequence length is surprisingly upward. These two behaviours indicate that the pre-trained DialoGPT is less accustomed than GPT-2 to the characteristics of our dialogue data. DialoGPT is trained on written online group conversations, while we use a corpus of transcribed spoken conversations between two speakers. In contrast, GPT-2 has been exposed to the genre of fiction, which contains scripted dialogues, and thus to a sufficiently similar language use. We select GPT-2 finetuned with a maximum sequence length of 128 and 512 as our best two models; these two models (which we now refer to as \textit{frozen}) are used for the adaptive learning rate selection (Section~\ref{sec:app-lr}). 
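For reproducibility, the following is a minimal sketch of this finetuning setup using HuggingFace's \texttt{Trainer}; the dataset preparation is omitted and the function signature is our own (the actual experiments used HuggingFace's stock finetuning script).
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the finetuning setup (simplified; not the exact script used)}]
from transformers import (AutoModelForCausalLM, Trainer,
                          TrainingArguments, EarlyStoppingCallback)

def finetune(model_name, train_dataset, eval_dataset):
    """Finetune a causal LM with early stopping. Datasets are assumed to be
    pre-tokenised dialogue turns, simply concatenated (no speaker labels)."""
    # model_name: "gpt2" or "microsoft/DialoGPT-small"
    model = AutoModelForCausalLM.from_pretrained(model_name)
    args = TrainingArguments(
        output_dir="spoken-bnc-lm",
        learning_rate=1e-4,              # selected in preliminary experiments
        num_train_epochs=5,              # early stopping over 5 epochs
        per_device_train_batch_size=4,   # pairs with max sequence length 512
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
        metric_for_best_model="eval_loss",
        greater_is_better=False,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset,
                      eval_dataset=eval_dataset,
                      callbacks=[EarlyStoppingCallback(early_stopping_patience=1)])
    trainer.train()
    return trainer
\end{lstlisting}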
\begin{table*}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule \textbf{Model} & \textbf{Learning rate} & \textbf{Max sequence length} & \textbf{Batch size} & \textbf{Best epoch} & \textbf{Perplexity finetuned} & \textbf{Perplexity pre-trained} \\ \midrule DialoGPT & 0.0001 & 128 & 16 & 3 & 23.21 & 7091.38 \\ DialoGPT & 0.0001 & 256 & 8 & 4 & 22.26 & 12886.92 \\ DialoGPT & 0.0001 & 512 & 4 & 4 & 21.73 & 21408.32 \\ GPT-2 & 0.0001 & 128 & 16 & 4 & 23.32 & 173.76 \\ GPT-2 & 0.0001 & 256 & 8 & 3 & 22.21 & 159.23 \\ GPT-2 & 0.0001 & 512 & 4 & 3 & 21.55 & 149.82 \\ \bottomrule \end{tabular} } \caption{Finetuning results for GPT-2 and DialoGPT on our finetuning split of the Spoken BNC.} \label{tab:finetuning} \end{table*} \subsection{Learning Rate Selection} \label{sec:app-lr} To find the appropriate learning rate for on-the-fly adaptation (see Section~\ref{sec:modelling}), we randomly select 18 dialogues $D$ from the analysis split of the Spoken BNC and run an 18-fold cross-validation for a set of six candidate learning rates: \num{1e-5}, \num{1e-4}, $\ldots$, 1. We finetune the model on each dialogue using one of these learning rate values, and compute perplexity change (i)~on the dialogue itself (to measure \textit{adaptation}) as well as (ii)~on the remaining 17 dialogues (to measure \textit{generalisation}). We set the Transformer's context window to 50 to reproduce the experimental conditions presented in Section~\ref{sec:estimates}. More precisely, for each dialogue $d \in D$, we calculate the perplexity of our two frozen models (Section~\ref{sec:app-finetuning}) on $d$ and $D \setminus \{d\}$ (which we refer to as $ppl_{before}(d)$ and $ppl_{before}(D)$, respectively). Then, we finetune the models on $d$ using the six candidate learning rates, and measure again the perplexity over $d$ and $D \setminus \{d\}$ (respectively, $ppl_{after}(d)$ and $ppl_{after}(D)$). The change in performance is evaluated according to two metrics: $\frac{ppl_{after}(d) - ppl_{before}(d)}{ppl_{before}(d)}$ measures the degree to which the model has successfully adapted to the target dialogue; $\frac{ppl_{after}(D) - ppl_{before}(D)}{ppl_{before}(D)}$ measures whether finetuning on the target dialogue has caused any loss of generalisation. The learning rate selection results are presented in Figure~\ref{fig:lr}. We select \num{1e-3} as the best learning rate and pick the model finetuned with a maximum sequence length of 512 as our best model. The difference in perplexity reduction (both adaptation and generalisation) is minimal with respect to the model finetuned with a maximum sequence length of 128, but since the analysis split of the Spoken BNC contains turns longer than 128 tokens, we select the 512 version. Similarly to \citet{van2018neural}, we find that finetuning on a dialogue does not cause a loss in generalisation but instead helps the model generalise to other dialogues. Unlike \citet{van2018neural}, who used LSTM language models, we find that learning rates larger than $\num{1e-1}$ cause backpropagation to overshoot, even within a single dialogue. In Figure~\ref{fig:lr}, the bars for $\num{1e-1}$ and $1$ are not plotted because the corresponding data contains infinite perplexity values (due to numerical overflow). The selected learning rate, $\num{1e-3}$, is relatively low for on-the-fly adaptation, but it is still a factor of 10 higher than the best learning rate for the entire dataset. 
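As an illustration, the two selection metrics reduce to a relative perplexity change; in the sketch below the function name and the numeric values are our own, hypothetical examples.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the adaptation and generalisation metrics (hypothetical values)}]
def relative_ppl_change(ppl_before, ppl_after):
    """(ppl_after - ppl_before) / ppl_before; negative means improvement."""
    return (ppl_after - ppl_before) / ppl_before

# Adaptation: perplexity change on the target dialogue d itself.
print(relative_ppl_change(21.55, 18.30))   # e.g. -0.151
# Generalisation: perplexity change on the remaining dialogues D \ {d}.
print(relative_ppl_change(21.55, 21.10))   # e.g. -0.021
\end{lstlisting}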
\begin{figure}[h] \centering \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=\linewidth]{figures/gpt2-w128-bsz16-cv.pdf} \vspace{-2.5em} \caption{} \label{fig:lr-128} \end{subfigure}% \hspace{.01\textwidth} \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=\linewidth]{figures/gpt2-w512-bsz4-cv.pdf} \vspace{-2.5em} \caption{} \label{fig:lr-512} \end{subfigure}% \caption{The adaptation and generalisation performance (defined in Section~\ref{sec:app-lr}) with varying learning rate.} \label{fig:lr} \end{figure} \section{Linear Mixed Effect Models} \label{sec:app-lme} As explained in §\ref{sec:stat-model} of the main paper, we fit a linear mixed effect model using facilitating effect as the response variable and including multilevel random effects grouped by dialogues and individual speakers.\footnote{We also try grouping observations only by dialogue and only by individual speakers. The amount of variance explained (but unaccounted for by the fixed effects) decreases, so we keep the two-level random effects.} The fixed effects of the model, resulting from a backward stepwise selection procedure, are presented in §\ref{sec:stat-model}. Non-binary predictors are log-transformed, mean-centered, and scaled by 2~sd. The final model is summarised in Listing~\ref{lme-fe} and its coefficients are visualised in Figure~\ref{fig:fe-lme-summary}. We rely on the \texttt{lme4} and \texttt{lmerTest} R packages for this analysis. \begin{lstlisting}[label=lme-fe,float=*,frame=tb,caption=Linear mixed effect model for Facilitating Effect]
MODEL INFO:
Observations: 46399
Dependent Variable: Facilitating Effect
Type: Mixed effects linear regression

MODEL FIT:
AIC = 99197.283, BIC = 99302.224
Pseudo-R^2 (fixed effects) = 0.084
Pseudo-R^2 (total) = 0.111

FIXED EFFECTS:
-----------------------------------------------------------------------------------
                                Est.     2.5%    97.5%    t val.        d.f.      p
--------------------------- ------- -------- -------- --------- ----------- -------
(Intercept)                   0.704    0.683    0.725    65.527     185.698   0.000
log Utterance Position        0.046    0.026    0.066     4.556    9274.269   0.000
log Construction Length       0.098    0.084    0.111    14.396   46372.022   0.000
log Repetition Index          0.079    0.063    0.094    10.096   45082.205   0.000
log Distance                 -0.311   -0.328   -0.293   -34.571   46269.156   0.000
Previous Same Utterance      -0.099   -0.184   -0.013    -2.262   46063.723   0.024
log Rep. Index in Utterance   0.178    0.130    0.226     7.243   45765.367   0.000
PMI                          -0.139   -0.154   -0.124   -18.225   45172.205   0.000
Referential                   0.124    0.099    0.149     9.887   46214.616   0.000
-----------------------------------------------------------------------------------

p values calculated using Satterthwaite d.f.

RANDOM EFFECTS:
------------------------------------------------
        Group          Parameter     Std. Dev.
---------------------- ------------- -----------
Speaker:`Dialogue ID`  (Intercept)    0.082
Dialogue ID            (Intercept)    0.090
Residual                              0.701
------------------------------------------------

Grouping variables:
-----------------------------------------
        Group          # groups    ICC
---------------------- ---------- -------
Speaker:`Dialogue ID`  368         0.013
Dialogue ID            185         0.016
-----------------------------------------

Continuous predictors are mean-centered and scaled by 2 s.d.
\end{lstlisting} \section{Further Results} \label{sec:app-further-results} \subsection{Same-Utterance Self-Repetitions} \label{sec:app-same-utterance} We investigate the interaction between cumulativity and recency (see §\ref{sec:results-mentions}) by focusing on densely clustered repetitions, produced by a speaker within a single utterance (the median distance between repetitions in the same utterance is 8 words; across turns it is 370.5 words). Table~\ref{tab:chain-len3} shows an example of same-utterance repetition. Repeating a construction when it has already been mentioned in the current utterance limits its facilitating effect ($\beta\!=\!-0.099, p\!<\!0.05$, \textit{95\% c.i.}\ $-0.184\!:\!-0.013$): if a portion of the utterance already consists of a construction, utterance information content will already be reduced, which in turn reduces the potential for the facilitating effect of repetitions. Nevertheless, we find \textbf{strong cumulativity effects for self-repetitions within the same utterance}: the repetition index \textit{within the current utterance} of a construction mention (i.e., how often the construction has been repeated so far in the utterance) has a positive effect on \textit{FE} ($\beta\!=\!0.178, p\!<\!0.005$, \textit{95\% c.i.}\ $0.130\!:\!0.226$); see Figure~\ref{fig:fe-inturn}. In sum, same-utterance self-repetitions, especially those involving three or more mentions in a single utterance, can have a strong reduction effect on utterance information content. Although this may seem a simple yet very effective strategy for information rate mitigation, it is of limited use in terms of the amount of information actually exchanged. Indeed, speakers do not use this strategy often in the Spoken BNC: 6.82\% of the total construction occurrences have at least one previous mention in the same utterance. \subsection{Interaction-Specificity} To distinguish interaction-specific constructions---those repeated particularly often in certain dialogues---from interaction-agnostic ones, we measure the association strength between a construction $c$ and a dialogue $d$ as the pointwise mutual information (PMI) between the two: \begin{align} \operatorname{PMI}(c,d) = \log_2 \frac{P(c|d)}{P(c)} \label{eq:pmi} \end{align} This quantifies how unusually frequent a construction is in a given dialogue, compared to the rest of the corpus. For example, for a construction to obtain a PMI score of 1, its probability given the dialogue $P(c|d)$ must be twice as high as its prior probability $P(c)$. Low PMI scores (especially below 1) characterise interaction-agnostic constructions, whereas higher PMI scores indicate that constructions are specific to a given dialogue. The probabilities in Eq.~\ref{eq:pmi} are obtained using maximum likelihood estimation over the analysis split of the Spoken BNC. PMI scores have a negative effect on \textit{FE} ($\beta\!=\!-0.139, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.154\!:\!-0.124$), indicating that interaction-agnostic constructions have a stronger facilitating effect than interaction-specific ones. Figure~\ref{fig:fe-pmi} shows the \textit{FE} distributions for the most extreme cases: constructions with a PMI lower than 1 (`agnostic') and constructions that have been repeated in only one dialogue (`specific'). \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/lme-coefficients.png} \caption{Significant predictors of facilitating effect. 
Mixed effects linear regression, continuous predictors are mean-centred and scaled by 2 standard deviations.} \label{fig:fe-lme-summary} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=\linewidth]{figures/fe-repindex-in-utt.png} \caption{} \label{fig:fe-inturn} \end{subfigure} \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=\linewidth]{figures/fe-pmi.png} \caption{} \label{fig:fe-pmi} \end{subfigure} \caption{Facilitating effect against repetition index \textit{within the current utterance} (a) and facilitating effect of interaction-agnostic constructions ($\operatorname{PMI}(c,d)< 1$) vs.\ interaction-specific constructions ($\operatorname{PMI}(c,d) = \max_{c',d'} \operatorname{PMI}(c', d')$) (b).} \end{figure} \section{Computing Infrastructure and Budget} \label{sec:app-compute} Our experiments were carried out using a single GPU on a computer cluster with Debian Linux OS. The GPU nodes on the cluster are NVIDIA GeForce GTX 1080 Ti (11GB GDDR5X), with NVIDIA driver version 418.56 and CUDA version 10.1. The total computational budget required to finetune the language model amounts to 45 minutes; obtaining surprisal estimates requires 4 hours, and selecting the adaptation learning rate requires 9 hours. \section{Background} \label{sec:background} \subsection{Constructions} \label{sec:background-constructions} This work focuses on \textit{constructions}, seen as particular configurations of structures and lexemes in usage-based accounts of natural language \cite{tomasello2003constructing,bybee2006usage,bybee2010language,goldberg2006constructions}. According to these accounts, models of language processing must consider not only individual lexical elements according to their syntactic roles but also more complex form-function units, which can break regular phrasal structures---e.g., \textit{`I~know I'}, \textit{`something out of'}. We further focus on fully lexicalised constructions (sometimes called \textit{formulaic expressions}, or \textit{multi-word expressions}). Commonly studied types of constructions are idioms (\textit{`break the ice'}), collocations (\textit{`pay attention to'}), phrasal verbs (\textit{`make up'}), and lexical bundles (\textit{`a lot of the'}). In §\ref{sec:extraction}, we explain how the notion of lexicalised construction is operationalised in the current study; Table~\ref{tab:construction-examples} shows some examples. \begin{table}[t] \centering \small \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}p{0.151\textwidth} p{0.155\textwidth} p{0.134\textwidth}@{}} \toprule \textbf{SPXV} & \textbf{SAXQ} & \textbf{S9YG} \\ \midrule want to be with him & \textit{it on the television} & I bet you can \\ \textit{shit like that} & \textit{for a family} & yeah I used to \\ I can be & think that's a & \textit{go to bed} \\ to see her & \textit{the orient express} & and I love \\ and she just & one thing that & \textit{the window and} \\ I quite like & \textit{one of my favourites} & and I think it's \\ you don't like & \textit{on the television} & yeah I think so \\ and you're like & yes yeah I & \textit{the same people} \\ going to go & erm I think & is she in \\ you're going to & a really good & \textit{lock the door} \\ \bottomrule \end{tabular} } \caption{Top 10 constructions from three dialogues of the Spoken BNC \cite{love2017spoken}, sorted according to the PMI between a construction and its dialogue (§\ref{sec:stat-model}). Referential constructions in italics (§\ref{sec:extraction}). 
Headers correspond to the dialogues' IDs in the corpus.} \label{tab:construction-examples} \end{table} A common property of constructions is their frequent occurrence in natural language. As such, they possess what, in usage-based accounts, is sometimes referred to as `processing advantage' \cite{conklin2012processing,carrol2020all}. Evidence for the processing advantage of construction usage has been found in reading \cite{arnon2010more,tremblay2011processing}, naming latency \cite{bannard2008stored,janssen2012phrase}, eye-tracking \cite{underwood2004eyes,siyanova2011seeing}, and electrophysiology \cite{tremblay2010holistic,siyanova2017representation}. In this paper, we model this processing advantage as reduced information content and show that it can mitigate information rate throughout entire dialogues. \subsection{Information Content, Surprisal, and Processing Effort} \label{sec:background-surprisal} Estimates of information content have been shown to be good predictors of processing effort in perception \cite{jelinek1975design,clayards2008perception}, reading \cite{keller2004entropy,demberg2008data,levy2009eye}, and sentence interpretation \cite{levy2008noisy,gibson2013rational}. In these studies, information content is typically referred to as \textit{surprisal}, taken as a measure of how unpredictable, unlikely, or surprising a linguistic signal is in its context. As speakers take into consideration their addressee's processing effort \cite{clark1986referring,clark1989contributing}, their linguistic choices can often be explained as strategies to manage the fluctuations of information content over time. Surprisal-based accounts have indeed been successful at explaining various aspects of language production: speakers tend to reduce the duration of less surprising sounds \cite{aylett2004smooth,aylett2006language,bell2003effects,demberg2012syntactic}; they are more likely to drop sentential material within less surprising scenarios \cite{jaeger2007speakers,frank2008speaking,jaeger2010redundancy}; they tend to overlap at low-surprisal dialogue turn transitions \cite{dethlefs2016information}; and they produce sentences at a constant information rate in texts \cite{genzel2002entropy,qian2011topic,giulianelli2021analysing}. To measure information content we use GPT-2 \cite{radford2019language}, a neural language model. We thereby follow the established approach~\cite[e.g.,][]{genzel2002entropy,keller2004entropy,xu2018information} of using language models to estimate information content. Neural models' estimates in particular have been shown to be good predictors of processing effort, measured as reading time, gaze duration, and N400 response \cite{monsalve2012lexical,goodkind-2018-predictive,merkx2021human,schrimpf2021neural}. We further implement a simple neural adaptation mechanism, performing continuous gradient updates based on utterance prediction error; this not only leads to a more psychologically plausible model but also to the estimation of more human-like expectations \cite{van2018neural}.\looseness=-1 \section{Discussion and Conclusions} \label{sec:conclusion} Construction repetition is a pervasive phenomenon in dialogue; constructions' frequent occurrence gives them a processing advantage~\cite{conklin2012processing}. In this paper, we show that the processing advantage of constructions can be naturally modelled as reduced information content and propose that speakers' production of constructions can be seen as a strategy for information rate mitigation. 
This strategy can explain why utterance information content is often found to decrease over the course of dialogues~\cite{vega2009looking,giulianelli2021analysing}, in contrast with the predictions of theories of optimal use of the communication channel~\cite{genzel2002entropy}. We observe that, as predicted, construction usage in English open-domain spoken dialogues mitigates the information rate of utterances. Furthermore, while constructions are produced at a stable rate throughout dialogues, their facilitating effect---our proposed measure of reduction in utterance information content---increases over time. We find that this increment is led by construction repetition, with facilitating effect being positively affected by repetition frequency and density, as well as by the contents of a construction. Repetitions of referential constructions reduce utterance information content more aggressively, arguably making them a more cost-reducing alternative to the shortening strategy observed in chains of referring expressions~\cite{krauss1964changes,krauss1967effect}, which instead tends to preserve rate constancy \cite{giulianelli2021dialogues}.\footnote{ Expression shortening is more efficient, however, in terms of articulatory cost. }\looseness-1 \paragraph{Relation to cognitive effort} We consider repetitions as a way for speakers to make dialogic interaction less cognitively demanding both on the production and on the comprehension side. This is not at odds with the idea that repetitions are driven by interpersonal synergies \cite{fusaroli2014dialog} and coordination \cite{sinclair-fernandez-2021-construction}. We think that the operationalisation of these higher level processes can be described by means of lower level, efficiency-oriented mechanisms, with synergy and coordination both corresponding to reduced collaborative effort. Although information content estimates from neural language models have been shown to correlate with human processing effort (cf.\ §\ref{sec:background-surprisal}), we cannot claim that our work directly models human cognitive processes as we lack the relevant human data to measure such correlation for the corpus at hand. \paragraph{Adaptive language model} Our decision to use an adaptive neural language model affects information content estimates in two main ways. On the one hand, due to their high frequency, constructions are likely to be assigned higher probabilities by this model, and therefore lower information content. We stress that we do not present constructions' lower information content as a novel result, nor do we make any claims based on this result. As explained in §\ref{sec:preliminary-construction}, this is a precondition for our experiments on the facilitating effect of constructions, which is not determined exclusively by their information content (as empirically shown in §\ref{sec:ic-vs-fe}) but rather measures the effect of construction usage on the information content of entire utterances. On the other hand, because our model is adaptive, the probability of constructions is likely to increase as a result of their appearance in the dialogue history. Adaptation, however, also contributes to lower utterance information content \textit{overall} through the exploitation of topical and stylistic cues, as demonstrated by the lower perplexity of the adaptive model on the entire target dialogue as well as on other dialogues from the same dataset (see §\ref{sec:modelling} and Appendix~\ref{sec:app-lr}). 
In conclusion, while our adaptive language model assigns higher probabilities to frequently repeated tokens---as expected from a psychologically plausible model of utterance processing---it is not responsible for the discovered patterns of construction facilitating effect. In future work, the model can be improved, e.g., by conditioning on the linguistic experience of individual speakers. \paragraph{Types of dialogue} To consolidate our findings, construction repetition patterns should also be studied in dialogues of different genres and on datasets where utterance information content was not found to decrease. We have chosen the Spoken BNC for our study as it contains dialogues from a large variety of real-life contexts, which makes it a representative dataset of open-domain dialogue. In task-oriented dialogue, we expect constructions to consist of a more limited, task-specific vocabulary, resulting in longer chains of repetition and potentially more frequent referential construction usage. These peculiarities of task-oriented dialogue may influence the strength of the facilitating effect (as we have seen, facilitating effect is affected by both frequency and referentiality) but we expect our main results to still hold, as they are generally related to the processing advantage of constructions. \looseness-1 \paragraph{Relevance for dialogue generation models} Besides contributing new empirical evidence on construction usage in dialogue, our findings inform the development of more naturalistic utterance generation models. They suggest that models should be continually updated for their probabilities to better reflect human expectations; that attention mechanisms targeting contexts of different sizes (local vs.\ global) may have a significant impact on the naturalness of generated utterances; and that while anomalous repetitions (e.g., generation loops) should be prevented \cite{li2016deep,holtzman2019curious}, it is important to ensure that natural sounding repetitions are not suppressed. We expect dialogue systems that are able to produce human-like patterns of repetitions to be perceived as more natural overall---with users having the feeling that common ground is successfully maintained~\cite{PickeringGarrod2004}---and to lead to more effective communication~\cite{reitter2014alignment}. In our view, such human-like patterns can be reproduced by steering generation models towards the trends of information rate observed in humans. \section{Data} \label{sec:data} We conduct our study on the Spoken British National Corpus\footnote{\url{http://www.natcorp.ox.ac.uk}.}~\cite{love2017spoken}, a dataset of transcribed open-domain spoken dialogues containing 1,251 contemporary British English conversations, collected in a range of real-life contexts. We focus on the 622 dialogues that feature only two speakers, and randomly split them into a 70\% finetuning set (to be used as described in §\ref{sec:method}) and a 30\% analysis set (used in our experiments, as described in §\ref{sec:preliminary} and~§\ref{sec:results}). Table~\ref{tab:bnc-length} shows some statistics of the dialogues used in this study. 
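The dialogue selection and data split can be sketched as follows; the corpus-loading step and the dialogue representation are hypothetical, and only the two-speaker filter and the 70/30 random split reflect our actual setup.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the dialogue selection and split (hypothetical data representation)}]
import random

def split_dialogues(dialogues, finetune_frac=0.7, seed=0):
    """Keep two-speaker dialogues; split into finetuning and analysis sets."""
    two_speaker = [d for d in dialogues if len(d["speakers"]) == 2]
    random.Random(seed).shuffle(two_speaker)
    cut = round(finetune_frac * len(two_speaker))
    # e.g. 435 finetuning and 187 analysis dialogues out of 622
    return two_speaker[:cut], two_speaker[cut:]
\end{lstlisting}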
\begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lcccc@{}} \toprule & \textbf{Mean $\pm$ Sd} & \textbf{Median} & \textbf{Min} & \textbf{Max} \\ \midrule \textbf{Dialogue length (\# utterances)} & 736 $\pm$ 599 & 541.5 & 67 & 4859 \\[2pt] \textbf{Dialogue length (\# words)} & 7753 $\pm$ 5596 & 6102 & 819 & 39575 \\[2pt] \textbf{Utterance length (\# words)} & 11 $\pm$ 15 & 6 & 1 & 982 \\ \bottomrule \end{tabular} } \caption{ Two-speaker dialogue statistics, Spoken BNC.} \label{tab:bnc-length} \end{table} \subsection{Extracting Repeated Constructions} \label{sec:extraction} We define constructions as multi-word sequences repeated within a dialogue. To extract constructions from each dialogue, we use the sequential pattern mining method proposed by \citet{duplessis2017utterance,duplessis2017automatic,duplessis2021towards}, which treats the extraction task as an instance of the longest common subsequence problem \cite{hirschberg1977algorithms,bergroth2000survey}.\footnote{Their code is freely available at \url{https://github.com/GuillaumeDD/dialign}.} We modify it to not discard multiple repetitions of a construction that occur in the same utterance. We focus on constructions of at least three tokens, uttered at least three times in a dialogue by any of the dialogue participants. Repeated sequences that mostly appear as a sub-part of a larger construction are discarded.\footnote{We discard constructions that appear less than twice outside of a larger repeated construction in a given dialogue (e.g., \textit{`think of it'} vs.\ \textit{`think of it like'}).} We also exclude sequences containing punctuation marks or which consist of more than 50\% filled pauses (e.g., \textit{`mm'}, \textit{`erm'}).\footnote{The full list of filled pauses can be found in Appendix~\ref{sec:app-extraction}.} Applying the described extraction procedure to the 187 dialogues in the analysis split of the Spoken BNC yields a total of 5,893 unique constructions and 60,494 occurrences. Further statistics of the extracted constructions are presented in Table~\ref{tab:construction-stats}, and Table~\ref{tab:construction-examples} shows 10 example constructions extracted from three dialogues. For analysis purposes, we distinguish between referential and non-referential constructions. We label a construction as \textit{referential} if it includes nouns, unless the nouns are highly generic.\footnote{We define a limited specific vocabulary of generic nouns (e.g., \emph{`thing'}, \emph{`fact'}, \emph{`time'}); the full vocabulary is in Appendix~\ref{sec:app-extraction}.} Referential constructions are mostly topic-determined; examples are \emph{`playing table tennis'}, \emph{`a woolly jumper'}, \emph{`a room with a view'}. The remaining constructions are labelled as \textit{non-referential}. These mainly include topic-independent expressions and conversational markers, such as \emph{`a lot of'}, \emph{`I don't know'}, and \emph{`yes of course'}. Our dataset consists of 5,291 referential and 55,203 non-referential construction occurrences (1,143 and 4,750 unique construction forms, respectively); see Table~\ref{tab:construction-examples} for further examples. 
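As an illustration, the referential/non-referential labelling can be approximated with an off-the-shelf POS tagger and the generic-noun vocabulary (shortened below; the full list is in Appendix~\ref{sec:app-extraction}). This is a sketch under our own assumptions; the tagger used in our pipeline may differ.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the referential labelling (approximation with an off-the-shelf tagger)}]
import nltk  # requires: nltk.download("averaged_perceptron_tagger")

# Shortened version of the generic-noun vocabulary.
GENERIC_NOUNS = {"thing", "things", "fact", "time", "lot", "way", "sort", "stuff"}

def is_referential(construction):
    """Referential iff the construction contains a non-generic noun."""
    tagged = nltk.pos_tag(construction.split())
    return any(tag.startswith("NN") and word.lower() not in GENERIC_NOUNS
               for word, tag in tagged)

print(is_referential("playing table tennis"))  # True ('table', 'tennis' are nouns)
print(is_referential("a lot of the"))          # False ('lot' is generic)
\end{lstlisting}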
\begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lccc@{}} \toprule & \textbf{Mean $\pm$ Sd} & \textbf{Median} & \textbf{Max} \\ \midrule \textbf{Construction Length} & 3.27 $\pm$ 0.58 & 3 & 7 \\[2pt] \textbf{Construction Frequency} & 4.29 $\pm$ 3.04 & 3 & 70 \\[2pt] \textbf{Constructions per Dialogue} & 325.34 $\pm$ 458.64 & 149 & 2817 \\[2pt] \ \ \ \ \small \textbf{\textit{Referential}} & \small 30.96 $\pm$ 39.75 & \small 19 & \small 346 \\[2pt] \ \ \ \ \small \textbf{\textit{Non-Referential}} & \small 296.88 $\pm$ 424.17 & \small 134.5 & \small 2530 \\[2pt] \textbf{Utterance Length} & 31.19 $\pm$ 36.19 & 21 & 959 \\ \bottomrule \end{tabular} } \caption{Construction statistics for our analysis split of the Spoken BNC. \textit{Constr.\ Frequency}: occurrences of a given construction in a dialogue. \textit{Constr.\ per Dialogue}: occurrences of all constructions in a dialogue. \textit{Utterance Length}: number of words in utterances containing a construction. The minimum is always 3 by design (§\ref{sec:extraction}). The difference between referential and non-referential is only significant for \textit{Constr.\ per Dialogue}.} \label{tab:construction-stats} \end{table} \section{Introduction} \label{sec:introduction} The repeated use of particular configurations of structures and lexemes, \textit{constructions}, is pervasive in conversational language use~\cite{tomasello2003constructing,goldberg2006constructions}. Such repetition can be understood as a surface level signal of processes of coordination~\cite{sinclair-fernandez-2021-construction} or `interpersonal synergy' between conversational partners~\cite{fusaroli2014dialog}. Speakers may use repetitions to successfully maintain common ground with their interlocutors~\cite{BrennanClark1996,PickeringGarrod2004}, because they are primed by their recent linguistic experience \cite{bock1986syntactic}, or to avoid a costly on-the-fly search for alternative phrasings~\cite[see, e.g.,][]{kuiper1995smooth}. At the same time, repetitions are also advantageous for comprehenders. Repeating a sequence of words positively reshapes expectations for those words, allowing comprehenders to process them more rapidly~\cite[for a review, see][]{bigand2005repetition}. As speakers are known to take into consideration both their own production cost and their addressee's processing effort~\cite{clark1986referring,clark1989contributing,frank2012predicting}, the two-sided processing advantage described above makes construction repetition an efficient, cost-reducing communication strategy. In this paper, we investigate whether and how these information-theoretic properties of repetitions shape patterns of information rate in open-domain spoken dialogue. Information theory is the study of the conditions affecting the transmission and processing of information. To the foundations of the field belongs the noisy-channel coding theorem~\cite{shannon1948}, which states that for any given degree of noise in a communication channel, it is possible to communicate discrete signals nearly error-free up to a maximum information rate, the \textit{channel capacity}. If speakers use the communication channel optimally, they might send information at a rate that is always close to the channel capacity. 
This observation is at the basis of the principle of Entropy Rate Constancy~\cite[ERC;][]{genzel2002entropy}, which predicts that the information rate of speakers' utterances, measured as the utterance conditional entropy (i.e., its in-context \textit{Shannon information content} or \textit{information density}), remains constant throughout discourse. The ERC predictions have been empirically confirmed for written language production~\cite{genzel2002entropy,genzel2003variation,qian2011topic} but results on dialogue are mixed \cite{vega2009looking,doyle2015audience,doyle-frank-2015-shared,xu2018information,giulianelli2021dialogues}, with some studies suggesting a decreasing information rate over the course of dialogues~\cite{vega2009looking,giulianelli2021analysing}. We hypothesise that this decreasing trend in dialogue may be associated with construction repetition. We conjecture that speakers use construction repetition as a strategy for information rate mitigation, by padding the more information-dense parts of their utterances with progressively less information-dense constructions---leading to an overall decrease in information rate over the course of a dialogue. \looseness=-1 We extract occurrences of fully lexicalised constructions (see Table~\ref{tab:construction-examples} for examples) from a corpus of open-domain spoken dialogues and use a Transformer-based neural language model to estimate their contribution to utterance information content. First, we confirm that constructions indeed exhibit lower information content than other expressions and that information content further decreases when constructions are repeated. Then, we show that the decreasing trend of information content observed \textit{over utterances}---which contradicts the ERC principle---is driven by the increasing mitigating effect of construction repetition, measured as a construction's (increasingly) negative contribution to the information content of its containing utterance, what we call its \textit{facilitating effect}. In sum, our study provides new empirical evidence that dialogue partners use construction repetition as a strategy for information rate mitigation, which can explain why the rate of information transmission in dialogue, in contrast to the constancy predicted by the theory~\cite{genzel2002entropy}, is often found to decrease. Our findings inform the development of better dialogue models. They indicate, as suggested in related work \cite[e.g.,][]{xi2021taming}, that while avoiding \textit{degenerate} repetitions in utterance generation \cite{li2016deep,welleck2019neural} is an appropriate strategy, dialogue systems should not suppress \textit{human-like} patterns of repetition, as these make automatic systems come across as more natural and more effective in conversational settings. 
\section{Experimental Setup} \label{sec:method} In this section, we define our information-theoretic measures and present the adaptive language model used to produce information content estimates.\footnote{Code and statistical analysis are available at \url{https://github.com/dmg-illc/uid-dialogue}.} \subsection{Information Content Measures} \label{sec:estimates} The \textit{information content} of a word choice $w_i$ is the negative logarithm of the corresponding word probability, conditioned on the utterance context~$u_{:w_i}$ (i.e., the words that precede $w_i$ in utterance~$u$) and on the local dialogue context $l$: \begin{align} H(w_i|u_{:w_i},l) = - \log_2 P(w_i | u_{:w_i},l) \label{eq:word-surprisal} \end{align} We define the local dialogue context~$l$ as the 50 tokens that precede the first word in the utterance.\footnote{Building on prior work \cite{reitter2006computational} that uses a window of 15 seconds of spoken dialogue as the locus of local repetition effects, we compute the average speech rate in the Spoken BNC (3.16 tokens/second) and multiply it by 15; we then round up the result (47.4) to 50 tokens.} We use tokens as a unit of context size, rather than utterances, since they more closely correspond to the temporal units used in previous work \cite[e.g.,][]{reitter2006computational}, and since the length of utterances can vary significantly (see Table~\ref{tab:bnc-length}). To measure the information content of a construction~$c$, we average over word-level information content values: \begin{align} H(c;u,l) = \frac{1}{|c|} \sum_{w_i \in c} H(w_i|u_{:w_i},l) \label{eq:construction-surprisal} \end{align} We use the same averaging strategy to compute the information content of entire utterances, following prior work~\cite[e.g.,][]{genzel2002entropy,xu2018information}: \begin{align} H(u;l) = \frac{1}{|u|} \sum_{w_i \in u} H(w_i|u_{:w_i},l) \label{eq:utterance-surprisal} \end{align} The above information content estimates target constructions and entire utterances but they do not qualify the relationship between the two. We also measure the information content change (increase or reduction in information rate) contributed by a construction $c$ to its containing utterance, which we call the \textit{facilitating effect} of a construction. Facilitating effect is defined as the logarithm of the ratio between the information content of a construction's utterance context and that of the construction itself: \begin{align} {\it FE}(c;u,l) = \log_2 \frac{\frac{1}{|u|-|c|} \sum_{c \not\ni w_j \in u} H(w_j|u_{:w_j},l)}{\frac{1}{|c|} \sum_{w_i \in c} H(w_i|u_{:w_i},l)} \label{eq:facilitating-effect} \end{align} By definition, this quantity is positive when the construction has lower information content than its context, and negative when it has higher information content. When the utterance consists of a single construction, facilitating effect is set to 0. We can expect the values produced by our information content and facilitating effect measurements (Eq.\ \ref{eq:construction-surprisal} and \ref{eq:facilitating-effect}, respectively) to correlate: it is more likely for a construction to have a (positive) facilitating effect if its information content is low. When a construction's information content is high, the information content of its utterance context must be even greater for facilitating effect to occur. Nevertheless, perfect correlation does not follow a priori from the definition of the two measures; we will show this empirically in §\ref{sec:ic-vs-fe}. 
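To make these definitions concrete, the following sketch computes the measures with a causal language model. It is a simplification of our pipeline, not the exact implementation: GPT-2 operates over BPE tokens rather than words, and the function names and the boolean mask \texttt{in\_construction} (marking which tokens belong to the construction) are our own assumptions.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the information content and facilitating effect measures (simplified, token-level)}]
import math
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_information_content(utterance, context):
    """Per-token -log2 P(token | local context, preceding tokens), in bits."""
    # Assumes the context/utterance boundary is preserved by BPE
    # (typically true when the two are joined with a space).
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.size(1)
    ids = tokenizer(context + " " + utterance, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)  # predictions for positions 1..n-1
    token_lp = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return -token_lp[ctx_len - 1:] / math.log(2)       # utterance tokens only

def utterance_information_content(utterance, context):
    """Mean per-token information content of the utterance."""
    return token_information_content(utterance, context).mean().item()

def facilitating_effect(h, in_construction):
    """log2 ratio of mean context vs. mean construction information content."""
    if in_construction.all():  # utterance consists of a single construction
        return 0.0
    return math.log2(h[~in_construction].mean() / h[in_construction].mean())
\end{lstlisting}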
\subsection{Language Model} \label{sec:modelling} To estimate the per-word conditional probabilities that are necessary to compute information content (Eq.\ \ref{eq:word-surprisal}), we use an adaptive language model. The model is conditioned on local contextual cues via an attention mechanism \cite{vaswani2017attention} and it learns continually \cite[see, e.g.,][]{krause2018dynamic} from exposure to the global dialogue context. We use GPT-2 \cite{radford2019language}, a pre-trained autoregressive Transformer language model. We rely on HuggingFace's implementation of GPT-2 with default tokenizers and parameters \cite{wolf2020transformers} and finetune the pre-trained model on a 70\% training split of the Spoken BNC to adapt it to the idiosyncrasies of spoken dialogic data.\footnote{More details on finetuning can be found in Appendix~\ref{sec:app-finetuning}.} We refer to this finetuned version as the \textit{frozen} model. We use an attention window of length $|u_{:w_i}| + 50$, i.e., the sum of the utterance length up to word $w_i$ and the size of the local dialogue context. As a continual learning mechanism, we use back-propagation on the cross-entropy next word prediction error, a simple yet effective adaptation approach motivated in §\ref{sec:background-surprisal}. Following \citet{van2018neural}, when estimating information content for a dialogue, we begin by processing the first utterance using the frozen language model and then gradually update the model parameters after each turn. For these updates to have the desired effect, the learning rate should be appropriately tuned. It should be sufficiently high for the language model to adapt during a single dialogue, yet an excessively high learning rate can cause the language model to lose its ability to generalise across dialogues. To find the appropriate rate, we randomly select 18 dialogues from the analysis split of the Spoken BNC\footnote{This amounts to ca.~10\% of the analysis split. We use the analysis split because there is no risk of ``overfitting'' with respect to our main analyses.} and run an 18-fold cross-validation for a set of six candidate learning rates: \num{1e-5}, \num{1e-4}, $\ldots$,~1. We finetune the model on each dialogue using one of these learning rates and compute perplexity reduction (i)~on the dialogue itself (\textit{adaptation}) as well as (ii)~on the remaining 17 dialogues (\textit{generalisation}). We select the learning rate yielding the best adaptation over cross-validation folds (\num{1e-3}), while still improving the model's generalisation ability. See Appendix~\ref{sec:app-lr} for further details. \looseness=-1 \section{Preliminary Experiments} \label{sec:preliminary} In this section, we present preliminary experiments on the information content of utterances and constructions, which set the stage for our analysis of the facilitating effect of construction repetition. \subsection{Utterance Information Content} \label{sec:preliminary-utterance} \looseness=-1 Our experiments are motivated by the mixed results on the dynamics of information rate in dialogue discussed in §\ref{sec:introduction}. We thus begin by testing if the Entropy Rate Constancy (ERC) principle holds in the Spoken BNC, i.e., whether utterance information content remains stable over the course of a dialogue. 
Following a procedure established in prior work~\cite{xu2018information}, we fit a linear mixed effect model with the logarithm of utterance position and utterance length as fixed effects (we will refer to their coefficients as $\beta$), and include multi-level random effects grouped by dialogue. \textit{For the ERC principle to hold, the position of an utterance within a dialogue should have no effect on its information content.} Instead, we find that utterance information content decreases significantly over time ($\beta\!=\!-0.119, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.130\!:\!-0.108$), in line with previous negative results on open-domain and task-oriented dialogue~\cite{vega2009looking,giulianelli2021analysing}. The strongest drop occurs in the first ten dialogue utterances ($\beta\!=\!-0.886, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.954\!:\!-0.818$) but the decrease is still significant for later utterances ($\beta\!=\!-0.043, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.054\!:\!-0.032$). \subsection{Construction Information Content} \label{sec:preliminary-construction} Our hypothesis that construction repetition progressively reduces the information rate of utterances is motivated by the fact that constructions are known to have a processing advantage (see §\ref{sec:introduction} and §\ref{sec:background-constructions}). This property makes them an efficient production strategy, i.e., one that reduces the speaker's and addressee's collaborative effort. Before investigating if the hypothesised information rate mitigation strategy is at play, we test whether our information theoretic measures and the language model used to generate them are able to capture processing advantage: \textit{we expect our framework to yield lower information content estimates (Eq.\ \ref{eq:construction-surprisal}) for constructions than for other word sequences.} Indeed, the information content of constructions is significantly lower than that of non-construction sequences ($t\!=\!-168.82, p\!<\!0.005$, \textit{95\% c.i.}\ $-2.033\!:\!-1.987$).\footnote{We extract all 3- to 7-grams from our analysis split of the Spoken BNC, excluding all \textit{n}-grams that are equal to extracted constructions. We then sample, for each length $n$ from 3 to 7, $s_n$ non-construction sequence occurrences---where $s_n$ is the number of occurrences of $n$-tokens-long extracted constructions.\label{ftn:non-constructions} The length distributions should match because length has an effect on \textit{H} and \textit{FE} (see §\ref{sec:construction-types}).} Constructions' information content is on average 2 bits lower than that of non-constructions. We conclude that our estimates of information content are a sensible model of the processing advantage of constructions. \subsection{Stable Rate of Construction Usage} \label{sec:stable-usage} In §\ref{sec:preliminary-construction}, we confirmed that constructions have lower information content than other utterance material. A simple strategy to decrease utterance information content over dialogues (we do observe this decrease in the Spoken BNC, as described in §\ref{sec:preliminary-utterance}) could then simply be to increase the rate of construction usage. To test if this strategy is at play, we fit a linear mixed effect model with utterance position as the predictor and the proportion of construction tokens in an utterance as the response variable. 
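For illustration, such a model can be fit as in the sketch below. We use synthetic stand-in data and Python's statsmodels here, whereas the analyses in the paper were run in R with \texttt{lme4}; the column names are our own.
\begin{lstlisting}[language=Python,frame=tb,caption={Sketch of the mixed effect model fit (synthetic data; the paper uses R/lme4)}]
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per utterance.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "log_position": np.log(rng.integers(1, 500, size=n)),
    "constr_proportion": rng.uniform(0, 1, size=n),
    "dialogue_id": rng.integers(0, 50, size=n),
})

# Fixed effect of log utterance position; random intercepts per dialogue.
fit = smf.mixedlm("constr_proportion ~ log_position", df,
                  groups=df["dialogue_id"]).fit()
print(fit.summary())
\end{lstlisting}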
Over the course of a dialogue, the increase in the proportion of an utterance's tokens which belong to a construction is negligible ($\beta\!=\!0.004, p\!<\!0.05$, \textit{95\% c.i.}\ $0.001\!:\!0.008$). Speakers produce constructions at a stable rate (see also Figure~\ref{fig:proportions} in Appendix~\ref{sec:app-extraction}), indicating that an alternative strategy for information rate reduction is at work.\looseness-1 \begin{table*} \centering \small \resizebox{\textwidth}{!}{ \begin{tabular}{cccclccc} \toprule \textbf{Speaker} & \textbf{RI} & \textbf{RI~Utt} & \textbf{Dist} & \textbf{Turn} & $\boldsymbol{H(u)}$ & $\boldsymbol{H(c)}$ & $\boldsymbol{FE(c;u)}$ \\ \midrule A & 0 & 0 & - & Drink? that was what he did yeah just just to just to know that & 5.99 & 4.73 & 0.40 \\ & & & & \ \ I he \textbf{might not be} a complete twat but just a fyi & & & \\ \midrule B & 1 & 0 & 1586 & Especially for my birthday mind you I \textbf{might not be} here for & 5.04 & 4.01 & 0.53 \\ & 2 & 1 & 14 & \ \ mine and I went what do you mean you \textbf{might not be} here? & & 2.70 & 0.90\\ \bottomrule \end{tabular} } \caption{ Repetition chain for the construction \textit{`might not be'} in dialogue SXWH of the Spoken BNC, annotated with repetition index (RI), RI in utterance (RI~Utt), and distance from previous mention (Dist; in tokens). $H(u)$ is the utterance information content, $H(c)$ and $FE(c;u)$ are the construction's information content and facilitating effect. } \label{tab:chain-len3} \end{table*} \subsection{Information Content vs.\ Facilitating Effect} \label{sec:ic-vs-fe} The facilitating effect \textit{FE} of a construction is a function of its information content and the information content of its containing utterance~(Eq.~\ref{eq:facilitating-effect}). To ensure that our estimates of \textit{FE} are not entirely determined by construction information content (cf.\ §\ref{sec:estimates}), we inspect the relation between the two measures empirically, by looking at the values they take in our dataset of constructions. We find that the Kendall's rank-correlation between \textit{FE} and information content is $-0.623$ ($p\!<\!0.005$): although this is a rather strong negative correlation, the fact that the score is not closer to $-1$ indicates that there are cases where the two values are both either high or low. We indeed find examples of constructions with high information content \textit{H} and high facilitating effect \textit{FE}: \vspace{0.4em} \\ \noindent \resizebox{\columnwidth}{!}{ \begin{tabular}{l@{\ }l@{}} A: & we'll level that right press p purchase and \\ B: & right \\ A: & \textbf{go back to} recommended \textit{\textbf{(}}$\boldsymbol{H\!=\!5.30 \ \ FE\!=\!1.65}$\textit{\textbf{)}} \ \ \ \vspace{0.8em} \\ \end{tabular} } as well as cases where information content is low and facilitating effect is low or negative: \vspace{0.4em} \\ \noindent \resizebox{\columnwidth}{!}{ \begin{tabular}{l@{\ }l@{}} A: & right let's go and have a drink \\ B: & yeah \\ A: & \textbf{let's go and have} a drink \textit{\textbf{(}}$\boldsymbol{H\!=\!2.10 \ \ FE\!=\!-2.21}$\textit{\textbf{)}} \vspace{0.8em} \\ \end{tabular} } These examples have been selected among occurrences with \textit{H}/\textit{FE} higher or lower than the mean \textit{H}/\textit{FE} $\pm$ sd, respectively $3.62\!\pm\!1.48$ and $0.62\!\pm\!0.73$. Further analysis shows that this is not only true for individual instances but for entire groups of constructions. 
In particular, although their information content is overall higher ($t\!=\!13.511, p\!<\!0.005$, \textit{95\% c.i.}\ $0.371\!:\!0.497$), referential constructions also have higher facilitating effect than non-referential ones ($t\!=\!3.115, p\!<\!0.005$, \textit{95\% c.i.}\ $0.016\!:\!0.072$). We conclude that the two measures capture different aspects of a construction's information rate profile, with facilitating effect being sensitive to both construction and utterance information content. \section{The Facilitating Effect of Construction Repetition} \label{sec:results} We now test whether constructions have a positive facilitating effect, i.e., whether they reduce the information content of their containing utterances. We present our main statistical model in §\ref{sec:stat-model}, describe the effects of \textit{FE} predictors specific to unique construction mentions in §\ref{sec:results-mentions}, and analyse differences between types of constructions in §\ref{sec:construction-types}.\looseness-1 \subsection{Method} \label{sec:stat-model} \looseness=-1 To understand what shapes a construction's facilitating effect, we collect a set of motivated features that can be expected to be informative \textit{FE} predictors. We fit a linear mixed effect (LME) model using (i)~these features as fixed effects, (ii)~\textit{FE} as the response variable, and (iii)~multi-level random effects grouped by dialogue and individual speaker ID. The first predictor is \textit{utterance position}, i.e., the index of the utterance within the dialogue, which allows us to test if \textit{FE} increases over the course of a dialogue. We then include predictors that distinguish different types of repetition. Since we expect a construction mention to increase expectation for subsequent occurrences---thus reshaping their information content---we consider its \textit{repetition index}, i.e., how often the construction has been repeated so far in the dialogue. Expectation is also shaped by intervening material, so we additionally track \textit{distance}, the number of tokens separating a construction mention from the preceding one. As \textit{FE} is the interplay between a construction and its utterance context, it is important to know whether the utterance context contains other mentions of the construction. We use a binary indicator (\textit{previous same utterance}) to single out occurrences whose previous mention is in the same utterance; for these cases, we also count the number of same-utterance previous mentions (\textit{repetition index in utterance}). To explore whether \textit{FE} varies across types of expressions, we also include a binary feature indicating whether the construction is \textit{referential} or non-referential (§\ref{sec:extraction}). 
Finally, we keep track of \textit{construction length}, the number of tokens that constitutes a construction, and \textit{PMI}, the pointwise mutual information between a construction and its dialogue, which is essentially a measure of the construction's frequency in the current dialogue as a function of its overall frequency in the corpus, indicating the construction's degree of interaction-specificity.\footnote{ The probabilities for the PMI calculation are obtained using maximum likelihood estimation over our analysis split of the Spoken BNC.} To determine the fixed effects of the final model, we start with all the predictors listed above (the non-binary ones are log-transformed) and perform backward stepwise selection, iteratively removing the predictor with the lowest significance and keeping only those with $p\!<\!0.05$. All predictors make it into our final model, the one which best fits the data according to both the Akaike and the Bayesian Information Criterion. The full specification of the best model, with model fit statistics as well as fixed and random effect coefficients, is in Appendix~\ref{sec:app-lme}. The next two sections present our main findings; we report fixed effect coefficients~($\beta$), p-values~($p$), and 95\% confidence intervals~(\textit{c.i.}).\looseness-1 \subsection{Construction Mentions} \label{sec:results-mentions} Our first observation is that construction usage reduces \textit{utterance} information content. More precisely, we find that \textbf{facilitating effect is higher for constructions than for non-construction sequences} ($t\!=\!118.79, p\!<\!0.005$, \textit{95\% c.i.}\ $0.536\!:\!0.554$). Constructions have on average 62\% lower information content than their utterance context; the average percentage drops to 7\% for non-construction sequences.\footnote{ These are the same sampled non-construction sequences as in~§\ref{sec:preliminary-construction}. Their average \textit{FE} is $0.07 \pm 0.80$. } Figure~\ref{fig:fe-constr-nonconstr} shows the two distributions. We also observe a positive effect of utterance position on \textit{FE} ($\beta\!=\!0.046, p\!<\!0.005$, \textit{95\% c.i.}\ $0.026\!:\!0.066$); that is, \textbf{the facilitating effect of constructions increases over the course of dialogues}. While the proportion of construction tokens remains stable~(§\ref{sec:stable-usage}), their mitigating contribution to utterance information content increases throughout dialogues---perhaps since speakers are more likely to \textit{repeat} established constructions as the dialogue develops. We indeed find that \textbf{repeated constructions have stronger facilitating effect}: there is a significant difference between the \textit{FE} of first mentions and repetitions ($t\!=\!-38.904, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.265\!:\!-0.239$), as shown in Figure~\ref{fig:fe-rep-first}. The information content of repetitions is on average 68\% lower than that of their utterance context; for first mentions, it is on average 42\% lower. Having observed that the mitigating contribution of constructions to utterance information content indeed increases with construction repetition, we now look at how the \textit{FE} of repetitions varies as a function of their distribution across time. On the one hand, we find that \textbf{facilitating effect is cumulative}: repeating a construction reduces utterance information content more strongly as more mentions of the construction accumulate in the dialogue (Figure~\ref{fig:fe-cumul}). 
The effect of repetition index (i.e., how often the construction has been repeated so far in the dialogue) is positive on \textit{FE} ($\beta\!=\!0.079, p\!<\!0.005$, \textit{95\% c.i.}\ $0.063\!:\!0.094$). On the other hand, the distance of a repetition from the previous mention has a negative effect on \textit{FE} ($\beta\!=\!-0.311, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.328\!:\!-0.293$). That is, \textbf{facilitating effect decays as a function of the distance between subsequent mentions}. As shown in Figure~\ref{fig:fe-decay}, this is a fast decay effect: the most substantial drop occurs for low distance values. The large magnitude of this coefficient indicates that recency is an important factor for constructions to have a strong facilitating effect. Indeed, almost one third (31.8\%) of all repetitions produced by speakers are not more than 200 tokens apart from their previous mention. Further results showing strong cumulativity effects for self-repetitions within the same utterance can be found in Appendix~\ref{sec:app-same-utterance}. \subsection{Types of Construction} \label{sec:construction-types} In this section, we analyse factors shaping the facilitating effect of construction forms, rather than individual mentions. We focus on the length of a construction and on whether it is referential. Construction length has a positive effect on \textit{FE} ($\beta\!=\!0.098, p\!<\!0.005$, \textit{95\% c.i.}\ $0.087\!:\!0.119$): \textbf{longer constructions have stronger facilitating effect.} Table~\ref{tab:chain-len3} shows a full repetition chain for a construction of length 3; Table~\ref{tab:chain-len7} (Appendix~\ref{sec:app-extraction}) for one of length 6. Non-construction sequences display an opposite, weaker trend ($\beta\!=\!-0.019, p\!<\!0.05$, \textit{95\% c.i.}\ $-0.032\!:\!-0.005$), as measured with a linear model. A possible explanation for the positive trend of constructions is related to production cost. Longer constructions are more costly for the speaker, so for them to still be an efficient production choice, their facilitating effect must be higher. Finally, we observe that \textbf{referential constructions have a stronger facilitating effect than non-referential ones}. Our LME model yields a positive effect for referentiality on \textit{FE} ($\beta\!=\!0.124, p\!<\!0.005$, \textit{95\% c.i.}\ $0.099\!:\!0.149$) and we find a significant difference between the \textit{FE} of the two types ($t\!=\!3.115, p\!<\!0.005$, \textit{95\% c.i.}\ $0.016\!:\!0.072$). Looking in more detail, first mentions of referential constructions have higher information content and lower \textit{FE} than first mentions of non-referential ones (\textit{H}: $t\!=\!15.435, p\!<\!0.005$, \textit{95\% c.i.}\ $0.864\!:\!1.115$; \textit{FE}: $t\!=\!-9.315, p\!<\!0.005$, \textit{95\% c.i.}\ $-0.246\!:\!-0.161$), perhaps since words in referential sequences tend to be less frequent and more context-dependent. However, when repeated, their information content drops more substantially, reproducing inverse frequency effects attested in humans for syntactic repetitions \cite{bock1986syntactic,scheepers2003syntactic}. As a result, their \textit{FE} exceeds that of non-referential constructions (\textit{FE}: $t\!=\!8.818, p\!<\!0.005$, \textit{95\% c.i.}\ $0.117\!:\!0.183$), with the information content of a repeated reference being 81\% lower than that of its utterance context.
Overall, these findings indicate that although referential constructions are less frequent than non-referential ones (23.3\% vs.\ 76.7\%; see §\ref{sec:extraction}), their repetition is a particularly effective strategy of information rate mitigation. \looseness=-1
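To make the statistical setup of §\ref{sec:stat-model} concrete, the following is a minimal sketch of how such a model could be fit with off-the-shelf tooling. It is illustrative only: the column names, the input file, and the use of \texttt{statsmodels} with speaker modelled as a variance component within dialogues are assumptions made here for exposition, not a description of the actual pipeline; the backward stepwise selection step is omitted.

\begin{verbatim}
# Minimal sketch (not the paper's code): fit an LME with FE as response,
# the predictors of the Method section as fixed effects, and random
# effects for dialogue and speaker. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("construction_mentions.csv")  # one row per mention

# Log-transform the non-binary predictors, as in the selection procedure.
for col in ["utt_position", "rep_index", "distance",
            "rep_index_in_utt", "length"]:
    df[col] = np.log1p(df[col])

formula = ("fe ~ utt_position + rep_index + distance + prev_same_utt"
           " + rep_index_in_utt + referential + length + pmi")

# Random intercepts by dialogue, with speaker as a variance component;
# this approximates a multi-level random-effects structure.
model = smf.mixedlm(formula, df, groups=df["dialogue_id"],
                    vc_formula={"speaker": "0 + C(speaker_id)"})
result = model.fit()
print(result.summary())  # fixed-effect betas, p-values, conf. intervals
\end{verbatim}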
\section{Helioseismology and Photospheric Abundances} This talk summarizes some recent work \cite{HS} on a discrepancy in the Standard Solar Model (SSM) -- a conflict between helioseismology and the new metal abundances that emerged from improved modeling of the photosphere -- and on a possible connection to a key SSM assumption, that the early Sun was chemically homogeneous due to its passage through the fully convective Hayashi phase. We suggest a speculative mechanism -- planetary formation -- that could invalidate this assumption, and an opportunity, CN-cycle neutrinos, for independently determining the Sun's central metalicity. The SSM assumes local hydrostatic equilibrium and proton burning by the pp chain and CN cycle, with the latter accounting for about 1\% of energy generation. The Sun's evolution from zero-age on the main sequence is constrained by various boundary conditions (initial mass, present luminosity, etc.), including the initial composition. Assuming a homogeneous proto-Sun, the initial core metalicity (Z) is fixed to today's surface abundances under the assumption that these have changed little over the past 4.6 b.y. of solar evolution, while the He/H ratio is adjusted to produce the correct modern luminosity. Small corrections due to diffusion of heavy elements are made in the model. Photospheric absorption lines are the only practical way to fix the abundances of certain volatile elements such as C, N, and O. Metals influence the SSM through bound$\leftrightarrow$free transitions that affect the opacity, with O and Ne being important for temperatures characteristic of the upper radiative zone. Until recently, metalicities determined from interpretations of photospheric absorption lines, e.g., the 1998 work of Grevesse and Sauval \cite{GS98}, led to SSM sound speed profiles that agreed with helioseismology. These earlier line analyses were based on 1D models of the photosphere, despite known stratification, convection, and inhomogeneities. To address these deficiencies, parameter-free 3D models were developed. These more complete models markedly improved line shapes and the consistency of line sources. The new analyses \cite{AGS05}, however, led to a significant reduction in Z, 0.0169 $\rightarrow$ 0.0122, altering SSM sound speeds and destroying the once good agreement between helioseismology and the SSM (see Fig. 1). \begin{figure} \centering \includegraphics[width=10cm]{fig1.pdf} \caption{The sound speed discrepancies for GS98 and AGS05 abundances. See \cite{HS} for discussion.} \end{figure} The reduced Z also affects the SSM $^8$B neutrino flux, due to the sensitivity of this prediction to core temperature. The change from GS98 \cite{GS98} to AGS05 \cite{AGS05} abundances lowers the $^8$B flux prediction from 5.95 to 4.72 $\times$ 10$^6$/cm$^2$s. The 391-day SNO NCD-phase result is 5.54 $\pm$ 0.32 $\pm$ 0.35 $\times$ 10$^6$/cm$^2$s \cite{SNO}. \section{Metals in the Early Solar System} The convective zone extends over the outer 30\% of the Sun by radius and contains about 3\% of the Sun's mass. The change from GS98 to AGS05 abundances lowers the total metal content of this zone by 50 $M_\oplus$. Interestingly, the one known example of large-scale metal segregation in the solar system, the formation of the gaseous giant planets, concentrates a similar amount of metal, $\sim$ 40-90 $M_\oplus$, depending on modeling uncertainties. 
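The 50 $M_\oplus$ figure follows directly from the numbers already quoted. As a back-of-envelope sketch (the 3\% convective-zone mass fraction and the GS98/AGS05 metalicities are the values given above; the solar and terrestrial masses are standard values):

\begin{verbatim}
# Back-of-envelope check of the convective-zone metal budget.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg

cz_mass = 0.03 * M_SUN             # convective zone: ~3% of the Sun's mass
z_gs98, z_ags05 = 0.0169, 0.0122   # metalicities quoted in the text

delta_metal = cz_mass * (z_gs98 - z_ags05)  # "missing" metal mass, in kg
print(delta_metal / M_EARTH)       # ~47 Earth masses, consistent with ~50
\end{verbatim}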
The conventional picture (see \cite{HS} for references) places planetary formation late in the development of the solar nebula, when the last few percent of the gas has formed into a disk, with metal-rich grains and ice concentrated in the disk's midplane. In the core accretion model, midplane interactions allow rocky planetary cores to grow to $\sim$ 10 $M_\oplus$, after which the gravitational potential is sufficient to capture gas. Envelope formation is thought to be rapid, requiring perhaps as little as 1 My. This process produced metal enrichments of Jupiter and Saturn of $\sim$ 3-7 \cite{guillot}. The process of planetary formation, by scrubbing metals from an initially homogeneous gas cloud, would produce enough metal-depleted gas to dilute the convective zone. This could lead to a two-zone Sun -- a core higher in Z than the surface -- contradicting a key SSM assumption and possibly accounting for the apparent discrepancy between helioseismology and photospheric abundances. This conjecture passes some simple tests connected with the total budget of metals and the total mass of gas that a Jupiter could perturb gravitationally during planetary formation. It requires 1) planetary formation to occur after the Sun developed a radiative core (separating the interior from the exterior) and 2) deposition of a significant fraction of the metal-poor gas onto the Sun. These assumptions do not appear unreasonable \cite{HS}. There are several variables in this picture, including the amount of gas processed, the efficiency of the fractionation, whether the fractionation affects all elements equally, and the dynamics of the depleted gas. The constraints include the photospheric abundances and partial abundances for Jupiter and Saturn, determined from Galileo, Cassini, and subsequent modeling \cite{guillot}. One would need to explore this parameter space to test whether this scenario could quantitatively account for observations. \section{Can Neutrinos Help?} It would be helpful to test the SSM assumption of a homogeneous Sun by directly comparing abundances on the surface and in the core. We noted earlier that the $^8$B neutrino flux responds to changes in metalicity due to the influence of metals on core temperature. But the change is modest and not characteristic: many of the 19 parameters of the SSM can be adjusted to produce similar core temperature changes. Changes in fluxes due to parameter variations that alter core temperature will be termed ``environmental''. But CN solar neutrino sources have a linear dependence on core metalicity, in addition to the environmental sensitivity. The BPS08(GS) SSM \cite{BPS} predicts a modest 0.8\% CN-cycle contribution to solar energy generation but measurable neutrino fluxes, e.g., \begin{equation} ^{15}O(\beta^+)^{15}N~~~E_\nu \lesssim 1.732~\mathrm{MeV}~~~\phi=(2.20^{+0.73}_{-0.63}) \times 10^{8}/\mathrm{cm}^{2}\mathrm{s}. \end{equation} Because the CN and $^8$B neutrinos have a similar dependence on the core temperature, environmental uncertainties (solar age, opacity, luminosity,...) produce correlated changes in these fluxes. This correlation (see Fig.
2) allows one to use the measured $^8$B flux \cite{SNO,SK} to largely eliminate environmental uncertainties affecting the CN flux, yielding \begin{eqnarray} {\mathrm{R^{SNO+}(CN)} \over \mathrm{R^{SSM}(CN)}} &=& {\mathrm{X(C+N)} \over \mathrm{X^{SSM}(C+N)}}\left( {\mathrm{R^{SK}(^8B)} \over \mathrm{R^{SSM}(^8B)}}\right)^{0.828} \nonumber \\ &\times& \left[1 \pm 0.03(\mathrm{SK}) \pm 0.026 (\mathrm{res~env})\pm0.049 (\mathrm{LMA}) \pm 0.071 (\mathrm{nucl}) \right] \end{eqnarray} The ratio of the CN-neutrino rate R measured in a future deep scintillator detector (e.g., SNO+ \cite{SNO+}, Borexino \cite{Borexino}, or Hanohano \cite{Hanohano}) to that calculated in the SSM appears on the left side. The quantity of interest, the ratio of the primordial core C+N metalicity X to the SSM value, appears as the first term on the right. The proportionality between these ratios can be expressed in terms of the ratio of the measured and SSM rates for Super-Kamiokande (SK). Residual uncertainties include the SK $^8$B measurement error, remaining environmental dependences (after use of the SK constraint), neutrino oscillation parameters, and nuclear cross sections. Further details are given in \cite{HS}. \begin{figure} \centering \includegraphics[width=10cm]{fig2.pdf} \caption{The SSM $^8$B - $^{15}$O neutrino flux correlation, used in \cite{HS} to reduce SSM uncertainties.} \end{figure} Thus the current overall theoretical uncertainty in relating a future CN neutrino flux measurement to core metalicity is about 9.6\%. The dominant uncertainties, those due to flavor physics and nuclear cross sections, can be reduced by future laboratory measurements. SNO+, a deep scintillator experiment that will be constructed in SNOLab, may be able to measure the CN flux to an accuracy of about 10\% \cite{SNO+}. Given that recent changes in core metalicity are $\sim$ 30\%, it appears that future neutrino experiments may be able to constrain core metalicity at an interesting level of precision. This work was supported in part by the Office of Nuclear Physics, U.S. Department of Energy.
\section{\label{sec:Introduction}Introduction} The simplest examples of fractal sets and measures are self-similar sets and measures on the line. These are objects that, like the classical middle-third Cantor set, are made up of finitely many scaled copies of themselves. When these scaled copies are sufficiently separated from each other the small-scale structure is relatively easy to understand, and, in particular, there is a closed formula for the dimension. If one does not assume this separation, however, the picture becomes significantly more complicated, and it is a longstanding open problem to compute the dimension. This problem has spawned a number of related conjectures, the most general of which is that, unless some of the small-scale copies exactly coincide, the dimension should be equal to the combinatorial upper bound, that is, the dimension one would get if the small-scale copies did not intersect at all. Special cases of this conjecture have received wide attention, e.g. Furstenberg's projection problem and the Bernoulli convolutions problem. The purpose of this paper is to shed some new light on these matters. \subsection{\label{sub:Self-similar-sets}Self-similar sets and measures and their dimension} In this paper an iterated function system (IFS) will mean a finite family $\Phi=\{\varphi_{i}\}_{i\in\Lambda}$ of linear contractions of $\mathbb{R}$, written $\varphi_{i}(x)=r_{i}x+a_{i}$ with $|r_{i}|<1$ and $a_{i}\in\mathbb{R}$. To avoid trivialities we assume throughout that there are at least two distinct contractions. A self-similar set is the attractor of such a system, i.e. the unique compact set $\emptyset\neq X\subseteq\mathbb{R}$ satisfying \begin{equation} X=\bigcup_{i\in\Lambda}\varphi_{i}X.\label{eq:self-similar-set} \end{equation} The self-similar measure associated to $\Phi$ and a probability vector $(p_{i})_{i\in\Lambda}$ is the unique Borel probability measure $\mu$ on $\mathbb{R}$ satisfying \begin{equation} \mu=\sum_{i\in\Lambda}p_{i}\cdot\varphi_{i}\mu.\label{eq:self-similar-measure} \end{equation} Here $\varphi\mu=\mu\circ\varphi^{-1}$ denotes the push-forward of $\mu$ by $\varphi$. When the images $\varphi_{i}X$ are disjoint or satisfy various weaker separation assumptions, the small-scale structure of self-similar sets and measures is quite well understood. In particular the Hausdorff dimension $\dim X$ of $X$ is equal to the similarity dimension% \footnote{This notation is imprecise, since the similarity dimension depends on the IFS $\Phi$ rather than the attractor $X$, but the meaning should always be clear from the context. A similar remark holds for the similarity dimension of measures.% } $\sdim X$, i.e. the unique solution $s\geq0$ of the equation $\sum|r_{i}|^{s}=1$. With the dimension of a measure $\theta$ defined by% \footnote{This is the lower Hausdorff dimension. There are many other notions of dimension but for self-similar measures all the major ones coincide since such measures are exact dimensional \cite{FengHu09}. % } \[ \dim\theta=\inf\{\dim E\,:\,\theta(E)>0\}, \] and assuming again sufficient separation of the images $\varphi_{i}X$, the dimension $\dim\mu$ of a self-similar measure $\mu$ is equal to the similarity dimension of $\mu$, defined by \[ \sdim\mu=\frac{\sum p_{i}\log p_{i}}{\sum p_{i}\log|r_{i}|}. \] It is when the images $\varphi_{i}X$ have significant overlap that computing the dimension becomes difficult, and much less is known.
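As a concrete numerical illustration of the two formulas just given (a sketch, not part of the formal development; the middle-third Cantor data are illustrative), the similarity dimension of a set can be obtained by solving $\sum|r_{i}|^{s}=1$ with a root finder, and the similarity dimension of a measure evaluated directly:

\begin{verbatim}
# Sketch: compute similarity dimensions numerically.
import numpy as np
from scipy.optimize import brentq

def sdim_set(ratios):
    """Unique s >= 0 with sum |r_i|^s = 1 (at least two contractions)."""
    f = lambda s: sum(abs(r) ** s for r in ratios) - 1.0
    return brentq(f, 0.0, 100.0)  # f(0) > 0, f(100) < 0 when |r_i| < 1

def sdim_measure(probs, ratios):
    """sum p_i log p_i / sum p_i log |r_i|."""
    p, r = np.asarray(probs), np.abs(np.asarray(ratios))
    return np.sum(p * np.log(p)) / np.sum(p * np.log(r))

# Middle-third Cantor set: two maps with contraction ratio 1/3.
print(sdim_set([1/3, 1/3]))                  # log 2 / log 3 ~ 0.6309
print(sdim_measure([0.5, 0.5], [1/3, 1/3]))  # same value, equal weights
\end{verbatim}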
One can give trivial bounds: the dimension is never greater than the similarity dimension, and it is never greater than the dimension of the ambient space $\mathbb{R}$, which is 1. Hence \begin{eqnarray} \dim X & \leq & \min\{1,\sdim X\}\label{eq:similarity-dimension-bound}\\ \dim\mu & \leq & \min\{1,\sdim\mu\}.\label{eq:similarity-bound-for-measures} \end{eqnarray} However, without special combinatorial assumptions on the IFS, current methods are unable even to decide whether or not equality holds in \eqref{eq:similarity-dimension-bound} and \eqref{eq:similarity-bound-for-measures}, let alone compute the dimension exactly. The exception is when there are sufficiently many exact overlaps among the ``cylinders'' of the IFS. More precisely, for $i=i_{1}\ldots i_{n}\in\Lambda^{n}$ write \[ \varphi_{i}=\varphi_{i_{1}}\circ\ldots\circ\varphi_{i_{n}}. \] One says that exact overlaps occur if there is an $n$ and distinct $i,j\in\Lambda^{n}$ such that $\varphi_{i}=\varphi_{j}$ (in particular the images $\varphi_{i}X$ and $\varphi_{j}X$ coincide).% \footnote{If $i\in\Lambda^{k}$, $j\in\Lambda^{m}$ and $\varphi_{i}=\varphi_{j}$, then $i$ cannot be a proper prefix of $j$ and vice versa, so $ij,ji\in\Lambda^{k+m}$ are distinct and $\varphi_{ij}=\varphi_{ji}$. Thus exact overlaps occur also if there is exact coincidence of cylinders at ``different generations''. Stated differently, the occurrence of exact overlaps means that the semigroup generated by the $\varphi_{i}$, $i\in\Lambda$, is not freely generated by them.% } If this occurs then $X$ and $\mu$ can be expressed using an IFS $\Psi$ which is a proper subset of $\{\varphi_{i}\}_{i\in\Lambda^{n}}$, and a strict inequality in \eqref{eq:similarity-dimension-bound} and \eqref{eq:similarity-bound-for-measures} sometimes follows from the corresponding bound for $\Psi$. \subsection{\label{sub:Main-results}Main results} The present work was motivated by the folklore conjecture that \emph{the occurrence of exact overlaps is the only mechanism which can lead to a strict inequality in} \eqref{eq:similarity-dimension-bound} \emph{and }\eqref{eq:similarity-bound-for-measures} (see e.g. \cite[question 2.6]{PeresSolomyak2000b}). Our main result lends some support to the conjecture and proves some special cases of it. All of our results hold, with suitable modifications, in higher dimensions, but this will appear separately. Fix $\Phi=\{\varphi_{i}\}_{i\in\Lambda}$ as in the previous section and for $i\in\Lambda^{n}$ write $r_{i}=r_{i_{1}}\cdot\ldots\cdot r_{i_{n}}$, which is the contraction ratio of $\varphi_{i}$. Define the distance between the cylinders associated to $i,j\in\Lambda^{n}$ by \[ d(i,j)=\left\{ \begin{array}{cc} \infty & r_{i}\neq r_{j}\\ |\varphi_{i}(0)-\varphi_{j}(0)| & r_{i}=r_{j} \end{array}\right.. \] Note that $d(i,j)=0$ if and only if $\varphi_{i}=\varphi_{j}$ and that the definition is unchanged if $0$ is replaced by any other point. For $n\in\mathbb{N}$ let \[ \Delta_{n}=\min\{d(i,j)\,:\, i,j\in\Lambda^{n}\,,\, i\neq j\}. \] Let us make a few observations: \begin{itemize} \item [-] Exact overlaps occur if and only if $\Delta_{n}=0$ for some $n$ (equivalently all sufficiently large $n$). \item [-] $\Delta_{n}\rightarrow0$ exponentially. Indeed, the points $\varphi_{i}(0)$, $i\in\Lambda^{n}$, can be shown to lie in a bounded interval independent of $n$, and the exponentially many sequences $i\in\Lambda^{n}$ give rise to only polynomially many contraction ratios $r_{i}$.
Therefore there are distinct $i,j\in\Lambda^{n}$ with $r_{i}=r_{j}$ and $|\varphi_{i}(0)-\varphi_{j}(0)|<|\Lambda|^{-(1-o(1))n}$. \item [-]There can also be an exponential lower bound for $\Delta_{n}$. This occurs when the images $\varphi_{i}(X)$, $i\in\Lambda$, are disjoint, or under the open set condition, but also sometimes without separation as in Garsia's example \cite{Garsia1962} or the cases discussed in Theorems \ref{cor:algebraic-parameters} and \ref{thm:furstenberg} below. \end{itemize} Our main result on self-similar measures is the following. \begin{thm} \label{thm:main-individual}If $\mu$ is a self-similar measure on $\mathbb{R}$ and if $\dim\mu<\min\{1,\sdim\mu\}$, then $\Delta_{n}\rightarrow0$ super-exponentially, i.e. $\lim(-\frac{1}{n}\log\Delta_{n})=\infty$. \end{thm} The conclusion is about $\Delta_{n}$, which is determined by the IFS $\Phi$, not by the measure. Thus, if the conclusion fails, then $\dim\mu=\sdim\mu$ for every self-similar measure of $\Phi$. \begin{cor} \label{cor:main-self-similar-sets}If $X$ is the attractor of an IFS on $\mathbb{R}$ and if $\dim X<\min\{1,\sdim X\}$, then $\lim(-\frac{1}{n}\log\Delta_{n})=\infty$. \end{cor} \begin{proof} The self-similar measure $\mu$ associated to the probabilities $p_{i}=r_{i}^{\sdim X}$ satisfies $\sdim\mu=\sdim X$. Since $\mu(X)=1$ we have $\dim\mu\leq\dim X$, so by hypothesis $\dim\mu<\min\{1,\sdim\mu\}$, and by the theorem, $\Delta_{n}\rightarrow0$ super-exponentially. \end{proof} Theorem \ref{thm:main-individual} is derived from a more quantitative result about the entropy of finite approximations of $\mu$. Write $H(\mu,\mathcal{E})$ for the Shannon entropy of a measure $\mu$ with respect to a partition $\mathcal{E}$, and $H(\mu,\mathcal{E}|\mathcal{F})$ for the conditional entropy given $\mathcal{F}$; see Section \ref{sub:Preliminaries-on-entropy}. For $n\in\mathbb{Z}$ the dyadic partition of $\mathbb{R}$ into intervals of length $2^{-n}$ is \[ \mathcal{D}_{n}=\{[\frac{k}{2^{n}},\frac{k+1}{2^{n}})\,:\, k\in\mathbb{Z}\}. \] For $t\in\mathbb{R}$ we also write $\mathcal{D}_{t}=\mathcal{D}_{[t]}$. We remark that $\liminf\frac{1}{n}H(\theta,\mathcal{D}_{n})\geq\dim\theta$ for any probability measure $\theta$, and the limit exists and is equal to $\dim\theta$ when $\theta$ is exact dimensional, which is the case for self-similar measures \cite{FengHu09}. We first consider the case that $\Phi$ is uniformly contracting, i.e. that all $r_{i}$ are equal to some fixed $r$. Fix a self-similar measure $\mu$ defined by a probability vector $(p_{i})_{i\in\Lambda}$ and for $i\in\Lambda^{n}$ write $p_{i}=p_{i_{1}}\cdot\ldots\cdot p_{i_{n}}$. Without loss of generality one can assume that $0$ belongs to the attractor $X$. Define the $n$-th generation approximation of $\mu$ by \begin{equation} \nu^{(n)}=\sum_{i\in\Lambda^{n}}p_{i}\cdot\delta_{\varphi_{i}(0)}.\label{eq:generation-n-measure} \end{equation} This is a probability measure on $X$ and $\nu^{(n)}\rightarrow\mu$ weakly. Moreover, writing \[ n'=n\log_{2}(1/r), \] $\nu^{(n)}$ closely resembles $\mu$ up to scale $2^{-n'}=r^{n}$ in the sense that \[ \lim_{n\rightarrow\infty}\frac{1}{n'}H(\nu^{(n)},\mathcal{D}_{n'})=\dim\mu. \] The main question we are interested in is the behavior of $\nu^{(n)}$ at smaller scales. Observe that the entropy $H(\nu^{(n)},\mathcal{D}_{n'})$ of $\nu^{(n)}$ at scale $2^{-n'}$ may not exhaust the entropy $H(\nu^{(n)})$ of $\nu^{(n)}$ as a discrete measure (i.e. with respect to the partition into points).
If there is substantial excess entropy it is natural to ask at what scale and at what rate it appears; it must appear eventually because $\lim_{k\rightarrow\infty}H(\nu^{(n)},\mathcal{D}_{k})=H(\nu^{(n)})$. The excess entropy at scale $k$ relative to the entropy at scale $n'$ is just the conditional entropy $H(\nu^{(n)},\mathcal{D}_{k}|\mathcal{D}_{n'})=H(\nu^{(n)},\mathcal{D}_{k})-H(\nu^{(n)},\mathcal{D}_{n'})$. \begin{thm} \label{thm:main-individual-entropy-1}Let $\mu$ be a self-similar measure on $\mathbb{R}$ defined by an IFS with uniform contraction ratios. Let $\nu^{(n)}$ be as above. If $\dim\mu<1$, then \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{n'}H(\nu^{(n)},\mathcal{D}_{qn'}|\mathcal{D}_{n'})=0\quad\mbox{ for every }q>1.\label{eq:36} \end{equation} \end{thm} Note that we assume $\dim\mu<1$ but not necessarily $\dim\mu<\sdim\mu$. The statement is valid when $\dim\mu=\sdim\mu<1$, although for rather trivial reasons. We now formulate the result in the non-uniformly contracting case. Let \[ r=\prod_{i\in\Lambda}r_{i}^{p_{i}} \] so that $\log r$ is the average logarithmic contraction ratio when $\varphi_{i}$ is chosen randomly with probability $p_{i}$. Note that, by the law of large numbers, with probability tending to $1$, an element $i\in\Lambda^{n}$ chosen according to the probabilities $p_{i}$ will satisfy $r_{i}=r^{n(1+o(1))}=2^{-n'(1+o(1))}$. With this definition and $\nu^{(n)}$ defined as before, the theorem above holds as stated, but note that now the partitions $\mathcal{D}_{k}$ are not suitable for detecting exact overlaps, since $\varphi_{i}(0)=\varphi_{j}(0)$ may happen for some $i,j\in\Lambda^{n}$ with $r_{i}\neq r_{j}$. To correct this define the probability measure $\widetilde{\nu}^{(n)}$ on $\mathbb{R}\times\mathbb{R}$ by \[ \widetilde{\nu}^{(n)}=\sum_{i\in\Lambda^{n}}p_{i}\cdot\delta_{(\varphi_{i}(0),r_{i})} \] and the partition of $\mathbb{R}\times\mathbb{R}$ given by \[ \widetilde{\mathcal{D}}_{n}=\mathcal{D}_{n}\times\mathcal{F}, \] where $\mathcal{F}$ is the partition of $\mathbb{R}$ into points. \begin{thm} \label{thm:main-individual-entropy-1-1}Let $\mu$ be a self-similar measure on $\mathbb{R}$ and $\widetilde{\nu}^{(n)}$ as above. If $\dim\mu<1$, then \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{n'}H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{qn'}|\widetilde{\mathcal{D}}_{n'})=0\quad\mbox{ for every }q>1.\label{eq:36-1} \end{equation} \end{thm} To derive Theorem \ref{thm:main-individual}, let $\mu$ be as in the last theorem with $\dim\mu<\min\{1,\sdim\mu\}$. The conclusion of the last theorem is equivalent to $\frac{1}{n'}H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{qn'})\rightarrow\dim\mu$ for every $q>1$. Hence for a given $q$ and all sufficiently large $n$ we will have $\frac{1}{n'}H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{qn'})<\sdim\mu$. Since $\widetilde{\nu}^{(n)}=\sum_{i\in\Lambda^{n}}p_{i}\cdot\delta_{(\varphi_{i}(0),r_{i})}$, if each pair $(\varphi_{i}(0),r_{i})$ belonged to a different atom of $\widetilde{\mathcal{D}}_{qn'}$ then we would have $\frac{1}{n'}H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{qn'})=-\frac{1}{n\log(1/r)}\sum_{i\in\Lambda^{n}}p_{i}\log p_{i}=\sdim\mu$, a contradiction. Thus there must be distinct $i,j\in\Lambda^{n}$ for which $(\varphi_{i}(0),r_{i})$, $(\varphi_{j}(0),r_{j})$ lie in the same atom of $\widetilde{\mathcal{D}}_{qn'}$, giving $\Delta_{n}<2^{-qn'}$. \subsection{\label{sub:Outline-of-the-proof}Outline of the proof} Let us say a few words about the proofs.
For simplicity we discuss Theorem \ref{thm:main-individual-entropy-1}, where there is a common contraction ratio $r$ to all the maps. For a self-similar measure $\mu=\sum_{i\in\Lambda}p_{i}\cdot\varphi_{i}\mu$, iterate this relation $n$ times to get $\mu=\sum_{i\in\Lambda^{n}}p_{i}\cdot\varphi_{i}\mu$. Since each $\varphi_{i}$, $i\in\Lambda^{n}$, contracts by $r^{n}$, all the measures $\varphi_{i}\mu$, $i\in\Lambda^{n}$, are translates of each other, and the last identity can be re-written as a convolution \[ \mu=\nu^{(n)}*\tau^{(n)}, \] where as before $\nu^{(n)}=\sum_{i\in\Lambda^{n}}p_{i}\cdot\delta_{\varphi_{i}(0)}$, and $\tau^{(n)}$ is $\mu$ scaled down by $r^{n}$. Fix $q$ and write $a\approx b$ to indicate that the difference tends to $0$ as $n\rightarrow\infty$. From the entropy identity $H(\mu,\mathcal{D}_{(q+1)n'})=H(\mu,\mathcal{D}_{n'})+H(\mu,\mathcal{D}_{(q+1)n'}|\mathcal{D}_{n'})$ and the fact that $\frac{1}{n'}H(\mu,\mathcal{D}_{n'})\approx\frac{1}{n'}H(\nu^{(n)},\mathcal{D}_{n'})$, we find that the mean entropy \[ A=\frac{1}{(q+1)n'}H(\mu,\mathcal{D}_{(q+1)n'}) \] is approximately a convex combination $A\approx\frac{1}{(q+1)}B+\frac{q}{(q+1)}C$ of the mean entropy \[ B=\frac{1}{n'}H(\nu^{(n)},\mathcal{D}_{n'}), \] and the mean conditional entropy \[ C=\frac{1}{qn'}H(\mu,\mathcal{D}_{(q+1)n'}|\mathcal{D}_{n'})=\sum_{I\in\mathcal{D}_{n'}}\mu(I)\cdot\frac{1}{qn'}H(\nu_{I}^{(n)}*\tau^{(n)},\mathcal{D}_{(q+1)n'}), \] where $\nu_{I}^{(n)}$ is the normalized restriction of $\nu^{(n)}$ to $I$. Since $A\approx\dim\mu$ and $B\approx\dim\mu$, we find that $C\approx\dim\mu$ as well. On the other hand we also have $\frac{1}{qn'}H(\tau^{(n)},\mathcal{D}_{(q+1)n'})\approx\dim\mu$. Thus by the expression above, $C$ is an average of terms, each of which is essentially at least $\frac{1}{qn'}H(\tau^{(n)},\mathcal{D}_{(q+1)n'})\approx\dim\mu$, since convolving with $\nu_{I}^{(n)}$ cannot appreciably decrease entropy; as the average itself is $\approx\dim\mu$, most of the terms must be close to this common value. We find that \begin{equation} \frac{1}{qn'}H(\nu_{I}^{(n)}*\tau^{(n)},\mathcal{D}_{(q+1)n'})\approx C\approx\dim\mu\approx\frac{1}{qn'}H(\tau^{(n)},\mathcal{D}_{(q+1)n'})\label{eq:approximate-non-growth} \end{equation} for large $n$ and ``typical'' $I\in\mathcal{D}_{n'}$. The argument is then concluded by showing that \eqref{eq:approximate-non-growth} implies that either $\frac{1}{qn'}H(\tau^{(n)},\mathcal{D}_{(q+1)n'})\approx1$ (leading to $\dim\mu=1$), or that typical intervals $I$ satisfy $\frac{1}{qn'}H(\nu_{I}^{(n)},\mathcal{D}_{(q+1)n'})\approx0$ (leading to \eqref{eq:36}). Now, for a general pair of measures $\nu,\tau$ the relation $\frac{1}{n}H(\nu*\tau,\mathcal{D}_{n})\approx\frac{1}{n}H(\tau,\mathcal{D}_{n})$ analogous to \eqref{eq:approximate-non-growth} does not have such an implication. But, while we know nothing about the structure of $\nu_{I}^{(n)}$, we do know that $\tau^{(n)}$, being a scaled copy of the self-similar measure $\mu$, is highly uniform at different scales. We will be able to utilize this fact to draw the desired conclusion. Evidently, the main ingredient in the argument is an analysis of the growth of measures under convolution, which will occupy us starting in Section \ref{sec:Additive-combinatorics}. \subsection{\label{sub:main-applications}Applications} Theorem \ref{thm:main-individual} and its corollaries settle a number of cases of the aforementioned conjecture. Specifically, in any class of IFSs where one can prove that cylinders are either equal or exponentially separated, the only possible cause of dimension drop is the occurrence of exact overlaps.
Thus, \begin{thm} \label{cor:algebraic-parameters}For IFSs on $\mathbb{R}$ defined by algebraic parameters, there is a dichotomy: Either there are exact overlaps or the attractor $X$ satisfies $\dim X=\min\{1,\sdim X\}$.\end{thm} \begin{proof} Let $\varphi_{i}(x)=r_{i}x+a_{i}$ and suppose $r_{i},a_{i}$ are algebraic. For distinct $i,j\in\Lambda^{n}$ the difference $\varphi_{i}(0)-\varphi_{j}(0)$ is a polynomial of degree $n$ in $r_{i},a_{i}$, and hence is either equal to $0$, or is $\geq s^{n}$ in absolute value for some constant $s>0$ depending only on the numbers $r_{i},a_{i}$ (see Lemma \ref{lem:Liouville-bound}). Thus $\Delta_{n}\geq s^{n}$ and the conclusion follows from Corollary \ref{cor:main-self-similar-sets}. \end{proof} There are a handful of cases where a similar argument can handle non-algebraic parameters. Among these is a conjecture by Furstenberg from the 1970s, asserting that if the ``one dimensional Sierpinski gasket'' \[ F=\left\{ \sum(i_{n},j_{n})3^{-n}\,:\,(i_{n},j_{n})\in\{(0,0),(1,0),(0,1)\}\right\} \] is projected orthogonally to a line of irrational slope, then the dimension of the image is $1$ (see e.g. \cite[question 2.5]{PeresSolomyak2000b}).% \footnote{This was motivated by a dual conjecture asserting that any line $\ell$ of irrational slope meets $F$ in a zero-dimensional set, and this, in turn, is an analog of similar conjectures arising in metric number theory and laid out in \cite{Furstenberg70}. The intersections and projections conjectures are related by the heuristic that for a map $F\rightarrow\mathbb{R}$, a large image corresponds to small fibers, but there is only an implication in one direction (the statement about intersections implies the one about projections using \cite{Furstenberg08}).% } It is more convenient to replace orthogonal projections with the parametrized linear maps $\pi_{t}:\mathbb{R}^{2}\rightarrow\mathbb{R}$ given by \[ \pi_{t}(x,y)=tx+y \] (up to a linear change of coordinates in the range, this represents the orthogonal projection to the line with slope $-1/t$). One may verify that the image $F_{t}=\pi_{t}F$ is the self-similar set defined by the contractions \begin{equation} x\mapsto\frac{1}{3}x\quad,\quad x\mapsto\frac{1}{3}(x+1)\quad,\quad x\mapsto\frac{1}{3}(x+t).\label{eq:sierpinski-gasket-contractions} \end{equation} Therefore $\sdim F_{t}=1$ for all $t$, and it is not hard to show that exact overlaps occur only for certain rational values of $t$. Thus, Furstenberg's conjecture is a special case of the motivating conjecture of this paper. From general considerations such as Marstrand's theorem, we know that $\dim F_{t}=1$ for a.e. $t$, and Kenyon showed that this holds also for a dense $G_{\delta}$ set of $t$ \cite{Kenyon97}. In the same paper Kenyon also classified those rational $t$ for which $\dim F_{t}=1$, and showed that $F_{t}$ has Lebesgue measure $0$ for all irrational $t$ (strengthening the conclusion of a general theorem of Besicovitch that gives this for a.e. $t$). For some other partial results see \cite{SwiatekVeerman2002}. \begin{thm} \label{thm:furstenberg} If $t\notin\mathbb{Q}$ then $\dim F_{t}=1$.\end{thm} \begin{proof} Fix $t$, and suppose that $\dim F_{t}<1$. Let $\Lambda=\{0,1,t\}$ and $\varphi_{i}(x)=\frac{1}{3}(x+i)$, so $F_{t}$ is the attractor of $\{\varphi_{i}\}_{i\in\Lambda}$. For $i\in\Lambda^{n}$ one may check that $\varphi_{i}(0)=\sum_{k=1}^{n}i_{k}3^{-k}$.
Inserting this into the difference $\varphi_{i}(0)-\varphi_{j}(0)$ we can separate the terms that are multiplied by $t$ from those that are not, and we find that $\varphi_{i}(0)-\varphi_{j}(0)=p_{i,j}-t\cdot q_{i,j}$ for rational numbers $p_{i,j},q_{i,j}$ belonging to the set \[ X_{n}=\{\sum_{i=1}^{n}a_{i}3^{-i}\,:\, a_{i}\in\{\pm1,0\}\}. \] Therefore there are $p_{n},q_{n}\in X_{n}$ such that $\Delta_{n}=|p_{n}-tq_{n}|$, so by Corollary \ref{cor:main-self-similar-sets}, \begin{equation} |p_{n}-t\cdot q_{n}|<30^{-n}\qquad\mbox{for large enough }n.\label{eq:75} \end{equation} If $q_{n}=0$ for $n$ satisfying \eqref{eq:75} then $|p_{n}|<30^{-n}$, but, since $p_{n}$ is rational with denominator $3^{n}$, this can only happen if $p_{n}=0$. This in turn implies that $\Delta_{n}=0$, i.e. there are exact overlaps, so $t\in\mathbb{Q}$. On the other hand suppose $q_{n}\neq0$ for all large $n$. Since $q_{n}$ is a non-zero rational with denominator $3^{n}$ we have $|q_{n}|\geq3^{-n}$. Dividing \eqref{eq:75} by $|q_{n}|$ we get $|t-p_{n}/q_{n}|<10^{-n}$. Subtracting successive terms, by the triangle inequality we have \[ |\frac{p_{n+1}}{q_{n+1}}-\frac{p_{n}}{q_{n}}|<2\cdot10^{-n}\qquad\mbox{for large enough }n. \] But $p_{n},q_{n},p_{n+1},q_{n+1}\in X_{n+1}$, so $p_{n+1}/q_{n+1}-p_{n}/q_{n}$ is rational with denominator $\leq9^{n+1}$, giving \[ |\frac{p_{n+1}}{q_{n+1}}-\frac{p_{n}}{q_{n}}|\neq0\quad\implies\quad|\frac{p_{n+1}}{q_{n+1}}-\frac{p_{n}}{q_{n}}|\geq9^{-(n+1)}. \] Since $9^{-(n+1)}\leq2\cdot10^{-n}$ is impossible for large $n$, the last two equations imply that $p_{n}/q_{n}=p_{n+1}/q_{n+1}$ for all large $n$. Therefore there is an $n_{0}$ such that $|t-p_{n_{0}}/q_{n_{0}}|<10^{-n}$ for $n>n_{0}$, which gives $t=p_{n_{0}}/q_{n_{0}}\in\mathbb{Q}$. \end{proof} The argument above is due to B. Solomyak and P. Shmerkin and we thank them for permission to include it here. Similar considerations work in a few other cases, but one already runs into difficulties if in the example above we replace the contraction ratio 1/3 with any non-algebraic $0<r<1$ (see also the discussion following Theorem \ref{thm:BC} below). In the absence of a resolution of the general conjecture, we turn to parametric families of self-similar sets and measures. The study of parametric families of general sets and measures is classical; examples include the projection theorems of Besicovitch and Marstrand and more recent results like those of Peres-Schlag \cite{PeresSchlag2000} and Bourgain \cite{Bourgain2010}. When the sets and measures in question are self-similar we shall see that the general results can be strengthened considerably. Let $I$ be a set of parameters, let $r_{i}:I\rightarrow(-1,1)\setminus\{0\}$ and $a_{i}:I\rightarrow\mathbb{R}$, $i\in\Lambda$. For each $t\in I$ define $\varphi_{i,t}:\mathbb{R}\rightarrow\mathbb{R}$ by $\varphi_{i,t}(x)=r_{i}(t)(x-a_{i}(t))$. For a sequence $i\in\Lambda^{n}$ let $\varphi_{i,t}=\varphi_{i_{1},t}\circ\ldots\circ\varphi_{i_{n},t}$ and define \begin{eqnarray} \Delta_{i,j}(t) & = & \varphi_{i,t}(0)-\varphi_{j,t}(0).\label{eq:44} \end{eqnarray} The quantity $\Delta_{n}=\Delta_{n}(t)$ associated as in the previous section to the IFS $\{\varphi_{i,t}\}_{i\in\Lambda}$ is not smaller than the minimum of $|\Delta_{i,j}(t)|$ over distinct $i,j\in\Lambda^{n}$ (since it is the minimum over pairs $i,j$ with $r_{i}=r_{j}$).
Thus, $\Delta_{n}\rightarrow0$ super-exponentially implies that $\min\{|\Delta_{i,j}(t)|\,,\, i,j\in\Lambda^{n}\}\rightarrow0$ super-exponentially as well, so Theorem \ref{thm:main-individual} has the following formal implication. \begin{thm} \label{thm:description-of-exceptional-params}Let $\Phi_{t}=\{\varphi_{i,t}\}$ be a parametrized IFS as above. For every $\varepsilon>0$ let\textup{\emph{ \begin{equation} E_{\varepsilon}=\bigcup_{N=1}^{\infty}\,\bigcap_{n>N}\left(\bigcup_{i,j\in\Lambda^{n}}(\Delta_{i,j})^{-1}(-\varepsilon^{n},\varepsilon^{n})\right)\label{eq:23} \end{equation} }}and \begin{equation} E=\bigcap_{\varepsilon>0}E_{\varepsilon}.\label{eq:52} \end{equation} Then for $t\in I\setminus E$, for every probability vector $p=(p_{i})$ the associated self-similar measure $\mu_{t}$ of $\Phi_{t}$ satisfies $\dim\mu_{t}=\min\{1,\sdim\mu_{t}\}$, and the attractor $X_{t}$ of $\Phi_{t}$ satisfies $\dim X_{t}=\min\{1,\sdim X_{t}\}$. \end{thm} Our goal is to show that the set $E$ defined in the theorem above is small. We restrict ourselves to the case that $I\subseteq\mathbb{R}$ is a compact interval; a multi-parameter version will appear in \cite{Hochman2012b}. Extend the definition of $\Delta_{i,j}$ to infinite sequences $i,j\in\Lambda^{\mathbb{N}}$ by \begin{eqnarray} \Delta_{i,j}(t) & = & \lim_{n\rightarrow\infty}\Delta_{i_{1}\ldots i_{n},j_{1}\ldots j_{n}}(t).\label{eq:50} \end{eqnarray} Convergence is uniform over $I$ and $i,j$, and if $a_{i}(\cdot)$ and $r_{i}(\cdot)$ are real analytic in a neighborhood of $I$ then so are the functions $\Delta_{i,j}(\cdot)$. \begin{thm} \label{thm:main-parametric}Let $I\subseteq\mathbb{R}$ be a compact interval, let $r_{i}:I\rightarrow(-1,1)\setminus\{0\}$ and $a_{i}:I\rightarrow\mathbb{R}$ be real analytic, and let $\Phi_{t}=\{\varphi_{i,t}\}_{i\in\Lambda}$ be the associated parametric family of IFSs, as above. Suppose that \[ \forall i,j\in\Lambda^{\mathbb{N}}\quad\left(\;\Delta_{i,j}\equiv0\mbox{ on }I\quad\iff\quad i=j\;\right). \] Then \textup{\emph{the set $E$ of ``exceptional'' parameters in Theorem \ref{thm:description-of-exceptional-params} has}} Hausdorff and packing dimension $0$. \end{thm} The condition in the theorem is extremely mild. Essentially it means that the family does not have overlaps ``built in''. For an example where the hypothesis fails, consider the case that there are $i\neq j$ with $\varphi_{i,t}=\varphi_{j,t}$ for all $t$. In this case the conclusion sometimes fails as well. Most existing results on parametric families of IFSs are based on the so-called transversality method, introduced by Pollicott and Simon \cite{PollicottSimon1995} and developed, among others, by Solomyak \cite{Solomyak1995} and Peres-Schlag \cite{PeresSchlag2000}. Theorem \ref{thm:main-parametric} is based on a similar but much weaker ``higher order'' transversality condition, which is automatically satisfied under the stated hypothesis. We give the details in Section \ref{sub:Transversality-and-exceptions}. See \cite{ShmerkinSolomyak2006} for an effective derivation of higher-order transversality in certain contexts. As a demonstration we apply this to the Bernoulli convolutions problem. For $0<\lambda<1$ let $\nu_{\lambda}$ denote the distribution of the real random variable $\sum_{n=0}^{\infty}\pm\lambda^{n}$, where the signs are chosen i.i.d. with equal probabilities.
The name derives from the fact that $\nu_{\lambda}$ is the infinite convolution of the measures $\frac{1}{2}\left(\delta_{-\lambda^{n}}+\delta_{\lambda^{n}}\right)$, $n=0,1,2,\ldots$, but the pertinent fact for us is that $\nu_{\lambda}$ is a self-similar measure, given by assigning equal probabilities to the contractions \begin{equation} \varphi_{\pm}(x)=\lambda x\pm1.\label{eq:BC-contractions} \end{equation} For $\lambda<\frac{1}{2}$ the measure is supported on a self-similar Cantor set of dimension $<1$, but for $\lambda\in[\frac{1}{2},1)$ the support is an interval, and it is a well-known open problem to determine whether it is absolutely continuous. Exact overlaps can occur only for certain algebraic $\lambda$, and Erd\H{o}s showed that when $\lambda^{-1}$ is a Pisot number $\nu_{\lambda}$ is in fact singular \cite{Erdos1939}. No other parameters $\lambda\in[\frac{1}{2},1)$ are known for which $\nu_{\lambda}$ is singular. In the positive direction, it is known that $\nu_{\lambda}$ is absolutely continuous for a.e. $\lambda\in[1/2,1)$ (Solomyak \cite{Solomyak1995}), that the set of exceptional $\lambda\in[a,1)$ has dimension $<1-C(a-1/2)$ for some $C>0$ (Peres-Schlag \cite{PeresSchlag2000}), and that its dimension tends to $0$ as $a\rightarrow1$ (Erd\H{o}s \cite{Erdos1940}). We shall consider the question of when $\dim\nu_{\lambda}=1$. This is weaker than absolute continuity but little more seems to be known about this question except the relatively soft fact that the set of parameters with $\dim\nu_{\lambda}=1$ is also topologically large (contains a dense $G_{\delta}$ set); see \cite{PeresSchlagSolomyak00}. In particular the only parameters $\lambda\in[1/2,1)$ for which $\dim\nu_{\lambda}<1$ is known to hold are inverses of Pisot numbers (Alexander-Yorke \cite{AlexanderYorke1984}). We also note that in many of the problems related to Bernoulli convolutions it is the dimension of $\nu_{\lambda}$, rather than its absolute continuity, that is relevant. For discussion of some applications see \cite[Section 8]{PeresSchlagSolomyak00} and \cite{PrzytyckiUrbanski1989}. \begin{thm} \label{thm:BC}$\dim\nu_{\lambda}=1$ outside a set of $\lambda$ of dimension $0$. Furthermore, the exceptional parameters for which $\dim\nu_{\lambda}<1$ are ``nearly algebraic'' in the sense that for every $0<\theta<1$ and all large enough $n$, there is a polynomial $p_{n}(t)$ of degree $n$ and coefficients $0,\pm1$, such that $|p_{n}(\lambda)|<\theta^{n}$.\end{thm} \begin{proof} Take the parametrization $r(t)=t$, $a_{\pm}(t)=\pm1$ for $t\in[1/2,1-\varepsilon]$. Then $\Delta_{i,j}(t)=\sum(i_{n}-j_{n})\cdot t^{n}$ and this vanishes identically if and only if $i=j$, confirming the hypothesis of Theorem \ref{thm:main-parametric}. Since $\Delta_{i,j}(t)$ is a polynomial of degree $n$ with coefficients $0,\pm1$ for $i,j\in\Lambda^{n}$ (after dividing by the common factor $2$), the second statement follows from the description of the set $E$ in Theorem \ref{thm:main-parametric}. \end{proof} Arguing as in the proof of Theorem \ref{thm:furstenberg}, in order to show that $\dim\nu_{\lambda}=1$ for all non-algebraic $\lambda$, it would suffice to answer the following question in the affirmative:% \footnote{In order to show that an ``almost-root'' of a polynomial is close to an actual root one can rely on the classical transversality arguments, e.g. \cite{Solomyak1995}.% } \begin{question} Let $\Pi_{n}$ denote the collection of polynomials of degree $\leq n$ with coefficients $0,\pm1$.
Does there exist a constant $s>0$ such that for $\alpha,\beta$ that are roots of polynomials in $\Pi_{n}$ either $\alpha=\beta$ or $|\alpha-\beta|>s^{n}$? \end{question} Classical bounds imply this for $s\sim1/n$, but we have not found an answer to the question in the literature. Another problem to which our methods apply is the Keane-Smorodinsky \{0,1,3\}-problem. For details about the problem we refer to Pollicott-Simon \cite{PollicottSimon1995} or Keane-Smorodinsky-Solomyak \cite{KeaneSmorodinskySolomyak1995}. Finally, our methods can also be adapted with minor changes to IFSs that ``contract on average'' \cite{NicolSidorovBroomhead2002}. We restrict attention to a problem raised by Sinai \cite{PeresSimonSolomyak2006} concerning the maps $\varphi_{-}:x\mapsto(1-\alpha)x-1$ and $\varphi_{+}:x\mapsto(1+\alpha)x+1$. A composition of $n$ of these maps chosen i.i.d. with probability $\frac{1}{2},\frac{1}{2}$ asymptotically contracts by approximately $(1-\alpha^{2})^{n/2}$, and so for each $0<\alpha<1$ there is a unique probability measure $\mu_{\alpha}$ on $\mathbb{R}$ satisfying $\mu_{\alpha}=\frac{1}{2}\varphi_{-}\mu_{\alpha}+\frac{1}{2}\varphi_{+}\mu_{\alpha}$. Little is known about the dimension or absolute continuity of $\mu_{\alpha}$ beyond upper bounds analogous to \eqref{eq:similarity-bound-for-measures}. Some results in a randomized analog of this model have been obtained by Peres, Simon and Solomyak \cite{PeresSimonSolomyak2006}. We prove \begin{thm} \label{thm:Sinais-problem} There is a set $E\subseteq(0,1)$ of Hausdorff (and packing) dimension $0$ such that $\dim\mu_{\alpha}=\min\{1,\sdim\mu_{\alpha}\}$ for $\alpha\in(0,1)\setminus E$. \end{thm} For further discussion of this problem see Section \ref{sub:Applications}. \subsection{Absolute continuity?} There is a conjecture analogous to the one we began with, predicting that if $\mu$ is a self-similar measure, $\sdim\mu>1$, and there are no exact overlaps, then $\mu$ should be absolutely continuous with respect to Lebesgue measure. The Bernoulli convolutions problem discussed above is a special case of this conjecture. Our methods at present are not able to tackle this problem. At a technical level, whenever our methods give $\dim\mu=1$ it is a consequence of showing that $H(\mu,\mathcal{D}_{n})=n-o(n)$. In contrast, absolute continuity would require better asymptotics, e.g. $H(\mu,\mathcal{D}_{n})=n-O(1)$ (see \cite[Theorem 1.5]{Garsia1963}). More substantially, our arguments do not distinguish between the critical phase $\sdim\mu=1$, where the conclusion of the conjecture is generally false, and the super-critical phase $\sdim\mu>1$, so in their present form they cannot possibly give results about absolute continuity. The discussion above notwithstanding, shortly after this paper appeared in preprint form, P. Shmerkin found an ingenious way to ``amplify'' our results on parametric families of self-similar measures and obtain results about absolute continuity. For instance, \begin{thm*} [Shmerkin \cite{Shmerkin2013}] There is a set $E\subseteq(\frac{1}{2},1)$ of Hausdorff dimension $0$, such that the Bernoulli convolution $\nu_{\lambda}$ is absolutely continuous for all $\lambda\in(\frac{1}{2},1)\setminus E$. \end{thm*} The idea of the proof is to split $\nu_{\lambda}$ as a convolution $\nu'_{\lambda}*\nu''_{\lambda}$ of self-similar measures, with $\sdim\nu'_{\lambda}\geq1$ and $\sdim\nu''_{\lambda}>0$. By Theorem \ref{thm:main-parametric}, $\dim\nu'_{\lambda}=1$ outside a zero-dimensional set $E'$ of parameters.
On the other hand a classical argument of Erd\H{o}s and Kahane shows that, outside a zero-dimensional set $E''$ of parameters, the Fourier transform of $\nu''_{\lambda}$ has power decay. Taking $E=E'\cup E''$, Shmerkin shows that $\nu_{\lambda}=\nu'_{\lambda}*\nu''_{\lambda}$ is absolutely continuous for $\lambda\in(\frac{1}{2},1)\setminus E$. At present the argument above is limited by the fact that $E''$ is completely non-effective, so, unlike Theorem \ref{thm:main-individual}, it does not give a condition that applies to \emph{individual} self-similar measures, and does not provide concrete new examples of parameters for which $\nu_{\lambda}$ is absolutely continuous. In contrast, Corollary \ref{cor:algebraic-parameters} tells us that $\dim\nu_{\lambda}=1$ whenever $\lambda\in(\frac{1}{2},1)\cap\mathbb{Q}$, as well as other algebraic examples. It remains a challenge to prove a similar result for absolute continuity. \subsection{\label{sub:Organization}Notation and organization of the paper} The main ingredients in the proofs are our results on the growth of convolutions of measures. We develop this subject in the next three sections: Section \ref{sec:Additive-combinatorics} introduces the statements and basic definitions, Section \ref{sec:Entropy-concentration-uniformity-saturation} contains some preliminaries on entropy and convolutions, and Section \ref{sec:Entropy-growth-for-convolutions} proves the main results on convolutions. In Section \ref{sec:Parameterized-families-of-self-similar-measures} we prove Theorem \ref{thm:main-individual} and the other main results. We follow standard notational conventions. $\mathbb{N}=\{1,2,3,\ldots\}$. All logarithms are to base $2$. $\mathcal{P}(X)$ is the space of probability measures on $X$, endowed with the weak-{*} topology if appropriate. We follow standard ``big $O$'' notation: $O_{\alpha}(f(n))$ is an unspecified function bounded in absolute value by $C_{\alpha}\cdot f(n)$ for some constant $C_{\alpha}$ depending on $\alpha$. Similarly $o(1)$ is a quantity tending to $0$ as the relevant parameter $\rightarrow\infty$. The statement ``for all $s$ and $t>t(s),\ldots$'' should be understood as saying ``there exists a function $t(\cdot)$ such that for all $s$ and $t>t(s),\ldots$''. If we want to refer to the function $t(\cdot)$ outside the context where it is introduced we will designate it as $t_{1}(\cdot)$, $t_{2}(\cdot)$, etc. \subsection*{Acknowledgment} I am grateful to Pablo Shmerkin and Boris Solomyak for many contributions which have made this a better paper, and especially their permission to include the derivation of Theorem \ref{thm:furstenberg}. I also thank Nicolas de Saxce and Izabella Laba for their comments. This project began during a visit to Microsoft Research in Redmond, Washington, and I would like to thank Yuval Peres and the members of the theory group for their hospitality. \section{\label{sec:Additive-combinatorics}An inverse theorem for the entropy of convolutions} \subsection{\label{sub:Entropy-and-additive-combinatorics}Entropy and additive combinatorics} As we saw in Section \ref{sub:Outline-of-the-proof}, a key ingredient in the proof of Theorem \ref{thm:main-individual-entropy-1} is an analysis of the growth of measures under convolution. This subject is of independent interest and will occupy us for a large part of this paper. It will be convenient to introduce the normalized scale-$n$ entropy \[ H_{n}(\mu)=\frac{1}{n}H(\mu,\mathcal{D}_{n}).
\] Our aim is to obtain structural information about measures $\mu,\nu$ for which the entropy of $\mu*\nu$ is small in the sense that \begin{equation} H_{n}(\mu*\nu)\leq H_{n}(\mu)+\delta,\label{eq:mean-entropy-growth} \end{equation} where $\delta>0$ is small but fixed, and $n$ is large. This problem is a relative of classical ones in additive combinatorics concerning the structure of sets $A,B$ whose sumset $A+B=\{a+b\,:\, a\in A\,,\, b\in B\}$ is appropriately small. The general principle is that when the sum is small, the sets should have some algebraic structure. Results to this effect are known as inverse theorems. For example the Freiman-Rusza theorem asserts that if $|A+B|\leq C|A|$ then $A,B$ are close, in a manner depending on $C$, to generalized arithmetic progressions% \footnote{A generalized arithmetic progression is an affine image of a box in a higher-dimensional lattice.% } (the converse is immediate). For details and more discussion see e.g.\ \cite{TaoVu2006}. The entropy of a discrete measure corresponds to the logarithm of the cardinality of a set, and convolution is the analog for measures of the sumset operation. Thus the analog of the condition $|A+A|\leq C|A|$ is \begin{equation} H_{n}(\mu*\mu)\leq H_{n}(\mu)+O(\frac{1}{n}).\label{eq:O-of-1-entropy-growth} \end{equation} An entropy version of Freiman's theorem was recently proved by Tao \cite{Tao2010}, who showed that if $\mu$ satisfies \eqref{eq:O-of-1-entropy-growth} then it is close, in an appropriate sense, to a uniform measure on a (generalized) arithmetic progression. The condition \eqref{eq:mean-entropy-growth}, however, is significantly weaker than \eqref{eq:O-of-1-entropy-growth} even when the former is specialized to $\nu=\mu$, and it is harder to draw conclusions from it about the global structure of $\mu$. Consider the following example. Start with an arithmetic progression of length $n_{1}$ and gap $\varepsilon_{1}$, and put the uniform measure on it. Now split each atom $x$ into an arithmetic progression of length $n_{2}$ and gap $\varepsilon_{2}<\varepsilon_{1}/n_{2}$, starting at $x$ (so that the entire sub-progression fits in the gap between $x$ and the next atom). Repeat this procedure $N$ times with parameters $n_{i},\varepsilon_{i}$, and call the resulting measure $\mu$. Let $k$ be such that $\varepsilon_{N}$ is of order $2^{-k}$. It is not hard to verify that we can have $H_{k}(\mu)=1/2$ but $|H_{k}(\mu)-H_{k}(\mu*\mu)|$ arbitrarily small. This example is actually the uniform measure on a (generalized) arithmetic progression, as predicted by Freiman-type theorems, but the rank $N$ can be arbitrarily large. Furthermore if one conditions $\mu$ on an exponentially small subset of its support one gets another example with similar properties that is quite far from a generalized arithmetic progression. Our main contribution to this matter is Theorem \ref{thm:inverse-thm-Rd} below, which shows that constructions like the one above are, in a certain statistical sense, the only way that \eqref{eq:mean-entropy-growth} can occur. We note that there is a substantial existing literature on the growth condition $|A+B|\leq|A|^{1+\delta}$, which is the sumset analog of \eqref{eq:mean-entropy-growth}. Such a condition appears in the sum-product theorems of Bourgain-Katz-Tao \cite{BourgainKatzTao2004} and in the work of Katz-Tao \cite{KatzTao2001}, and in the Euclidean setting more explicitly in Bourgain's work on the Erd\H{o}s-Volkmann conjecture \cite{Bourgain2003} and Marstrand-like projection theorems \cite{Bourgain2010}.
However we have not found a result in the literature that meets our needs and, in any event, we believe that the formulation given here will find further applications. \subsection{\label{sub:Component-measures}Component measures} The following notation will be needed in $\mathbb{R}^{d}$ as well as $\mathbb{R}$. Let $\mathcal{D}_{n}^{d}=\mathcal{D}_{n}\times\ldots\times\mathcal{D}_{n}$ denote the dyadic partition of $\mathbb{R}^{d}$; we often suppress the superscript when it is clear from the context. Let $\mathcal{D}_{n}(x)\in\mathcal{D}_{n}$ denote the unique level-$n$ dyadic cell containing $x$. For $D\in\mathcal{D}_{n}$ let $T_{D}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ be the unique homothety mapping $D$ to $[0,1)^{d}$. Recall that if $\mu\in\mathcal{P}(\mathbb{R}^{d})$ then $T_{D}\mu$ is the push-forward of $\mu$ through $T_{D}$. \begin{defn} For $\mu\in\mathcal{P}(\mathbb{R}^{d})$ and a dyadic cell $D$ with $\mu(D)>0$, the (raw) $D$-component of $\mu$ is \[ \mu_{D}=\frac{1}{\mu(D)}\mu|_{D} \] and the (rescaled) $D$-component is \[ \mu^{D}=\frac{1}{\mu(D)}T_{D}(\mu|_{D}). \] For $x\in\mathbb{R}^{d}$ with $\mu(\mathcal{D}_{n}(x))>0$ we write \begin{eqnarray*} \mu_{x,n} & = & \mu_{\mathcal{D}_{n}(x)}\\ \mu^{x,n} & = & \mu^{\mathcal{D}_{n}(x)}. \end{eqnarray*} These measures, as $x$ ranges over all possible values for which $\mu(\mathcal{D}_{n}(x))>0$, are called the level-$n$ components of $\mu$. \end{defn} Our results on the multi-scale structure of $\mu\in\mathcal{P}(\mathbb{R}^{d})$ are stated in terms of the behavior of random components of $\mu$, defined as follows.% \footnote{Definition \ref{def:component-distribution} is motivated by Furstenberg's notion of CP-distributions \cite{Furstenberg70,Furstenberg08,HochmanShmerkin2011}, which arise as limits as $N\rightarrow\infty$ of the distribution of components of level $1,\ldots,N$. These limits have a useful dynamical interpretation but in our finitary setting we do not require this technology.% } \begin{defn} \label{def:component-distribution}Let $\mu\in\mathcal{P}(\mathbb{R}^{d})$. \begin{enumerate} \item A random level-$n$ component, raw or rescaled, is the random measure $\mu_{D}$ or $\mu^{D}$, respectively, obtained by choosing $D\in\mathcal{D}_{n}$ with probability $\mu(D)$; equivalently, the random measure $\mu_{x,n}$ or $\mu^{x,n}$, respectively, with $x$ chosen according to $\mu$. \item For a finite set $I\subseteq\mathbb{N}$, a random level-$I$ component, raw or rescaled, is chosen by first choosing $n\in I$ uniformly, and then (independently, conditioned on the choice of $n$) choosing a raw or rescaled level-$n$ component, respectively. \end{enumerate} \end{defn} \begin{notation} When the symbols $\mu^{x,i}$ and $\mu_{x,i}$ appear inside an expression $\mathbb{P}\left(\ldots\right)$ or $\mathbb{E}\left(\ldots\right)$, they will always denote random variables drawn according to the component distributions defined above. The range of $i$ will be specified as needed. \end{notation} The definition is best understood with some examples. For $A\subseteq\mathcal{P}([0,1]^{d})$ we have \begin{eqnarray*} \mathbb{P}_{i=n}\left(\mu^{x,i}\in A\right) & = & \int1_{A}(\mu^{x,n})\, d\mu(x)\\ \mathbb{P}_{0\leq i\leq n}\left(\mu^{x,i}\in A\right) & = & \frac{1}{n+1}\sum_{i=0}^{n}\int1_{A}(\mu^{x,i})\, d\mu(x). \end{eqnarray*} This notation implicitly defines $x,i$ as random variables.
Thus if $A_{0},A_{1},\ldots\subseteq\mathcal{P}([0,1]^{d})$ and $D\subseteq[0,1]^{d}$ we could write \[ \mathbb{P}_{0\leq i\leq n}\left(\mu^{x,i}\in A_{i}\mbox{ and }x\in D\right)=\frac{1}{n+1}\sum_{i=0}^{n}\mu\left(x\,:\,\mu^{x,i}\in A_{i}\mbox{ and }x\in D\right). \] Similarly, for $f:\mathcal{P}([0,1)^{d})\rightarrow\mathbb{R}$ and $I\subseteq\mathbb{N}$ we the expectation \[ \mathbb{E}_{i\in I}\left(f(\mu^{x,i})\right)=\frac{1}{|I|}\sum_{i\in I}\int f(\mu^{x,i})\, d\mu(x). \] When dealing with components of several measures $\mu,\nu$, we assume all choices of components $\mu^{x,i}$, $\nu^{y,j}$ are independent unless otherwise stated. For instance, \[ \mathbb{P}_{i=n}\left(\mu^{x,i}\in A\,,\,\nu^{y,i}\in B\right)=\int\int1_{A}(\mu^{x,n})\cdot1_{B}(\nu^{y,n})\, d\mu(x)\, d\nu(y). \] Here $1_{A}$ is the indicator function on $A$, given by $1_{A}(\omega)=1$ if $\omega\in A$ and $0$ otherwise. We record one obvious fact, which we will use repeatedly: \begin{lem} \label{lem:components-average-to-the-whole}For $\mu\in\mathcal{P}(\mathbb{R}^{d})$ and $n\in\mathbb{N}$, \[ \mu=\mathbb{E}_{i=n}\left(\mu_{x,i}\right). \] \end{lem} Finally, we sometimes use similar notation to average a sequence $a_{n},\ldots,a_{n+k}\in\mathbb{R}$: \[ \mathbb{E}_{n\leq i\leq n+k}\left(a_{i}\right)=\frac{1}{k+1}\sum_{i=n}^{n+k}a_{i}. \] \subsection{\label{sub:inverse-theorem}An inverse theorem} The approximate equality $H_{n}(\mu*\nu)\approx H_{n}(\mu)$ occurs trivially if either $\mu$ is uniform (Lebesgue) measure on $[0,1]$, or if $\nu=\delta_{x}$ is a point mass. As we saw in Section \ref{sub:Entropy-and-additive-combinatorics}, there are other ways this can occur, but the theorem below shows that is a \emph{statistical} sense, \emph{locally }(i.e. for typical component measures) the two trivial scenarios are essentially the only ones. In order to state this precisely we require finite-scale and approximate versions of being uniform and being a point mass. There are many definitions to choose from. One possible choice is the following: \begin{defn} \label{def:almost-atomic}A measure $\mu\in\mathcal{P}([0,1])$ is $\varepsilon$\emph{-atomic} if there is an interval $I$ of length $\varepsilon$ such that $\mu(I)>1-\varepsilon$. \end{defn} Alternatively we could require that the entropy be small at a given scale, or that the random variable whose distribution is the given measure has small variance. Up to choice of parameters these definitions coincide and we shall use all the definitions later. See Definition \ref{def:ep-em-almost-atomic} and the discussion following it, and Lemma \ref{lem:concentration-from-covariance-matrix}, below. \begin{defn} \label{def:almost-uniform}A measure $\mu\in\mathcal{P}([0,1])$ is \emph{$(\varepsilon,m)$-uniform if $H_{m}(\mu)>1-\varepsilon$.} \end{defn} Again one can imagine many alternative definitions. For example, almost-uniformity of $\mu\in\mathcal{P}([0,1])$ at scale $\delta$ could mean that $|\mu(I)-|I||<\delta^{2}$ for all intervals $I$ of length $|I|\geq\delta$, or that the Fourier transform $\widehat{\mu}(\xi)$ is small at frequencies $|\xi|<1/\delta$. Again, these definitions are essentially equivalent, up to adjustment of parameters, to the one above. We shall not use them here. 
\begin{thm} \label{thm:inverse-thm-Rd}For every $\varepsilon>0$ and integer $m\geq1$ there is a $\delta=\delta(\varepsilon,m)>0$ such that for every $n>n(\varepsilon,\delta,m)$, the following holds: if $\mu,\nu\in\mathcal{P}([0,1])$ and \[ H_{n}(\mu*\nu)<H_{n}(\mu)+\delta, \] then there are disjoint subsets $I,J\subseteq\{1,\ldots,n\}$ with $|I\cup J|>(1-\varepsilon)n$, such that \begin{eqnarray*} \mathbb{P}_{i=k}\left(\mu^{x,i}\mbox{ is }(\varepsilon,m)\mbox{-uniform}\right)\;>\;1-\varepsilon & & \mbox{for }k\in I\\ \mathbb{P}_{i=k}\left(\nu^{x,i}\mbox{ is }\varepsilon\mbox{-atomic}\right)\;>\;1-\varepsilon & & \mbox{for }k\in J. \end{eqnarray*} \end{thm} From this it is easy to derive many variants of the theorem for the other notions of atomicity and uniformity discussed above. In Section \ref{sub:inverse-theorem} we give a marginally stronger statement in which atomicity is expressed in terms of entropy. The proof is given in Section \ref{sub:Proof-of-inverse-theorem}. The dependence of $\delta$ on $\varepsilon,m$ is effective, but the bounds we obtain are certainly far from optimal, and we do not pursue this topic. The value of $n$ depends among other things on the rate at which $H_{m}(\mu)\rightarrow\dim\mu$, which is currently not effective. The converse direction of the theorem is false, that is, there are measures which satisfy the conclusion but also $H_{n}(\mu*\nu)>H_{n}(\mu)+\delta$. To see this begin with a measure $\mu\in[0,1]$ such that $\dim(\mu*\mu)=\dim\mu=1/2$, and such that $\lim H_{n}(\mu)=\lim H_{n}(\mu*\mu)=\frac{1}{2}$ (such measures are not hard to construct, see e.g. \cite{ErdosVolkmann1966} or the more elaborate constructions in \cite{Korner2008,SchmelingShmerkin2009}). By Marstrand's theorem, for a.e. $t$ the scaled measure $\nu(A)=\mu(tA)$ satisfies $\dim\mu*\nu=1$ and hence $H_{n}(\mu*\nu)\rightarrow1$. But it is easy to verify that, as the conclusion of the theorem holds for the pair $\mu,\mu$, it holds for $\mu,\nu$ as well. Note that there is no assumption on the entropy of $\nu$, but if $H_{n}(\nu)$ is sufficiently close to $0$ the conclusion will automatically hold with $I$ empty, and if $H_{n}(\nu)$ is not too close to $0$ then $J$ cannot be too large relative to $n$ (see Lemma \ref{lem:entropy-local-to-global} below). We obtain the following useful conclusion. \begin{thm} \label{thm:inverse-thm-R}For every $\varepsilon>0$ and integer $m$, there is a $\delta=\delta(\varepsilon,m)>0$ such that for every $n>n(\varepsilon,\delta,m)$ and every $\mu\in\mathcal{P}([0,1])$, if \[ \mathbb{P}_{_{0\leq i\leq n}}\left(H_{m}(\mu^{x,i})<1-\varepsilon\right)>1-\varepsilon \] then for every $\nu\in\mathcal{P}([0,1])$ \[ H_{n}(\nu)>\varepsilon\qquad\implies\qquad H_{n}(\mu*\nu)\geq H_{n}(\mu)+\delta. \] \end{thm} Specializing the above to self-convolutions we have the following result, which shows that constructions like the one described in Section \ref{sub:Entropy-and-additive-combinatorics} are, roughly, the only way that $H_{n}(\mu*\mu)=H_{n}(\mu)+\delta$ can occur. This should be compared with the results of Tao \cite{Tao2010}, who studied the condition $H_{n}(\mu*\mu)=H_{n}(\mu)+O(\frac{1}{n})$. 
\begin{thm} \label{thm:self-convolution}For every $\varepsilon>0$ and integer $m$, there is a $\delta=\delta(\varepsilon,m)>0$ such that for every sufficiently large $n>n(\varepsilon,\delta,m)$ and every $\mu\in\mathcal{P}([0,1))$, if \[ H_{n}(\mu*\mu)<H_{n}(\mu)+\delta \] then there disjoint are subsets $I,J\subseteq\{0,\ldots,n\}$ with $|I\cup J|\geq(1-\varepsilon)n$ and such that \begin{eqnarray*} \mathbb{P}_{i=k}\left(\mu^{x,i}\mbox{ is }(\varepsilon,m)\mbox{-uniform}\right)\;>\;1-\varepsilon & & \mbox{for }k\in I\\ \mathbb{P}_{i=k}\left(\mu^{x,i}\mbox{ is }\varepsilon\mbox{-atomic}\right)\;>\;1-\varepsilon & & \mbox{for }k\in J. \end{eqnarray*} \end{thm} These results hold more generally for compactly supported measures but the parameters will depend on the diameter of the support. They can also be extended to measures with unbounded support under additional assumptions, see Section \ref{sub:Applications}. \section{\label{sec:Entropy-concentration-uniformity-saturation}Entropy, atomicity, uniformity} \subsection{\label{sub:Preliminaries-on-entropy}Preliminaries on entropy} The Shannon entropy of a probability measure $\mu$ with respect to a countable partition $\mathcal{E}$ is given by \[ H(\mu,\mathcal{E})=-\sum_{E\in\mathcal{E}}\mu(E)\log\mu(E), \] where the logarithm is in base $2$ and $0\log0=0$. The conditional entropy with respect to a countable partition $\mathcal{F}$ is \[ H(\mu,\mathcal{E}|\mathcal{F})=\sum_{F\in\mathcal{F}}\mu(F)\cdot H(\mu_{F},\mathcal{E}), \] where $\mu_{F}=\frac{1}{\mu(F)}\mu|_{F}$ is the conditional measure on $F$. For a discrete probability measure $\mu$ we write $H(\mu)$ for the entropy with respect to the partition into points, and for a probability vector $\alpha=(\alpha_{1},\ldots,\alpha_{k})$ we write \[ H(\alpha)=-\sum\alpha_{i}\log\alpha_{i}. \] We collect here some standard properties of entropy. \begin{lem} \label{lem:entropy-combinatorial-properties}Let $\mu,\nu$ be probability measures on a common space, $\mathcal{E},\mathcal{F}$ partitions of the underlying space and $\alpha\in[0,1]$. \begin{enumerate} \item \label{enu:entropy-positivity}$H(\mu,\mathcal{E})\geq0$, with equality if and only if $\mu$ is supported on a single atom of $\mathcal{E}$. \item \label{enu:entropy-combinatorial-bound}If $\mu$ is supported on $k$ atoms of $\mathcal{E}$ then $H(\mu,\mathcal{E})\leq\log k$. \item \label{enu:entropy-refining-partitions}If $\mathcal{F}$ refines $\mathcal{E}$ (i.e. $\forall\; F\in\mathcal{F}\;\exists E\in\mathcal{E}\, s.t.\, F\subseteq E$) then $H(\mu,\mathcal{F})\geq H(\mu,\mathcal{E})$. \item \label{enu:entropy-conditional-formula}If $\mathcal{E}\lor\mathcal{F}=\{E\cap F\,:\, E\in\mathcal{E}\,,\, F\in\mathcal{F}\}$ is the join of $\mathcal{E}$ and $\mathcal{F}$, then \[ H(\mu,\mathcal{E}\lor\mathcal{F})=H(\mu,\mathcal{F})+H(\mu,\mathcal{E}|\mathcal{F}). \] \item \label{enu:entropy-concavity}$H(\cdot,\mathcal{E})$ and $H(\cdot,\mathcal{E}|\mathcal{F})$ are concave \item \label{enu:entropy-almost-convexity}$H(\cdot,\mathcal{E})$ obeys the ``convexity'' bound \[ H(\sum\alpha_{i}\mu_{i},\mathcal{E})\leq\sum\alpha_{i}H(\mu_{i},\mathcal{E})+H(\alpha). \] \end{enumerate} \end{lem} In particular, we note that for $\mu\in\mathcal{P}([0,1]^{d})$ we have the bounds $H(\mu,\mathcal{D}_{m})\leq md$ (hence $H_{n}(\mu)\leq1$) and $H(\mu,\mathcal{D}_{n+m}|\mathcal{D}_{n})\leq md$. Although the function $(\mu,m)\mapsto H(\mu,\mathcal{D}_{m})$ is not weakly continuous, the following estimates provide usable substitutes. 
\begin{lem} \label{lem:entropy-weak-continuity-properties}Let $\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})$, $\mathcal{E},\mathcal{F}$ are partitions of $\mathbb{R}^{d}$, and $m,m'\in\mathbb{N}$. \begin{enumerate} \item \label{enu:entropy-approximation} Given a compact $K\subseteq\mathbb{R}^{d}$ and $\mu\in\mathcal{P}(K)$, there is a neighborhood $U\subseteq\mathcal{P}(K)$ of $\mu$ such that $|H(\nu,\mathcal{D}_{m})-H(\mu,\mathcal{D}_{m})|=O_{d}(1)$ for $\nu\in U$. \item \label{enu:entropy-combinatorial-distortion} If each $E\in\mathcal{E}$ intersects at most $k$ elements of $\mathcal{F}$ and vice versa, then $|H(\mu,\mathcal{E})-H(\mu,\mathcal{F})|=O(\log k)$. \item \label{enu:entropy-geometric-distortion} If $f,g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}$ and $\left\Vert f(x)-g(x)\right\Vert \leq C2^{-m}$ for $x\in\mathbb{R}^{d}$ then $|H(f\mu,\mathcal{D}_{m})-H(g\mu,\mathcal{D}_{m})|\leq O_{C,k}(1)$. \item \label{enu:entropy-translation} If $\nu(\cdot)=\mu(\cdot+x_{0})$ then $\left|H(\mu,\mathcal{D}_{m})-H(\nu,\mathcal{D}_{m})\right|=O_{d}(1)$. \item \label{enu:entropy-change-of-scale} If $C^{-1}\leq m'/m\leq C$, then $\left|H(\mu,\mathcal{D}_{m})-H(\mu,\mathcal{D}_{m'})\right|\leq O_{C,d}(1)$. \end{enumerate} \end{lem} Recall that the total variation distance between $\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})$ is \[ \left\Vert \mu-\nu\right\Vert =\sup_{A}|\mu(A)-\nu(A)|, \] where the supremum is over Borel sets $A$. This is a complete metric on $\mathcal{P}(\mathbb{R}^{d})$. It follows from standard measure theory that for every $\varepsilon>0$ there is a $\delta>0$ such that if $\left\Vert \mu-\nu\right\Vert <\delta$ then there are probability measures $\tau,\mu',\nu'$ such that $\mu=(1-\varepsilon)\tau+\varepsilon\mu'$ and $\nu=(1-\varepsilon)\tau+\varepsilon\nu'$. Combining this with Lemma \ref{lem:entropy-combinatorial-properties} \eqref{enu:entropy-concavity} and \eqref{enu:entropy-almost-convexity}, we have \begin{lem} \label{lem:entropy-total-variation-continuity}For every $\varepsilon>0$ there is a $\delta>0$ such that if $\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})$ and $\left\Vert \mu-\nu\right\Vert <\delta$ then for any finite partition $\mathcal{A}$ of $\mathbb{R}^{d}$ with $k$ elements, \[ |H(\mu,\mathcal{A})-H(\nu,\mathcal{A})|<\varepsilon\log k+H(\varepsilon) \] In particular, if $\mu,\nu\in\mathcal{P}([0,1]^{d})$, then \[ |H_{m}(\mu)-H_{m}(\nu)|<\varepsilon+\frac{H(\varepsilon)}{m} \] \end{lem} \subsection{\label{sub:Global-entropy-from-local-entropy}Global entropy from local entropy} Recall from Section \ref{sub:Component-measures} the definition of the raw and re-scaled components $\mu_{x,n}$, $\mu^{x,n}$, and note that \begin{equation} H(\mu^{x,n},\mathcal{D}_{m})=H(\mu_{x,n},\mathcal{D}_{n+m}).\label{eq:convert-restriction-entropy-to-local-entropy} \end{equation} Also, note that \begin{eqnarray*} \mathbb{E}_{i=n}\left(H_{m}(\mu^{x,i})\right) & = & \int\frac{1}{m}H(\mu^{x,n},\mathcal{D}_{m})\, d\mu(x)\\ & = & \frac{1}{m}\int H(\mu_{x,n},\mathcal{D}_{n+m})\, d\mu(x)\\ & = & \frac{1}{m}\sum_{D\in\mathcal{D}_{n}}\mu(D)H(\mu_{D},\mathcal{D}_{m+n})\\ & = & \frac{1}{m}H(\mu,\mathcal{D}_{n+m}\,|\,\mathcal{D}_{n}). \end{eqnarray*} \begin{lem} \label{lem:entropy-local-to-global}For $r\geq1$ and $\mu\in\mathcal{P}([-r,r]^{d})$ and integers $m<n$, \begin{eqnarray*} H_{n}(\mu) & = & \mathbb{E}_{_{0\leq i\leq n}}\left(H_{m}(\mu^{x,i})\right)+O(\frac{m}{n}+\frac{\log r}{n}). 
\end{eqnarray*} \end{lem} \begin{proof} By the paragraph before the lemma, the statement is equivalent to \[ H_{n}(\mu)=\frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{m}H(\mu,\mathcal{D}_{i+m}|\mathcal{D}_{i})+O(\frac{m}{n}+\frac{\log r}{n}). \] At the cost of adding $O(m/n)$ to the error term we can delete up to $m$ terms from the sum. Thus without loss of generality we may assume that $n/m\in\mathbb{N}$. When $m=1$, iterating the conditional entropy formula and using $H(\mu,\mathcal{D}_{0})=O(\log r)$ gives \[ \sum_{i=0}^{n-1}H(\mu,\mathcal{D}_{i+1}\,|\,\mathcal{D}_{i})=H(\mu,\mathcal{D}_{n}|\mathcal{D}_{0})=H(\mu,\mathcal{D}_{n})-O(\log r) \] The result follows on dividing by $n$. For general $m$, first decompose the sum according to the residue class of $i\bmod m$ and apply the above to each one: \begin{eqnarray*} \sum_{i=0}^{n-1}\frac{1}{m}H(\mu,\mathcal{D}_{i+m}\,|\,\mathcal{D}_{i}) & = & \frac{1}{m}\sum_{p=0}^{m-1}\left(\sum_{k=0}^{n/m-1}H(\mu,\mathcal{D}_{(k+1)m+p}\,|\,\mathcal{D}_{km+p})\right)\\ & = & \frac{1}{m}\sum_{p=0}^{m-1}H(\mu,\mathcal{D}_{n+p}\,|\,\mathcal{D}_{p}). \end{eqnarray*} Dividing by $n$, the result follows from the bound \[ \left|\frac{1}{n}H(\mu,\mathcal{D}_{n+p}|\mathcal{D}_{p})-H_{n}(\mu)\right|<\frac{2m+O(\log r)}{n}, \] which can be derived from the identities \begin{eqnarray*} H(\mu,\mathcal{D}_{n})+H(\mu,\mathcal{D}_{n+p}|\mathcal{D}_{n}) & = & H(\mu,\mathcal{D}_{n+p})\\ & = & H(\mu,\mathcal{D}_{p})+H(\mu,\mathcal{D}_{n+p}|\mathcal{D}_{p}) \end{eqnarray*} together with the fact that $H(\mu,\mathcal{D}_{p})\leq p+\log r$ and $H(\mu,\mathcal{D}_{rm+p}|\mathcal{D}_{rm})\leq p$, and recalling that $0\leq p<m$. \end{proof} We have a similar lower bound for the entropy of a convolution in terms of convolutions of its components at each level. \begin{lem} \label{lem:entropy-of-convolutions-via-component-convolutions}Let $r>0$ and $\mu,\nu\in\mathcal{P}([-r,r]^{d})$. Then for $m<n\in\mathbb{N}$ \begin{eqnarray*} H_{n}(\mu*\nu) & \geq & \mathbb{E}_{0\leq i\leq n}\left(\frac{1}{m}H(\mu_{x,i}*\nu_{y,i},\mathcal{D}_{i+m}|\mathcal{D}_{i})\right)+O(\frac{m+\log r}{n}).\\ & \geq & \mathbb{E}_{0\leq i\leq n}\left(H_{m}(\mu^{x,i}*\nu^{y,i})\right)+O(\frac{1}{m}+\frac{m}{n}+\frac{\log r}{n}). \end{eqnarray*} \end{lem} \begin{proof} As in the previous proof, by introducing an error of $O(m/n)$ we can assume that $m$ divides $n$, and by the conditional entropy formula, \begin{eqnarray*} H(\mu*\nu,\mathcal{D}_{n}) & = & \sum_{k=0}^{n/m-1}H(\mu*\nu,\mathcal{D}_{(k+1)m}|\mathcal{D}_{km})+H(\mu*\nu,\mathcal{D}_{0})\\ & = & \sum_{k=0}^{n/m-1}H(\mu*\nu,\mathcal{D}_{(k+1)m}|\mathcal{D}_{km})+O(\log r) \end{eqnarray*} since $\mu*\nu$ is supported on $[-2r,2r]^{d}$. Apply the linear map $(x,y)\mapsto x+y$ to the trivial identity $\mu\times\nu=\mathbb{E}_{i=k}(\mu_{x,i}\times\nu_{y,i})$ (Lemma \eqref{lem:components-average-to-the-whole} for the product measure). We obtain the identity $\mu*\nu=\mathbb{E}_{i=k}(\mu_{x,i}*\nu_{xy,i})$. 
By concavity of conditional entropy (Lemma \ref{lem:entropy-combinatorial-properties} \eqref{enu:entropy-concavity}), \begin{eqnarray*} H(\mu*\nu,\mathcal{D}_{n}) & = & \sum_{k=0}^{n/m-1}H\left(\mathbb{E}_{i=km}(\mu_{x,i}*\nu_{xy,i}),\mathcal{D}_{(k+1)m}|\mathcal{D}_{km}\right)+O(\log r)\\ & \geq & \sum_{k=0}^{n/m-1}\mathbb{E}_{i=km}\left(H(\mu_{x,i}*\nu_{y,i},\mathcal{D}_{(k+1)m}|\mathcal{D}_{km})\right)+O(\log r), \end{eqnarray*} Dividing by $n$, we have shown that \[ H_{n}(\mu*\nu)\geq\frac{m}{n}\sum_{k=0}^{n/m-1}\mathbb{E}_{i=k}\left(H_{m}(\mu^{x,i}*\nu^{xy,i})\right)+O(\frac{m}{n}+\frac{\log r}{n}). \] Now do the same for the sum $k=p$ to $n/m+p$ for $p=0,1,\ldots,m-1$. Averaging the resulting expressions gives the first inequality. The second inequality follows from the first using \begin{eqnarray*} H(\mu_{x,i}*\nu_{x,i},\mathcal{D}_{(k+1)m}|\mathcal{D}_{km}) & = & H(\mu^{x,i}*\nu^{y,i},\mathcal{D}_{m}|\mathcal{D}_{0})\\ & = & H(\mu^{x,i}*\nu^{y,i},\mathcal{D}_{m})+O(1)\\ & = & mH_{m}(\mu^{x,i}*\nu^{y,i})+O(1), \end{eqnarray*} where the $O(1)$ error term arises because $\mu^{x,i}*\nu^{x,i}$ is supported on $[0,2)^{d}$ and hence meets $O(1)$ sets in $\mathcal{D}_{0}$. \end{proof} \subsection{Covering lemmas} We will require some simple combinatorial lemmas. \begin{lem} \label{lem:covering-by-intervals}Let $I\subseteq\{0,\ldots,n\}$ and $m\in\mathbb{N}$ be given. Then there is a subset $I'\subseteq I$ such that $I\subseteq I'+[0,m]$ and $[i,i+m]\cap[j,j+m]=\emptyset$ for distinct $i,j\in I'$.\end{lem} \begin{proof} Define $I'$ inductively. Begin with $I'=\emptyset$ and, at each successive stage, if $I\setminus\bigcup_{i\in I'}[i,i+m]\neq\emptyset$ then add its least element to $I'$. Stop when $I\subseteq\bigcup_{i\in I'}[i,i+m]$. \end{proof} \begin{lem} \label{lem:translation-invariant-covering}Let $I,J\subseteq\{0,\ldots,n\}$ and $m\in\mathbb{N}$, $\delta>0$. Suppose that $|[i,i+m]\cap J|\geq(1-\delta)m$ for $i\in I$. Then there is a subset $J'\subseteq J$ such that $|J'\cap(J'-\ell)|\geq(1-\delta-\frac{\ell}{m})|I|$ for $0\leq\ell\leq m$.\end{lem} \begin{proof} Let $I'\subseteq I$ be the collection obtained by applying the previous lemma to $I$,$m$. Let $J'=J\cap(\bigcup_{i\in I'}[i,i+m])$. Then \[ J'\cap(J'-\ell)\supseteq J\cap\bigcup_{i\in I'}([i,i+m]\cap[i-\ell,i+m-\ell])=\bigcup_{i\in I'}(J\cap[i,i+m-\ell]) \] Also $|J\cap[i,i+m-\ell]|\geq(1-\delta-\frac{\ell}{m})m$ for $i\in I'$ , and $I\subseteq\bigcup_{i\in I'}[i,i+m]$, so by the above, \[ |J'\cap(J'-\ell)|\geq(1-\delta-\frac{\ell}{m})\cdot|\bigcup_{i\in I'}[i,i+m]|\geq(1-\delta-\frac{\ell}{m})|I|\qedhere \] \end{proof} \begin{lem} \label{lem:pair-of-coverings}Let $m,\delta$ be given and let $I_{1},J_{1}$ and $I_{2},J_{2}$ be two pairs of subsets of $\{0,\ldots,n\}$ satisfying the assumptions of the previous lemma. Suppose also that $I_{1}\cap I_{2}=\emptyset$. Then there exist $J'_{1}\subseteq J_{1}$ and $J'_{2}\subseteq J_{2}$ with $J'_{1}\cap J'_{2}=\emptyset$ and such that $|J'_{1}\cup J'_{2}|\geq(1-\delta)^{2}|I_{1}\cup I_{2}|$.\end{lem} \begin{proof} Define $I'_{1}\subseteq I_{1}$ and $J'_{1}=J_{1}\cap\bigcup_{i\in I'}[i,i+m]$ as in the previous proof, so taking $\ell=0$ in its conclusion, $|J'_{1}|\geq(1-\delta)|I_{1}|$. Let $U=\bigcup_{i\in I'_{1}}[i,i+m]$, and recall that $|J'_{1}|=|U\cap J_{1}|\geq(1-\delta)|U|$. 
Since $I_{1}\subseteq U$ and $I_{1}\cap I_{2}=\emptyset$, \[ |J'_{1}\cap I_{2}|\leq|U|-|I_{1}|\leq\frac{1}{1-\delta}|J'_{1}|-|I_{1}| \] Hence, using $|J'_{1}|\geq(1-\delta)|I_{1}|$, \begin{eqnarray*} |J'_{1}\cup I_{2}| & = & |J'_{1}|+|I_{2}|-|J'_{1}\cap I_{2}|\\ & \geq & |J'_{1}|+|I_{2}|-(\frac{1}{1-\delta}|J'_{1}|-|I_{1}|)\\ & \geq & |I_{1}|+|I_{2}|-\frac{\delta}{1-\delta}|J'_{1}|\\ & \geq & (1-\delta)|I_{1}|+|I_{2}| \end{eqnarray*} Now perform the analysis above with $I_{2}\setminus J'_{1},J_{2}$ in the role of $I_{1},J_{1}$ and with $J'_{1}$ in the role of $I_{2}$ (thus $(I_{2}\setminus J'_{1})\cap J'_{1}=\emptyset$ as required). We obtain $J'_{2}\subseteq J_{2}$ such that \begin{eqnarray*} |J'_{2}\cup J'_{1}| & \geq & (1-\delta)|I_{2}\setminus J'_{1}|+|J'_{1}|\\ & = & (1-\delta)|J'_{1}\cup I_{2}| \end{eqnarray*} Substituting the previous bound $|J'_{1}\cup I_{2}|\geq(1-\delta)|I_{1}|+|I_{2}|$ gives the claim, except for disjointness of $J'_{1},J'_{2}$, but clearly if they are not disjoint we can replace $J'_{1}$ with $J'_{1}\setminus J_{2}$. \end{proof} \subsection{Atomicity and uniformity of components} We shall need to know almost-atomicity and almost-uniformity passes to component measures. It will be convenient to replace the notion of $\varepsilon$-atomic measures, introduced in Section \ref{sub:inverse-theorem}, with one that is both stronger and more convenient to work with. \begin{defn} \label{def:ep-em-almost-atomic}A measure $\mu\in\mathcal{P}([0,1])$ is $(\varepsilon,m)$\emph{-atomic} if $H_{m}(\mu)<\varepsilon$. \end{defn} Recall that $H_{m}(\mu)=0$ if and only if $\mu$ is supported on a single interval $I\in\mathcal{D}_{m}$ of length $2^{-m}$. Thus, by continuity of the entropy function $(p_{i})\mapsto-\sum p_{i}\log p_{i}$, if $\varepsilon$ is small compared to $m$ then any $(\varepsilon,m)$-atomic measure is $2^{-m}$-atomic. The reverse implication is false: indeed, a measure may be $\varepsilon$-atomic for arbitrarily small $\varepsilon$ and at the same time have its mass divided evenly between two (adjacent) intervals $I,I'\in\mathcal{D}_{m}$, in which case $H_{m}(\mu)=\frac{1}{m}$. Thus, for $\varepsilon$ small compared to $m$, the most one can say in general about an $\varepsilon$-atomic measure is that it is $(\frac{1}{m},m)$-atomic. Thus the definition above is slightly stronger. \begin{lem} \label{lem:almost-atomic-measures-have-almost-atomic-components}If $\mu\in\mathcal{P}([0,1])$ is $(\varepsilon,m)$-atomic then for $k<m$, \[ \mathbb{P}_{0\leq i\leq m}\left(\mu^{x,i}\mbox{ is }(\varepsilon',k)\mbox{-atomic}\right)>1-\varepsilon' \] for $\varepsilon'=\sqrt{\varepsilon+O(\frac{k}{m})}$. \end{lem} \begin{proof} By Lemma \ref{lem:entropy-local-to-global}, \[ \mathbb{E}_{0\leq i\leq m}(H_{k}(\mu^{i,x}))\leq H_{m}(\mu)+O(\frac{k}{m})<\varepsilon+O(\frac{k}{m}). \] Since $H_{k}(\mu^{i,x})\geq0$, the first claim follows by Markov's inequality.\end{proof} \begin{lem} \label{lem:saturation-passes-to-components}If $\mu\in\mathcal{P}([0,1])$ is $(\varepsilon,n)$-uniform then for every $1\leq m<n$, \[ \mathbb{P}_{0\leq i\leq n}\left(\mu^{x,i}\mbox{ is }(\varepsilon',m)\mbox{-uniform}\right)>1-\varepsilon' \] where \textup{$\varepsilon'=\sqrt{\varepsilon+O(\frac{m}{n})}$.}\end{lem} \begin{proof} The proof is the same as the previous lemma and we omit it. 
\end{proof} We also will repeatedly use the following consequence of Chebychev's inequality: \begin{lem} \label{lem:Chebyshev}Suppose that $\mathcal{A}\subseteq\mathcal{P}([0,1])$ and that \[ \mathbb{P}_{0\leq i\leq n}(\mu^{x,i}\in\mathcal{A})>1-\varepsilon \] Then there is a subset $I\subseteq\{0,\ldots,n\}$ with $|I|>(1-\sqrt{\varepsilon})n$ and \[ \mathbb{P}_{i=q}(\mu^{x,i}\in\mathcal{A})>1-\sqrt{\varepsilon}\qquad\mbox{for }q\in I \] \end{lem} \begin{proof} Consider the function $f:\{0,\ldots,m\}\rightarrow[0,1]$ given by $f(q)=\mathbb{P}_{i=q}(\mu^{x,i}\in\mathcal{A})$. By assumption $\mathbb{E}_{0\leq q\leq n}(f(q))>1-\varepsilon$. By Chebychev's inequality, there is a subset $I\subseteq\{0,\ldots,n\}$ with $|I|\geq(1-\sqrt{\varepsilon})n$ and $f(q)>1-\sqrt{\varepsilon}$ for $q\in I$, as desired. \end{proof} \section{\label{sec:Entropy-growth-for-convolutions}Convolutions} \subsection{\label{sub:Covariance-matrices}\label{sub:Normal-measures-and-Berry-Esseen}The Berry-Esseen theorem and an entropy estimate} For $\mu\in\mathcal{P}(\mathbb{R})$ let $m(\mu)$ denote the mean, or barycenter, of $\mu$, given by \[ \left\langle \mu\right\rangle =\int x\, d\mu(x), \] and let $\var(\mu)$ denote its variance: \[ \var(\mu)=\int(x-\left\langle \mu\right\rangle )^{2}\, d\mu(x). \] Recall that if $\mu_{1},\ldots,\mu_{k}\in\mathcal{P}(\mathbb{R})$ then $\mu=\mu_{1}*\ldots*\mu_{k}$ has mean $\left\langle \mu\right\rangle =\sum_{i=1}^{k}\left\langle \mu_{i}\right\rangle $ and $\var(\mu)=\sum_{i=1}^{k}\var(\mu_{i})$. The Gaussian with mean $m$ and variance $\sigma^{2}$ is given by $\gamma_{m,\sigma^{2}}(A)=\int_{A}\varphi((x-m)/\sigma^{2})dx$, where $\varphi(x)=\sqrt{2\pi}\exp(-\frac{1}{2}|x|^{2})$. The central limit theorem asserts that, for $\mu_{1},\mu_{2},\ldots\in\mathcal{P}(\mathbb{R}^{d})$ of positive variance, the convolutions $\mu_{1}*\ldots*\mu_{k}$ can be re-scaled so that the resulting measure is close in the weak sense to a Gaussian measure. The Berry-Esseen inequalities quantify the rate of this convergence. We use the following variant from \cite{Esseen1942}. \begin{thm} \label{thm:Berry-Esseen-Rotar}Let $\mu_{1},\ldots,\mu_{k}$ be probability measures on $\mathbb{R}$ with finite third moments $\rho_{i}=\int|x|^{3}\, d\mu_{i}(x)$. Let $\mu=\mu_{1}*\ldots*\mu_{k}$, and let $\gamma$ be the Gaussian measure with the same mean and variance as $\mu$. Then% \footnote{In the usual formulation one considers the measure $\mu'$ defined by scaling $\mu$ by $\var(\mu)$, and $\gamma'$ the Gaussian with the same mean and variance $1=\var(\mu')$, and gives a similar bound for $|\mu'(J)-\gamma'(J)|$ as $J$ ranges over intervals. The two formulations are equivalent since $\mu(I)-\gamma(I)=\mu'(J)-\gamma'(J)$ where $J$ is an interval depending in the obvious manner on $I$, and $I\rightarrow J$ is a bijection.% } for any interval $I\subseteq\mathbb{R}$, \[ |\mu(I)-\gamma(I)|\leq C_{1}\cdot\frac{\sum_{i=1}^{k}\rho_{i}}{\var(\mu)^{3/2}}, \] where $C_{1}=C_{1}(d)$. In particular, if $\rho_{i}\leq C$ and $\sum_{i=1}^{k}\var(\mu_{i})\geq ck$ for constants $c,C>0$ then \[ |\mu(I)-\gamma(I)|=O_{c,C}(k^{-1/2}). 
\] \end{thm} \subsection{\label{sub:Estimating-modulus-of-continuity}Multiscale analysis of repeated self-convolutions} In this section we show that for any measure $\mu$, every $\delta>0$, every integer scale $m\geq2$, and appropriately large $k$, the following holds: typical levels-$i$ components of the convolution $\mu^{*k}$ are $(\delta,m)$-uniform, unless in $\mu$ the level-$i$ components are typically $(\delta,m)$-atomic. The main idea is to apply the Berry-Esseen theorem to convolutions of component measures. \begin{prop} \label{pro:entropy-convolution-estimate}Let $\sigma>0$, $\delta>0$, and $m\geq2$ an integer. Then there exists an integer $p=p_{0}(\sigma,\delta,m)$ such that for all $k\geq k_{0}(\sigma,\delta,m)$, the following holds: Let $\mu_{1},\ldots,\mu_{k}\in\mathcal{P}([0,1])$, let $\mu=\mu_{1}*\ldots*\mu_{k}$, and suppose that $\var(\mu)\geq\sigma k$. Then \begin{equation} \mathbb{P}_{i=p-[\log\sqrt{k}]}\left(\mu^{x,i}\mbox{ is }(\delta,m)\mbox{-uniform}\right)>1-\delta.\label{eq:3} \end{equation} \end{prop} Note that $p-[\log\sqrt{k}]$ will generally be negative. Dyadic partitions of level $q$ with $q<0$ are defined in the same manner as for positive $q$, that is by $\mathcal{D}_{q}=\{[r2^{q},(r+1)2^{q})\}_{r\in\mathbb{Z}}$. For $q<0$ this partition consists of intervals of length is $2^{|q|}$ with integer endpoints. Thus, the conclusion of the proposition concerns the $\mu$-probabilities of nearby intervals of length $O_{p}(\sqrt{k})=O_{\sigma,\delta,m}(\sqrt{k})$ (since $p=p_{0}(\sigma,\delta,m)$). This is the natural scale at which we can expect to control such probabilities: indeed, $\mu$ is close to a Gaussian $\gamma$ of variance $\sigma k$, but only in the sense that for any $c$, if $k$ is large enough, $\mu$ and $\gamma$ closely agree on the mass that they give to intervals of length $c\sqrt{\var(\mu)}=c\sqrt{k}$. \begin{proof} Let us first make some elementary observations. Suppose that $\gamma\in\mathcal{P}(\mathbb{R})$ is a probability measure with continuous density function $f$, and $x\in\mathbb{R}$ is such that $f(x)\neq0$. Since $\gamma(I)=\int_{I}f(y)dy$, for any interval $I$ we have \[ \left|\frac{\gamma(I)}{|I|}-f(x)\right|\leq\sup_{z\in I}|f(x)-f(z)| \] where $|I|$ is the length of $I$. By continuity, the right hand side tends to $0$ uniformly as the endpoints of $I$ approach $x$. In particular, if $n$ is large enough, for any $I\subseteq\mathcal{D}_{n}(x)$ the ratio $\frac{\gamma(x)}{|I|}$ will be arbitrarily close to $f(x)$. Therefore, since $f(x)\neq0$, for any fixed $m$, if $n$ is large enough then $|\frac{\gamma(I)}{\gamma(J)}-1|=|\frac{\gamma(I)/|I|}{\gamma(J)/|J|}-1|$ for all intervals $I,J\in\mathcal{D}_{n+m}$ with $I,J\subseteq\mathcal{D}_{n}(x)$. In other words, the distribution of $\gamma^{x,n}$ on the level-$m$ dyadic subintervals of $[0,1)$ approaches the uniform one as $n\rightarrow\infty$. Now, \[ H_{m}(\mu^{x,n})=-\sum_{I\in\mathcal{D}_{n+m},I\subseteq\mathcal{D}_{n}(x)}\mu(I)\log\mu(I), \] and the function $t\log t$ is continuous for $t\in(0,1)$. Therefore, writing $u$ for the uniform measure on $[0,1)$, we conclude that \[ \lim_{n\rightarrow\infty}H_{m}(\gamma^{x,n})=H_{m}(u)=1. \] This in turn implies that $\mathbb{E}_{i=p}(H_{m}(\gamma^{x,p}))\rightarrow1$ as $p\rightarrow\infty$. Finally, the rate of convergence in the limits above is easily seen to depend only on the value $f(x)$ and the modulus of continuity of $f$ at $x$. 
Fix $0<\sigma,\delta<1$ and consider the family $\mathcal{G}$ of Gaussians with mean 0 and variance in the interval $[\sigma,1]$. For every interval $I=[-R,R]$, the restriction to $I$ of the density functions of measures in $\mathcal{G}$ form an equicontinuous family. Also, by choosing a large enough $R$ we can ensure that $\inf_{g\in\mathcal{G}}\gamma([-R,R])$ is arbitrarily close to $1$. Therefore, by the previous discussion, there is a $p=p_{0}(\sigma,\delta,m)$ such that $\mathbb{P}_{i=p}(H_{m}(\gamma^{x,i})>1-\delta)>1-\delta$ for all $\gamma\in\mathcal{G}$. Now, if $\mu_{i}$ and $\mu$ are as in the statement and $\mu'$ is $\mu$ scaled by $2^{-[\log\sqrt{k}]}$ (which is up to a constant factor the same as $1/\sqrt{k}$), then by the Berry-Esseen theorem (Theorem \ref{thm:Berry-Esseen-Rotar}) $\mu'$ agrees with the Gaussian of the same mean and variance on intervals of length $2^{-p-m}$ to a degree that can be made arbitrarily small by making $k$ large in a manner depending on $\sigma,p$. In particular for large enough $k$ this guarantees that $\mathbb{P}_{i=p}(H_{m}((\mu')^{x,i})>1-\delta)>1-\delta$. All that remains is to adjust the scale by a factor of $2^{[\log\sqrt{k}]}$. Then the same argument applied to $\mu$ instead of the scaled $\mu'$ gives $\mathbb{P}_{i=p-[\log\sqrt{k}]}(H_{m}((\mu)^{x,i})>1-\delta)>1-\delta$, which is \eqref{eq:3}. \end{proof} We turn to repeated self-convolutions. \begin{prop} \label{prop:saturation-of-components-of-convolution}Let $\sigma,\delta>0$ and $m\geq2$ an integer. Then there exists $p=p_{1}(\sigma,\delta,m)$ such that for sufficiently large $k\geq k_{1}(\sigma,\delta,m)$, the following holds. Let $\mu\in\mathcal{P}([0,1])$, fix an integer $i_{0}\geq0$, and write \[ \lambda=\mathbb{E}_{i=i_{0}}\left(\var(\mu^{x,i})\right). \] If $\lambda>\sigma$ then for $j_{0}=i_{0}-[\log\sqrt{k}]+p$ and $\nu=\mu^{*k}$ we have \[ \mathbb{P}_{j=j_{0}}\left(\nu^{x,j}\mbox{ is }(\delta,m)\mbox{-uniform}\right)>1-\delta. \] \end{prop} \begin{proof} Let $\mu$, $\lambda$ and $m$ be given. Fix $p$ and $k$ (we will later see how large they must be). Let $i_{0}$ be as in the statement and $j_{0}=i_{0}-[\log\sqrt{k}]+p$. Let $\widetilde{\mu}$ denote the $k$-fold self-product $\widetilde{\mu}=\mu\times\ldots\times\mu$ and let $\pi:(\mathbb{R})^{k}\rightarrow\mathbb{R}$ denote the addition map \[ \pi(x_{1},\ldots,x_{k})=\sum_{i=1}^{k}x_{i}. \] Then $\nu=\pi\widetilde{\mu}$, and, since $\widetilde{\mu}=\mathbb{E}_{i=i_{0}}\left(\widetilde{\mu}_{x,i}\right)$, we also have by linearity $\nu=\mathbb{E}_{i=i_{0}}\left(\pi\widetilde{\mu}_{x,i}\right)$. By concavity of entropy and an application of Markov's inequality, there is a $\delta_{1}>0$, depending only on $\delta$, such that the proposition will follow if we show that with probability $>1-\delta_{1}$ over the choice of the component $\widetilde{\mu}_{x,i_{0}}$ of $\widetilde{\mu}$, the measure $\eta=\pi\widetilde{\mu}_{x,i_{0}}$ satisfies \begin{equation} \mathbb{P}_{j=j_{0}}\left(\eta^{y,j}\mbox{ is }(\delta_{1},m)\mbox{-uniform}\right)>1-\delta_{1}.\label{eq:5} \end{equation} The random component $\widetilde{\mu}_{x,i_{0}}$ is itself a product measure $\widetilde{\mu}_{x,i}=\mu_{x_{1},i_{0}}\times\ldots\times\mu_{x_{k},i_{0}}$, and the marginal measures $\mu_{x_{j},i_{0}}$ of this product are distributed independently according to the distribution of the raw components of $\mu$ at level $i_{0}$. 
Note that these components differ from the re-scaled components by a scaling factor of $2^{i_{0}}$, so the expected variance of the raw components is $2^{-2i_{0}}\lambda$. Recall that \[ \var(\pi(\mu_{x_{1},i_{0}}\times\ldots\times\mu_{x_{k},i_{0}}))=\sum_{j=1}^{k}\var(\mu_{x_{j},i_{0}}). \] Thus for any $\delta_{2}>0$, by the weak law of large numbers, if $k$ is large enough in a manner depending on $\delta_{2}$ then with probability $>1-\delta_{2}$ over the choice of $\widetilde{\mu}_{x,i_{0}}$ we will have% \footnote{We use here the fact that we have a uniform bound for the rate of convergence in the weak law of large numbers for i.i.d. random variables $X_{1},X_{2},\ldots$. In fact, the rate can be bounded in terms of the mean and variance of $X_{1}$. Here $X_{1}$ is distributed like the variance $\var(\mu_{x,i_{0}})$ of a random component of level $i_{0}$, and the mean and variance of $X_{1}$ are bounded independently of $\mu\in\mathcal{P}([0,1])$.% } \begin{equation} |\frac{1}{k}\var(\pi\widetilde{\mu}_{x,i_{0}})-2^{-2i_{0}}\lambda|<2^{-2i_{0}}\delta_{2}.\label{eq:4} \end{equation} We can choose $\delta_{2}$ small in a manner depending on $\sigma$, so \eqref{eq:4} implies \begin{eqnarray} \var(\pi\widetilde{\mu}_{x,i_{0}}) & > & 2^{-2i_{0}}\cdot k\sigma/2.\label{eq:6} \end{eqnarray} But now inequality \eqref{eq:5} follows from an application of Proposition \ref{pro:entropy-convolution-estimate} with proper choice of parameters.\end{proof} \begin{lem} \label{lem:concentration-from-covariance-matrix}Fix $m\in\mathbb{N}$. If $\var(\mu)$ is small enough then $H_{m}(\mu)\leq\frac{2}{m}$. If $H_{m}(\mu)$ is small enough then $\var(\mu)<2^{-m}$.\end{lem} \begin{proof} If $\var(\mu)$ is small then most of the $\mu$-mass sits on an interval of length $2^{-m}$, hence on at most two intervals from $\mathcal{D}_{m}$, so $H_{m}(\mu)$ is roughly $\frac{1}{m}$ (certainly $<\frac{2}{m}$). Conversely, if $H_{m}(\mu)$ is small then most of the $\mu$-mass sits on one interval from $\mathcal{D}_{m}$, whose length is $2^{-m}$, so $\var(\mu)$ is of this order. \end{proof} Recall Definitions \ref{def:almost-uniform} and \ref{def:ep-em-almost-atomic}. \begin{cor} \label{cor:components-of-measures-with-small-variance}Let $m\in\mathbb{N}$ and $\varepsilon>0$. For $N>N(m,\varepsilon)$ and $0<\delta<\delta(m,\varepsilon,N)$, if $\mu\in\mathbb{P}([0,1])$ and $\var(\mu)<\delta$, then \[ \mathbb{P}_{0\leq i\leq N}(\var(\mu^{x,i})<\varepsilon\mbox{ and }\mu^{x,i}\mbox{ is }(\varepsilon,m)\mbox{-atomic})>1-\varepsilon \] \end{cor} \begin{proof} Using the previous lemma choose $m',\varepsilon'$ such that $H_{m'}(\theta)<\varepsilon'$ implies $\var(\theta)<\varepsilon$. Then it suffices to find $\delta,N$ such that \[ \mathbb{P}_{0\leq i\leq N}(H_{m'}(\mu^{x,i})<\varepsilon'\mbox{ and }H_{m}(\mu^{x,i})<\varepsilon)>1-\varepsilon \] By Lemma \ref{lem:almost-atomic-measures-have-almost-atomic-components} (applied twice), if $\varepsilon''>0$ is small enough then for large enough $N$ the last inequality follows from $H_{N}(\mu)<\varepsilon''$. Finally, by the last lemma again, if $N$ is large enough, this follows from $\var(\mu)<\delta$ if $\delta$ is sufficiently small.\end{proof} \begin{thm} \label{thm:saturation-of-repeated-convolutions}Let $\delta>0$ and $m\geq2$. 
Then for $k\geq k_{2}(\delta,m)$ and all sufficiently large $n\geq n_{2}(\delta,m,k)$, the following holds: For any $\mu\in\mathcal{P}([0,1])$ there are disjoint subsets $I,J\subseteq\{1,\ldots,n\}$ with $|I\cup J|>(1-\delta)n$ such that, writing $\nu=\mu^{*k}$, \begin{eqnarray} \mathbb{P}_{i=q}\left(\nu^{x,i}\mbox{ is }(\delta,m)\mbox{-uniform}\right)\geq1-\delta & & \mbox{ for }q\in I\label{eq:73}\\ \mathbb{P}_{i=q}\left(\mu^{x,i}\mbox{ is }(\delta,m)\mbox{-atomic}\right)\geq1-\delta & & \mbox{ for }q\in J.\label{eq:74} \end{eqnarray} \end{thm} \begin{proof} Let $\delta$ and $m\geq0$ be given, we may assume $\delta<1/2$. The proof is given in terms of a function $\widetilde{\rho}:(0,1]\rightarrow(0,1]$ with $\widetilde{\rho}(\sigma)$ depending on $\sigma,\delta,m$. The exact requirements will be given in the course of the proof. The definition of $\widetilde{\rho}$ uses the functions $k_{1}(\cdot)$ and $p_{1}(\cdot)$ from Proposition \ref{prop:saturation-of-components-of-convolution} and we assume, without loss of generality, that these functions are monotone in each of their arguments. Our first requirement of $\widetilde{\rho}$ will be that $\widetilde{\rho}(\sigma)<\sigma$. Consider the decreasing sequence $\sigma_{0}>\sigma_{1}>\ldots$ defined by $\sigma_{0}=1$ and $\sigma_{i}=\widetilde{\rho}(\sigma_{i-1})$. Assume that $k\geq k_{1}(\sigma_{\left\lceil 1+2/\delta\right\rceil },\delta,m)$; this expression can be taken for $k_{2}(\delta,m)$. Fix $\mu$ and $n$ large, we shall later see how large an $n$ is desirable. For $0\leq q\leq n$ write \[ \lambda_{q}=\mathbb{E}_{i=q}\left(\var(\mu^{x,i})\right). \] Since the intervals $(\sigma_{i},\sigma_{i-1}]$ are disjoint, there is an integer $1\leq s\leq1+\frac{2}{\delta}$ such that $\mathbb{P}_{0\leq q\leq n}(\lambda_{q}\in(\sigma_{s},\sigma_{s-1}])<\frac{\delta}{2}$. For this $s$ define \begin{eqnarray*} \sigma & = & \sigma_{s-1}\\ \rho & = & \widetilde{\rho}(\sigma)\;=\;\sigma_{s}, \end{eqnarray*} and set \begin{eqnarray*} I' & = & \{0\leq q\leq n\,:\,\lambda_{q}>\sigma\}\\ J' & = & \{0\leq q\leq n\,:\,\lambda_{q}<\rho\}. \end{eqnarray*} Then by our choice of $s$, \begin{equation} |I'\cup J'|>(1-\frac{\delta}{2})n.\label{eq:71} \end{equation} Let $\ell\geq0$ be the integer \[ \ell=[\log\sqrt{k}]-p_{1}(\sigma,\delta,m). \] Since we may take $n$ large relative to $\ell$, by deleting at most $\ell$ elements of $I'$ we can assume that $I'\subseteq[\ell,n]$ and that \eqref{eq:71} remain valid. Let \[ I=I'-\ell \] Since $k\geq k_{1}(\sigma,\delta,m)$, by our choice of parameters and the previous proposition, \[ \mathbb{P}_{i=q}\left(\nu^{x,i}\mbox{ is }(\delta,m)\mbox{-uniform}\right)>1-\delta\qquad\mbox{for }q\in I, \] which is \eqref{eq:73}. We now turn to the slightly harder task of choosing $n$ (i.e. determining the appropriate condition $n\geq n_{2}$). By definition of $J'$, \[ \mathbb{E}_{i=q}\left(\var(\mu^{x,i})\right)=\lambda_{q}<\rho\qquad\mbox{for }q\in J'. \] This and Markov's inequality imply \begin{equation} \mathbb{P}_{i=q}\left(\var(\mu^{x,i})<\sqrt{\rho}\right)>1-\sqrt{\rho}\qquad\mbox{for }q\in J'.\label{eq:76} \end{equation} Fix a small number $\rho'=\rho'(\delta,\sigma)$ and a large integer $N=N(\ell,\delta,\rho')$ upon which we place constraints in due course. Since we can take $n$ large relative to $N$, we can assume $I',J'\subseteq\{\ell,\ldots,n-N\}$ without affecting the size bounds. 
Assuming $\rho$ is small enough, Corollary \ref{cor:components-of-measures-with-small-variance} tells us that any measure $\theta\in\mathcal{P}([0,1])$ satisfying $\var(\theta)<\sqrt{\rho}$ also satisfies \[ \mathbb{P}_{0\leq i\leq N}\left(\var(\theta^{y,i})<\sigma\mbox{ and }\theta^{y,i}\mbox{ is }(\delta,m)\mbox{-atomic}\right)>1-\rho' \] Assuming again that $\sqrt{\rho}<\rho'$, the last equation and \eqref{eq:76} give \begin{eqnarray*} \mathbb{P}_{q\leq i\leq q+N}\left(\var(\mu^{x,i})<\sigma\mbox{ and }\mu^{x,i}\mbox{ is }(\delta,m)\mbox{-atomic}\right) & > & (1-\sqrt{\rho})(1-\rho')\\ & > & 1-2\rho'.\qquad\mbox{for }q\in J' \end{eqnarray*} Let \[ U=\left\{ q\in\mathbb{N}\,:\,\mathbb{P}_{i=q}(\var(\theta^{y,i})<\frac{\sigma}{2}\mbox{ and }\theta^{y,i}\mbox{ is }(\delta,m)\mbox{-atomic})>1-\sqrt{2\rho'}\right\} . \] By Lemma \ref{lem:Chebyshev} (i.e. Chebychev's inequality), \[ |U\cap[q,q+N]|\geq(1-\sqrt{2\rho'})N\qquad\mbox{for }q\in J'. \] Apply Lemma \ref{lem:translation-invariant-covering} to $J'$ and $U$ to obtain $U'\subseteq U$ satisfying $|U'|>(1-\sqrt{2\rho'})|J'|$ and $|U'\cap(U'-\ell)|>(1-2\sqrt{2\rho'}-\frac{\ell}{N})|U'|$. Defining \[ J=U'\cap(U'-\ell) \] and assuming that $\frac{\ell}{N}<2\sqrt{\rho'}$ we conclude that \[ |J|\geq(1-3\sqrt{2\rho'})|J'| \] We claim that $I\cap J=\emptyset$. Indeed, suppose $q\in I\cap J$. Then $q+\ell\in I'$, so $\lambda_{q+\ell}\geq\sigma$. On the other hand, $q\in J\subseteq U'-\ell$ implies $q+\ell\in U'\subseteq U$, so by definition of $U$ and assuming that $3\sqrt{3\rho'}<\sigma$, \begin{eqnarray*} \lambda_{q+\ell} & = & \mathbb{E}_{i=q+\ell}(\var(\mu^{x,i}))\\ & \leq & \frac{\sigma}{2}\cdot\mathbb{P}_{i=q+\ell}(\var(\mu^{x,i})<\frac{\sigma}{2})+1\cdot\mathbb{P}_{i=q+\ell}(\var(\mu^{x,i})\geq\frac{1}{2})\\ & < & \frac{\sigma}{2}\cdot1+1\cdot3\sqrt{3\rho'}\\ & < & \sigma. \end{eqnarray*} This contradiction shows that $I\cap J=\emptyset$. Finally, $I'\cap J'=\emptyset$ and $|I'\cup J'|>(1-\frac{\delta}{2})n$, so, assuming that $3\sqrt{3\rho'}<\delta$, \[ |I\cup J|=|I|+|J|\geq|I|+(1-3\sqrt{3\rho'})|J'|>(1-\frac{\delta}{2})|I'\cup J'|>(1-\frac{\delta}{2})^{2}n. \] This completes the proof. \end{proof} \subsection{\label{sub:Kaimanovitch-Vershik-Tao-theorem}The Ka\u\i{}manovich-Vershik lemma} The Pl\"{u}nnecke-Rusza inequality in additive combinatorics toughly states that if $A,B\subseteq\mathbb{Z}$ and $|A+B|\leq C|A|$, then there is a subset $A_{0}\subseteq A$ of size comparable to $A$ such that $|A_{0}+B^{\oplus k}|\leq C^{k}|A|$. The second ingredient in our proof of Theorem \ref{thm:inverse-thm-Rd} is the following elegant analog for entropy: \begin{lem} \label{thm:Kaimanovitch-Vershik-Tao} Let $\Gamma$ be a countable abelian group and let $\mu,\nu\in\mathcal{P}(\Gamma)$ be probability measures with $H(\mu)<\infty$, $H(\nu)<\infty$. Let \[ \delta_{k}=H(\mu*(\nu^{*(k+1)}))-H(\mu*(\nu^{*k})). \] Then $\delta_{k}$ is non-increasing in $k$. In particular, \[ H(\mu*(\nu^{*k}))\leq H(\mu)+k\cdot(H(\mu*\nu)-H(\nu)). \] \end{lem} This lemma above first appears in a study of random walks on groups by Ka\u\i{}manovich and Vershik \cite{KaimanovichVershik1983}. It was more recently rediscovered and applied in additive combinatorics by Madiman and his co-authors \cite{Madiman2008,MadimanMarcusTetali2012} and, in a weaker form, by Tao \cite{Tao2010}, who later made the connection to additive combinatorics. For completeness we give the short proof here. 
\begin{proof} Let $X_{0}$ be a random variable distributed according to $\mu$, let $Z_{n}$ be distributed according to $\nu$, and let all variables be independent. Set $X_{n}=X_{0}+Z_{1}+\ldots+Z_{n}$, so the distribution of $X_{n}$ is just $\mu*\nu^{*n}$. Furthermore, since $G$ is abelian, given $Z_{1}=g$, the distribution of $X_{n}$ is the same as the distribution of $X_{n-1}+g$ and hence $H(X_{n}|Z_{1})=H(X_{n-1})$. We now compute: \begin{eqnarray} H(Z_{1}|X_{n}) & = & H(Z_{1},X_{n})-H(X_{n})\nonumber \\ & = & H(Z_{1})+H(X_{n}|Z_{1})-H(X_{n})\nonumber \\ & = & H(\nu)+H(\mu*\nu^{*(n-1)})-H(\mu*\nu^{*n}).\label{eq:9} \end{eqnarray} Since $X_{n}$ is a Markov process, given $X_{n}$, $Z_{1}=X_{1}-X_{0}$ is independent of $X_{n+1}$, so \[ H(Z_{1}\,|\, X_{n})=H(Z_{1}\,|\, X_{n},X_{n+1})\leq H(Z_{1}\,|\, X_{n+1}). \] Using \eqref{eq:9} in both sides of the inequality above, we find that \[ H(\mu*\nu^{*(n-1)})-H(\mu*\nu^{*n})\leq H(\mu*\nu^{*n})-H(\mu*\nu^{*(n+1)}), \] which is the what we claimed. \end{proof} For the analogous statement for the scale-$n$ entropy of measures on $\mathbb{R}$ we use a discretization argument. For $m\in\mathbb{N}$ let \[ M_{m}=\{\frac{k}{2^{m}}\,:\, k\in\mathbb{Z}\} \] denote the group of $2^{m}$-adic rationals. Each $D\in\mathcal{D}_{m}$ contains exactly one $x\in M_{m}$. Define the $m$-discretization\emph{ }map $\sigma_{m}:\mathbb{R}\rightarrow M_{m}$ by $\sigma_{m}(x)=v$ if $\mathcal{D}_{m}(x)=\mathcal{D}_{m}(v)$, so that $\sigma_{m}(x)\in\mathcal{D}_{m}(x)$. We say that a measure $\mu\in\mathcal{P}(\mathbb{R}^{d})$ is $m$-discrete if it is supported on $M_{m}$. For arbitrary $\mu$ its $m$-discretization is its push-forward $\sigma_{m}\mu$ through $\sigma_{m}$, given explicitly by: \[ \sigma_{m}\mu=\sum_{v\in M_{m}^{d}}\mu(\mathcal{D}_{m}(v))\cdot\delta_{v}. \] Clearly $H_{m}(\mu)=H_{m}(\sigma_{m}\mu)$. \begin{lem} \label{lem:entropy-of-discretized-convolutions}Given $\mu_{1},\ldots,\mu_{k}\in\mathcal{P}(\mathbb{R})$ with $H(\mu_{i})<\infty$ and $m\in\mathbb{N}$, \[ |H_{m}(\mu_{1}*\mu_{2}*\ldots*\mu_{k})-H_{m}(\sigma_{m}\mu_{1}*\ldots*\sigma_{m}\mu_{k})|=O(k/m). \] \end{lem} \begin{proof} Let $\pi:\mathbb{R}^{k}\rightarrow\mathbb{R}$ denote the map $(x_{1},\ldots,x_{k})\mapsto\sum_{i=1}^{k}x_{i}$. Then $\mu_{1}*\ldots*\mu_{k}=\pi(\mu_{1}\times\ldots\times\mu_{k})$ and $\mu_{1}^{(m)}*\ldots*\mu_{k}^{(m)}=\pi\circ\sigma_{m}^{k}(\mu_{1}\times\ldots\times\mu_{k})$ (here $\sigma_{m}^{k}:(x_{1},\ldots,x_{k})\mapsto(\sigma_{m}x_{1},\ldots,\sigma_{m}x_{k})$). Now, it is easy to check that \[ |\pi(x_{1},\ldots,x_{k})-\pi\circ\sigma_{m}^{k}(x_{1},\ldots,x_{k})|=O(k) \] so the desired entropy bound follows from Lemma \ref{lem:entropy-weak-continuity-properties} \eqref{enu:entropy-geometric-distortion}. \end{proof} \begin{prop} \label{cor:non-discrete-KVT-theorem}Let $\mu,\nu\in\mathcal{P}(\mathbb{R})$ with $H_{n}(\mu),H_{n}(\nu)<\infty$. Then \begin{equation} H_{n}(\mu*(\nu^{*k}))\leq H_{n}(\mu)+k\cdot\left(H_{n}(\mu*\nu)-H_{n}(\mu)\right)+O(\frac{k}{n}).\label{eq:10} \end{equation} \end{prop} \begin{proof} Writing $\widetilde{\mu}=\sigma_{n}(\mu)$ and $\widetilde{\nu}=\sigma_{n}(\nu)$, Theorem \ref{thm:Kaimanovitch-Vershik-Tao} implies \[ H(\widetilde{\mu}*(\widetilde{\nu}^{*k}))\leq H(\widetilde{\mu})+k\cdot(H(\widetilde{\mu}*\widetilde{\nu})-H(\widetilde{\nu})). 
\] For $n$-discrete measures the entropy of the measure coincides with its entropy with respect to $\mathcal{D}_{n}$, so dividing this inequality by $n$ gives \eqref{eq:10} for $\widetilde{\mu},\widetilde{\nu}$ instead of $\mu,\nu$, and without the error term. The desired inequality follows from Lemma \ref{lem:entropy-of-discretized-convolutions}. \end{proof} We also will later need the following simple fact: \begin{cor} \label{lem:entropy-monotonicity-under-convolution}For $m\in\mathbb{N}$ and $\mu,\nu\in\mathcal{P}([-r,r]^{d})$ with $H_{n}(\mu),H_{n}(\nu)<\infty$, \[ H_{m}(\mu*\nu)\geq H_{m}(\mu)-O(\frac{1}{m}). \] \end{cor} \begin{proof} This is immediate from the identity $\mu*\nu=\int\mu*\delta_{y}\, d\nu(y)$, concavity of entropy, and Lemma \ref{lem:entropy-weak-continuity-properties} \eqref{enu:entropy-translation} (note that $\mu*\delta_{y}$ is a translate of $\mu$). \end{proof} \subsection{\label{sub:Proof-of-inverse-theorem}Proof of the inverse theorem} Recall Definitions \ref{def:almost-uniform} and \ref{def:ep-em-almost-atomic}. \begin{thm} For every $\varepsilon_{1},\varepsilon_{2}>0$ and integers $m_{1},m_{2}\geq2$, there exists a $\delta=\delta(\varepsilon_{1},\varepsilon_{2},m_{1},m_{2})$ such that for all $n>n(\varepsilon_{1},\varepsilon_{2},m_{1},m_{2},\delta)$, if $\nu,\mu\in\mathcal{P}([0,1])$ then either $H_{n}(\mu*\nu)\geq H_{n}(\mu)+\delta$, or there exist disjoint subsets $I,J\subseteq\{0,\ldots,n\}$ with $|I\cup J|\geq(1-\varepsilon)n$ and \begin{eqnarray*} \mathbb{P}_{i=k}\left(\mu^{x,i}\mbox{ is }(\varepsilon_{1},m_{1})\mbox{-uniform}\right)\;>\;1-\varepsilon & & \mbox{for }k\in I\\ \mathbb{P}_{i=k}\left(\nu^{x,i}\mbox{ is }(\varepsilon_{2},m_{2})\mbox{-atomic}\right)\;>\;1-\varepsilon & & \mbox{for }k\in J. \end{eqnarray*} \end{thm} \begin{rem*} Since, given $\varepsilon$, for a suitable choice of $\varepsilon_{2},m_{2}$ any $(\varepsilon',m')$-atomic measure is $\varepsilon_{1}$-atomic, the statement above implies Theorem \ref{thm:inverse-thm-Rd}.\end{rem*} \begin{proof} We begin with $\varepsilon_{1}=\varepsilon_{2}=\varepsilon$ and $m_{1}=m_{2}=m$ and assume that $m$ is large with respect to $\varepsilon$ (we shall see how large below). We later explain how to remove this assumption. Choose $k=k_{2}(\varepsilon,m)$ as in Theorem \ref{thm:saturation-of-repeated-convolutions}, with $\delta=\varepsilon/2$. We shall show that the conclusion holds if $n$ is large relative to the previous parameters. Let $\mu,\nu\in\mathcal{P}([0,1))$. Denote \[ \tau=\nu^{*k}. \] Assuming $n$ is large enough, Theorem \ref{thm:saturation-of-repeated-convolutions} provides us with disjoint subsets $I,J\subseteq\{0,\ldots,n\}$ with $|I\cup J|>(1-\varepsilon/2)n$ such that \begin{equation} \mathbb{P}_{i=k}\left(\tau^{x,i}\mbox{ is }(\frac{\varepsilon}{2},m)\mbox{-uniform}\right)>1-\frac{\varepsilon}{2}\qquad\mbox{for }k\in I\label{eq:55} \end{equation} and \begin{equation} \mathbb{P}_{i=k}\left(\nu^{x,i}\mbox{ is }(\varepsilon,m)\mbox{-atomic}\right)\geq1-\frac{\varepsilon}{2}\qquad\mbox{for }k\in J.\label{eq:11} \end{equation} Let $I_{0}\subseteq I$ denote the set of $k$ such that \begin{equation} \mathbb{P}_{i=k}\left(\mu^{x,i}\mbox{ is }(\varepsilon,m)\mbox{-uniform}\right)>1-\varepsilon\qquad\mbox{for }k\in I.\label{eq:12} \end{equation} If $|I_{0}|>(1-\varepsilon)n$ we are done, since by \eqref{eq:11} and \eqref{eq:12}, the pair $I_{0},J$ satisfy the second alternative of the theorem. Otherwise, let $I_{1}=I\setminus I_{0}$, so that $|I_{1}|=|I|-|I_{0}|>\varepsilon n/2$. 
We have \[ \mathbb{P}_{i=k}\left(\tau^{x,i}\mbox{ is }(\frac{\varepsilon}{2},m)\mbox{-uniform and }\mu^{y,i}\mbox{ is not }(\varepsilon,m)\mbox{-uniform}\right)>\frac{\varepsilon}{2}\qquad\mbox{for }k\in I_{1}. \] For $\mu^{x,i},\tau^{y,i}$ in the event above, this just means that $H_{m}(\tau^{y,i})>H_{m}(\mu^{x,i})+\varepsilon/2$ and hence $H_{m}(\mu^{x,i}*\tau^{y,i})\geq H_{m}(\mu^{x,i})+\varepsilon/2-O(1/m)$. For any other pair $\mu^{x,i},\tau^{y,i}$ we have the trivial bound $H_{m}(\mu^{x,i}*\tau^{y,i})\geq H_{m}(\mu^{x,i})-O(1/m)$. Thus, using Lemmas \ref{lem:entropy-local-to-global}, \ref{lem:entropy-of-convolutions-via-component-convolutions}, \ref{lem:entropy-monotonicity-under-convolution}, \begin{eqnarray*} H_{n}(\mu*\tau) & = & \mathbb{E}_{0\leq i\leq n}(H_{m}(\mu^{x,i}*\tau^{y,i}))+O(\frac{m}{n})\\ & = & \frac{|I_{1}|}{n+1}\mathbb{E}_{i\in I_{1}}(H_{m}(\mu^{x,i}*\tau^{y,i}))+\frac{n+1-|I_{1}|}{n+1}\mathbb{E}_{i\in I_{1}^{c}}(H_{m}(\mu^{x,i}*\tau^{y,i}))+O(\frac{m}{n})\\ & > & \frac{|I_{1}|}{n+1}\left(\mathbb{E}_{i\in I_{1}}(H_{m}(\mu^{x,i}))+(\frac{\varepsilon}{2})^{2})\right)+\frac{n+1-|I_{1}|}{n+1}\mathbb{E}_{i\in I_{1}^{c}}(H_{m}(\mu^{x,i}))+O(\frac{1}{m}+\frac{m}{n})\\ & = & \mathbb{E}_{0\leq i\leq n}(H_{m}(\mu^{x,i}))+(\frac{\varepsilon}{2})^{3}+O(\frac{1}{m}+\frac{m}{n})\\ & = & H_{n}(\mu)+(\frac{\varepsilon}{2})^{3}+O(\frac{1}{m}+\frac{m}{n}). \end{eqnarray*} So, assuming that $\varepsilon$ was sufficiently small to begin with, $m$ large with respect to $\varepsilon$ and $n$ large with respect to $m$, we have \[ H_{n}(\mu*\tau)>H_{n}(\mu)+\frac{\varepsilon^{3}}{10}. \] On the other hand, by Proposition \ref{cor:non-discrete-KVT-theorem} above, \[ H_{n}(\mu*\tau)=H_{n}(\mu*\nu^{*k})\leq H_{n}(\mu)+k\cdot\left(H_{n}(\mu*\nu)-H_{n}(\mu)\right)+O(\frac{k}{n}). \] Assuming that $n$ is large enough in a manner depending on $\varepsilon$ and $k$, this and the previous inequality give \[ H_{n}(\mu*\nu)\geq H_{n}(\mu)+\frac{\varepsilon^{3}}{100k}. \] This is the desired conclusion, with $\delta=\varepsilon^{3}/100k$. We now remove the largeness assumption on $m$. Let $\varepsilon,m_{1},m_{2}$ be given and choose $\varepsilon'>0$ small compared to $\varepsilon$, and $m'$ appropriately large for $\varepsilon,m_{1},m_{2}$. Applying what we just proved for a large enough $n$ we obtain corresponding $I,J\subseteq[0,n]$. It will be convenient to denote $U_{1}=I$ and $U_{2}=J$. Now, for $i\in U_{1}$, by definition of $U_{1}$ and Lemma \ref{lem:saturation-passes-to-components}, and assuming $m_{1}/m'$ small enough, \[ \mathbb{P}_{i\leq j\leq i+m'}(\mu^{x,j}\mbox{ is }(\sqrt{2\varepsilon'},m_{1})\mbox{-uniform})>1-\sqrt{2\varepsilon'} \] Thus, assuming as we may that $\varepsilon<\sqrt{2\varepsilon'}$, if we set \[ V_{1}=\{j\in[0,n]\,:\,\mathbb{P}_{u=j}(\mu^{x,u}\mbox{ is }(\varepsilon,m_{2})\mbox{-uniform})>1-\varepsilon\} \] then by Lemma \ref{lem:Chebyshev} (Chebychev's inequality), $|[i,i+m']\cap V_{1}|>(1-(2\varepsilon)^{1/4})m'$. Similarly, defining \[ V_{2}=\{j\in[0,n]\,:\,\mathbb{P}_{u=j}(\mu^{x,u}\mbox{ is }(\varepsilon,m)\mbox{-atomic})>1-\varepsilon\} \] and using Lemma \ref{lem:almost-atomic-measures-have-almost-atomic-components}, if $m_{2}/m$ is small enough then $|[j,j+m']\cap V_{2}|>(1-(2\varepsilon)^{1/4})m'$ for all for $j\in U_{2}$. Now, applying Lemma \ref{lem:pair-of-coverings} to $U_{1},V_{1}$ and $U_{2},V_{2}$, we find $U'_{1}\subseteq U_{1}$ and $U'_{2}\subseteq U_{2}$ as in that lemma. Taking $I'=U'_{1}$ and $J'=U'_{2}$, these are the desired sets. 
Lastly, to allow for different parameters $\varepsilon_{1},\varepsilon_{2}$, just take $\varepsilon=\min\{\varepsilon_{1},\varepsilon_{2}\}$ and apply what we have already seen. Then any $(\varepsilon,m_{1})$-uniform measure is $(\varepsilon_{1},m_{1})$-uniform and any $(\varepsilon,m_{2})$-atomic measure is also $(\varepsilon_{2},m_{2})$-atomic, and we are done. \end{proof} Theorems \ref{thm:inverse-thm-R} and \ref{thm:self-convolution} are formal consequences of Theorem \ref{thm:inverse-thm-Rd}, as discussed in Section \ref{sub:inverse-theorem}. \section{\label{sec:Parameterized-families-of-self-similar-measures}Self-similar measures } \subsection{\label{sub:Components-of-self-similar-measures}Uniform entropy dimension and self-similar measures} The entropy dimension of a measure $\theta\in\mathcal{P}(\mathbb{R})$ is the limit $\lim_{n\rightarrow\infty}H_{n}(\theta)$, assuming it exists; by Lemma \ref{lem:entropy-local-to-global}, this i limit is equal to $\lim_{n\rightarrow\infty}\mathbb{E}_{0\leq i\leq n}(H_{m}(\theta^{x,i}))$ for all integers $m$. The convergence of the averages does not, however, imply that the entropies of the components $\theta^{x,i}$ concentrate around their mean, and examples show that they need not. We introduce the following stronger notion: \begin{defn} A measure $\theta\in\mathcal{P}(\mathbb{R})$ has \emph{uniform entropy dimension} $\alpha$ if for every $\varepsilon>0$, for large enough $m$, \begin{equation} \liminf_{n\rightarrow\infty}\mathbb{P}_{0\leq i\leq n}(|H_{m}(\theta^{x,i})-\alpha|<\varepsilon)>1-\varepsilon.\label{eq:uniform-e-dim} \end{equation} \end{defn} Our main objective in this section is to prove: \begin{prop} \label{prop:component-entropy-concentration}Let $\mu\in\mathcal{P}(\mathbb{R})$ be a self-similar measure and $\alpha=\dim\mu$. Then $\mu$ has uniform entropy dimension $\alpha$. \end{prop} For simplicity we first consider the case that all the contractions in the IFS contract by the same ratio $r$. Thus, consider an IFS $\Phi=\{\varphi_{i}\}_{i\in\Lambda}$ with $\varphi_{i}(x)=r(x-a_{i})$, $0<r<1$. We denote the attractor by $X$ and without loss of generality assume that $0\in X\subseteq[0,1]$, which can always be arranged by a change of coordinates and may be seen not to affect the conclusions. Let $\mu=\sum_{i\in\Lambda}p_{i}\cdot\varphi_{i}\mu$ be a self-similar measure and as usual write $\varphi_{i}=\varphi_{i_{1}}\circ\ldots\circ\varphi_{i_{n}}$ and $p_{i}=p_{i_{1}}\cdot\ldots\cdot p_{i_{n}}$ for $i\in\Lambda^{n}$. Let \[ \alpha=\dim\mu \] As we have already noted, self-similar measures are exact dimensional \cite{FengHu09}, and for such measures the dimension and entropy dimension coincide: \begin{equation} \lim_{n\rightarrow\infty}H_{n}(\mu)=\alpha.\label{eq:14} \end{equation} Fix $\widetilde{x}\in X$ and define probability measures \[ \mu_{x,k}^{[n]}=c\cdot\sum\left\{ p_{i}\cdot\varphi_{i}\mu\,:\, i\in\Lambda^{n}\,,\,\varphi_{i}\widetilde{x}\in\mathcal{D}_{k}(x)\right\} , \] where $c=c(x,\widetilde{x},k,n)$ is a normalizing constant. Thus $\mu_{x,k}^{[n]}$ differs from $\mu_{x,k}$ in that, instead of restricting $\mu=\sum_{i\in\Lambda^{n}}p_{i}\cdot\varphi_{i}\mu$ to $\mathcal{D}_{k}(x)$, we include or exclude each term in its entirety depending on whether $\varphi_{i}\widetilde{x}\in\mathcal{D}_{k}(x)$. Since $\varphi_{i}\mu$ may not be supported entirely on either $\mathcal{D}_{k}(x)$ or its complement, in general we have neither $\mu_{x,k}^{[n]}\ll\mu_{x,k}$ nor $\mu_{x,k}\ll\mu_{x,k}^{[n]}$. 
Note that the definition of $\mu_{x,k}^{[n]}$ depends on the point $\widetilde{x}$, but this will not concern us. For $0<\rho<1$ it will be convenient to write \[ \ell(\rho)=\left\lceil \log\rho/\log r\right\rceil , \] so $\rho,r^{\ell(\rho)}$ differ by a multiplicative constant. Recall that $\left\Vert \cdot\right\Vert $ denotes the total variation norm, see Section \ref{sub:Preliminaries-on-entropy}. \begin{lem} \label{lem:approximate-components-TV-bound}For every $\varepsilon>0$ there is a $0<\rho<1$ such that, for all $k$ and $n=\ell(\rho2^{-k})$, \begin{equation} \mathbb{P}_{i=k}\left(\left\Vert \mu_{x,i}-\mu_{x,i}^{[n]}\right\Vert <\varepsilon\right)>1-\varepsilon.\label{eq:13} \end{equation} Furthermore $\rho$ can be chosen independently of $\widetilde{x}$ and of the coordinate system on $\mathbb{R}$ (so the same bound holds for any translate of $\mu$).\end{lem} \begin{proof} It is elementary that if $\mu$ is atomic then it consists of a single atom. In this case the statement is trivial, so assume $\mu$ is non-atomic. Then% \footnote{This is the only part of the proof of Theorem \ref{thm:main-individual-entropy-1} which is not effective, but with a little more work one could make it effective in the sense that, if $\liminf-\log\Delta^{(n)}=M<\infty$, then at arbitrarily small scales one can obtain estimates of the continuity of $\mu$ in terms of $M$.% } given $\varepsilon>0$ there is a $\delta>0$ such that every interval of length $\delta$ has $\mu$-mass $<\varepsilon^{2}/2$. Choose an integer $q$ so that $r^{q}<\delta/2$ and let $\rho=r^{q}$. Let $k\in\mathbb{N}$ and $\ell=\ell(2^{-k})$, so that $2^{-k}\cdot r\leq r^{\ell}\leq2^{-k}$. Let $i\in\Lambda^{\ell}$ and consider those $j\in\Lambda^{q}$ such that $\varphi_{ij}\mu$ is not supported on an element of $\mathcal{D}_{k}$. Then $\varphi_{ij}\mu$ is supported in an interval $J$ of length $r^{\ell}\delta$ centered at one of the endpoints of an element of $\mathcal{D}_{k}$. Since $\varphi_{i}\mu$ can give positive mass to at most two such intervals $J$, and $\varphi_{i}\mu(J)=\mu(\varphi_{i}^{-1}J)<\varepsilon^{2}/2$ for each such $J$ (as $\varphi_{i}^{-1}J$ is an interval of length $\delta$), we conclude that in the representation $\mu_{i}=\frac{1}{p_{i}}\sum_{j\in\Lambda^{q}}p_{ij}\cdot(\varphi_{ij}\mu)$, at least $1-\varepsilon^{2}$ of the mass comes from terms that are supported entirely on just one element of $\mathcal{D}_{k}$. Therefore the same is true in the representation $\mu=\sum_{u\in\Lambda^{\ell+q}}p_{u}\cdot\varphi_{u}\mu$. The inequality \eqref{eq:13} now follows by an application of the Markov inequality. Finally, since our choice of parameters did not depend on $\widetilde{x}$ and is invariant under translation of $\mu$ and of the IFS, the last statement holds.\end{proof} \begin{lem} \label{lem:component-entropy-lower-bound}For $\varepsilon>0$, for large enough $m$ and all $k$, \[ \mathbb{P}_{i=k}\left(H_{m}(\mu^{x,i})>\alpha-\varepsilon\right)>1-\varepsilon, \] and the same holds for any translate of $\mu$.\end{lem} \begin{proof} Let $\varepsilon>0$ be given. Choose $0<\varepsilon'<\varepsilon$ sufficiently small that $\left\Vert \nu-\nu'\right\Vert <\varepsilon'$ implies $|H_{m}(\nu)-H_{m}(\nu')|<\varepsilon/2$ for every $m$ and every $\nu,\nu'\in\mathcal{P}([0,1]^{d})$ (Lemma \ref{lem:entropy-total-variation-continuity}). Let $\rho$ be as in the previous lemma chosen with respect to $\varepsilon'$.
Assume that $m$ is large enough that $|H_{m}(\mu')-\alpha|<\varepsilon/2$ whenever $\mu'$ is $\mu$ scaled by a factor comparable to $\rho$ ($m$ exists by \eqref{eq:14} and Lemma \ref{lem:entropy-weak-continuity-properties} \eqref{enu:entropy-change-of-scale}). Now fix $k$ and let $\ell=\ell(\rho2^{-k})$. By the previous lemma and choice of $\varepsilon'$, it is enough to show that $\frac{1}{m}H(\mu_{x,k}^{[\ell]},\mathcal{D}_{k+m})>\alpha-\varepsilon/2$. But this follows from the fact that $\mu_{x,k}^{[\ell]}$ is a convex combination of measures $\mu_{j}$ for $j\in\Lambda^{\ell}$, our choice of $m$ and $\ell$, and concavity of entropy. \end{proof} We now prove Proposition \ref{prop:component-entropy-concentration}. Let $0<\varepsilon<1$ be given and fix an auxiliary parameter $\varepsilon'<\varepsilon/2$. We first show that \eqref{eq:uniform-e-dim} holds for $m$ large in a manner depending on $\varepsilon$. Specifically let $m$ be large enough that the previous lemma applies for the parameter $\varepsilon'$. In particular for any $n$, \begin{equation} \mathbb{P}_{0\leq i\leq n}\left(H_{m}(\mu^{x,i})>\alpha-\varepsilon'\right)>1-\varepsilon'.\label{eq:60} \end{equation} By \eqref{eq:14}, for $n$ large enough we have $|H_{n}(\mu)-\alpha|<\varepsilon'/2$, so by Lemma \ref{lem:entropy-local-to-global}, for large enough $n$ we have \[ |\mathbb{E}_{0\leq i\leq n}\left(H_{m}(\mu^{x,i})\right)-\alpha|<\varepsilon'. \] Since $H_{m}(\mu^{x,i})\geq0$, the last two estimates imply \[ \mathbb{P}_{0\leq i\leq n}\left(H_{m}(\mu^{x,i})<\alpha+\varepsilon''\right)>1-\varepsilon'' \] for some $\varepsilon''$ that tends to $0$ with $\varepsilon'$. Thus, choosing $\varepsilon'$ small enough, the last inequality and \eqref{eq:60} give \eqref{eq:uniform-e-dim}, as desired. When the contraction ratios are not uniform, $\varphi_{i}=r_{i}x+a_{i}$, some minor changes are needed in the proof. Given $n$, let $\Lambda^{(n)}$ denote the set of $i\in\Lambda^{*}=\bigcup_{m=1}^{\infty}\Lambda^{m}$ such that $r_{i}<r^{n}\leq r_{j}$, where $j$ is the same as $i$ but with the last symbol deleted (so its length is one less than that of $i$). This ensures that $\{r_{i}\}_{i\in\Lambda^{(n)}}$ are all within a multiplicative constant of each other (this constant is $\min\{r_{j}\,:\, j\in\Lambda\}$). It is easy to check that $\Lambda^{(n)}$ is a section of $\Lambda^{*}$ in the sense that every sequence $i\in\Lambda^{*}$ with $r_{i}<r^{n}$ has a unique prefix in $\Lambda^{(n)}$. Now define $\mu_{x,k}^{[n]}$ as before, but using $\varphi_{i}\mu$ for $i\in\Lambda^{(n)}$, i.e. \[ \mu_{x,k}^{[n]}=c\cdot\sum\left\{ p_{i}\cdot\varphi_{i}\mu\,:\, i\in\Lambda^{(n)}\,,\,\varphi_{i}\widetilde{x}\in\mathcal{D}_{k}(x)\right\} . \] With this modification all the previous arguments now go through. Finally, let us note the following consequence of the inverse theorem (Theorem \ref{thm:inverse-thm-R}). \begin{cor} For every measure $\mu\in\mathcal{P}(\mathbb{R})$ with uniform entropy dimension $0<\alpha<1$, and for every $\varepsilon>0$, there is a $\delta>0$ such that for all large enough $n$ and every $\nu\in\mathcal{P}([0,1])$, \[ H_{n}(\nu)>\varepsilon\qquad\implies\qquad H_{n}(\mu*\nu)\geq H_{n}(\mu)+\delta. \] \end{cor} Similar conclusions hold for dimension. \subsection{\label{sub:Proof-of-main-thm-on-R}Proof of Theorem \ref{thm:main-individual-entropy-1} } We again begin with the uniformly contracting case, $\varphi_{i}=rx+a_{i}$, and continue with the notation from the previous section, in particular assume that $0$ is in the attractor.
Recall from the introduction that \[ \nu^{(n)}=\sum_{i\in\Lambda^{n}}p_{i}\cdot\delta_{\varphi_{i}(0)}. \] Define \[ \tau^{(n)}(A)=\mu(r^{-n}A). \] One may verify easily, using the assumption $0\in X$, that \begin{equation} \mu=\nu^{(n)}*\tau^{(n)}.\label{eq:scale-n-convolution} \end{equation} As in the introduction, write \[ n'=[n\log(1/r)]. \] Thus $\tau^{(n)}$ is $\mu$ scaled down by a factor of $r^{n}=2^{-n'}$ and translated. Using \eqref{eq:14}, Lemma \ref{lem:entropy-weak-continuity-properties}, and the fact that $\tau^{(n)}$ is supported on an interval of order $r^{n}=2^{-n'}$, we have \[ \lim_{n\rightarrow\infty}\frac{1}{n'}H(\nu^{(n)},\mathcal{D}_{n'})=\lim_{n\rightarrow\infty}\frac{1}{n'}H(\mu,\mathcal{D}_{n'})=\dim\mu=\alpha. \] Suppose now that $\alpha<1$. Fix a large $q$ and consider the identity \begin{eqnarray*} \frac{1}{qn}H(\mu,\mathcal{D}_{qn}) & = & \frac{n'}{qn}\cdot\left(\frac{1}{n'}H(\mu,\mathcal{D}_{n'})\right)+\frac{qn-n'}{qn}\cdot\left(\frac{1}{qn-n'}H(\mu,\mathcal{D}_{qn}|\mathcal{D}_{n'})\right)\\ & = & \frac{[\log(1/r)]}{q}\left(\frac{1}{n'}H(\mu,\mathcal{D}_{n'})\right)+\frac{q-[\log(1/r)]}{q}\left(\frac{1}{qn-n'}H(\mu,\mathcal{D}_{qn}|\mathcal{D}_{n'})\right). \end{eqnarray*} The left hand side and the term $\frac{1}{n'}H(\mu,\mathcal{D}_{n'})$ on the right hand side both tend to $\alpha$ as $n\rightarrow\infty$. Since $r,q$ are independent of $n$ we conclude that \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{qn-n'}H(\mu,\mathcal{D}_{qn}|\mathcal{D}_{n'})=\alpha.\label{eq:61} \end{equation} From the identity $\nu^{(n)}=\mathbb{E}_{i=n'}(\nu_{y,i}^{(n)})$ and linearity of convolution, \[ \mu=\nu^{(n)}*\tau^{(n)}=\mathbb{E}_{i=n'}\left(\nu_{y,i}^{(n)}*\tau^{(n)}\right). \] Also, each measure $\nu_{y,i}^{(n)}*\tau^{(n)}$ is supported on an interval of length $O(2^{-n'})$ so \[ |H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})-H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})|=O(1). \] By concavity of conditional entropy (Lemma \ref{lem:entropy-combinatorial-properties} \eqref{enu:entropy-concavity}), \begin{eqnarray*} H(\mu,\mathcal{D}_{qn}|\mathcal{D}_{n'}) & = & H(\nu^{(n)}*\tau^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})\\ & \geq & \mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})\right)\\ & = & \mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})\right)+O(1), \end{eqnarray*} so by \eqref{eq:61}, \begin{equation} \limsup_{n\rightarrow\infty}\frac{1}{qn-n'}\mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})\right)\leq\alpha.\label{eq:29} \end{equation} Now, we also know that \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{qn-n'}H(\tau^{(n)},\mathcal{D}_{qn})=\alpha,\label{eq:62} \end{equation} since, up to a re-scaling, this is just \eqref{eq:14} (we again used the fact that $\tau^{(n)}$ is supported on intervals of length $2^{-n'}$). By Lemma \ref{lem:entropy-monotonicity-under-convolution}, for every component $\nu_{y,i}^{(n)}$, \[ \frac{1}{qn-n'}H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})\geq\frac{1}{qn-n'}H(\tau^{(n)},\mathcal{D}_{qn})+O(\frac{1}{qn-n'}).
\] Therefore for every $\delta>0$, \[ \lim_{n\rightarrow\infty}\mathbb{P}_{i=n'}\left(\frac{1}{qn-n'}H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})>\alpha-\delta\right)=1 \] which, combined with \eqref{eq:29}, implies that for every $\delta>0$, \[ \lim_{n\rightarrow\infty}\mathbb{P}_{i=n'}\left(\left|\frac{1}{qn-n'}H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})-\alpha\right|<\delta\right)=1, \] and replacing $\alpha$ with the limit in \eqref{eq:62}, we have that for all $\delta>0$, \begin{equation} \lim_{n\rightarrow\infty}\mathbb{P}_{i=n'}\left(\left|\frac{1}{qn-n'}H(\nu_{y,i}^{(n)}*\tau^{(n)},\mathcal{D}_{qn})-\frac{1}{qn-n'}H(\tau^{(n)},\mathcal{D}_{qn})\right|<\delta\right)=1.\label{eq:59} \end{equation} Now let $\varepsilon>0$. By Proposition \ref{prop:component-entropy-concentration} and the assumption that $\alpha<1$, for small enough $\varepsilon$, large enough $m$ and all sufficiently large $n$, \begin{eqnarray*} \mathbb{P}_{n'<i\leq qn}\left(H_{m}((\tau^{(n)})^{x,i})<1-\varepsilon\right) & \geq & \mathbb{P}_{n'<i\leq qn}\left(H_{m}((\tau^{(n)})^{x,i})<\alpha+\varepsilon\right)\\ & > & 1-\varepsilon. \end{eqnarray*} Choose $\delta>0$ smaller than the constant of the same name in the conclusion of Theorem \ref{thm:inverse-thm-R}. Then, for sufficiently large $n$, we can apply Theorem \ref{thm:inverse-thm-R} to the components $\nu_{y,i}^{(n)}$ in the event in equation \eqref{eq:59} (for this we re-scale by $2^{n'}$ and note that the measures $\nu_{y,n'}^{(n)}$ are supported on level-$n'$ dyadic cells and $\tau^{(n)}$ is supported on an interval of the same order of magnitude). We conclude that every component $\nu_{y,i}^{(n)}$ in the event in question satisfies $\frac{1}{qn-n'}H(\nu_{y,i}^{(n)},\mathcal{D}_{qn})<\varepsilon$, and hence by \eqref{eq:59}, \[ \lim_{n\rightarrow\infty}\mathbb{P}_{i=n'}\left(\frac{1}{qn-n'}H(\nu_{y,i}^{(n)},\mathcal{D}_{qn})<\varepsilon\right)=1. \] Thus, from the definition of conditional entropy and the last equation, \begin{eqnarray*} \lim_{n\rightarrow\infty}\frac{1}{qn-n'}H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'}) & = & \lim_{n\rightarrow\infty}\frac{1}{qn-n'}\mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n)},\mathcal{D}_{qn})\right)\\ & = & \lim_{n\rightarrow\infty}\mathbb{E}_{i=n'}\left(\frac{1}{qn-n'}H(\nu_{y,i}^{(n)},\mathcal{D}_{qn})\right)\\ & < & \varepsilon. \end{eqnarray*} Since $\varepsilon$ was arbitrary, this is Theorem \ref{thm:main-individual-entropy-1}. \subsection{\label{sub:Inverse-thm-proof-non-uniformly-contracting}Proof of Theorem \ref{thm:main-individual-entropy-1-1} (the non-uniformly contracting case)} We now consider the situation for general IFS, in which the contraction $r_{i}$ of $\varphi_{i}$ is not constant. Again assume that $0$ is in the attractor. Let $r=\prod_{i\in\Lambda}r_{i}^{p_{i}}$, $n'=[n\log_{2}(1/r)]$ as in the introduction, and define $\widetilde{\nu}^{(n)}$ as before. Given $n$, let \[ R_{n}=\{r_{i}\,:\, i\in\Lambda^{n}\}. \] Note that $|R_{n}|=O(n^{|\Lambda|})$. Therefore $H(\widetilde{\nu}^{(n)},\{\mathbb{R}\}\times\mathcal{F})=O(\log n)$, and consequently for all $k$ \[ H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{k})=H(\nu^{(n)},\mathcal{D}_{k})+O(\log n). \] Thus \[ H(\widetilde{\nu}^{(n)},\widetilde{\mathcal{D}}_{qn}|\widetilde{\mathcal{D}}_{n'})=H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})+O(\log n), \] and our goal reduces to proving that for every $q>1$, \[ \frac{1}{qn}H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})\rightarrow0\qquad\mbox{as }n\rightarrow\infty.
\] Furthermore, for every $\varepsilon>0$ \[ H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})=H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{n'})+O(\varepsilon n), \] so it will suffice for us to prove that \[ \limsup_{n\rightarrow\infty}\frac{1}{qn}H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})=o(1)\qquad\mbox{as }\varepsilon\rightarrow0. \] Fix $\varepsilon>0$. For $t\in R_{n}$ let \begin{eqnarray*} \Lambda^{n,t} & = & \{i\in\Lambda^{n}\,:\, r_{i}=t\}\\ p^{n,t} & = & \sum_{i\in\Lambda^{n,t}}p_{i}, \end{eqnarray*} so $\{p^{n,t}\}_{t\in R_{n}}$ is a probability vector. It will sometimes be convenient to consider $i\in\Lambda^{n}$, $i\in\Lambda^{n,t}$ and $t\in R_{n}$ as random elements drawn according to the probabilities $p_{i}$, $p_{i}/p^{n,t}$, and $p^{n,t}$, respectively. Then we interpret expressions such as $\mathbb{P}_{i\in\Lambda^{n}}(A)$, $\mathbb{P}_{i\in\Lambda^{n,t}}(A)$ and $\mathbb{P}_{t\in R_{n}}(A)$ in the obvious manner, and similarly expectations. With this notation, we can define \begin{eqnarray*} \nu^{(n,t)} & = & \mathbb{E}_{i\in\Lambda^{n,t}}(\delta_{\varphi_{i}(0)})=\frac{1}{p^{n,t}}\sum_{i\in\Lambda^{n,t}}p_{i}\cdot\delta_{\varphi_{i}(0)}. \end{eqnarray*} This is a probability measure on $\mathbb{R}$ representing the part of $\nu^{(n)}$ coming from contractions by $t$; indeed, \begin{eqnarray} \nu^{(n)} & = & \mathbb{E}_{t\in R_{n}}(\nu^{(n,t)}).\label{eq:57} \end{eqnarray} For $t>0$ let $\tau^{(t)}$ be the measure \[ \tau^{(t)}(A)=\mu(t^{-1}A) \] (note that we are no longer using logarithmic scale, so the measure that was previously denoted $\tau^{(n)}$ is now $\tau^{(2^{-n'})}$). We then have \begin{equation} \mu=\mathbb{E}_{t\in R_{n}}(\nu^{(n,t)}*\tau^{(t)}).\label{eq:56} \end{equation} Arguing as in the previous section, using equation \eqref{eq:56} and concavity of entropy, we have \begin{eqnarray} \alpha & = & \lim_{n\rightarrow\infty}\frac{1}{qn-(1-\varepsilon)n'}H(\mu,\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})\nonumber \\ & \geq & \limsup_{n\rightarrow\infty}\frac{1}{qn-(1-\varepsilon)n'}\mathbb{E}_{t\in R_{n}}\left(H(\nu^{(n,t)}*\tau^{(t)},\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})\right).\label{eq:58} \end{eqnarray} By the law of large numbers, \[ \lim_{n\rightarrow\infty}\mathbb{P}_{i\in\Lambda^{n}}\left(2^{-(1+\varepsilon)n'}<r_{i}<2^{-(1-\varepsilon)n'}\right)=1, \] or, equivalently, \begin{equation} \lim_{n\rightarrow\infty}\mathbb{P}_{t\in R_{n}}\left(2^{-(1+\varepsilon)n'}<t<2^{-(1-\varepsilon)n'}\right)=1.\label{eq:63} \end{equation} Using $H_{k}(\mu)\rightarrow\alpha$ and the definition of $\tau^{(t)}$, we conclude that \[ \lim_{n\rightarrow\infty}\mathbb{P}_{t\in R_{n}}\left(\frac{1}{qn-(1-\varepsilon)n'}H(\tau^{(t)},\mathcal{D}_{qn})\geq(1-\varepsilon)\alpha\right)=1.
\] Also, since $\tau^{(t)}$ is supported on an interval of order $t$, from \eqref{eq:63}, \eqref{eq:58} and concavity of entropy, \begin{eqnarray} \alpha & \geq & \limsup_{n\rightarrow\infty}\frac{1}{qn-(1-\varepsilon)n'}\mathbb{E}_{t\in R_{n}}\mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n,t)}*\tau^{(t)},\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})\right)\nonumber \\ & = & \limsup_{n\rightarrow\infty}\frac{1}{qn-(1-\varepsilon)n'}\mathbb{E}_{t\in R_{n}}\mathbb{E}_{i=n'}\left(H(\nu_{y,i}^{(n,t)}*\tau^{(t)},\mathcal{D}_{qn})\right).\label{eq:64} \end{eqnarray} This is the analogue of Equation \eqref{eq:29} in the proof of the uniformly contracting case and from here one proceeds exactly as in that proof to conclude that there is a function $\delta(\varepsilon)$, tending to $0$ as $\varepsilon\rightarrow0$, such that \[ \lim_{n\rightarrow\infty}\mathbb{P}_{t\in R_{n}}\left(\mathbb{P}_{i=n'}\left(\frac{1}{qn-(1-\varepsilon)n'}H(\nu_{y,i}^{(n,t)},\mathcal{D}_{qn})<\delta(\varepsilon)\right)>1-\delta(\varepsilon)\right)=1. \] Now, using Equation \eqref{eq:57} and the fact that the entropy of the distribution $\{p^{n,t}\}_{t\in R_{n}}$ is $o(n)$ as $n\rightarrow\infty$, by Lemma \ref{lem:entropy-combinatorial-properties} \eqref{enu:entropy-almost-convexity} one concludes that \[ \limsup_{n\rightarrow\infty}\frac{1}{qn}H(\nu^{(n)},\mathcal{D}_{qn}|\mathcal{D}_{(1-\varepsilon)n'})\leq\delta(\varepsilon), \] which is what we wanted to prove. \subsection{\label{sub:Transversality-and-exceptions}Transversality and the dimension of exceptions} In this section we prove Theorem \ref{thm:main-parametric}. Let $I\subseteq\mathbb{R}$ be a compact interval, and for $t\in I$ let $\Phi_{t}=\{\varphi_{i,t}\}_{i\in\Lambda}$ be an IFS, $\varphi_{i,t}(x)=r_{i}(t)(x-a_{i}(t))$. We define $\varphi_{i,t}$ and $r_{i}(t)$ for $i\in\Lambda^{n}$ as usual, set $\Delta_{i,j}(t)=\varphi_{i,t}(0)-\varphi_{j,t}(0)$ when $i,j\in\Lambda^{n}$ and for $i,j\in\Lambda^{\mathbb{N}}$ define $\Delta_{i,j}(t)=\lim\Delta_{i_{1}\ldots i_{n},j_{1}\ldots j_{n}}(t)$ (this is well defined since $\varphi_{i_{1}\ldots i_{n},t}(0)$ converges, in fact exponentially, as $n\rightarrow\infty$). For $i,j\in\Lambda^{n}$ or $i,j\in\Lambda^{\mathbb{N}}$ let $i\land j$ denote the longest common initial segment of $i,j$, and $|i\land j|$ its length, so $|i\land j|=\min\{k\,:\, i_{k}\neq j_{k}\}-1$. Let \[ r_{min}=\min_{i\in\Lambda}\min_{t\in I}|r_{i}(t)|, \] so $0<r_{min}<1$. For a $C^{k}$-function $F:I\rightarrow\mathbb{R}$ write $F^{(p)}=\frac{d^{p}}{dt^{p}}F$, and \[ \left\Vert F\right\Vert _{I,k}=\max_{p\in\{0,\ldots,k\}}\max_{t\in I}|F^{(p)}(t)|. \] In particular we write \[ R_{k}=\max_{i\in\Lambda}\left\Vert r_{i}\right\Vert _{I,k}. \] \begin{defn} The family $\{\Phi_{t}\}_{t\in I}$ is \emph{transverse of order $k$ }if $r_{i}(\cdot),a_{i}(\cdot)$ are $k$-times continuously differentiable and there is a constant $c>0$ such that for every $n\in\mathbb{N}$ and distinct $i,j\in\Lambda^{n}$, \begin{equation} \forall\; t_{0}\in I\quad\exists\; p\in\{0,1,2,\ldots,k\}\quad\mbox{such that}\quad|\Delta_{i,j}^{(p)}(t_{0})|\geq c\cdot|i\land j|^{-p}\cdot r_{i\land j}(t_{0}).\label{eq:transversality} \end{equation} \end{defn} The classical notion of transversality roughly corresponds to the case $k=1$ in this definition, see e.g. \cite[Definition 2.7]{PeresSchlag2000}. Unlike the classical notion, which either fails or is difficult to verify in many cases of interest, higher-order transversality holds almost automatically.
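Before turning to the formal computation, it may help to record the simple one-variable fact that underlies this claim (a heuristic remark, not needed in the sequel): if $F\not\equiv0$ is real-analytic on a compact interval $I$, then at every $t\in I$ some derivative of $F$ is nonzero, and by continuity and compactness there exist $k\in\mathbb{N}$ and $c>0$, depending only on $F$ and $I$, such that
\[
\max_{0\leq p\leq k}|F^{(p)}(t)|\geq c\qquad\mbox{for all }t\in I.
\]
Condition \eqref{eq:transversality} asks for such a bound uniformly over the countable family of functions $\Delta_{i,j}$, normalized by the natural scale $r_{i\land j}(t_{0})$ and allowing a polynomial loss in $|i\land j|$.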
To begin with, let $i,j\in\Lambda^{n}$ and observe that \begin{eqnarray*} \Delta_{i,j}(t) & = & r_{i\land j}(t)\widetilde{\Delta}_{i,j}(t), \end{eqnarray*} where, writing $u,v$ for the sequences obtained from $i,j$ by deleting the longest common initial segment $i\land j$, \begin{eqnarray*} \widetilde{\Delta}_{i,j}(t) & = & \Delta_{u,v}(t). \end{eqnarray*} Differentiating $p$ times, \begin{eqnarray*} \widetilde{\Delta}_{i,j}^{(p)}(t) & = & \frac{d^{p}}{dt^{p}}(r_{i\land j}(t)^{-1}\cdot\Delta_{i,j}(t))\\ & = & \sum_{q=0}^{p}\binom{p}{q}\cdot\frac{d^{q}}{dt^{q}}(r_{i\land j}(t)^{-1})\cdot\Delta_{i,j}^{(p-q)}(t). \end{eqnarray*} A calculation shows that \[ |\frac{d^{q}}{dt^{q}}(r_{i\land j}(t)^{-1})|\leq O_{q,r_{min},R_{q}}(|i\land j|^{q}\cdot r_{i\land j}(t)^{-1}). \] Thus we have the bound \[ |\widetilde{\Delta}_{i,j}^{(p)}(t)|=O_{p,r_{min},R_{p}}\left(\max_{0\leq q\leq p}\left(|i\land j|^{q}\cdot r_{i\land j}(t)^{-1}\cdot|\Delta_{i,j}^{(q)}(t)|\right)\right). \] \begin{prop} \label{prop:analitic-implies-transverse}Suppose $r_{i}(\cdot),a_{i}(\cdot)$ are real-analytic on $I$. Suppose that for $i,j\in\Lambda^{\mathbb{N}}$, $\Delta_{i,j}\equiv0$ on $I$ if and only if $i=j$. Then the associated family $\{\Phi_{t}\}_{t\in I}$ is transverse of order $k$ for some $k$.\end{prop} \begin{proof} First, for $x\in I$ we can extend $r_{i},a_{i}$ analytically to a complex neighborhood $U_{x}$ of $x$ on which $|r_{i}|$ are still bounded uniformly away from $1$. Define $\Delta_{i,j}(z)$ as before for $i,j\in\Lambda^{n}$ and $z\in U_{x}$, and note that for $i,j\in\Lambda^{\mathbb{N}}$ the limit $\Delta_{i,j}(z)=\lim\Delta_{i_{1}\ldots i_{n},j_{1}\ldots j_{n}}(z)$ is uniform for $z\in U_{x}$. This shows that $\Delta_{i,j}(t)$ is also real-analytic on $I$. Given $k$, from the expression for $\widetilde{\Delta}_{i,j}^{(p)}$ above, we see that if $c>0$ and there exists $t_{0}\in I$ such that $|\Delta_{i,j}^{(p)}(t_{0})|\leq c\cdot|i\land j|^{-p}\cdot r_{i\land j}(t_{0})$ for all $0\leq p\leq k$, then $|\widetilde{\Delta}_{i,j}^{(p)}(t_{0})|\leq c'$ for all $0\leq p\leq k$, where $c'=O_{k,R_{k}}(c)$. For each $k$ choose $c_{k}>0$ such that the associated $c'_{k}$ satisfies $c'_{k}<1/k$. Suppose that for all $k$ the family $\{\Phi_{t}\}$ is not transverse of order $k$. Then by assumption we can choose $n(k)$ and distinct $i^{(k)},j^{(k)}\in\Lambda^{n(k)}$, and a point $t_{k}\in I$, such that $|\Delta_{i^{(k)},j^{(k)}}^{(p)}(t_{k})|\leq c_{k}\cdot|i^{(k)}\land j^{(k)}|^{-p}\cdot r_{i^{(k)}\land j^{(k)}}(t_{k})$ for $0\leq p\leq k$, and hence $|\widetilde{\Delta}_{i^{(k)},j^{(k)}}^{(p)}(t_{k})|\leq c'_{k}$. Let $u^{(k)}$ and $v^{(k)}$ denote the sequences obtained from $i^{(k)}$ and $j^{(k)}$ by deleting the first $|i^{(k)}\land j^{(k)}|$ symbols, so that the first symbols of $u^{(k)}$ and $v^{(k)}$ now differ and $\Delta_{u^{(k)},v^{(k)}}=\widetilde{\Delta}_{i^{(k)},j^{(k)}}$. Hence we have \begin{equation} |\Delta_{u^{(k)},v^{(k)}}^{(p)}(t_{k})|\leq c'_{k}<1/k\quad\mbox{for all }\quad0\leq p\leq k.\label{eq:53} \end{equation} Passing to a subsequence $k_{\ell}$, we may assume that $t_{k_{\ell}}\rightarrow t_{0}$ and that $u^{(k_{\ell})}\rightarrow u\in\Lambda^{\mathbb{N}}$ and $v^{(k_{\ell})}\rightarrow v\in\Lambda^{\mathbb{N}}$ (the latter in the sense that all coordinates stabilize eventually to the corresponding coordinate in the limit sequence). Note that $u\neq v$, because $u^{(k_{\ell})},v^{(k_{\ell})}$ differ in their first symbol for all $\ell$, hence so do $u,v$.
It follows that $\Delta_{u^{(k_{\ell})},v^{(k_{\ell})}}\rightarrow\Delta_{u,v}$ uniformly and that the same holds for $p$-th derivatives. Hence for all $p\geq0$, using uniform convergence and \eqref{eq:53}, \[ |\Delta_{u,v}^{(p)}(t_{0})|=\lim_{\ell\rightarrow\infty}|\Delta_{u^{(k_{\ell})},v^{(k_{\ell})}}^{(p)}(t_{k_{\ell}})|=0. \] But $\Delta_{u,v}$ is real analytic so the vanishing of its derivatives implies $\Delta_{u,v}\equiv0$ on $I$, contrary to the hypothesis. \end{proof} We turn now to the implications of transversality. The key implication is provided by the following simple lemma. \begin{lem} \label{lem:transversality}Let $k\in\mathbb{N}$ and let $F$ be a $k$-times continuously differentiable function on a compact interval $J\subseteq\mathbb{R}$. Let $M=\left\Vert F\right\Vert _{J,k}$ and let $0<c<1$ be such that for every $x\in J$ there is a $p\in\{0,\ldots,k\}$ with $|F^{(p)}(x)|>c$. Then for every $0<\rho<c/2^{k}$, the set $F^{-1}(-\rho,\rho)\subseteq J$ can be covered by $O_{k,M,|J|}(1/c^{2})$ intervals of length $\leq2(\rho/c)^{1/2^{k}}$ each. \end{lem} \begin{proof} For brevity, we shall suppress dependence on the parameters $k,M,|J|$, so throughout this proof, $O(\cdot)=O_{k,M,|J|}(\cdot)$. The proof is by induction on $k$. For $k=0$ the hypothesis is that $|F^{(0)}(x)|=|F(x)|>c$ for all $x\in J$, hence $F^{-1}(-\rho,\rho)=\emptyset$ for $0<\rho<c=c/2^{0}$, and the assertion is trivial. Assume that we have proved the claim for $k-1$ and consider the case $k$. Let $J'$ be a maximal closed interval in $F^{-1}[-c,c]$ and let $G=F'|_{J'}$. Note that $G$ satisfies the hypothesis for $k-1$ and the same value of $c$ and $M$, and $\sqrt{c\rho}<c/2^{k-1}$, so from the induction hypothesis we find that $G^{-1}(-\sqrt{c\rho},\sqrt{c\rho})$ can be covered by $O(1/c)$ intervals of length $<2(\sqrt{c\rho}/c)^{1/2^{k-1}}=2(\rho/c)^{1/2^{k}}$ each. Let $U$ denote the union of this cover and consider the intervals $J'_{i}$ which are the closures of the maximal sub-intervals in $J'\setminus U$. By the above, the number of such intervals $J'_{i}$ is $\leq O(1/c)$. Now, on each $J_{i}'$ we have $|F'|\geq\sqrt{c\rho}$, so by continuity of $F'$ either $F'\geq\sqrt{c\rho}$ or $F'\leq-\sqrt{c\rho}$ in all of $J_{i}'$. An elementary consequence of this is that $J'_{i}\cap F^{-1}(-\rho,\rho)$ is an interval of length at most $2\rho/\sqrt{c\rho}=2\sqrt{\rho/c}\leq2(\rho/c)^{1/2^{k}}$. In summary we have covered $J'\cap F^{-1}(-\rho,\rho)$ by $O(1/c)$ intervals of length $2(\rho/c)^{1/2^{k}}$ each. It remains to show that there are $O(1/c)$ maximal intervals $J'\subseteq F^{-1}[-c,c]$ as in the paragraph above. In fact, we only need to bound the number of such $J'$ that intersect $F^{-1}(-\rho,\rho)$. For $J'$ of this kind, if $J'=J$ we are done, since this means there is just one such interval. Otherwise there is an endpoint $a\in J'$ with $|F(a)|=c$. There is also a point $b\in J'$ with $|F(b)|<\rho<c/2^{k}$. Since $|F'|\leq M$, we conclude that $|J'|\geq|b-a|\geq(c-\rho)/M\geq c/2M$. Thus, since the intervals $J'$ are disjoint, their number is $\leq|J|/(c/2M)=O(1/c)$, completing the induction step. \end{proof} Let $\bdim X$ denote the upper box dimension of a set $X$, defined by \[ \bdim X=\limsup_{r\rightarrow0}\frac{\log\min\{\ell\,:\, X\mbox{ can be covered by }\ell\mbox{ balls of radius }r\}}{\log(1/r)}. \] One always has $\dim X\leq\bdim X$. The packing dimension is defined by \[ \pdim X=\inf\{\sup_{n}\bdim X_{n}\,:\, X\subseteq\bigcup_{n=1}^{\infty}X_{n}\}.
\] Note that $\dim X\leq\pdim X$, and $Y\subseteq X$ implies $\pdim Y\leq\pdim X$. \begin{thm} \label{thm:transverse-implies-small-exceptions}If $\{\Phi_{t}\}_{t\in I}$ satisfies transversality of order $k\geq1$ on the compact interval $I$, then\textup{\emph{ the set $E$ of ``exceptional'' parameters in Theorem \ref{thm:description-of-exceptional-params} has}} packing (and hence Hausdorff) dimension $0$. \end{thm} \begin{proof} Write \[ M=\sup_{n}\sup_{i,j\in\Lambda^{n}}\left\Vert \Delta_{i,j}\right\Vert _{I,k}. \] That $M<\infty$ follows from $k$-fold continuous differentiability of $r_{i}(\cdot),a_{i}(\cdot)$ and the fact that $|r_{i}|$ are bounded away from $1$ on $I$. By transversality there is a constant $c>0$ such that for every $t\in I$, every $n$ and all distinct $i,j\in\Lambda^{n}$, \[ |\frac{\partial^{p}}{\partial t^{p}}\Delta_{i,j}(t)|>c\cdot|i\land j|^{-p}\cdot r_{min}^{|i\land j|}\qquad\mbox{for some }p\in\{0,\ldots,k\}. \] In what follows we suppress the dependence on $k,M,c$ and $|I|$ in the $O(\cdot)$ notation: $O(\cdot)=O_{k,M,c,|I|}(\cdot)$. We may assume that $c<1$ and $k\geq2$. Let $\varepsilon<cr_{min}/2k$ and fix $n$ and distinct $i,j\in\Lambda^{n}$. By the previous lemma, for all $0<\rho<c|i\land j|^{-k}r_{min}^{|i\land j|}/2^{k}$, and in particular for $0<\rho<cr_{min}^{n}/(2n)^{k}$, the set $\{t\in I\,:\,|\Delta_{i,j}|<\rho\}$ can be covered by at most $O((2n)^{k}/r_{min}^{n})$ intervals of length $2((2n)^{k}\rho/r_{min}^{n})^{1/2^{k}}$ each. Now set $\rho=\varepsilon^{n}$ (our choice of $\varepsilon$ guarantees that $\rho$ is in the proper range) and let $i,j$ range over their $\leq|\Lambda|^{n}$ different possible values. We find that the set \[ E_{\varepsilon,n}=\bigcup_{i,j\in\Lambda^{n}\,,\, i\neq j}(\Delta_{i,j})^{-1}(-\varepsilon^{n},\varepsilon^{n}) \] can be covered by $O((2n)^{k}|\Lambda|^{n}/r_{min}^{n})$ intervals of length $\leq((2n)^{k}\varepsilon^{n}/r_{min}^{n})^{1/2^{k}}$. Now, $E\subseteq E_{\varepsilon}$ where \begin{equation} E_{\varepsilon}=\bigcup_{N=1}^{\infty}\bigcap_{n>N}E_{\varepsilon,n}.\label{eq:46} \end{equation} By the above, for each $\varepsilon$ and $N$ we have \begin{eqnarray*} \bdim\left(\bigcap_{n>N}E_{\varepsilon,n}\right) & \leq & \lim_{n\rightarrow\infty}\frac{\log\left(O(2n)^{k}|\Lambda|^{n}/r_{min}^{n}\right)}{\log\left(((2n)^{k}\varepsilon^{n}/r_{min}^{n})^{-1/2^{k}}\right)}\\ & = & O(2^{k}\frac{\log(|\Lambda|/r_{min})}{\log(r_{min}/\varepsilon)}). \end{eqnarray*} The last expression is $o(1)$ as $\varepsilon\rightarrow0$, uniformly in $N$. Thus by \eqref{eq:46}, the same is true of $E_{\varepsilon}$, and $E\subseteq E_{\varepsilon}$ for all $\varepsilon$, so $E$ has packing (and Hausdorff) dimension $0$. \end{proof} Theorem \ref{thm:main-parametric} now follows by combining Proposition \ref{prop:analitic-implies-transverse} and Theorem \ref{thm:transverse-implies-small-exceptions}. \subsection{\label{sub:Applications}Miscellaneous proofs} To complete the proof of Corollary \ref{cor:algebraic-parameters} we have: \begin{lem} \label{lem:Liouville-bound}Let $A\subseteq\mathbb{R}$ be a finite set of algebraic numbers over $\mathbb{Q}$. Then there is a constant $0<s<1$ such that, for any polynomial expression $x$ of degree $n$ in the elements of $A$ with coefficients $0,\pm1$, either $x=0$ or $|x|>s^{n}$. \end{lem} \begin{proof} Choose an algebraic integer $\alpha$ such that $A\subseteq\mathbb{Q}(\alpha)$.
Since the statement is unchanged if we multiply all elements of $A$ by an integer, we can assume that the elements of $A$ are integer polynomials in $\alpha$ of degree $\leq d$ with coefficients bounded by $N$, for some $d,N$. Substituting these polynomials into the expression for $x$, we have an expression $x=\sum_{k=0}^{dn}n_{k}\alpha^{k}$ where $n_{k}\in\mathbb{Z}$ and $|n_{k}|\leq C^{n}$ for some constant $C$ depending only on $d,N$ and $|A|$. It suffices to prove that any such expression is either $0$ or of absolute value $\geq s^{n}$ for $0<s<1$ independent of $n$ (but which may depend on $\alpha$ and hence on $d,N$). In proving this last statement we may assume that $d=1$ (replace $s$ by $s^{1/d}$ and change variables to $n'=dn$). Let $\alpha=\alpha_{1},\alpha_{2},\ldots,\alpha_{d}$ denote the algebraic conjugates of $\alpha$ and $\sigma_{1},\sigma_{2},\ldots,\sigma_{d}$ the automorphisms of $\mathbb{Q}(\alpha)$, with $\sigma_{i}\alpha=\alpha_{i}$. If $x\neq0$ then $\prod_{i=1}^{d}\sigma_{i}(x)\in\mathbb{Z}$, so \[ 1\leq|\prod_{i=1}^{d}\sigma_{i}(x)|=|x|\cdot\prod_{i=2}^{d}|\sum_{k=0}^{n}n_{k}\alpha_{i}^{k}|\leq |x|\cdot\prod_{i=2}^{d}\sum_{k=0}^{n}|n_{k}|\,|\alpha_{i}|^{k}\leq |x|\cdot(n\cdot C^{n}\cdot\alpha_{\max}^{n})^{d}, \] where $\alpha_{\max}=\max\{1,|\alpha_{2}|,\ldots,|\alpha_{d}|\}$. Dividing out gives the lemma. \end{proof} We finish with some comments on Sinai's problem, Theorem \ref{thm:Sinais-problem}. We first state a generalization of Theorem \ref{thm:description-of-exceptional-params} needed to treat families of IFSs that contract only on average. Suppose that for $t\in I$ we have a family $\Phi_{t}=\{\varphi_{i,t}\}_{i\in\Lambda}$ of (not necessarily contracting) similarities of $\mathbb{R}$, and as usual write $\varphi_{i,t}=r_{i,t}U_{i,t}+a_{i,t}$. Let $p$ be a fixed probability vector and suppose that for each $t$ we have $\sum p_{i}\log r_{i,t}<0$, i.e. the systems contract on average. One can then show that there is a unique probability measure $\mu_{t}$ on $\mathbb{R}$ satisfying $\mu_{t}=\sum_{i\in\Lambda}p_{i}\cdot\varphi_{i,t}\mu_{t}$ \cite{NicolSidorovBroomhead2002}, that $H(\mu_{t},\mathcal{D}_{m})<\infty$ for every $t$ and $m$, and that $\mu_{t}([-R,R])\rightarrow1$ as $R\rightarrow\infty$ uniformly in $t$. Under these conditions one can verify the stronger property that for every $t\in I$ we have \[ \left|H_{m}(\mu_{t})-H_{m}((\mu_{t})_{[-R,R]})\right|=o(1)\qquad\mbox{as }R\rightarrow\infty \] uniformly in $t$ and $m$. \begin{thm} Let $(\Phi_{t})_{t\in I}$, $p$, and $\mu_{t}$ be as in the preceding paragraph. Let $\widetilde{\mu}$ denote the product measure on $\Lambda^{\mathbb{N}}$ with marginal $p$, and suppose that $A\subseteq\Lambda^{\mathbb{N}}$ is a Borel set such that $\widetilde{\mu}(A)>0$. Write \[ E=\bigcap_{\varepsilon>0}\left(\bigcup_{N=1}^{\infty}\,\bigcap_{n>N}\left(\bigcup_{i,j\in A}(\Delta_{i,j})^{-1}((-\varepsilon^{n},\varepsilon^{n}))\right)\right). \] Then $\dim\mu_{t}=\min\{1,\sdim\mu_{t}\}$ for every $t\in I\setminus E$. Furthermore suppose that $I\subseteq\mathbb{R}$ is compact and connected, and that the parametrization is analytic in the sense of Theorem \ref{thm:main-parametric}. If \[ \forall i,j\in A\quad\left(\;\Delta_{i,j}\equiv0\mbox{ on }I\quad\iff\quad i=j\;\right) \] then the set $E$ above is of packing (and Hausdorff) dimension $0$, and in particular of Lebesgue measure $0$.
\end{thm} The proof is the same as the proofs of Theorems \ref{thm:description-of-exceptional-params} and \ref{thm:main-parametric}, except that in analyzing the resulting convolution one must approximate $\mu_{t}$ by $(\mu_{t})_{[-R,R]}$ for an appropriately large $R$ that is fixed in advance, with the scale $n$ large relative to $R$. We omit the details. Let us see how this applies to Theorem \ref{thm:Sinais-problem}, where $\varphi_{-1,\alpha}(x)=(1-\alpha)x-1$ and $\varphi_{1,\alpha}(x)=(1+\alpha)x+1$ for $\alpha\in(0,1]$, and $p=(1/2,1/2)$. It suffices to consider the system for $\alpha\in[s,1]$ for some $s>0$. Let $A$ be the set of $i\in\Lambda^{\mathbb{N}}$ such that $|\frac{1}{N}\#\{n\leq N\,:\, i_{n}=1\}-\frac{1}{2}|<\delta$ for $N>N(\delta)$, where $\delta>0$ is small enough to ensure that the contraction ratio of $\varphi_{i_{1}\ldots i_{n},\alpha}$ is less than $1$ when this condition holds, and $N(\delta)$ large enough that $\widetilde{\mu}(A)>0$; in fact we can make $\widetilde{\mu}(A)$ arbitrarily close to $1$, by the law of large numbers. It remains to verify for $i,j\in A$ that $\Delta_{i,j}$ vanishes on $[s,1]$ if and only if $i=j$. Note that for $i\in\{-1,1\}^{n}$, \[ \varphi_{i,\alpha}(0)=i_{1}+i_{2}(1+i_{1}\alpha)+i_{3}(1+i_{1}\alpha)(1+i_{2}\alpha)+\ldots+i_{n}\prod_{k=1}^{n-1}(1+i_{k}\alpha). \] Thus $\Delta_{i,j}$ is a series whose terms are of the form $c_{k,m}(1-\alpha)^{k}(1+\alpha)^{m}$ for some $c_{k,m}\in\{0,\pm1\}$, and $i=j$ if and only if all terms are $0$. Furthermore, there is an $n_{0}$ such that if $k+m\geq n_{0}$ and $c_{k,m}\neq0$, then $k>(1-\delta)m$. Thus since $s\leq\alpha\leq1$ and $\delta$ was chosen small enough, the series converges uniformly on $[s,1]$, and furthermore there is an $\varepsilon>0$ such that the series converges uniformly on some larger interval $[s,1+\varepsilon]$, and even in a neighborhood of $1$ in the complex plane. Hence $\Delta_{i,j}(\cdot)$ is real-analytic on $[s,1+\varepsilon]$ and is given by this series. Now, if $i\neq j$ we can divide out by the highest power $(1-\alpha)^{k_{0}}$ that is common to all the terms (possibly $k_{0}=0$), and evaluate the resulting function at $\alpha=1$. We get a finite sum of the form $\sum_{(k,m)\in U}c_{k,m}2^{m}$ for some finite set of indices $U\subseteq\mathbb{N}^{2}$ such that $c_{k,m}\in\{\pm1\}$ for $(k,m)\in U$. Such a sum cannot vanish, hence by analyticity $\Delta_{i,j}\not\equiv0$ on every sub-interval of $[s,1+\varepsilon]$, and in particular $\Delta_{i,j}\not\equiv0$ on $[s,1]$, as desired.
\section{Introduction} \quad\quad The original Hardy-type inequality is the following, given by G. Hardy \cite{Ha04}.\\ {\bf Proposition 1}\quad If $p>1, f(x)\geq0,$ and $F(x)=\int_0^x f(t)dt,$ then \begin{eqnarray}\label{1.1} \int_0^{\infty}\bigg(\frac{F}{x}\bigg)^p dx< \bigg(\frac{p}{p-1}\bigg)^p\int_0^{\infty}f^p dx , \end{eqnarray} unless $f\equiv0.$ The constant is the best possible.\\ \quad In general, for $1<p<n,$ we have \begin{eqnarray}\label{a1.1}\bigg\|\frac{f}{|x|}\bigg\|_p\leq \frac{p}{n-p}\|\nabla f\|_p.\end{eqnarray} This is an immediate consequence of the following proposition with $q=p<n$, the proof of which is given in T. Cazenave \cite{Ca03}.\\ {\bf Proposition 2}\quad Let $1\leq p<\infty.$ If $q<n$ is such that $0\leq q \leq p$, then $\frac{|u(\cdot)|^p}{|\cdot|^q}\in L^1(\R^n)$ for every $u\in W^{1,p}(\R^n)$. Furthermore, \begin{eqnarray}\label{1.2} \int_{\R^n}\frac{|u(\cdot)|^p}{|\cdot|^q}dx\leq\Big(\frac{p}{n-q}\Big)^q\|u\|_{L^p}^{p-q}\|\nabla u\|_{L^p}^q, \end{eqnarray} for every $u\in W^{1,p}(\R^n)$. Our goal is to prove the following theorem:\\ {\bf Theorem}\quad Let $p>1$ and $0\leq s<{\frac n p}$. Then there exists a constant $C$ such that for all $u\in \dot W^{s,p}(\R^n)$, \begin{eqnarray}\label{1.3} \int_{\R^n}\frac{|u(x)|^p}{|x|^{sp}}dx\leq C \|u\|_{\dot W^{s,p}}^p. \end{eqnarray} {\bf Remark:}\quad $(i)$ \quad If $s=0$, the result obviously holds for all ${1<p<\infty}$. Comparing (\ref{1.3}) with (\ref{a1.1}), the condition $sp<n$ is essential, and so (\ref{1.3}) admits $p\ge n$. \vskip0.2cm $(ii)$ \quad For every $0\leq s \leq 1$, by interpolation, we know that $$ \|u\|_{\dot W^{s,p}}^p\leq C\|u\|_{L^p}^{p(1-s)}\|\nabla u\|_{L^p}^{ps}. $$ Hence (\ref{1.3}) implies (\ref{1.2}), up to the value of the constant, if we choose $s=\frac q p$. \\ \vskip0.2cm $(iii)$\quad Up to the value of the constant $C$, the inequality is sharp. In detail, for $p=\frac{n}{s}$, if we take $f(x)\in C_c^{\infty}(\R^n)$ satisfying $f(x)=1$ for $|x|\leq1$ and $f(x)=0$ for $|x|\geq2$, then the above theorem cannot hold. In fact $$\int_{\R^n}\frac{|f(x)|^p}{|x|^{n}}dx\geq{\int_{|x|\leq1}\frac{1}{|x|^{n}}dx}=\infty$$ while\\ $$\|f\|_{\dot W^{s,p}}^p<\infty.$$ \vskip0.2cm $(iv)$ \quad For the case $p=2, 0\leq s<\frac{n}{p},$ the corresponding result is\\ $$\int_{\R^n}\frac{|u(\cdot)|^2}{|\cdot|^{2s}}dx\leq C\|u\|_{\dot{H^s}}^2 .$$ Chemin proves it by a duality method and Littlewood-Paley decomposition, while Tao gives another proof using a frequency splitting technique. The details can be found in J.-Y. Chemin \cite{Ch06} and T. Tao \cite{TT06}. Neither method applies in our case. The method of this paper is different from those of Cazenave, Chemin and Tao; it relies on the Riesz potential characterization of homogeneous Sobolev spaces and on maximal function theory, recalled in Section 2.\\ The theorem will be proved in Section 3. First we give some notation and preliminaries. \section{Preliminaries} \quad First we introduce a definition of the homogeneous Sobolev space via the Riesz potential; details can be found in E. M. Stein \cite{ST70} and C. Miao \cite{Mi04}.\\ {\bf { Definition 1}}\quad Define the Riesz potential $I_{\alpha} f$ by $$I_{\alpha} f=(-\triangle)^{-\frac{\alpha}2} f(x)=C_{n,\alpha}\int_{\R^n}|x-y|^{-n+{\alpha}}f(y)dy,$$ where $C_{n,\alpha}$ is a constant depending only on $n$ and $\alpha$.
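Equivalently, with a suitable normalization of the Fourier transform (the constant $C_{n,\alpha}$ is chosen precisely for this), the Riesz potential acts as a Fourier multiplier, which is perhaps the most transparent way to read the definition below:
$$\widehat{I_{\alpha} f}(\xi)=|\xi|^{-\alpha}\widehat{f}(\xi),\qquad \widehat{(-\triangle)^{\frac s2}f}(\xi)=|\xi|^{s}\widehat{f}(\xi).$$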
The norm of the homogeneous Sobolev space is given by\\ $$\|f\|_{{\dot W}^{s,p}}=\|(-\triangle)^{\frac{s}2} f(x)\|_p=\|I_{-s} f\|_p.$$ {\bf{ Definition 2}}\quad Let $f\in \ L_{loc}({\R^n})$. For $x\in\R^n$ and a ball $B\subset\R^n$ with $x\in B,$ we define $$ Mf(x)=\sup_{x\in B}{\frac 1{|B|}}\int_B |f(y)|dy,\quad |B|=m(B). $$ $Mf$ is called the Hardy-Littlewood (H-L) maximal function, and $M$ the maximal operator.\\ If $B(r,x)\subset\R^n$ is the ball with center $x$ and radius $r$, define $$ Mf(x)=\sup_{r>0}{\frac 1{|B(r,x)|}}\int_{B(r,x)} |f(y)|dy.$$ $Mf$ is then called the centered maximal function, and $M$ the centered maximal operator.\\ As is well known, the two definitions are equivalent, and the maximal operator has the following properties:\\ {\bf{Lemma}}\quad Let $f(x)$ be a measurable function on $\R^n$. Then\\ \quad (i)\quad If $f\in \ L^p({\R^n}),1\leq p\leq \infty,$ then for a.e. $x\in\R^n$, $Mf(x)< \infty.$\\ \quad (ii)\quad The operator $M$ is of weak type $(1,1)$; in detail, for all ${f\in\ L^1(\R^n)}$ and $\alpha > 0$, we have\\ $$ m\{{x\in\R^n\,:\,Mf(x)>\alpha}\}\leq \frac A\alpha\int_{\R^n}|f(x)|dx,$$ where the constant $A$ depends only on $n$.\\ \quad (iii)\quad Let $1<p\leq\infty$. Then the operator $M$ is of strong type $(p,p)$; that is, for all ${f\in\ L^p(\R^n)},$ we have ${Mf\in L^p(\R^n)},$ with\\ $$\|Mf\|_p\leq A_p\|f\|_p ,$$ where $A_p$ is a constant depending only on $n$ and $p$.\\ As an immediate result, we have the following corollary:\\ {\bf{Corollary}}\quad For all ${f\in\ L^{p'}(\R^n)}$ and ${p'}>q$, we have $$ \|(M(|f|^q))^\frac 1 q\|_{p'}\leq C \|f\|_{p'}.$$ In fact, as ${p'}>q$, that is $\frac{p'}{q}>1,$ by the {\bf Lemma}, we have\\ $$\|(M(|f|^q))^\frac 1 q\|_{L_{p'}}=\|M(|f|^q)\|_{L^{\frac {p'} q }}^\frac{1}{q}\leq C \||f|^q\|_{L^{\frac {p'} q}}^{\frac{1}{q}} = C \|f\|_{L^{p'}}. \quad \quad \Box $$ \section{Proof of the theorem} Proof:\quad Let ${I_{-s} u}=f$; then $u=I_s f$.\; Using the definition of the Sobolev space, it is sufficient to show that:\\ \begin{eqnarray}\label{3.1} \bigg\|\frac{I_s f}{|x|^s}\bigg\|_p\leq C\|f \|_p. \end{eqnarray} Let \begin{align*} A f&=\frac{I_s f}{|x|^s}=\int_{\R^n} \frac{f(y)}{|x-y|^{n-s}|x|^s}dy \\ &=\int_{|x-y|\leq 100|x|}\frac{f(y)}{|x-y|^{n-s}|x|^s}dy + \int_{|x-y|\geq100|x|}\frac{f(y)}{|x-y|^{n-s}|x|^s}dy \\ &={A_1 f + A_2 f}. \end{align*} To prove (\ref{3.1}), we need only prove that both $A_1 $ and $A_2 $ are of strong type $(p,p)$.\\ We consider $A_1 f$ first. Noticing that $s > 0,$ we have \begin{align*} |A_1 f|&\leq\sum_{j\leq 0}\int_{{2^{j-1}100|x|}\leq|x-y|\leq{2^j}100|x|}\frac{|f(y)|}{|x-y|^{n-s}|x|^s}dy\\ &=\sum_{j\leq 0}\int_{|x-y|\sim{2^j}100|x|}\frac{|f(y)|}{|x-y|^{n-s}|x|^s}dy\\ &\leq \sum_{j\leq 0}\int_{|x-y|\sim{2^j}100|x|}\frac{|f(y)|}{(2^{j-1}100|x|)^{n-s}|x|^s}dy\\ &\leq C\sum_{j\leq 0}2^{js}\cdot\frac{1}{(2^{j}100|x|)^{n}}\int_{|x-y|\leq{2^j}100|x|}|f(y)|dy\\ &\leq C{\sum_{j\leq 0}2^{js}}\,Mf(x)\\ &\leq C' Mf(x). \end{align*} Since $p>1,$ the {\bf Lemma} gives $\|A_1 f\|_p\leq C\|f\|_p.$
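Note that the assumption $s>0$ enters the estimate of $A_1$ only through the convergence of the geometric series:
$$\sum_{j\leq 0}2^{js}=\frac{1}{1-2^{-s}}<\infty,$$
so the constant $C'$ above degenerates as $s\rightarrow 0^+$; for $s=0$, however, $Af=f$ and (\ref{3.1}) is trivial.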
We now turn to $A_2 f$. It is easy to see that $$|A_2 f(x)|\leq\int_{|x-y|\geq100|x|}\frac{|f(y)|}{|x-y|^{n-s}|x|^s}dy \leq C\int_{|y|\geq{99|x|}}\frac{|f(y)|}{|y|^{n-s}|x|^s}dy \triangleq C\, B_2 f(x).$$ For all $g\in\ L^{p'}(\R^n)$, \begin{align*} \big(B_2 f(x),g(x)\big)&=\bigg(\int_{|y|\geq{99|x|}}\frac{|f(y)|}{|y|^{n-s}|x|^s}dy,g(x)\bigg)\\ &=\int_{\R^n}{\int_{|y|\geq{99|x|}}\frac{|f(y)|}{|y|^{n-s}|x|^s}dy \cdot g(x)dx}\\ &=\int_{\R^n}{\frac{1}{|y|^{n-s}}\int_{|x|\leq\frac{|y|}{99}}\frac{g(x)}{|x|^{s}}dx\cdot |f(y)|dy}\\ &=\big(Tg(y),|f(y)|\big) \end{align*} where $$Tg(y)={\frac{1}{|y|^{n-s}}\int_{|x|\leq\frac{|y|}{99}}\frac{g(x)}{|x|^{s}}dx. }$$ To prove that $B_2$ is of strong type $(p, p)$, it is sufficient to show that \\ \begin{eqnarray}\label{3.2} \|Tg(y)\|_{p'}\leq C\|g\|_{p'}. \end{eqnarray} In fact, by H\"{o}lder's inequality, \begin{align*} \big(B_2f(x),g(x)\big)&=\big(Tg(y),|f(y)|\big)\\ &\leq\|Tg(y)\|_{p'}\|f(y)\|_p\\ &\leq C\|g\|_{p'}\|f(y)\|_p, \end{align*} so that $B_2$ is of strong type $(p,p)$ by duality. Hence, $A_2$ is also of strong type $(p,p)$. Now let us prove formula (\ref{3.2}). For $s>0,$ when ${sq'}<n$, that is $q>\frac{n}{n-s},$ we have, by H\"{o}lder's inequality, \begin{align*} {|Tg(y)|} &\leq {\frac{1}{|y|^{n-s}}\bigg(\int_{|x|\leq\frac{|y|}{99}}|g(x)|^qdx\bigg)^{\frac{1}{q}} \bigg(\int_{|x|\leq\frac{|y|}{99}}\frac{1}{|x|^{sq'}}dx\bigg)^{\frac{1}{q'}}} \\ &\leq C\frac{1}{|y|^{n-s}}\cdot\bigg(\int_{|x|\leq\frac{|y|}{99}}|g(x)|^qdx\bigg)^{\frac{1}{q}}\cdot{|y|^{{(n-{sq'})}\frac{1}{q'}}}\\ &\leq C\frac{1}{|y|^{\frac{n}{q}}}\cdot\bigg(\int_{|x|\leq\frac{|y|}{99}}|g(x)|^qdx\bigg)^{\frac{1}{q}}\\ &\leq C(M(|g|^q))^{\frac{1}{q}}(y). \end{align*} Since $1<p<{\frac{n}{s}}$ is equivalent to $p'>\frac{n}{n-s}$, we may choose $q$ with $p'>q>{\frac{n}{n-s}}$; the {\bf Corollary} then shows that $T$ is of strong type $(p',p')$. This proves the theorem.\quad \quad $\Box$ \\ {\bf Remark:}\quad For $p=1$, using our method we can only obtain that $A_1$ is of weak type $(1,1)$.\\ {\bf Acknowledgements:}\quad The author is grateful to Prof. Xiaochun Li and Prof. Changxing Miao for their valuable suggestions.
\section{Introduction} Starting from an interior metric of a known relativistic source, the gravitational field of that source is unique, and is described by a solution of the vacuum Einstein equations which matches satisfactorily on the boundary that delimits the compact gravitational object. By studying such an outer metric it is possible to obtain information about the source, and this is something to which many research papers have been devoted. In particular, one technique that has proved very useful is to use the relativistic multipole moments (RMM) to describe the field or its gravitational effects on test particles in the presence of such fields. Thus, for example, we were able to distinguish sources by studying gyroscopic precession \cite{gyros} and geodesics \cite{geod}, as well as circular orbits and ISCOs \cite{iscos}, \cite{sanabria}, or gravitational radiation \cite{RG} or collapse processes \cite{collapse}, or even to obtain vacuum solutions with a prescribed set of RMM \cite{MQ}. On the contrary, if one starts with a known vacuum solution and tries to determine the source, one finds that there are an infinite number of possible interior metrics and distributions of different matter matching that exterior. What has been attempted in many lines of research is to relate the RMM, which were so useful in describing the external gravitational field, to the source \cite{epjc}. In this line of work a recent result was obtained in \cite{weylsources}, \cite{kerrsource}, where interior metrics are computed and properly matched for any of the external Weyl metrics (vacuum solutions with static and axial symmetries) \cite{weyl},\cite{quevedo},\cite{tesis}, as well as for stationary axially symmetric metrics (in particular Kerr). The relevance of the result is enhanced by the fact that the inner line element is constructed in terms of the external metric functions evaluated on the boundary, so that we relate the interior of the source (as well as its energy-momentum tensor by means of the Einstein equations) to the gravitational field. The question arising here is: to whom do the RMM actually belong? Do they belong to the exterior or the interior metric? The answer to that question is clear: the RMM are quantities belonging to the global solution on the whole space-time, in such a way that every interior solution appropriately matched with a vacuum solution automatically assumes those RMM. Arguments and specific calculations to prove this assertion will be given in the next sections. The fact is that the RMM have been defined in the literature \cite{geroch}, \cite{thorne}, \cite{hansen}, \cite{FHP} by means of the exterior metric (Geroch, Hansen, Fodor-Hoenselaers-Perjes method, Thorne...), as well as by the result obtained in \cite{RMMsource}. A relationship between those quantities and the material content of the source would provide the RMM with interesting physical meaning. In a recent paper a relativistic generalized Gauss theorem (RGGT) \cite{RMMsource} was presented which allows one to calculate the RMM defined by Geroch \cite{geroch} and Thorne \cite{thorne}, through a specific integral definition trying to generalize the Newtonian scenario for those quantities in terms of the source. Such a calculation is possible if we know the expression in harmonic coordinates of the metric at infinity, and then the volume integral defining the RMM can be calculated, using the RGGT, as an integral over the surface at infinity.
The aim of this paper is to use that volume integral, which can be explicitly calculated once we know the interior metric matching the vacuum Weyl family of solutions, to link the RMM to the source of the exterior field (to the energy--momentum components as well as the interior metric). Doing so we will be able to connect the RMM that characterize the space--time with physical properties of the source by means of specific integral expressions inside the boundary of the compact object. Although very different in its presentation, this program is somewhat similar to the one presented in \cite{gurle}. To achieve our goal we shall extensively use a general method to construct global static axially symmetric solutions to the Einstein equations developed in \cite{weylsources}. A very brief review of this method is presented in the next section; all the details may be found in that reference. Section 3 is devoted to explicitly calculating the integral definition of the RMM, in both its volume and surface integral versions, thereby providing a proof of the relativistic Gauss theorem. The result obtained previously is used in Section 4 to establish a relationship between the RMM and the source, and some examples are outlined in Section 5. \section{The global static and axisymmetric metric in Erez-Rosen coordinates} We shall write the global static and axisymmetric line element in Erez-Rosen coordinates \cite{ER} as: \begin{equation} ds^2=-e^{2\sigma} dt^2+e^{2\nu} dr^2+e^{2 \eta} r^2 d\theta^2+e^{-2 \mu} r^2\sin^2\theta d\varphi^2, \label{ERglobal} \end{equation} where the metric functions depend on $r$ and $\theta$, and are defined as follows \begin{equation} e^{2\sigma}=\left\lbrace \begin{matrix} & e^{2\hat a} Z^2 & \ , \ r\leq r_{\Sigma} \nonumber \\ & e^{2 \psi} & \ , \ r\geq r_{\Sigma} \nonumber \end{matrix} \right. \quad , \ e^{2\nu}=\left\lbrace \begin{matrix} & \frac{e^{2\hat g-2\hat a}}{A} & \ , \ r\leq r_{\Sigma} \nonumber \\ & e^{-2\psi+2 \hat{\gamma}}& \ , \ r\geq r_{\Sigma} \nonumber \end{matrix} \right. \end{equation} \begin{equation} e^{2\eta}=\left\lbrace \begin{matrix} & e^{2\hat g-2\hat a} & \ , \ r\leq r_{\Sigma} \nonumber \\ & e^{-2\hat{\psi}+2 \hat{\gamma}}& \ , \ r\geq r_{\Sigma} \nonumber \end{matrix} \right. \quad , \ e^{-2\mu}=\left\lbrace \begin{matrix} & e^{-2\hat a} & \ , \ r\leq r_{\Sigma} \nonumber \\ & e^{-2\hat{\psi}}& \ , \ r\geq r_{\Sigma} \nonumber \end{matrix} \right. \end{equation} where the boundary surface of the source is defined by a constant value $r_{\Sigma}$ of the radial coordinate, $r=r_{\Sigma}$. $\displaystyle{A\equiv 1-\frac{2Mr^2}{r_{\Sigma}^3}}$, and $Z=\frac 32 \sqrt{A( r_{\Sigma})}-\frac 12 \sqrt{A}$, $M$ being the mass. $\hat{\gamma}\equiv \gamma-\gamma_s$, $\hat{\psi}\equiv \psi-\psi_s$, where $\psi$, $\gamma$ are any metric functions of the Weyl family of vacuum solutions, $\gamma_s$, $\psi_s$ being the corresponding metric functions of the Schwarzschild solution. $\hat a$, $\hat g$ are functions of the variables $(r,\theta)$ suitably constructed in \cite{weylsources} in order to guarantee a good physical behaviour of the energy-momentum tensor and the matching (Darmois) conditions. The function $\hat a=a-a_s$ is constructed in such a way that $a(r_{\Sigma})=\psi(r_{\Sigma})$, $a_s(r_{\Sigma})=\psi_s(r_{\Sigma})$; likewise $\hat g=g-g_s$ is such that $g(r_{\Sigma})=\gamma(r_{\Sigma})$, $g_s(r_{\Sigma})=\gamma_s(r_{\Sigma})$.
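As a quick consistency check of these definitions, note that $g_{tt}$ is continuous across the boundary. Since in Erez-Rosen coordinates the Schwarzschild potential satisfies $e^{2\psi_s}=1-\frac{2M}{r}$, and since $A(r_{\Sigma})=1-\frac{2M}{r_{\Sigma}}$ and $Z(r_{\Sigma})=\frac 32\sqrt{A(r_{\Sigma})}-\frac 12\sqrt{A(r_{\Sigma})}=\sqrt{A(r_{\Sigma})}$, the condition $\hat a(r_{\Sigma})=\hat\psi(r_{\Sigma})$ built into the construction gives
\[
e^{2\hat a}Z^2\Big|_{r_{\Sigma}}=e^{2\hat\psi(r_{\Sigma})}\left(1-\frac{2M}{r_{\Sigma}}\right)=e^{2\hat\psi(r_{\Sigma})}e^{2\psi_s(r_{\Sigma})}=e^{2\psi(r_{\Sigma})},
\]
which is indeed the exterior value.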
The general solution for the exterior metric function is (Weyl family) \cite{weyl}, \cite{tesis} \begin{equation} \psi=\sum_{n=0}^{\infty}(-1)^{n+1} q_n Q_n(x) P_n(y), \label{erezrosenfamily} \end{equation} where $P_n(\cos \theta)$ are Legendre Polynomials, $Q_n(x)$ (with $x\equiv (r-M)/M$) are Legendre functions of second kind and $q_n$ a set of arbitrary constants. The relationship between the canonical Weyl coordinates $\lbrace R, \omega=\cos\Theta\rbrace$ and the Erez-Rosen $\lbrace r, y=\cos\theta \rbrace$ system of coordinates is as follows \begin{eqnarray} R&=&\sqrt{(r-M)^2-M^2(1-y^2)},\nonumber \\ \omega&=&y\frac{( r -M)}{R}. \label{weylcoord} \end{eqnarray} In addition, to assure a good behaviour of the physical variables at the center of the inner distribution we shall demand (the field equations have been used): $\hat a^{\prime}_0={\hat a}_{,\theta 0}={\hat a }_{,\theta \theta 0}={\hat a}^{\prime}_{,\theta 0}={\hat a }^{\prime}_{,\theta \theta 0}=0,\ \hat g^{\prime}_0={\hat g}_{,\theta 0}={\hat g }_{,\theta \theta 0}={\hat g}^{\prime}_{,\theta 0}={\hat g }^{\prime}_{,\theta \theta 0}=0, \hat g^{\prime \prime}_0={\hat g}^{\prime \prime}_{,\theta 0}=0$, where the prime denotes derivative with respect to $r$, the subscript $\theta$ denotes derivative with respect to $\theta$, and the subscript $0$ indicates that the quantity is evaluated at the center of the distribution. With all these considerations we get for the metric functions the following expressions \cite{weylsources}: \begin{eqnarray} \hat a(r,\theta)&=&\hat \psi_{\Sigma} s^2(3-2s) +r_{\Sigma}\hat \psi^{\prime}_{\Sigma}s^2(s-1)+{\mathbb F},\nonumber \\ \hat g(r,\theta)&=&\hat \Gamma_{\Sigma} s^3(4-3s) +r_{\Sigma}\hat \Gamma^{\prime}_{\Sigma}s^3(s-1)+{\mathbb G}, \label{aygsimple} \end{eqnarray} where $s\equiv r/r_{\Sigma} \in \left[0,1\right]$ and ${\mathbb F} \equiv (r-r_{\Sigma})^2F(r,\theta)$, ${\mathbb G}\equiv (r-r_{\Sigma})^2G(r,\theta)$ are arbitrary functions with the following constraints: $F(0,\theta)=F^{\prime}(0,\theta)=0$, $G(0,\theta)=G^{\prime}(0,\theta)=G^{\prime \prime}(0,\theta)=0$. These metric functions satisfy the junction conditions and produce physical variables which are regular within the fluid distribution. Furthermore the vanishing of $\hat g$ on the axis of symmetry, required by the regularity conditions that ensure elementary flatness in the vicinity of the axis of symmetry (and in particular at the center), is guaranteed by the fact that $\hat \Gamma_{\Sigma}$ and $\hat \Gamma^{\prime}_{\Sigma}$ vanish on the axis of symmetry. Moreover, at this level of generality we can assure that the junction conditions imply the vanishing of the radial pressure $(P_{rr}\equiv g_{rr}T^1_1)_{\Sigma}=0$ at the boundary, and it can be shown that $T_1^2$ vanishes on the boundary surface as well \cite{weylsources}. When $\hat a=\hat g=0$ we recover the spherical case of a perfect fluid with isotropic pressures: \begin{eqnarray} ds^2_I&=& -Z(r)^2 dt^2+ \frac{1}{A( r)} d{ r}^2+{ r}^2 (d \theta^2+\sin^2\theta d\varphi^2), \\ ds^2_E&=& -\left(1-\frac{2M}{r}\right) dt^2+ \frac{ d{ r}^2}{1-\frac{2M}{r}}+{r}^2( d \theta^2+\sin^2\theta d\varphi^2).\nonumber \label{sphericalER} \end{eqnarray} Thus, the global line element (\ref{ERglobal}) describes in the vacuum any solution of the Weyl family ($\psi, \gamma$) and a well-behaved interior solution with an isotropic perfect fluid limit when matching with the Schwarzschild space--time.
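In the spherical limit just described the source is the well-known homogeneous (constant energy density) Schwarzschild star; for the reader's convenience we recall that the Einstein equations for the above interior line element, with $A$ and $Z$ as defined before, yield
\[
8\pi\rho=\frac{6M}{r_{\Sigma}^{3}},\qquad 8\pi P(r)=\frac{6M}{r_{\Sigma}^{3}}\;\frac{\sqrt{A(r)}-\sqrt{A(r_{\Sigma})}}{3\sqrt{A(r_{\Sigma})}-\sqrt{A(r)}},
\]
so that the pressure is isotropic, vanishes on the boundary $r=r_{\Sigma}$, and is regular at the center provided $r_{\Sigma}>\frac{9M}{4}$.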
In \cite{weylsources} the case $F=G=0$ was studied for some examples, in particular the resulting sources for the exterior field of the MQ$^{1}$ \cite{MQ} and Zipoy-Voorhees \cite{zipoy}, \cite{vor} solutions. Now, the point is that, for any exterior gravitational field an infinite number of sources exist. Accordingly the obvious question arises: how can we relate the possible sources of a given exterior solution belonging to the Weyl family with the multipole structure of the latter? In what follows we shall see how to answer the above question by using the RGGT and the definition of RMM given in \cite{RMMsource}. \section{The integral definition of RMM} A definition of multipole moments was introduced in \cite{RMMsource} for axially symmetric space-times by means of the following integral \begin{equation} I_n= \frac{1}{4\pi} \int_V \left[ H_n \hat{\triangle} \xi -\xi \hat{\triangle}(H_n)\right] \sqrt{\hat g}d^3{\vec x}, \label{MMdef} \end{equation} where the volume of integration must be extended to the whole space, and the following notation is used: \begin{equation} H_n \equiv \frac{(2n-1)!!}{n!} x^{i_1i_2..i_n} e_{i_1i_2..i_n} \qquad , \qquad \xi \equiv\sqrt{-g_{00}}, \end{equation} where $e^{i_1i_2..i_n}\equiv (e^{i_1}e^{i_2}...e^{i_n})^{TF}$ with $e^k$ being the unit vector along the positive direction of the symmetry axis, and $TF$ denoting its trace free part. The Laplacian operator is denoted by $\hat{\triangle}\equiv \frac{1}{\sqrt{\hat g}} \partial_k\left(\sqrt{\hat g} \hat g^{kj}\partial_j \right)$, and $\hat g$ is the determinant of the three-dimensional metric. Now, the crucial point here is that the integrand in (\ref{MMdef}) is a divergence (see \cite{RMMsource} for details) \begin{equation}\left[ H_n \hat{\triangle} \xi -\xi \hat{\triangle}(H_n)\right] \sqrt{\hat g}= \partial_k\left[ \sqrt{\hat g} \left(H_n \hat g^{kj} \partial_j \xi-\xi \hat g^{kj} \partial_j H_n\right)\right], \label{divergencia} \end{equation} and accordingly, that integral can be evaluated either as a volume integral (\ref{MMdef}), or as a surface integral: \begin{equation} I_n= \frac{1}{4\pi} \int_{\partial V} \left[H_n \hat g^{kj} \partial_j \xi-\xi \hat g^{kj} \partial_j H_n\right] d\sigma_k, \label{floworvolu} \end{equation} $\partial V$ being the boundary and $d \sigma_k$ its corresponding surface element. Two comments are in order at this point: \begin{itemize} \item In the weak field limit $\hat{\triangle}\sim \triangle_f$ (where $\triangle_f$ denotes the Laplacian in flat space--time) and $H_n=R^n P_n(\omega)$, and hence the second term in (\ref{MMdef}) vanishes since $\triangle_f\left[R^n P_n(\omega)\right]=0$. Furthermore, in the same weak field limit $\xi\sim 1+\Phi$, $\Phi$ being the Newtonian potential which verifies the Poisson equation $\triangle_f \Phi=\rho$, and therefore equation (\ref{MMdef}) reduces to the classical Newtonian moments \begin{equation} I_n=M_n^N= \frac{1}{4\pi} \int_V R^n P_n \rho \ dV. \label{NMM} \end{equation} \item As was shown in \cite{RMMsource}, these integrals calculated in harmonic coordinates through the surface integral (\ref{floworvolu}) recover the RMM defined by Geroch (up to a known specific factor). \end{itemize} \subsection{Calculation of the surface integral} We shall now proceed to evaluate the integral expression (\ref{MMdef}) by means of the surface integral (\ref{floworvolu}).
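Before treating the general case, it is instructive to check the lowest order (a heuristic computation using only the leading asymptotics of the Weyl potential): for $n=0$ we have $H_{0}=1$, so the second term in (\ref{floworvolu}) drops out, and since $\xi=e^{\psi}=1-\frac{M}{r}+O(r^{-2})$ for any asymptotically flat member of the Weyl family,
\[
I_{0}=\frac{1}{4\pi}\lim_{r\rightarrow\infty}\oint \partial_{r}\xi\; r^{2}\sin\theta\, d\theta\, d\varphi=\frac{1}{4\pi}\lim_{r\rightarrow\infty}\oint \frac{M}{r^{2}}\; r^{2}\sin\theta\, d\theta\, d\varphi=M,
\]
recovering the mass as the lowest moment.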
First we need to calculate $H_n$, and we obtain: \begin{eqnarray} H_n&=& (r-M)^{2k} P_{2k}(y)+\nonumber \\ &+&\sum_{i=0}^{k-1} (r-M)^{2i}\left[-M^2(1-y^2)\right]^{k-i} Q_{2i}^{(k)} (y), \end{eqnarray} \begin{equation} Q_{2i}^{(k)} (y)=\sum_{j=0}^i L_{2k,2j}\left(\begin{matrix} k-j\\ k-j-i \end{matrix}\right) y^{2j}, \end{equation} where $n=2k$ (only even-order indices have been taken, since the Weyl solutions usually considered possess equatorial symmetry), and $ L_{2k,2j}$ denotes the coefficient of the Legendre polynomial $P_{2k}(x)$ corresponding to the monomial $x^{2j}$, i.e. ${\displaystyle P_{2k}(x)=\sum_{j=0}^k L_{2k,2j} x^{2j} }.$ A similar expression can be obtained for odd indices. In fact, we have that $H_{2n}$ becomes the product $R^{2n} P_{2n}(\omega)$ written in Erez-Rosen coordinates, which according to (\ref{weylcoord}) is ${\displaystyle H_{2n}= \left((r-M)^2-M^2(1-y^2)\right)^{n} P_{2n}\left(\frac{(r-M)y}{R}\right) }$. It is easy to see that the surface integral (\ref{floworvolu}) leads to the following flux evaluated at the infinity surface ${\displaystyle F_n^{\infty}(\psi^{\prime})}$, since the integration is done over the whole space: \begin{equation} F_n(\psi^{\prime})=\frac 12\int_{-1}^{1} r (r-2M) \left(\psi^{\prime} H_n(r)-H_n^{\prime}(r)\right) dy \ , \end{equation} which may be reduced to \begin{equation} F_n(\psi^{\prime})=\frac 12\int_{-1}^{1} r (r-2M) \psi^{\prime} H_n(r) dy \ , \label{flujoMn} \end{equation} since the following integration in the variable $y$ vanishes for any value of the radial coordinate: \begin{equation} \int_{-1}^1 H_n^{\prime}(r) dy =0. \end{equation} This integral (\ref{flujoMn}) can be explicitly calculated by taking into account that the exterior metric function $\psi$ (with equatorial symmetry) can be written (see equation (\ref{erezrosenfamily})) as a series in the Erez-Rosen family of solutions \cite{quevedo}, so that ${\displaystyle \hat{\psi}^{\prime}=-\frac 1M \sum_{k>0}^{\infty}q_{2k}\partial_x Q_{2k}(x)P_{2k}(y)}$, and the following relations hold (where ${\displaystyle x= \frac rM -1}$ is used): \begin{equation} \int_{-1}^1 H_{2n} P_{2k}(y) dy=\left\lbrace \begin{matrix} &2 N_{2n,2k} P_{2k}(x) & \ , \ k\leq n \nonumber \\ & 0 & \ , \ k>n ,\nonumber \end{matrix} \right. \end{equation} with ${\displaystyle N_{2n,2k}=\frac{(2n)!\, M^{2n}}{(2n+2k+1)!!\,(2n-2k)!!}}$. Hence, equation (\ref{flujoMn}) evaluated at infinity ${\displaystyle F_n^{\infty}(\psi^{\prime})}$ becomes \begin{eqnarray} &&F_n^{\infty}(\psi^{\prime})=-a_n^s+\nonumber \\ &-&\frac{1}{M} \left[r(r-2M)\sum_{k>1}^{n}q_{2k}N_{2n,2k}\partial_x Q_{2k}(x)P_{2k}(x)\right]_{r_{\infty}}. \label{flujoinfity} \end{eqnarray} The limit at radial infinity of each term of index $k$ in the above equation leads to a factor ${\displaystyle -\frac{2k+1}{4k+1}}$, and so we finally obtain \begin{equation} F_n^{\infty}(\psi^{\prime})=-a_n^s+\sum_{k>1}^{n}q_{2k}N_{2n,2k}\frac{2k+1}{4k+1}. \end{equation} Hence we conclude that the definition of RMM (\ref{MMdef}) is coordinate dependent, since the use of harmonic coordinates for the calculation of that integral in \cite{RMMsource} leads to the RMM of Geroch-Hansen, instead of the combination of Weyl moments ($a_n$, or the corresponding $q_n$ of the Erez-Rosen representation) that we obtain when the calculation of equation (\ref{MMdef}) is performed in Erez-Rosen coordinates.
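The orthogonality relation above can be checked symbolically. The following short \texttt{sympy} sketch (our own check, shown for the $(2n,2k)=(2,2)$ case; higher orders work the same way) verifies $\int_{-1}^1 H_{2n}\, P_{2k}(y)\, dy=2 N_{2n,2k} P_{2k}(x)$ directly from the Erez-Rosen form of $H_{2n}$. \begin{verbatim}
import sympy as sp

r, y, M = sp.symbols('r y M', positive=True)
n, k = 1, 1   # the (2n, 2k) = (2, 2) case

# H_{2n} = R^{2n} P_{2n}(omega) in Erez-Rosen coordinates
R2 = (r - M)**2 - M**2*(1 - y**2)          # R^2
omega = y*(r - M)/sp.sqrt(R2)
H = sp.simplify(R2**n*sp.legendre(2*n, omega))

lhs = sp.integrate(H*sp.legendre(2*k, y), (y, -1, 1))

N = sp.factorial(2*n)*M**(2*n)/(sp.factorial2(2*n + 2*k + 1)
                                *sp.factorial2(2*n - 2*k))
x = r/M - 1
rhs = 2*N*sp.legendre(2*k, x)

print(sp.simplify(lhs - rhs))   # expected output: 0
\end{verbatim}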
Nevertheless, this result allows us to relate the RMM with volume integrals over the source, involving the matter distribution and the interior metric as well, by means of the Gauss theorem and the knowledge of the coefficients $a_n$ and $q_n$ in terms of the RMM \cite{MQ}. We address this aim in Section 4; for completeness, we now calculate the volume integral (\ref{MMdef}). \subsection{Calculation of the volume integral} In order to show that the Gauss theorem is perfectly satisfied, we proceed now to evaluate the volume integral (\ref{MMdef}) with the global metric (\ref{ERglobal}). The integral (\ref{MMdef}) for the volume extended from the boundary to infinity, $I_n^E$, is \begin{eqnarray} &&I_n^E=\int_{r_{\Sigma}}^{\infty} \frac{r^2}{2} dr \int_{-1}^1 dy \left\lbrace \frac{\psi_{,\theta} H_{n,\theta}}{r^2}-\frac{H_{n,\theta}}{r^2} \frac{\cos \theta}{\sin\theta}+\nonumber\right. \\ &-&\left.\frac{H_{n,\theta \theta}}{r^2}-\left(\frac{r-2M}{r}\right)\left[\left(-\psi^{\prime}+\frac{2(r-M)}{r(r-2M)}\right)H_n^{\prime}+H_n^{\prime \prime}\right]\nonumber \right.\\ &+&\left.H_n\left[\left(\psi^{\prime \prime} +\frac{2(r-M)\psi^{\prime}}{r(r-2M)}\right)\frac{r-2M}{r}+\frac{\psi_{,\theta \theta}}{r^2}+\frac{\psi_{,\theta}}{r^2}\frac{\cos \theta}{\sin\theta} \right] \right\rbrace \nonumber \\ \label{IE} \end{eqnarray} whereas for the interior volume, $I_n^I$, we have \begin{eqnarray} &&I_n^I=\frac 12 \int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy \left\lbrace H_n\left[\frac{3M}{r_{\Sigma}^3}-\frac{A^{\prime} \hat a^{\prime}}{4}+\nonumber \right.\right.\\ &+&\left.\left.Z \left(\frac{2\hat a^{\prime}(1-\frac{3Mr^2}{r_{\Sigma}^3})}{r\sqrt A}+\hat a^{\prime \prime} \sqrt A +\frac{{\hat a}_{,\theta \theta}+{\hat a}_{,\theta}\frac{\cos\theta}{\sin\theta}}{r^2 \sqrt A}\right) \right]+\nonumber \right.\\ &-&\left. Z\left[\left(-\hat a^{\prime} \sqrt A +\frac{2(1-\frac{3Mr^2}{r_{\Sigma}^3})}{r\sqrt A}\right)H_n^{\prime} +\sqrt A H_n^{\prime \prime}+\nonumber \right.\right. \\ &+&\left.\left.\frac{H_{n,\theta \theta}+H_{n,\theta}\frac{\cos\theta}{\sin\theta}}{r^2\sqrt A}-\frac{{\hat a}_{,\theta}}{r^2}\frac{H_{n,\theta}}{\sqrt A}\right]\right\rbrace, \end{eqnarray} where $A$ and $Z$ are the previously defined functions, depending only on the radial coordinate, which enter the interior line element (\ref{ERglobal}). Since $\psi$ is a solution of the vacuum field equations, we may write \begin{equation} \triangle \psi\equiv\left[\psi^{\prime \prime} +\frac{2(r-M)\psi^{\prime}}{r(r-2M)}\right]\frac{r-2M}{r}+\frac{\psi_{,\theta \theta}}{r^2}+\frac{\psi_{,\theta}}{r^2}\frac{\cos \theta}{\sin\theta}=0, \label{lapla} \end{equation} also $H_{n,\theta \theta}+ H_{n,\theta} \frac{\cos\theta}{\sin\theta}=(1-y^2)\partial_{yy}H_n-2y\partial_yH_n=\partial_y\left[ (1-y^2)\partial_y H_n\right]$, and \begin{equation} \int_{-1}^1 H_n^{\prime} dy =\int_{-1}^1 H_n^{\prime \prime} dy=0.
\end{equation} Using the above expressions, the integrals $I_n^I$ and $I_n^E$ may be simplified further as follows: \begin{equation} I_n^E=\frac 12 \int_{r_{\Sigma}}^{\infty} r^2 dr \int_{-1}^1 dy \left[\hat \psi_{,\theta}\frac{ H_{n,\theta}}{r^2}+\left(\frac{r-2M}{r}\right)\hat \psi^{\prime}H_n^{\prime}\right], \label{VE} \end{equation} and \begin{eqnarray} &&I_n^I=\frac 12 \int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy H_n\left[\frac{3M}{r_{\Sigma}^3}-\frac{A^{\prime} \hat a^{\prime}}{4} \right]+\nonumber\\ &+&\int_{0}^{r_{\Sigma}} \frac{r^2}{2} dr \frac{Z}{\sqrt A}\int_{-1}^1 dy\left\lbrace H_n\left[\hat a^{\prime \prime} A+\hat a^{\prime}\left(\frac 2r-\frac{6Mr}{r_{\Sigma}^3}\right)\right]+\nonumber \right.\\ &+&\left. \hat a ^{\prime} A H_n^{\prime} \right\rbrace+\nonumber\\ &+&\frac 12 \int_{0}^{r_{\Sigma}} dr \frac{Z}{\sqrt A}\int_{-1}^1 dy \left\lbrace H_n \partial_y \left[(1-y^2)\partial_{y}\hat a\right]+\nonumber \right.\\ &+&\left.(1-y^2)\partial_y \hat a \partial_y H_n\right\rbrace. \label{Vint} \end{eqnarray} Next, notice that the last term of equation (\ref{Vint}) vanishes after integration in the variable $y$, since \begin{eqnarray} &&\int_{-1}^1 dy \left[ H_n \partial_y \left[(1-y^2)\partial_{y}\hat a\right]+(1-y^2)\partial_y \hat a \partial_y H_n \right]=\nonumber \\ &=&H_n (1-y^2)\partial_y \hat a \ \rvert^1_{-1}=0. \end{eqnarray} The second term of equation (\ref{Vint}) can be integrated with respect to the radial coordinate. The integration of those terms with a factor $H_n$ produces \begin{eqnarray} &&\int_{0}^{r_{\Sigma}} dr \left( \hat a^{\prime} C_1+\hat a^{\prime \prime} C_2\right)=\left(\hat a^{\prime} C_2 \ \right)\rvert^{r_{\Sigma}}_{0}-\int_{0}^{r_{\Sigma}} \hat a^{\prime} \ \alpha \ dr = \nonumber \\ &=&\hat{\psi}^{\prime}_{\Sigma} C_2(r_{\Sigma})-\int_{0}^{r_{\Sigma}} \hat a^{\prime} \ \alpha \ dr, \end{eqnarray} where $C_1\equiv\frac{Z}{\sqrt A}r^2H_n \left(\frac 2r-\frac{6Mr}{r_{\Sigma}^3}\right)$, $\ \alpha\equiv -C_1+C_2^{\prime}$, and $C_2\equiv\frac{Z}{\sqrt A}r^2H_n \left(1-\frac{2Mr^2}{r_{\Sigma}^3}\right)$. In the above, the matching conditions $\hat a(r_{\Sigma})=\hat \psi(r_{\Sigma})$, $\hat a^{\prime}(r_{\Sigma})=\hat \psi^{\prime}(r_{\Sigma})$, as well as the assumed behaviour at the center, $\hat a_0=\hat a^{\prime}_0=0$, have been taken into account. Also, the evaluation of $\alpha$ produces $\alpha=H_n M \frac{r^3}{r_{\Sigma}^3}+H_n^{\prime} \frac{Z}{\sqrt A}r^2A$. Using all these expressions, the integral for the interior volume finally reduces to \begin{equation} I_n^I=\frac 12 \int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy H_n\left[\frac{3M}{r_{\Sigma}^3} \right]+\frac 12\int_{-1}^1 dy \ \hat{\psi}^{\prime}_{\Sigma} C_2(r_{\Sigma}).
\label{Vint2} \end{equation} By comparing equation (\ref{Vint2}) with the flux (\ref{flujoMn}) evaluated at the boundary surface ${\displaystyle F_n^{\Sigma}(\psi^{\prime})}$, we conclude that they are equal, since $C_2(r_{\Sigma})=r_{\Sigma} (r_{\Sigma}-2M) H_n(r_{\Sigma})$ and the first term in equation (\ref{Vint2}) is \begin{equation} \frac 12 \int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy H_n \left[\frac{3M}{r_{\Sigma}^3}\right]=\frac{M^{n+1}}{n+1}=-a_n^s, \end{equation} where $a_n^s$ are the Weyl coefficients of Schwarzschild, while $\psi^{\prime}_{\Sigma}=\psi^{\prime s}_{\Sigma}+\hat\psi^{\prime}_{\Sigma}=\frac{M}{r_{\Sigma} (r_{\Sigma}-2M) }+\hat\psi^{\prime}_{\Sigma}$; hence equation (\ref{flujoMn}) evaluated at the boundary surface is equivalent to \begin{eqnarray} F_n^{\Sigma}(\psi^{\prime})&=&\frac 12\int_{-1}^{1} r_{\Sigma} (r_{\Sigma}-2M) \psi_{\Sigma}^{\prime} H_n(r_ {\Sigma}) dy=\nonumber \\ &=&-a_n^s+\frac 12\int_{-1}^{1} r_{\Sigma} (r_{\Sigma}-2M) \hat \psi_{\Sigma}^{\prime} H_n(r_ {\Sigma}) dy,\nonumber\\ \label{flujorE} \end{eqnarray} that is to say, the integral extended to the interior volume $I_n^I$ recovers the flux through the boundary $F_n^{\Sigma}(\psi^{\prime})$. Thus we have verified (as expected) that the volume integral extended to the interior volume delimited by the boundary equals the flux integral through that surface. Next, let us calculate the volume integral at the exterior of the source, $I_n^E$. To do so, the second term of equation (\ref{VE}) can be integrated in the radial variable, leading to \begin{eqnarray} &&\frac 12 \int_{-1}^1 [B H_n]_{r_{\infty}} dy -\frac 12\int_{-1}^1 r^2_{\Sigma}A_{\Sigma} \hat\psi^{\prime}_{\Sigma} H_n(r_{\Sigma}) dy+\nonumber \\ &-&\frac 12 \int_{r_{\Sigma}}^{\infty} dr \int_{-1}^1 B^{\prime} H_n dy, \label{VEprima} \end{eqnarray} with $B\equiv r(r-2M)\hat{\psi}^{\prime}$, and $[B H_n]_{r_{\infty}}$ denoting the value of that function over the surface at infinity, whereas the first term of equation (\ref{VE}) can be integrated in the angular variable, producing \begin{equation} -\frac 12 \int_{r_{\Sigma}}^{\infty} dr \int_{-1}^1 H_n \partial_y[(1-y^2)\partial_y\hat{\psi}]dy. \label{VEpunto} \end{equation} Hence, the sum of equations (\ref{VEprima}) and (\ref{VEpunto}) leads to the following expression for the exterior volume integration \begin{eqnarray} I_n^E&=&\frac 12 \int_{-1}^1 [B H_n]_{r_{\infty}} dy -\frac 12\int_{-1}^1 r^2_{\Sigma}A_{\Sigma} \hat\psi^{\prime}_{\Sigma} H_n(r_{\Sigma}) dy \equiv \nonumber\\ &\equiv& F_n^{\infty}(\psi^{\prime})- F_n^{\Sigma}(\psi^{\prime}) , \label{sumadepartes} \end{eqnarray} since $\hat\psi$, being a solution of the exterior field equations, satisfies $\triangle \hat{\psi}=0$, and the contribution of the Schwarzschild term $\psi^{\prime s}$ to the flux is $-a_n^s$ whatever surface of integration is considered. Therefore, the definition of RMM (\ref{MMdef}), evaluated as a volume integral corresponding to the sum of the quantities $I_n^I$ and $I_n^E$, leads to the surface integral ${\displaystyle F_n^{\infty}(\psi^{\prime})}$ at spatial infinity, which is the result obtained if one evaluates the definition (\ref{MMdef}) as a surface integral. The following remarks are in order at this point: \begin{itemize} \item This result is true for any interior metric function $\hat a$ (subject only to the constraints derived from the matching conditions and the good behaviour at the center).
Hence the matching conditions arise as the necessary and sufficient condition for the equivalence between both kinds of integrals (Gauss theorem) to hold. \item For any source whose interior metric matches appropriately to a specific Weyl solution at the exterior, the RMM are the same in all cases, and they are the ones corresponding to that Weyl solution. Whatever the matter distribution of the source may be, the RMM are the same, since they are determined by the exterior metric to which it is matched. \item It would seem that the RMM are exclusively related to the exterior gravitational field, but in fact this is a consequence of the intrinsic characteristics of the definition (\ref{MMdef}) itself and of the equivalence between its volume and surface integral versions (\ref{divergencia}), (\ref{floworvolu}). \item Therefore, the definition cannot be used to constrain the interior solution (the source), since the volume integral (\ref{MMdef}) used to calculate the RMM apparently excludes the source of the gravitational field from the integration. However, the flux through the boundary surface contains information both from the source and from the gravitational field whose RMM are known. This fact will allow us to link the RMM and the source. As is known, in Newtonian gravity (NG) we can calculate the NMM of a source from its matter distribution by means of a volume integral (\ref{NMM}), and the exterior gravitational field is characterized by those NMM, which are fully determined by the physics of the source. Whereas that identification is straightforward in NG, it is not the case in GR; nevertheless, it is possible to connect the RMM with volume integrals involving the matter distribution and the interior metric, as we shall see in what follows. \end{itemize} \section{The relationship between RMM and the source} In an attempt to relate the matter distribution of the source with the RMM, we keep in mind that the Einstein equations connect the metric with the energy-momentum tensor, and we have linked the interior metric functions with the exterior ones as clearly exhibited in (\ref{aygsimple}). Therefore, we can recalculate the integral definition used in \cite{RMMsource} while at the same time introducing the matter distribution of the source into those integrals by means of the so-called Tolman density \begin{equation} \rho_T\equiv \sqrt{-g_{00}}\left(-T_0^0+T_i^i\right) , \end{equation} since we know \cite{RMMsource}, \cite{weylsources} that ${\displaystyle \hat \triangle \sqrt{-g_{00}}=4\pi \rho_T}$. With this consideration, equation (\ref{MMdef}) becomes \begin{eqnarray} I_n&=& \int_V H_n \rho_T \sqrt{\hat g} d^3\vec x-\frac{1}{4\pi}\int_V\xi \partial_k\left(\sqrt{\hat g}\hat g^{kj}\partial_j H_n\right) d^3 \vec x \equiv \nonumber \\ &\equiv& T_n+S_n, \label{tolmanRMM} \end{eqnarray} where ${\displaystyle T_n=\int_V H_n \rho_T \sqrt{\hat g} d^3\vec x}$ denotes the part of the integral involving the Tolman density (the material content of the distribution), and ${\displaystyle S_n=-\frac{1}{4\pi}\int_V\xi \partial_k\left(\sqrt{\hat g}\hat g^{kj}\partial_j H_n\right) d^3 \vec x}\quad $ accounts for the remaining part of the integral. This expression (\ref{tolmanRMM}), used to define the RMM, generalizes the definition of the Tolman mass \cite{tolman} (monopole $M_0$), or of the Komar \cite{komar} moments, since $H_0=1$ and the second term $S_n$ vanishes in that case.
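The monopole case can be checked numerically. The following short Python sketch (our own check; it assumes the interior Schwarzschild-like forms $A=1-2Mr^2/r_{\Sigma}^3$ and $Z=\frac{3}{2}\sqrt{A_{\Sigma}}-\frac{1}{2}\sqrt{A}$ of the isotropic limit of (\ref{ERglobal}), with homogeneous density $\mu=3M/(4\pi r_{\Sigma}^3)$ and the corresponding isotropic pressure $P$) verifies that the Tolman integral $T_0=\int_V \rho_T \sqrt{\hat g}\, d^3\vec x$ recovers the mass $M$. \begin{verbatim}
import numpy as np
from scipy.integrate import quad

M, r_Sigma = 1.0, 10.0                      # geometrized test values
A  = lambda r: 1.0 - 2.0*M*r**2/r_Sigma**3  # assumed interior form
AS = A(r_Sigma)                             # = 1 - 2M/r_Sigma at the boundary
Z  = lambda r: 1.5*np.sqrt(AS) - 0.5*np.sqrt(A(r))
mu = 3.0*M/(4.0*np.pi*r_Sigma**3)           # homogeneous density
P  = lambda r: mu*(np.sqrt(A(r)) - np.sqrt(AS)) \
               /(3.0*np.sqrt(AS) - np.sqrt(A(r)))

# rho_T = Z (mu + 3P); sqrt(g) d^3x -> 4 pi r^2 dr / sqrt(A)
T0, _ = quad(lambda r: 4.0*np.pi*r**2*Z(r)*(mu + 3.0*P(r))/np.sqrt(A(r)),
             0.0, r_Sigma)
print(T0)   # ~1.0 = M, while S_0 = 0 since H_0 = 1
\end{verbatim}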
The volume integrals in (\ref{tolmanRMM}) must be calculated, as we said above, extended to the whole space, but in the first term of this expression, $T_n$, we can limit ourselves to the interior of the source, since the energy-momentum tensor vanishes in vacuum (assuming a vanishing electromagnetic field). Equivalently, this point is in agreement with the fact that $\int_V H_n \hat{\triangle} \xi \sqrt{\hat g} d^3 \vec x=0$ outside the source, as follows from equations (\ref{IE}) and (\ref{lapla}). As proved above (\ref{sumadepartes}), \begin{equation} I_n^E\equiv T_n^E+S_n^E=S_n^E=F_n^{\infty}(\psi^{\prime})-F_n^{\Sigma}(\psi^{\prime}) \end{equation} where the superscript $E$ denotes exterior volume. Since the evaluation of the integral $I_n$ over the whole space leads to the flux at infinity ${\displaystyle F_n^{\infty}(\psi^{\prime})}$, then \begin{equation} F_n^{\infty}(\psi^{\prime})=I_n^I+F_n^{\infty}(\psi^{\prime})-F_n^{\Sigma}(\psi^{\prime}), \end{equation} where the superscript $I$ denotes interior volume. Consequently we obtain the following relation \begin{equation} T_n^I=T_n=-S_n^I+F_n^{\Sigma}(\psi^{\prime}) \label{sumadeints} \end{equation} which is just the conclusion obtained from equations (\ref{Vint2})-(\ref{flujorE}). The flux over the boundary surface $F_n^{\Sigma}(\psi^{\prime})$ can be obtained from (\ref{flujoinfity}) as follows \begin{eqnarray} &&F_n^{\Sigma}(\psi^{\prime})=-a_n^s+\nonumber \\ &-&\tau (r_{\Sigma}-2M)\left[\sum_{k>1}^{n}q_{2k}N_{2n,2k}\partial_x Q_{2k}(x)P_{2k}(x)\right]_{x=\tau-1}, \label{fujoinfity} \end{eqnarray} ${\displaystyle \tau\equiv \frac{r_{\Sigma}}{M}}$ being the compactness factor of the source. Therefore, the flux over the surface $r_{\Sigma}$ provides information about the RMM, since we know the coefficients $q_{2k}$ in terms of the Weyl coefficients $a_n$ \cite{MQ}, and these in turn in terms of the RMM \cite{MSA}: \begin{equation} q_0=1 \ , \quad q_2=\frac {15}{2} \frac{M_2}{M^3} \ , \quad q_4=\frac{45}{4}\frac{M_2}{M^3}+\frac{315}{8}\frac{M_4}{M^5} \ , \cdots \end{equation} Hence equation (\ref{sumadeints}) allows us to write each RMM in terms of two kinds of volume integrals: one of them, $T_n$, involving the matter distribution by means of the Tolman density, and the other, $S_n^I$, bringing the interior metric into the evaluation. The first RMM can be obtained as follows: \begin{eqnarray} M_0&=& T_0+S_0^I\nonumber \\ M_2&=&\frac{-1}{\tau(\tau-2) \beta_2(\tau)}\left[-\frac{M^3}{3}+T_2+S_2^I \right] \nonumber \\ M_4&=&\frac{2M^2\left(1+12\frac{\beta_2(\tau)}{\beta_4(\tau)}\right)}{7 \tau(\tau-2) \beta_2(\tau)}\left[-\frac{M^3}{3}+T_2+S_2^I \right]+\nonumber\\ &-&\frac{4}{\tau(\tau-2) \beta_4(\tau)}\left[-\frac{M^5}{5}+T_4+S_4^I \right] \nonumber \\ \label{formulas} \end{eqnarray} where the following notation has been used: \begin{equation} \beta_n(\tau)\equiv \left[ P_n(x)\partial_x Q_n(x)\right]_{x=\tau-1} \ , \end{equation} $P_n(x)$ and $Q_n(x)$ being the Legendre polynomials and the Legendre functions of the second kind, respectively. In the classical gravitational analogy the multipole moments are obtained only as an integral over the source, because in Newtonian gravity there is no interior metric. As already seen in \cite{weylsources}, the integral $T_0$ gives the mass for any source properly attached to a Weyl exterior, while $S_0^I$ is identically zero since $H_0=1$.
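In practice the combinations in (\ref{formulas}) are straightforward to evaluate once the volume integrals are available. The following Python sketch (our own illustration; the values of $T_2$ and $S_2^I$ are hypothetical inputs to be computed from the source) implements $\beta_n(\tau)$ and the $M_2$ formula; as a sanity check, the Schwarzschild values $T_2=M^3/3$, $S_2^I=0$ of the next section yield a vanishing quadrupole. \begin{verbatim}
import numpy as np
from scipy.special import lpn, lqn

def beta(n, tau):
    # beta_n(tau) = [P_n(x) d/dx Q_n(x)] evaluated at x = tau - 1
    x = tau - 1.0
    Pn, _ = lpn(n, x)      # Legendre polynomials P_0..P_n at x
    _, dQn = lqn(n, x)     # derivatives of Q_0..Q_n at x (x > 1)
    return Pn[n]*dQn[n]

def M2(M, tau, T2, S2I):
    # quadrupole moment from the second line of (formulas)
    return -(-M**3/3.0 + T2 + S2I)/(tau*(tau - 2.0)*beta(2, tau))

print(M2(1.0, 3.0, 1.0/3.0, 0.0))   # 0: Schwarzschild sources (Sec. 5.2)
\end{verbatim}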
These expressions can be understood in two alternative ways. In a qualitative reading, the formulas (\ref{formulas}) explain in what way the source participates in the definition of each RMM, or which characteristics of the source (its density, anisotropic pressures, and the interior metric itself) contribute to the construction of the RMM; here we assume that such moments are already known from any of the historically defined calculation methods by means of the exterior metric. Alternatively, the explicit knowledge of the inner metric allows one to calculate the RMM in terms of the physical characteristics of the source. Evidently, the energy-momentum tensor is related through the Einstein equations to the inner metric, so that this distinction between the two types of integrals $T_n$ and $S_n^I$ is formal, although significant, as we will see in the next section. This quantitative point of view of the formulae (\ref{formulas}) provides us with an explicit calculation of the RMM using the source itself rather than the exterior metric. \section{Some examples} \subsection{The global model for any Weyl solution} Equation (\ref{formulas}) is still general for any source appropriately matched to any Weyl exterior solution, since the flux which generates the combinations of RMM was calculated with the vacuum solution. Now we compute the volume integral expressions $T_n$ and $S_n^I$ with a global model metric including all the admissible sources for any axially symmetric vacuum gravitational field in the Weyl gauge \cite{weylsources}. It is easy to see from (\ref{Vint}) that the integrals $S_n^I$ turn out to be ($V_I$ denotes the volume limited by the boundary surface of the source) \begin{eqnarray} &&S_n^I\equiv-\frac{1}{4\pi}\int_{V_I}\xi \partial_k\left(\sqrt{\hat g}\hat g^{kj}\partial_j H_n\right) d^3 \vec x= \nonumber \\ &=&\frac 12\int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy \frac{Z}{\sqrt{A}}\left[\hat a^{\prime} A H_n^{\prime}+ \frac{(1-y^2)}{r^2} \partial_y \hat a \partial_y H_n \right]\nonumber \\ \label{second} \end{eqnarray} With respect to the volume integral $T_n$ involving the Tolman density $\rho_T$ we have that \begin{eqnarray} T_n&\equiv& \int_V H_n \rho_T \sqrt{\hat g} d^3\vec x=\nonumber \\ &=&2 \pi \int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy \frac{Z}{\sqrt{A}}H_n\left[(\mu+3P)-\frac{E}{8 \pi} \right] \label{first} \end{eqnarray} where we have used the following notation for the energy-momentum tensor (see \cite{weylsources} for details\footnote{Please take into account a misprint in that paper in the expressions of $E$ and $\hat p_{zz}$, as well as in formula (24) of that paper, derived from the previously mistaken formulae: the second derivative of the function $\hat g$ with respect to the variable $s$ must contain a forgotten factor $A$. The same misprints are reproduced in \cite{kerrsource}.
The calculations and conclusions derived in both papers remain correct, since these are merely misprints in the editing of the LaTeX version.} ): \begin{eqnarray} -T^0_0&=&\kappa \left(8 \pi \mu+\hat p_{zz}-E\right),\nonumber\\ T^1_1&=& \kappa \left(8 \pi P-\hat p_{xx}\right),\nonumber \\ T^2_2&=& \kappa \left(8 \pi P+\hat p_{xx}\right),\nonumber \\ T^3_3&=&\kappa \left(8 \pi P-\hat p_{zz}\right),\nonumber \\ T^2_1&=&-\frac{\kappa}{r^2} \hat p_{xy} \qquad , \qquad \kappa\equiv \frac{e^{2\hat a-2\hat g}}{8 \pi} \label{eegeneral} \end{eqnarray} with \begin{eqnarray} &&E=-2 \Delta \hat a+(1-A)\left[2 \frac{\hat a^{\prime}}{r}\frac{9 \sqrt{A_{\Sigma}}-4 \sqrt{A}}{3 \sqrt{A_{\Sigma}}- \sqrt{A}}+2 \hat a^{\prime \prime}\right],\nonumber\\ &&\Delta \hat a= \hat a^{\prime \prime}+2\frac{\hat a^{\prime}}{r}+\frac{{\hat a}_{,\theta \theta} }{r^2}+\frac{{\hat a}_{,\theta} }{r^2}\frac{\cos \theta}{\sin \theta},\nonumber \\ &&\hat p_{xx}=-\frac{{\hat a}_{,\theta} ^2}{r^2}-\frac{\hat g^{\prime}}{r}+\hat a^{\prime 2}+\frac{{\hat g}_{,\theta} }{r^2}\frac{\cos \theta}{\sin \theta}+\nonumber \\ &+&(1-A)\left[2 \frac{\hat a^{\prime}}{r}\frac{\sqrt{A}}{3 \sqrt{A_{\Sigma}}- \sqrt{A}}- \hat a^{\prime 2} +\frac{\hat g^{\prime}}{r}\frac{3 \sqrt{A_{\Sigma}}-2 \sqrt{A}}{3 \sqrt{A_{\Sigma}}-\sqrt A}\right], \nonumber \\ &&\hat p_{zz}=-\frac{{\hat a}^2_{,\theta} }{r^2}-\frac{\hat g^{\prime}}{r}-\hat a^{\prime 2}-\frac{{\hat g}_{,\theta \theta} }{r^2}-\hat g^{\prime \prime}+\nonumber\\ &+&(1-A)\left[-2 \frac{\hat a^{\prime}}{r}\frac{\sqrt{A}}{3 \sqrt{A_{\Sigma}}- \sqrt{A}}+ \hat a^{\prime 2} +2\frac{\hat g^{\prime}}{r}+\hat g^{\prime \prime}\right], \nonumber \\ &&\hat p_{xy}=2{\hat a}_{,\theta}\hat a^{\prime}-\hat g^{\prime}\frac{\cos(\theta)}{\sin(\theta)} -\frac{{\hat g}_{,\theta}}{r} +\frac{(1-A)(2{\hat a}_{,\theta}-{\hat g}_{,\theta})}{r \sqrt A (3 \sqrt{A_{\Sigma}}-\sqrt A)}\nonumber \\ \label{eegeneraldet} \end{eqnarray} where $\mu$ and $P$ are the homogeneous and isotropic density and pressure, respectively: \begin{equation} P=\mu\left(\frac{\sqrt A-\sqrt{A_{\Sigma}}}{3\sqrt{ A_{\Sigma}}-\sqrt A} \right) \end{equation} If we rewrite the above expression (\ref{first}) in terms of the physical parameters of the source, we have that \begin{equation} T_n=\frac{M^{n+1}}{n+1}- \frac 14\int_{0}^{r_{\Sigma}} r^2 dr \int_{-1}^1 dy \frac{\mu}{\mu+3P}H_n E \label{firstfisical} \end{equation} where \begin{eqnarray} E&=&-2\left(1-\frac 83 \pi r^2 \mu \right)\hat a^{\prime \prime}- \frac{4\hat a^{\prime}}{r} \left(1-\frac 23 \pi r^2 (5\mu-3P)\right)+\nonumber \\ &-&\frac{2}{r^2}\partial_y\left[(1-y^2)\partial_y \hat a\right] \label{E8pi} \end{eqnarray} In \cite{epjc} (see details therein) the physical characteristics of the source arising from the energy-momentum tensor are expressed in terms of the following pressures and anisotropies, ${\displaystyle T_m\equiv \frac{T_1^1+T_2^2}{2}, \Pi_{31}, \Pi_{23}, \Pi_{xy}=\frac{-T_m}{8\pi P r^2}\hat p_{xy}}$, with the notation ${\displaystyle \Pi_{ij}\equiv T_i^i-T_j^j}$, and the interior metric functions are related as follows: ${\displaystyle \hat g=\hat a-\frac 12 \ln\left(\frac{T_m}{P}\right)}$. Hence, the contributions of the integrals $T_n$ to the RMM are due to the inhomogeneity of the density along with the pressure $T_m$.
\subsection{Sources of Schwarzschild space-time} When the particular spherical case is considered, both the isotropic and anisotropic spherical sources of the Schwarzschild vacuum solution lead to the following metric function \begin{equation} \hat a=\frac 12 \ln\left(\frac{T_m}{P}\right) \label{ahat} \end{equation} since $\hat g=0$, $\Pi_{xy}=0$, and the unique non-vanishing anisotropic pressure is ${\displaystyle \Pi_{31}=\frac{T_m}{4 \pi P}\hat p_{xx} }$, since $\hat p_{xx}=-\hat p_{zz}$. {\bf A) } On the one hand, the isotropic perfect fluid subcase requires $\hat p_{xx}=0$, leading to $T_1^1=T_2^2=T_3^3$. The only possible solution satisfying at the same time the junction conditions is $\hat a=0$, and therefore we get the vanishing of the anisotropy, $\Pi_{31}=0$, as well as the isotropic perfect fluid limit $T_m = P$. In this particular case (\ref{second}) vanishes for any value of the index, i.e. $S_n^I =0$, while (\ref{firstfisical}) gives $T_n=\frac{M^{n+1}}{n+1}$; hence, from equations (\ref{formulas}), the unique multipole moment is the mass $M_0=M$. {\bf B) } On the other hand, the same result must be obtained for anisotropic sources, since the exterior solution is also the Schwarzschild space-time. In fact, the spherical case considered, $\hat a=\hat a(r)$, and equation (22) lead to $S_n^I=0$ (\ref{second}) as well. With respect to the integrals $T_n$ (\ref{firstfisical}), the independence of the physical parameters from the angular variable leads to the following expression \begin{equation} T_n=\frac{M^{n+1}}{n+1}-\frac{M^{n}}{2(n+1)}\int_{0}^{r_{\Sigma}} r^2 dr \frac{\mu}{\mu+3P} E \label{constrainprevio} \end{equation} and hence the following constraint arises from (\ref{formulas}), since all RMM higher than $M_0$ must vanish: \begin{eqnarray} 0&=& \int_{0}^{r_{\Sigma}} r^2 dr \frac{\mu}{\mu+3P}\left[\left(1-\frac 83 \pi r^2 \mu \right)\hat a^{\prime \prime}+ \nonumber\right.\\ &+&\left.\frac{2 \hat a^{\prime}}{r} \left(1-\frac 23 \pi r^2 (5\mu-3P)\right)\right] \label{constrain} \end{eqnarray} where the expression for $E$ (\ref{E8pi}) has been considered, and the derivatives of $\hat a$ are obtained from (\ref{ahat}): \begin{equation} \hat a^{\prime}=\frac 12 \left( \frac{T_m^{\prime}}{T_m}-\frac{P^{\prime}}{P}\right)\ , \ \hat a^{\prime \prime}=\frac 12\left(\frac{T_m^{\prime\prime}}{T_m}-\frac{T_m^{\prime 2}}{T_m^2}-\frac{P^{\prime\prime}}{P}+\frac{P^{\prime 2}}{P^2}\right) \end{equation} Thus the above equation (\ref{constrain}) implies a condition on the pressure $T_m$ or, equivalently, on the anisotropy of the source. Nevertheless, that condition is fulfilled and equation (\ref{constrain}) is just an identity, since the integral turns out to be \begin{eqnarray} \left[\frac{\mu(\mu+3P)}{(\mu+P)^2} \hat a^{\prime} r^2 A_{\Sigma}\right]_0^{r_{\Sigma}}&=&A_{\Sigma} r_{\Sigma}^2 \hat a^{\prime}_{\Sigma} =\nonumber\\ &=&A_{\Sigma} \frac {r_{\Sigma}^2}{2} \left( \frac{T_m^{\prime}}{T_m}-\frac{P^{\prime}}{P}\right)_{r_{\Sigma}} \end{eqnarray} which is null because the pressure $T_m$ behaves at the boundary like $P$, and $P(r_{\Sigma})=0$ \cite{epjc}. Equivalently, we can argue that $\hat a^{\prime}_{\Sigma}$ vanishes since the boundary condition for that interior metric function establishes that $\hat a^{\prime}_{\Sigma}=\hat \psi^{\prime}_{\Sigma}$, and this is null because the exterior metric function is the one corresponding to Schwarzschild.
\vskip 2mm \subsection{Sources of non-spherical vacuum space-time} In this case the integrals $S_n^I$ no longer vanish (in general) because of the angular dependence of the metric functions, while the integrals $T_n$ from (\ref{firstfisical})-(\ref{E8pi}) incorporate those contributions missing in the spherical case. The simplest interior metric functions are \cite{weylsources} those of equation (\ref{aygsimple}) with $F=G=0$. As a matter of illustration, let us calculate the contributions to the quadrupole moment of both kinds of volume integrals over the source for the Erez-Rosen space-time \cite{ER}, whose exterior metric function is ${\displaystyle \psi=\psi^s-q_2 Q_2(x)P_2(y)}$, with arbitrary Weyl coefficient (of the Erez-Rosen representation) $q_2$ and the prolate spheroidal coordinate $x\equiv \frac{r}{M}-1$. From equations (\ref{formulas}) we obtain the following quadrupole moment $M_2$: \begin{eqnarray} &&M_2=\frac{2q_2}{5\tau(\tau-2) \beta_2(\tau)}\times \nonumber\\ &\times&\left[ \int_0^{r_{\Sigma}}\frac{r^2 M^2\mu}{3(\mu+3P)} P_2(x) \left[ c_1(r)+c_2(r) \mu+c_3(r) P\right] dr +\right. \nonumber \\ &+&\left. \int_0^{r_{\Sigma}} r^2\frac{Z}{\sqrt A} \left[2M^2P_2(x) c_4(r)+A c_5(r) r (r-M) \right] dr \right] \nonumber\\ \label{M2ER} \end{eqnarray} with \begin{eqnarray} c_1(r)&=&-2s \ a(\tau) +6s \ b(\tau) \nonumber \\ c_2(r)&=&-4\pi r^2\left[2(7-9s)a(\tau) +\left(9s-\frac{14}{3}\right)b(\tau)\right] \nonumber \\ c_3(r)&=&4\pi r^2\left[6(1-s)a(\tau) +(3s-2)b(\tau)\right] \nonumber \\ c_4(r)&=&(3-2s)a(\tau) +(s-1)b(\tau) \nonumber \\ c_5(r)&=&6(1-s)a(\tau) +(3s-2)b(\tau) \end{eqnarray} where $s\equiv r/r_{\Sigma}$ and the notation ${\displaystyle a(\tau) \equiv \frac{Q_2(\tau-1)}{r_{\Sigma}^2}}\quad $, ${\displaystyle b(\tau) \equiv \frac{\partial_x Q_2(x)_{(\tau-1)}}{r_{\Sigma} M}}$ has been used for the Legendre function of the second kind $Q_2(x)$ and its derivative, both evaluated at the boundary $x_{\Sigma}=\tau-1$. \section{Conclusions} We have shown that we are able to relate the sources of Weyl solutions with their RMM. The procedures for the explicit calculation of the RMM of any space-time are circumscribed to the metric that describes the gravitational field. The definitions of Geroch-Hansen and Thorne involve vacuum solutions, and the method of Fodor-Hoenselaers-Perjes (FHP), as well as \cite{suecos} and others developing Thorne's definition by using harmonic coordinates, handle only the exterior gravitational field of the compact object. In this work a definition of RMM \cite{RMMsource} extended to the whole space-time is developed explicitly for a global metric. Due to the characteristics of that definition, it was shown in \cite{RMMsource} that the RMM can be calculated by means of a flux integral at infinity just by using the exterior metric there. But that flux integral is equivalent to a volume integral through a generalized Gauss theorem. Nevertheless, neither the interior metric nor the source itself was used in that work, since harmonic coordinates were used to implement the flux, whereas the interior metric in that system of coordinates is unknown. Now a space-time described by a global metric is used to implement the definition from \cite{RMMsource}, in such a way that relevant expressions for the RMM are obtained in terms of the material content of the source and the interior metric. Hence, those integral expressions restricted to the volume of the source allow us to calculate the RMM from the physical characteristics of the source.
This result provides a relevant generalization of the procedure commonly used in Newtonian gravity to define the multipole moments by means of the density of the compact object. At the same time, a generalization of the definition of the Tolman mass \cite{tolman} and of the Komar moments \cite{komar} is derived. In this work it is proved that the RMM can actually be considered physical characteristics both of the gravitational field and of the source conveniently matched to that exterior metric. Besides, it is the proper matching conditions that guarantee the derivation of the volume integral expressions, or of their flux integral version, leading to the relationship that links those expressions involving the source with the RMM known in the literature and associated with the gravitational field. Consequently, from now onwards it is not necessary to know the RMM of the gravitational field matched to a given source; the energy-momentum tensor and the interior metric suffice to calculate those RMM. Up to now the RMM gave relevant physical information about the source (flattening, shape, symmetry, ...) and they could be connected to the orbital motion of test particles (see for instance the recent paper \cite{kopeikin} on relativistic celestial mechanics, discussing the post-Newtonian dynamics of an isolated gravitating system of $N$ extended bodies moving on a curved space-time, and the relevance of multipole moments for the accurate prediction of the orbital dynamics of extended bodies in inspiraling binary systems, or for the construction of gravitational-wave templates at the merger stage, when the strong gravitational interaction between the higher-order multipoles of the bodies plays a dominant role). While that exterior-field route remains satisfactory, the result presented here allows us, in addition, to link the RMM directly with the physics of the source. \section*{Acknowledgments} This work was partially supported by the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades. Subdirecci\'on General de Proyectos de Investigaci\'on, under Research Project No. PGC2018-096038-B-I00 (MINECO/FEDER), as well as the Consejer\'\i a de Educaci\'on of the Junta de Castilla y Le\'on under the Research Project Grupo de Excelencia GR234 Ref.:SA096P20 (Fondos Feder y en l\'\i nea con objetivos RIS3).
\section{Introduction} Some neutron stars (NSs) formed during the core collapse of massive stars are suggested to rotate very rapidly and, possibly for the same reason, to be strongly magnetized (\citealt{duncan1992}; \citealt{moesta2015}). These strongly magnetized, rapidly rotating NSs are often referred to as ``millisecond magnetars,'' although their connection$-$if any$-$to the high energy Galactic transients known also as magnetars is presently unclear. Millisecond magnetars possess a prodigious reservoir of rotational energy $\sim 10^{51}-10^{52}$ erg, which can be extracted during the first seconds to weeks after the explosion through electromagnetic dipole spin-down. If the energy from the magnetar wind is efficiently thermalized behind the expanding supernova (SN) ejecta shell (\citealt{Metzger+14}, see also \citealt{badjin2016}), then the resulting power source can greatly enhance the luminosity of the SN \citep[e.g.,][]{ostriker1971,shklovskii1976,mazzali2006,maeda2007}. Recent work on the magnetar model was motivated by the discovery of superluminous SNe (SLSNe)\footnote{In this paper, we focus on Type~Ic SLSNe and refer to them as ``SLSNe''. Most Type~II SLSNe are Type~IIn SNe and their power source is likely the interaction between SN ejecta and dense circumstellar media (e.g., \citealt{moriya2013}, but see also \citealt{inserra2016}).}, events with peak luminosities greater than $\sim 10^{44}~\mathrm{erg~s^{-1}}$, i.e., more than an order of magnitude brighter than typical core-collapse SNe \citep{quimby2011,gal-yam2012}. For a surface magnetic dipole field strength of $\sim 10^{14}~\mathrm{G}$ and an initial rotational period of $\sim$ few $\mathrm{ms}$, the magnetar releases sufficient rotational energy ($\gtrsim 10^{51}~\mathrm{erg}$) over an appropriate timescale ($\sim 1-10$~days) to power SLSNe \citep{kasen2010}. Although magnetars provide one possible explanation for SLSNe \citep[e.g.,][]{kasen2010,woosley2010,dessart2012,inserra2013,Metzger+14,bersten2016,sukhbold2016,mazzali2016}, several alternative models are also actively being explored \citep[e.g.,][]{moriya2010,chevalier2011,kasen2011,chatzopoulos2012b,dessart2013,kozyreva2014,sorokina2015}. The growing number of well-sampled SLSN light curves (LCs) has revealed a rich diversity of behaviors. Some, and possibly all \citep{nicholl2016}, SLSNe LCs show a ``precursor bump'' prior to the main peak (e.g., \citealt{leloudas2012,nicholl2015b}; \citealt{smith2016}). These early maxima may be related to the existence of dense circumstellar media (CSM, \citealt{moriya2012}), shock breakout from an unusually extended progenitor star \citep[e.g.,][]{piro2015}, or the interaction between the SN ejecta and the progenitor's companion star \citep{moriya2015}. Within the magnetar scenario, \citet{kasen2016} show that precursor emission results naturally from the shock driven through the SN ejecta by the hot bubble inflated inside the expanding stellar ejecta by the magnetar wind (see also \citealt{bersten2016}). If this ``magnetar-driven'' shock is strong enough, it becomes radiative near the stellar surface, powering an early LC bump. This emission component is distinct from the normal shock breakout signature from the SN explosion, which occurs at earlier times and is much less luminous due to the more compact initial size of the progenitor star. The extremely luminous transient ASASSN-15lh \citep{dong2016} also presents a challenge to SLSN models.
The peak luminosity of ASASSN-15lh exceeds that of other SLSNe by about 1 magnitude, and its total radiated energy now exceeds $3\times 10^{52}$~erg (\citealt{godoy-rivera2016}; \citealt{brown2016}). As this is near the maximum allowed rotational energy of a 1.4~\ensuremath{M_\odot}\ NS, ASASSN-15lh was argued to challenge the magnetar model for SLSNe \citep{dong2016}. However, \citet{metzger2015} demonstrate that the maximum rotational energy increases with the NS mass, reaching $\approx 10^{53}$ erg for a NS close to the maximum observed mass of $\approx 2M_{\odot}$ for a range of nuclear equations of state consistent with measured NS masses and radii. Extremely luminous transients like ASASSN-15lh may indicate that some magnetars illuminating SNe can be very massive, although ASASSN-15lh itself may be explained by models other than the magnetar model \citep[e.g.,][]{chatzopoulos2016}, or may not even be a SLSN \citep{leloudas2016}. Slowly-rotating NSs can be supported against gravity only up to a maximum mass, which must exceed $\approx 2M_{\odot}$ but is otherwise poorly constrained (however, \citet{Ozel&Freire16} argue that this maximum mass is likely to be $\lesssim 2.2M_{\odot}$). Solid body rotation can stabilize NSs with masses up to $\approx 10\%$ higher than the maximum non-rotating mass for sufficiently rapid rotation. However, if the rotational energy of such {\it supramassive} NSs decreases below a critical minimum value (\ensuremath{E_\mathrm{coll}}), then the NS will collapse to a BH on a dynamical timescale (e.g.,~\citealt{Shibata+00}). Thus, if the magnetar produced in a core collapse SN has a mass in the supramassive range, and if it spins down to the point where its rotational energy becomes less than \ensuremath{E_\mathrm{coll}}, then it will suddenly collapse to a black hole (BH) and the central energy source powering the SN will suddenly cease. In this paper, we investigate the effect of the sudden termination of magnetar energy input due to BH transformation on the LCs of magnetar-powered SNe. \begin{table*} \begin{center} \caption{Initial magnetar and SN ejecta properties of our synthetic models} \begin{tabular}{ccccccccc} \tableline\tableline model & NS mass & $E_m$ & $t_m$ & \ensuremath{E_\mathrm{coll}} & \ensuremath{t_\mathrm{BH}} & $\ensuremath{t_\mathrm{BH}}/t_m$ & \ensuremath{M_\mathrm{ej}} & \ensuremath{^{56}\mathrm{Ni}}\ mass \\ & \ensuremath{M_\odot} & $10^{52}$ erg & day & $10^{52}$ erg & day & & \ensuremath{M_\odot} & \ensuremath{M_\odot} \\ \tableline NS2p3m1 & 2.3 & 5.0 & 5 & 3.2 & 2.8 & 0.56 & 5 & 0.1 \\ NS2p3m2 & 2.3 & 3.5 & 5 & 3.2 & 0.47 & 0.094& 5 & 0.1 \\ NS2p4m1 & 2.4 & 12.5 & 5 & 9.3 & 1.7 & 0.34& 5 & 0.1 \\ NS2p4m2 & 2.4 & 11 & 5 & 9.3 & 0.91 & 0.18& 5 & 0.1 \\ NS2p4m3 & 2.4 & 10 & 5 & 9.3 & 0.38 & 0.075& 5 & 0.1 \\ NS2p4m4 & 2.4 & 11 & 1 & 9.3 & 0.18 & 0.18& 5 & 0.1 \\ NS2p4m5 & 2.4 & 11 & 10& 9.3 & 1.80 & 0.18& 5 & 0.1 \\ NS2p4m6 & 2.4 & 11 & 5 & 9.3 & 0.91 & 0.18& 10& 0.1 \\ NS2p5m1 & 2.5 & 17.7 & 5 & 15.4 & 0.75 & 0.15& 5 & 0.1 \\ NS2p5m2 & 2.5 & 16 & 5 & 15.4 & 0.19 & 0.040& 5 & 0.1 \\ \tableline \end{tabular} \label{table:magnetarproperties} \end{center} \end{table*} \section{Methods} \subsection{Energy input from magnetar spin-down} We assume that the rotational energy of the central magnetar is emitted in a magnetized wind at the rate given by dipole vacuum or force-free spin-down (\citealt{ostriker1971}; \citealt{Contopoulos+99}).
The spin-down luminosity can be expressed as \begin{equation} L_\mathrm{mag}(t)=\frac{E_m}{t_m}\left(1+\frac{t}{t_m}\right)^{-2},\label{eq:dipole} \end{equation} where $t$ is the time after the explosion, $E_m$ is the initial rotational energy of the magnetar, and $t_m$ is its spin-down timescale. If the magnetar is supramassive and transforms to a BH after losing sufficient rotational energy, the central energy input from the magnetar suddenly ceases. From equation (\ref{eq:dipole}), the time of BH formation (\ensuremath{t_\mathrm{BH}}) is estimated to be \begin{equation} \ensuremath{t_\mathrm{BH}} = \frac{\Delta E}{\ensuremath{E_\mathrm{coll}}}t_m, \label{eq:tbhtotm} \end{equation} where $\Delta E\equiv E_m - \ensuremath{E_\mathrm{coll}}$. Thus, the central energy input from a supramassive magnetar can be expressed as \begin{equation} L_\mathrm{mag}(t)= \left\{ \begin{array}{lll} \frac{E_m}{t_m}\left(1+\frac{t}{t_m}\right)^{-2} & & (t \leq \ensuremath{t_\mathrm{BH}}), \\ \\ 0 & & (t > \ensuremath{t_\mathrm{BH}}). \end{array} \right. \label{eq:actualinput} \end{equation} Although we assume that central engine activity abruptly ceases after the BH transformation, ongoing fallback accretion onto the BH may in some cases provide an additional source of energy (e.g., \citealt{dexter2013,gilkis2015}; \citealt{perna2016}). According to equation~(\ref{eq:tbhtotm}), there exists a maximum value of the ratio $\ensuremath{t_\mathrm{BH}}/t_m$, set by the maximum value of $E_m$ which can be achieved for NSs of a given mass at the mass-shedding limit \citep{metzger2015}. Figure~\ref{fig:max} shows the value of this maximum ratio as a function of the NS mass, based on Figure~4 of \citet{metzger2015}. Observe that $\ensuremath{t_\mathrm{BH}}/t_m$ becomes $\lesssim 1$ for NSs heavier than $\sim 2.3~\ensuremath{M_\odot}$. Such massive NSs, approaching the upper allowed supramassive range, transform to BHs before losing a significant amount of rotational energy. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{tbhovertmmax.eps} \end{center} \caption{ Maximum allowed ratio of BH formation time \ensuremath{t_\mathrm{BH}}\ to magnetar spin-down time $t_m$, as a function of the NS gravitational mass, based on Figure~4 in \citet{metzger2015}. The structure of the solid-body rotating NS is calculated using the {\tt rns} code (\citealt{sf95}) assuming a parametrized piecewise polytropic EOS with an adiabatic index $\Gamma = 3$ above the break density of $\rho_{1} = 10^{14.7}$ g cm$^{-3}$ at a pressure of $P_{1} = 3.2\times 10^{34}$ dyn cm$^{-2}$ (\citealt{mmb15}). The chosen EOS results in a 1.4$~M_{\odot}$ NS radius of 10.6~km and maximum non-rotating mass of $\approx 2.24~M_{\odot}$, consistent with observational constraints. }\label{fig:max} \end{figure} \subsection{Light-curve calculations} We employ the one-dimensional radiation hydrodynamics code \texttt{STELLA} for our numerical LC calculations \citep[e.g.,][]{blinnikov1993,blinnikov1998,blinnikov2006}. \texttt{STELLA} implicitly treats time-dependent equations of the angular moments of intensity averaged over a frequency bin using the variable Eddington method. We adopt 100 frequency bins from 1~\AA\ to $5\times 10^4$~\AA\ on a log scale. Local thermodynamic equilibrium is assumed to determine the ionization levels of materials. The opacity in each frequency bin is evaluated by taking photoionization, bremsstrahlung, lines, and electron scattering into account.
In particular, approximately 110 thousand lines in the list of \citet{kurucz1991} are taken into account for the line opacities, which are estimated by using the approximation introduced in \citet{eastman1993}. See, e.g., \citet{blinnikov2006} for a more detailed description of the code. Starting from the initial condition described below, we deposit energy from the magnetar spin-down at the center of the exploding star. We assume that the radiation energy from the magnetar is totally thermalized (\citealt{Metzger+14}), using $L_\mathrm{mag}(t)$ (Equation~\ref{eq:actualinput}) directly as a source of thermal energy in \texttt{STELLA} \citep[cf.][]{tominaga2013}. For comparison, we also show semi-analytic LC models from \citet{arnett1982}, which are suitable for hydrogen-free SNe \citep[cf.][]{valenti2008,chatzopoulos2012,inserra2013}. The semi-analytic LC is obtained by numerically integrating the following: \begin{equation} L(t)=\int^t_02\tau_m^{-2}L_\mathrm{mag}(t')t'e^{\frac{t'^2-t^2}{\tau_m^2}}dt'. \label{eq:semiana} \end{equation} The effective diffusion time $\tau_m$ is expressed as \begin{equation} \tau_m = 1.05\left(\frac{\kappa_e}{\beta c}\right)^{0.5}\ensuremath{M_\mathrm{ej}}^{0.75}\ensuremath{E_\mathrm{ej}}^{-0.25}, \end{equation} where $M_{\rm ej}$ and $E_{\rm ej}$ are the total mass and kinetic energy, respectively, of the initial explosion. We assume $\kappa_e=0.1~\mathrm{cm^2~g^{-1}}$ as the electron-scattering opacity of the SN ejecta and $\beta=13.8$ \citep{arnett1982}. \subsection{Initial SN ejecta properties} We adopt, for simplicity, a broken power-law form for the initial density structure of the SN ejecta, with a profile $\rho_\ensuremath{\mathrm{ej}}\propto r^{-\delta}$ at small radii which transitions to $\rho_\ensuremath{\mathrm{ej}}\propto r^{-n}$ outside of a break radius. Assuming homologous expansion of the SN ejecta $(r=v_\mathrm{ej}t)$, we can express the initial density structure as \citep[e.g.,][]{chevalier1989} \begin{equation} \rho_\ensuremath{\mathrm{ej}}\left(v_\ensuremath{\mathrm{ej}},t\right)=\left\{ \begin{array}{ll} \frac{1}{4\pi(n-\delta)} \frac{[2(5-\delta)(n-5)E_\ensuremath{\mathrm{ej}}]^{(n-3)/2}}{ [(3-\delta)(n-3)M_\ensuremath{\mathrm{ej}}]^{(n-5)/2}} t^{-3}v_\ensuremath{\mathrm{ej}}^{-n} & (v_\ensuremath{\mathrm{ej}}>v_t), \\ \frac{1}{4\pi(n-\delta)} \frac{[2(5-\delta)(n-5)E_\ensuremath{\mathrm{ej}}]^{(\delta-3)/2}}{ [(3-\delta)(n-3)M_\ensuremath{\mathrm{ej}}]^{(\delta-5)/2}} t^{-3}v_\ensuremath{\mathrm{ej}}^{-\delta} & (v_\ensuremath{\mathrm{ej}}<v_t), \\ \end{array} \right. \label{eq:density} \end{equation} where \begin{equation} v_t=\left[ \frac{2(5-\delta)(n-5)E_\ensuremath{\mathrm{ej}}}{(3-\delta)(n-3)M_\ensuremath{\mathrm{ej}}} \right]^{\frac{1}{2}} \end{equation} is the transitional velocity. We adopt $n=10$ and $\delta=1$ as typical values \citep[e.g.,][]{matzner1999}. We adopt an initial value of $t = 10^{3}$ s in Eq.~(\ref{eq:density}). The composition is assumed to be 50\%\ carbon and 50\%\ oxygen for simplicity. In our fiducial model, we adopt typical SN ejecta properties of magnetar-powered SLSN models, $\ensuremath{M_\mathrm{ej}}=5~\ensuremath{M_\odot}$ and $\ensuremath{E_\mathrm{ej}}=10^{51}~\mathrm{erg}\equiv 1~\mathrm{B}$ \citep[e.g.,][]{nicholl2015}.
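Before turning to the results, the following self-contained Python sketch (our own illustration, not the \texttt{STELLA} setup; constants are rounded, and the parameter values correspond to the NS2p4m2 model of Table~\ref{table:magnetarproperties}) collects the analytic ingredients above: the truncated spin-down input (\ref{eq:actualinput}), the BH formation time (\ref{eq:tbhtotm}), the broken power-law density (\ref{eq:density}), and the semi-analytic LC (\ref{eq:semiana}). \begin{verbatim}
import numpy as np
from scipy.integrate import quad

DAY, MSUN, C_LIGHT = 86400.0, 1.989e33, 2.998e10   # s, g, cm/s

def t_bh(E_m, E_coll, t_m):
    # BH formation time: t_BH = (E_m - E_coll)/E_coll * t_m
    return (E_m - E_coll)/E_coll*t_m

def L_mag(t, E_m, t_m, t_BH=np.inf):
    # dipole spin-down input, cut off at BH formation
    return np.where(t <= t_BH, (E_m/t_m)*(1.0 + t/t_m)**(-2), 0.0)

def tau_m(M_ej, E_ej, kappa_e=0.1, beta=13.8):
    # effective diffusion time in seconds (M_ej in g, E_ej in erg)
    return 1.05*(kappa_e/(beta*C_LIGHT))**0.5*M_ej**0.75*E_ej**(-0.25)

def L_semi(t, E_m, t_m, tau, t_BH=np.inf):
    # semi-analytic (Arnett-type) light curve; integrand vanishes
    # beyond t_BH, so integrate only up to min(t, t_BH)
    f = lambda tp: (2.0/tau**2)*float(L_mag(tp, E_m, t_m, t_BH))*tp \
                   *np.exp((tp**2 - t**2)/tau**2)
    val, _ = quad(f, 0.0, min(t, t_BH))
    return val

def rho_ej(v, t, M_ej, E_ej, n=10.0, delta=1.0):
    # broken power-law ejecta density (v in cm/s, t in s)
    vt = np.sqrt(2*(5 - delta)*(n - 5)*E_ej/((3 - delta)*(n - 3)*M_ej))
    m = np.where(v > vt, n, delta)
    pref = (2*(5 - delta)*(n - 5)*E_ej)**((m - 3)/2) \
           /(4*np.pi*(n - delta)*((3 - delta)*(n - 3)*M_ej)**((m - 5)/2))
    return pref*t**(-3)*v**(-m)

# NS2p4m2-like inputs: E_m = 1.1e53 erg, E_coll = 9.3e52 erg, t_m = 5 d
tm = 5*DAY
tbh = t_bh(1.1e53, 9.3e52, tm)        # ~0.91 day, as in Table 1
tau = tau_m(5*MSUN, 1e51)
print(tbh/DAY, L_semi(10*DAY, 1.1e53, tm, tau, t_BH=tbh))
\end{verbatim}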
We fix $\ensuremath{E_\mathrm{ej}}=10^{51}~\mathrm{erg}$ in all the models, instead varying \ensuremath{M_\mathrm{ej}}\ to investigate the effect of the SN ejecta on the LC properties (the effects of changing \ensuremath{M_\mathrm{ej}}\ and \ensuremath{E_\mathrm{ej}}\ are degenerate in the LC modeling). We also place 0.1~\ensuremath{M_\odot}\ of the radioactive \ensuremath{^{56}\mathrm{Ni}}\ at the center of the SN ejecta, although it has little effect on early LCs. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{example5e52erg.eps} \end{center} \caption{ Bolometric LCs powered by magnetar spin-down. All the models have the same input parameters except for the BH transformation time (\ensuremath{t_\mathrm{BH}}). The solid lines are LC models obtained numerically and the dot-dashed line is the LC model obtained with the semi-analytic method assuming $\ensuremath{t_\mathrm{BH}} = 1~\mathrm{day}$. The dotted line near the semi-analytic model is the numerical LC model with $\ensuremath{t_\mathrm{BH}}=1~\mathrm{day}$ where the electron-scattering opacity is forced to be $0.1~\mathrm{cm^2~g^{-1}}$. The top dotted line is the input magnetar spin-down energy without BH formation. }\label{fig:lightcurve} \end{figure} \section{Light curves} \subsection{Effect of BH transformation} Figure~\ref{fig:lightcurve} shows a series of LCs corresponding to different NS collapse times, demonstrating the effect of BH formation on the LCs of magnetar-powered SNe. Other parameters, namely the initial rotational energy ($E_m=5\times 10^{52}~\mathrm{erg}$), spin-down timescale ($t_m=5~\mathrm{days}$), and SN ejecta properties ($\ensuremath{M_\mathrm{ej}} =5~\ensuremath{M_\odot}$, $\ensuremath{E_\mathrm{ej}} = 10^{51}$~erg, and $\ensuremath{M_\mathrm{\Ni}}=0.1~\ensuremath{M_\odot}$), are held fixed in the models in Figure~\ref{fig:lightcurve}. The overall behavior of the LC evolution is reproduced reasonably well by the semi-analytic model, which in Figure~\ref{fig:lightcurve} is shown by a dot-dashed line for the same magnetar luminosity input as for the $\ensuremath{t_\mathrm{BH}} = 1~\mathrm{day}$ model. One difference between the numerical and semi-analytic models appears in the early rising part of the LC. While the semi-analytic model shows a continuous luminosity increase from day zero, the numerical model shows an early rise and maximum starting about 1~day after the explosion. This is the effect of the magnetar-driven shock breakout described by \citet{kasen2016}. Due to the large value of $E_m$ released by supramassive NSs, the magnetar-driven shock is strong and radiative, and easily reaches the stellar surface. Figure~\ref{fig:hydro} shows the hydrodynamic evolution of the numerical model, confirming that the shock breakout indeed occurs $\approx 0.5-1$~days after the explosion. Because the shock velocity is about $10000~\ensuremath{\mathrm{km~s^{-1}}}$ and the remaining distance from the shock to the surface is about $10^{14}~\mathrm{cm}$ at the time of the shock breakout, the photon diffusion time above the shock is about $10^{5}~\mathrm{sec}$. Therefore, the LC reaches the first maximum at about 1~day after the shock breakout. In general, the magnetar-driven breakout bump is more prominent in the LC in cases of early BH formation. This is because the peak luminosity of the diffusion-powered LC component, for a fixed value of the spin-down time $t_{m}$, increases with the magnetar lifetime, so an early collapse leaves the breakout bump standing out more clearly.
Although the luminosity contribution due to the direct diffusion of the spin-down power eventually comes to exceed that of shock breakout, the latter persists even after this time. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{hydro.eps} \end{center} \caption{ Hydrodynamic evolution of the numerical model with $\ensuremath{t_\mathrm{BH}} =1~\mathrm{day}$ in Figure~\ref{fig:lightcurve}. The shock breakout occurs at around $0.5-1$~day after the explosion. The solid lines represent the density structure and the dotted lines show the temperature structure. }\label{fig:hydro} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{photo.eps} \end{center} \caption{ Photospheric temperature evolution of the numerical model with $\ensuremath{t_\mathrm{BH}}=1~\mathrm{day}$ shown in Figure~\ref{fig:lightcurve}. }\label{fig:photo} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{multicolor.eps} \end{center} \caption{ NUV and optical LCs of the numerical model with $\ensuremath{t_\mathrm{BH}}=1~\mathrm{day}$ shown in Figure~\ref{fig:lightcurve}. }\label{fig:multicolor} \end{figure} \citet{kasen2016} also found that suppression of the spin-down power is required to make the LC peak due to the magnetar-driven shock breakout prominent. They argued that the breakout peak can clearly appear if the thermalization of the spin-down energy from the magnetar is insufficient. In our model, the thermalization remains efficient, but the spin-down energy input itself is shut off by the BH transformation, which makes the breakout peak prominent. After the initial breakout peak, the numerical and semi-analytic LCs match reasonably well for some period of time. However, after $t \approx$ 20~days the numerical LC begins to decline faster than the analytic expectation, presumably due to the effects of recombination and efficient adiabatic cooling in the SN ejecta. The semi-analytic model assumes a constant electron-scattering opacity of $0.1~\mathrm{cm^2~g^{-1}}$, which in reality will begin to decrease as the SN ejecta expand and cool due to recombination. For comparison, we show a numerical LC model with $\ensuremath{t_\mathrm{BH}}=1~\mathrm{day}$ where the electron-scattering opacity is forced to be $0.1~\mathrm{cm^2~g^{-1}}$. We can see that the numerical LC with $0.1~\mathrm{cm^2~g^{-1}}$ declines more slowly than the actual numerical LC model, but both still decline faster than the semi-analytic model. The remaining difference likely comes from the more efficient adiabatic cooling in the numerical model with $0.1~\mathrm{cm^2~g^{-1}}$ than in the semi-analytic model. Part of the magnetar energy input is used to accelerate the SN ejecta in the numerical model, so the kinetic energy of the SN ejecta is increased by the magnetar. Therefore, the adiabatic cooling is more efficient in the numerical model than in the semi-analytic model, where no dynamical effect of the magnetar is taken into account. In the late phases, the numerical LC tracks the decay of \ensuremath{^{56}\mathrm{Co}}\ resulting from the initial 0.1~\ensuremath{M_\odot}\ of \ensuremath{^{56}\mathrm{Ni}}. Figure~\ref{fig:photo} shows the photospheric temperature evolution of the numerical model with $\ensuremath{t_\mathrm{BH}} = 1~\mathrm{day}$, while Figure~\ref{fig:multicolor} shows the near ultra-violet (NUV) and optical LC evolution of the same model.
The multicolor LCs are obtained by convolving the filter functions of Swift/UVOT ($uvw1$, $uvm2$, and $uvw2$; \citealt{poole2008}) and Subaru/HSC ($u$, $g$, $r$, $i$, and $z$; \citealt{miyazaki2012}) with the spectral energy distribution obtained by \texttt{STELLA}. Although the early shock breakout bump is clearly visible in the bolometric LC, it does not contribute appreciably to the NUV and optical bands because of the very high photospheric temperature at the time of shock breakout. The high photospheric temperature also renders the NUV and optical LCs relatively faint. Although the bolometric luminosity reaches values of $10^{44}~\mathrm{erg~s^{-1}}$ within the SLSNe range, the high photospheric temperature keeps the NUV and optical peaks between $-19$ and $-20$~mag, below those of SLSNe. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{lightcurvediff.eps} \end{center} \caption{ Numerical LC models with different initial magnetar properties and BH formation times. The initial conditions are summarized in Table~\ref{table:magnetarproperties}. }\label{fig:lightcurvediff} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{lightcurvediffsame.eps} \end{center} \caption{ Numerical LC models with different SN ejecta and magnetar properties (see Table~\ref{table:magnetarproperties}). The magnetar initial rotational energy ($E_m = 1.1\times 10^{53}~\mathrm{erg}$) and the NS mass (2.4~\ensuremath{M_\odot}, i.e., $\ensuremath{E_\mathrm{coll}} = 9.3\times 10^{52}~\mathrm{erg}$) are the same as those in NS2p4m2. The difference between NS2p4m2 ($\ensuremath{M_\mathrm{ej}}=5~\ensuremath{M_\odot}$, $\ensuremath{E_\mathrm{ej}} = 10^{51}~\mathrm{erg}$, and $t_m=5~\mathrm{days}$) and the other models are indicated in the figure. }\label{fig:lightcurvediffsame} \end{figure} \subsection{Parameter dependence} The previous section addressed how the process of BH transformation for different formation times \ensuremath{t_\mathrm{BH}}\ changes the LC properties of magnetar-powered SNe for fixed magnetar and SN ejecta properties. Here, we explore the effect of changing properties of the magnetar (NS mass, $E_m$, and $t_m$) and the SN ejecta (\ensuremath{M_\mathrm{ej}}\ and \ensuremath{^{56}\mathrm{Ni}}\ mass) within their physical ranges. For a given NS mass, there is a maximum value of $E_m$ corresponding to the mass-shedding limit \citep[e.g.,][]{metzger2015}, such that the rotational energy $E_m$ must lie in the range $[\ensuremath{E_\mathrm{coll}},\mathrm{max}(E_m)]$. Thus, once $E_m$ and $t_m$ are fixed, the value of \ensuremath{t_\mathrm{BH}}\ is no longer a free parameter (cf. Eq.~\ref{eq:tbhtotm}). We take these constraints into account in the models presented in this section, as summarized in Table~\ref{table:magnetarproperties}. Figure~\ref{fig:lightcurvediff} shows numerical LCs calculated for different $E_m$ but fixing the value of $t_m$ and the SN ejecta properties. Generally, the peak luminosity increases with higher $E_m$. However, both the values of \ensuremath{E_\mathrm{coll}}\ and max($E_m$) increase monotonically with the NS mass. For NSs with masses approaching the maximum range of supramassive NSs, the values of max($E_m$) and \ensuremath{E_\mathrm{coll}}\ become sufficiently close that the NS collapses to a BH before releasing a significant amount of rotational energy.
Extremely massive magnetars therefore do not produce bright SNe, despite the large rotational energy $E_m$ available.\footnote{The remaining rotational energy is ultimately trapped in the spin of the BH.} The maximum peak luminosity we obtain is around $10^{45}~\mathrm{erg~s^{-1}}$, achieved by NSs of mass $\lesssim 2.5~\ensuremath{M_\odot}$ for the assumed equation of state (Fig.~\ref{fig:lightcurvediff}). The peak luminosities of our models range between $10^{43}~\mathrm{erg~s^{-1}}$ and $10^{45}~\mathrm{erg~s^{-1}}$.
Figure~\ref{fig:lightcurvediffsame} shows the numerical LCs for a fixed value of $E_m = 1.1\times 10^{53}$~erg but varying spin-down times $t_m$ and SN ejecta properties. Because $\ensuremath{t_\mathrm{BH}}/t_m$ is fixed for given $E_m$ and \ensuremath{E_\mathrm{coll}}\ (cf. Eq.~\ref{eq:tbhtotm}), $\ensuremath{t_\mathrm{BH}}$ decreases with $t_m$. Magnetars with shorter spin-down times $t_m$ result in smaller peak luminosities because the BH transformation occurs earlier, such that most of the magnetar energy is lost to PdV expansion instead of being released as radiation. Models with a larger ejecta mass \ensuremath{M_\mathrm{ej}}\ result in longer LC durations because of the longer diffusion time, as expected.
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{comparison.eps} \end{center} \caption{ Comparison between the synthetic $R$-band LCs and rapidly-evolving bright transients. We show the LCs of SN~2002bj \citep{poznanski2010}, a typical rapidly declining SLSN (SN~2010gx, \citealt{pastorello2010}), and the Arcavi transients (PTF10iam, SNLS04D4ec, SNLS05D2bk, and SNLS06D1hc, \citealt{arcavi2016}). The triangles in the figure indicate the upper limits of the observations. }\label{fig:comparison} \end{figure}
\section{Discussion} \subsection{Comparison with observations} Figure~\ref{fig:comparison} compares our synthetic $R$-band LCs to the measured LCs of rapidly-evolving luminous transients. As previously discussed, the optical luminosities of the synthetic LCs are relatively faint compared to the bolometric luminosities because of the high photospheric temperature. The peak luminosities of our models are typically between $-20$ and $-21$~mag. Our transients are therefore brighter than most core-collapse SNe, yet fainter than typical SLSNe. \citet{arcavi2016} recently reported transients precisely within this luminosity range, some of which show LC behavior similar to that predicted by our model. Although the peak luminosities of the Arcavi transients can be explained by magnetars of typical masses with initial spin periods of a few ms and magnetic field strengths of $\sim 10^{15}$~G, \citet{arcavi2016} found that the overall rapid LC evolution is difficult to explain with the magnetar model. In our scenario, the rapid evolution is achieved by shutting off the magnetar power through the BH transformation. The rapid LC evolution in our models is also consistent with that of SN~2002bj \citep{poznanski2010}. If the \citet{arcavi2016} events and SLSNe are both powered by magnetars, one might expect them to occur in similar host galaxies. However, the Arcavi transients occur in higher metallicity environments than SLSNe, which instead prefer low metallicity \citep[e.g.,][]{chen2016,perley2016,leloudas2015,lunnan2015}. On the other hand, normal SLSNe are likely powered by less massive, stable magnetars, while the magnetars described in this work are necessarily very massive.
Core-collapse explosions giving rise to different NS masses could in principle map to different progenitor environments. Alternatively, the Arcavi transients may have several distinct origins, including those powered by magnetars transforming to BHs, and thus could originate from a diversity of environments.
We have focused on magnetars with spin-down timescales of the order of days, which correspond to magnetic field strengths of $\sim 10^{14}~\mathrm{G}$ (for dipole spin-down, $t_m\propto B^{-2}P_i^{2}$, where $P_i$ is the initial spin period). The spin-down timescales can be much shorter (seconds or less) for stronger magnetic fields, and such magnetars can be progenitors of, e.g., gamma-ray bursts (GRBs) \citep[e.g.,][]{metzger2015}. The BH transformation can also occur in such magnetars, possibly affecting the observational properties of GRBs, but this is beyond the scope of this paper.
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{redshifted.eps} \end{center} \caption{ Redshifted LCs of the model with $\ensuremath{t_\mathrm{BH}}=1~\mathrm{day}$ in Figure~\ref{fig:lightcurve}, observed with the $g$ band of Subaru/HSC. }\label{fig:redshifted} \end{figure}
\subsection{Observing the magnetar-driven shock breakout bump} Although the signature of the magnetar-driven shock breakout is clearly visible in our bolometric LCs, it is difficult to observe in optical bands because of the high photospheric temperature and short duration. However, optical surveys could detect it more readily at high redshifts thanks to $K$-correction and time dilation effects: time dilation stretches the brief breakout signal by a factor of $1+z$, while the observed optical band samples the rest-frame UV, where the emission from the hot photosphere is brighter. Figure~\ref{fig:redshifted} shows the LCs of a SN powered by a magnetar transforming to a BH as observed at redshifts $z = 0.5-6$ in the $g$ band of Subaru/HSC. The flat LC segment becomes visible after the initial LC rise for events at $z\gtrsim 2$, while a clear shock breakout bump appears only at $z\gtrsim 5$. Unfortunately, a transient survey with a depth of 28~mag would be required to detect the bump in this case. However, depending on the initial properties of the magnetar and the SN ejecta, the bump may be brighter than in our fiducial models, in which case detection might be feasible with deep transient surveys by instruments such as LSST or Subaru/HSC with a suitable cadence \citep{tanaka2012}.
\section{Conclusions} We have investigated the observational properties of SNe powered by temporarily stable supramassive magnetars which transform to BHs following a brief spin-down phase. The sudden collapse to a BH results in an abrupt cessation of energy input from the central engine. Our LC modeling of such transients has shown that their LCs decline much more quickly than those of SNe powered by the indefinitely stable, lower-mass magnetars usually invoked as the engines of SLSNe. We also find that the magnetar-driven shock breakout signal can be more significant in SNe powered by magnetars transforming to BHs, due in part to the higher rotational energy of a massive NS and the fact that prompt BH formation allows the breakout signal to more readily shine above the normal spin-down-powered LC. Unfortunately, this breakout signal is not readily visible in NUV or optical wavebands because of the high photospheric temperature at early times. Nevertheless, such a breakout signal could be more readily detected in the optical at high redshifts, or at low redshifts by future wide-field UV transient surveys. Multi-dimensional effects, such as Rayleigh-Taylor instabilities in the shell responsible for the magnetar-driven shock breakout \citep[e.g.,][]{kchen2016}, may also affect the breakout signatures.
Our synthetic LCs of SNe powered by such short-lived magnetars appear to be consistent with some of the rapidly-evolving bright transients recently reported by \citet{arcavi2016}.
\acknowledgments{ We thank the anonymous referee for the comments that improved this paper. TJM is supported by the Grant-in-Aid for Research Activity Start-up of the Japan Society for the Promotion of Science (16H07413). BDM gratefully acknowledges support from NASA grants NNX15AU77G (Fermi), NNX15AR47G (Swift), and NNX16AB30G (ATP), NSF grant AST-1410950, and the Alfred P. Sloan Foundation. The work of S.~Blinnikov on the development of the \texttt{STELLA} code is supported by Russian Science Foundation grant 14-12-00203. Numerical computations were partially carried out on the PC cluster at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. } \bibliographystyle{yahapj}